I've been thinking a lot lately about what I will argue is a fundamental problem facing traditional computational approaches to some of the really interesting problems (for me, the neurobiological basis of consciousness, but also protein folding/drug design, factoring large numbers, etc.): traditional linear approaches, solving a problem one micro-computation at a time, will never model a protein or a brain.
While thinking/Googling/doodling about the question I hit upon a couple of interesting articles: Low-Power Chips to Model a Billion Neurons and The CIA and Jeff Bezos Bet on Quantum Computing.
The logicization of mathematics and the development of modern digital computers have obviously enabled powerful investigations. The Furber article pulls together some of the most salient pieces of information demonstrating that all of this firepower is dwarfed by the firepower of the brain, and it also starts to get at the how of it, namely parallelization.
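To get a feel for the scale gap, here's a quick back-of-envelope calculation. The figures are commonly cited approximations (roughly 10^11 neurons, ~10^4 synapses per neuron, an average firing rate on the order of 10 Hz), not numbers taken from the Furber article, so treat every constant as an assumption:

```python
# Back-of-envelope estimate of the brain's "firepower".
# All figures below are rough, commonly cited approximations (assumptions).
neurons = 1e11             # ~10^11 neurons in a human brain
synapses_per_neuron = 1e4  # ~10^3-10^4 synapses per neuron
avg_firing_rate_hz = 10    # very rough average spike rate

synapses = neurons * synapses_per_neuron                  # ~10^15 synapses
synaptic_events_per_sec = synapses * avg_firing_rate_hz   # ~10^16 events/s

print(f"synapses: {synapses:.0e}")
print(f"synaptic events/sec: {synaptic_events_per_sec:.0e}")
```

Even if each "synaptic event" were worth only a single floating-point operation, that's on the order of 10^16 operations per second, running in parallel on about 20 watts, which is the point of the parallelization argument.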
However, I'm not sure that building the model suggested by Furber is the right approach. Getting rid of "the expectation of deterministic operation" is important, but even if you're getting the "right" outputs (accurately modeling a system), you may not understand how you achieved that result. Put another way: if (a) we've built a system "empirically, not just following the theory" (either in the form of a quantum computer or with massively parallelized analog circuits), and (b) it can solve the problems we pose to it, I ask: do we understand the system any better for having built the model?
The answer, I suppose, is that once you have a model, you can start poking and picking at it. But a model system complex enough to recapitulate the entire system under study likely just brings you back around to the problem you started with. Or, more likely still, you'll never get the system to do anything meaningful, because you don't know how to choose the initial conditions from an impossibly large space of possible states.
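The "impossibly large" part is easy to make concrete. Under the crude (and purely illustrative) assumption that each unit in your model is binary, on or off, the number of possible initial states is 2^N:

```python
# State-space explosion, illustrated with a deliberately crude assumption:
# treat each model unit (neuron) as binary, so N units have 2**N states.
def num_states(n_units: int) -> int:
    """Number of distinct configurations of n_units binary units."""
    return 2 ** n_units

for n in (10, 100, 1000):
    print(f"{n} units: {num_states(n):.2e} possible initial states")
```

At 100 binary units you're already past 10^30 states, which is more than you could ever enumerate; real neurons are not binary, so the true space is far larger still. That's the sense in which you might never find the initial conditions that make the model do anything meaningful.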
Quantum computers offer the theoretical possibility of exploring this vast state space, if and when such computers are invented. (Reading about quantum computers veered me off to the Church–Turing thesis, which has the amusing property of being "'a somewhat vague intuitive one'. Thus, the thesis, although it has near-universal acceptance, cannot be formally proven.")
Well, I'm on OB call tonight and I keep getting interrupted, so I'll get back to this at some point soon. But I'm pretty happy with what I've laid out so far.