Quick little article from Arnold Kling reviewing Jeff Hawkins' "On Intelligence". I picked up the book a couple of weeks ago but haven't had a chance to start reading it (perhaps on the Thanksgiving flight back home?). Kling was enamored of it -
Hawkins argues for the centrality of patterns in thought. The brain constantly forms patterns. It communicates internally through patterns. This facility with patterns is what Hume described as "compounding, transposing, augmenting or diminishing" our sensory data. In fact, Hawkins would argue that we experience our sensations as patterns. For example, what we see is necessarily a combination of the patterns predicted by our brain and the signals sent from our retina.
...I think that if I were trying to build a learning machine, my strategy would not focus on hardware. I would try to build a machine that could develop and manipulate a database of analogies. I would probably try to come up with (or borrow from someone else in the field) an "algebra" for working with analogies that the computer can use in order to construct and test new patterns. That would be an old-fashioned approach, in Hawkins' view. However, after reading On Intelligence, I think that the key to getting any machine to learn is to give it a variety of both stimuli to absorb and tasks to perform. Moreover, it is important to have the machine synthesize its knowledge, rather than use a separate program for each task.
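To make Kling's proposal a bit more concrete, here's a toy sketch of what a "database of analogies" with a small algebra over it might look like. This is entirely my own illustration, not anything from Kling or Hawkins; the relation names and the facts database are hypothetical. An analogy is stored as a relation between two concepts, new candidate patterns are constructed by composing stored relations (roughly Hume's "compounding" and "transposing"), and a proposed completion is tested against the database.

```python
# Toy "algebra of analogies" -- an illustrative sketch only.
# The database and relation names ("young_of", "says") are hypothetical.

# Database of (source, relation, target) triples.
facts = {
    ("puppy", "young_of", "dog"),
    ("kitten", "young_of", "cat"),
    ("dog", "says", "woof"),
    ("cat", "says", "meow"),
}

def solve_analogy(a: str, b: str, c: str) -> set:
    """a : b :: c : ?  -- find every d linked to c by some relation that links a to b."""
    relations = {r for (x, r, y) in facts if x == a and y == b}
    return {y for (x, r, y) in facts if x == c and r in relations}

def compose(r1: str, r2: str) -> set:
    """Compound two relations: x -r1-> y and y -r2-> z yield a new (x, z) pattern."""
    return {(x, z) for (x, a, y1) in facts if a == r1
                   for (y2, b, z) in facts if b == r2 and y1 == y2}

# puppy : dog :: kitten : ?
print(solve_analogy("puppy", "dog", "kitten"))   # {'cat'}
# Compounding "young_of" with "says" constructs a pattern the database
# never stored directly: the sound associated with an animal's young.
print(compose("young_of", "says"))
```

The interesting part, per Kling's framing, is `compose`: it synthesizes patterns that were never entered into the database, which is exactly the kind of cross-task synthesis (rather than one program per task) he argues a learning machine would need.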
While I'm sympathetic to Hawkins' angle of attack here (well, at least as Kling presents it), I fear that both are woefully underestimating the degree to which human intelligence is predicated on a priori knowledge that simply isn't learned via trial and error. Here, relative to Kling, I'm far more sympathetic to Kant than to Hume (from the Wikipedia entry for Kant) -
Hume's conclusions, Kant realized, rested on the premise that knowledge is empirical at its root. The problem that Hume identified was that basic principles like cause and effect cannot be empirically derived. Kant's goal, then, was to find some way to derive cause and effect without relying on empirical knowledge. Kant rejected analytical methods for this, arguing that analytic reasoning can't tell you anything that isn't already self-evident. Instead, Kant argued that we would need to use synthetic reasoning. But this posed a new problem - how can one have synthetic knowledge that is not based on empirical observation - that is, how can we have synthetic a priori truths?
Kant did not have any trouble showing that we do have synthetic a priori truths. After all, he reasoned, geometry and Newtonian physics are synthetic a priori knowledge and are fundamentally true. The issue was showing how one could ground synthetic a priori knowledge for a study of metaphysics. This led to his most influential contribution to metaphysics - the abandonment of the quest to know the world in itself, instead acknowledging that there is no way to determine whether something is experienced the way it is because that's the way it is, or because the faculties with which we perceive and experience are constructed such that we experience it in a given way.
Absent a better understanding of how humans perform their own synthesis, I fear that a machine will almost never pass many of the anthropomorphic tests for intelligence - Turing's included. That's not to say that "artificial intelligence" can't create progressively "smarter" machines that solve non-human-type problems - I'm perfectly comfortable with the notion that many types of intelligence can and do exist. Rather, we're going to build human-like artificial intelligence only after doing a LOT more research into the basis of human intelligence.