Artificial Intelligence or Intelligence Augmentation

In one of my networks, a recent conversation has been about Artificial Intelligence (AI) vs Intelligence Augmentation (IA). I’m a fan of both, but my focus is more on the IA side. The discussion triggered some thoughts that I penned to them and thought I’d share here [clarifying notes inserted in square brackets like this]:

As context, I’m an AI ‘groupie’, and was a grad student at UCSD when Rumelhart and McClelland were coming up with PDP (parallel distributed processing, aka connectionist or neural networks). I personally was a wee bit enamored of genetic algorithms, another form of machine learning (but one from which it’s a bit easier to extract semantics, or maybe it was just simpler for me to understand ;).

Ed Hutchins was talking about distributed cognition at the same time, and that remains a piece of my thinking about augmenting ourselves. We don’t do it all in our heads, so what can be in the world and what has to be in the head?  [the IA bit, in the context of Doug Engelbart]

And yes, we were following fuzzy logic too (our school was definitely on the left coast of AI ;).  Symbolic logic was considered passé! Maybe that’s why Zadeh [progenitor of fuzzy logic] wasn’t more prominent here (making formal logic probabilistic may have seemed like patching a bad core premise)?  And I managed (by hook or by crook, courtesy of Don Norman 😉) to attend an elite AI convocation held at an MIT retreat with folks like McCarthy, Dennett, Minsky, Feigenbaum, and other lights of both schools.  (I think Newell was there, but I can’t say for certain.)  It was groupie heaven!

Similarly, it was the time of the emergence of ‘situated cognition’ too (a contentious debate, with proponents like Greeno and even Bill Clancey, while old-school symbolicists like Anderson and Simon argued to the contrary).  Which reminds me of Harnad’s Symbol Grounding problem, a much meatier objection to real AI than Dreyfus’s critique or the Chinese Room concerns, in my opinion.

I do believe we ultimately will achieve machine consciousness, but it’s much further out than we think. We’ll have to understand our own consciousness first, and that’s going to be tough, MRI and other such research notwithstanding. And it may mean simulating our cognitive architecture on a sensor-equipped processor that must learn through experimentation and feedback as we do, e.g. taking a few years just to learn to speak! (“What would it take to build a baby?” was a developmental psych assignment I foolishly attempted 😉)

In the meantime, I agree with Roger Schank (I think he was at the retreat too) that most of what we’re seeing, e.g. Watson, is just fast search or pattern learning. It’s not really intelligent, even if it learns patterns much the way we do. It’s useful, but it’s not intelligent.

And, philosophically, I agree with those who have stated that we must own the responsibility to choose what we take on and what we outsource. I’m all for self-driving vehicles, because the alternative is pretty bad (tho’ could we do better in driver training or licensing, like in Germany?).  And I do want my doctor augmented by powerful rote operations that surpass our own abilities, and also by checklists and policies and procedures, anything that increases the likelihood of a good diagnosis and prescription.  But I want my human doctor in the loop.  We still haven’t achieved the integration of pattern matching and exception handling that our own cognitive processor provides.
