
The Age of Cognition Deux: Sweet Dreams and Thinking Machines

In Part I we delved into the miraculous muck that is the human brain. This time, we’re going to grapple with the disquieting possibility that we can make conscious beings from the intricate twistings and stampings of metal and sand. It’s an outrageous idea, no less a thing of awe than golems of Jewish folklore or the myth of Frankenstein. If you don’t feel an involuntary shudder at the possibility of success then you are, well, not quite human.

But it can be tough to gauge whether this is a real possibility or the kind of goofy, gothic tale we tell each other around campfires at night. Here are three things we actually know:

A) We’ve been trying to figure out how to build artificial general intelligence (or AGI) for over half a century, and all we’ve got to show for it is this lousy t-shirt. At least, that’s how I interpret the writings of physicist David Deutsch:


[N]o brain on Earth is yet close to knowing what brains do in order to achieve any of that functionality. The enterprise of achieving it artificially — the field of ‘artificial general intelligence’ or AGI — has made no progress whatever during the entire six decades of its existence.

Just in case we didn’t get the message behind his cut, he squeezes in a healthy dose of lemon juice, saying, “I cannot think of any other significant field of knowledge in which the prevailing wisdom, not only in society at large but also among experts, is so beset with entrenched, overlapping, fundamental errors.”

That’s pretty bold coming from someone in his field, where, you could argue, no major discoveries have been made since quantum physics started giving Einstein conniptions. In that field, everybody’s still bumping around in the pitch blackness when it comes to dark energy, and nobody has come close to untangling string theory. What’s more, the field of physics has been around, in one way or another, for millennia as opposed to six measly decades! (In fairness, though, I should mention Deutsch is no pretender, having made considerable contributions to the field of quantum computation, even winning the Dirac Prize.)

Except for physics’ too-cool-for-school Higgs boson discovery, AI has had way better showbiz events, from the thumping of human champions on Jeopardy! to Deep Blue’s bruising of World Chess Champion Garry Kasparov. But those were computers designed to excel in just one area, not the general intelligence machines to which Deutsch is referring. He insists that “an AGI is qualitatively, not quantitatively, different from all other computer programs.”

Maybe so, but such leaps don’t tend to occur all at once. There are intermediate steps. The question is whether the field’s staggerings thus far have taken it in useful directions.

B) Weak AI is looking buffer than ever these days, and it’s getting smarter too. Weak AIs are computer programs that are “smart” in specific ways but are otherwise complete idiots, which is why they purportedly have no aspirations toward general intelligence. They won’t be talking their way through any Turing (or even Loebner) tests anytime soon, but some of them are uttering some interesting, baby-like gurgles.

Even leaving aside Watson and Deep Blue, we see weak AIs getting cannier via two key developments: first, the ever-faster speeds (and declining costs) of microprocessors, which allow engineers to throw more power at hard problems such as the game of chess; second, new designs that mimic, if only clumsily, the workings of the brain. In regard to the latter, greater emphasis is now being given to computer architectures, software, and firmware that allow computers to learn on their own. The New York Times reports on some of these new machines:

They are not “programmed.” Rather the connections between the circuits are “weighted” according to correlations in data that the processor has “learned.” Those weights are then altered as data flows in to the chip, causing them to change their values and to “spike.” That generates a signal that travels to other components and, in reaction, changes the neural network, in essence programming the next actions much the same way that information alters human thoughts and actions.
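To make that description a little more concrete, here is a toy sketch of a single correlation-driven unit: its behavior lives entirely in connection weights that get nudged whenever it “spikes” on incoming data, so it ends up tuned to whatever pattern recurs in that data. Everything below (the sizes, the threshold, the update rule) is my own simplification for illustration, not a model of any actual neuromorphic chip.

```python
import numpy as np

# Toy "unprogrammed" unit: no hand-written rules, just connection weights
# adjusted by correlations in the incoming data. Illustrative only; real
# neuromorphic hardware is far more elaborate.
rng = np.random.default_rng(0)
n_inputs = 8
weights = rng.uniform(0.05, 0.15, size=n_inputs)  # connection strengths
threshold = 0.3                                    # "spike" threshold
learning_rate = 0.05

for _ in range(500):
    # Inputs 0-3 tend to fire together; the rest is sparse background noise.
    x = (rng.random(n_inputs) < 0.1).astype(float)
    if rng.random() < 0.5:
        x[:4] = 1.0

    if weights @ x > threshold:        # the unit "spikes"
        weights += learning_rate * x   # strengthen the connections that drove it
        weights *= 0.99                # mild decay keeps the weights bounded

print(np.round(weights, 2))
# The weights on inputs 0-3 end up markedly larger than the rest: the unit
# has "learned" the recurring pattern without ever being explicitly programmed.
```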

The very creation of technologies such as “artificial nerve cells” shows that we are, like it or not, getting closer to building machines that can make sense of the world on their own. Even without the fancy new hardware, quite a lot can be done just by tapping into “neural net” technology, which has been around a long time now. For instance, Google has created programs that can learn to see things they’ve not been “instructed” about in advance, such as the characteristics of a cat. I suppose it’s the equivalent of a child learning to recognize and identify “kitty.”

Image source: Technology Review, which reports, “This composite image represents the ideal stimulus that Google’s neural network recognizes as a cat face.”
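In the same spirit, here is a miniature version of that kind of unsupervised learning. It is nothing like Google’s actual system, which used an enormous network and millions of YouTube frames; the data, layer sizes, and learning rate below are all invented for illustration. The network is never told what its inputs contain. It simply learns to compress and reconstruct them, and in doing so its hidden features come to reflect whatever patterns recur in the data: the same trick, writ very small, as a network assembling its own idea of a cat face.

```python
import numpy as np

# A tiny autoencoder trained without labels: it only learns to reconstruct
# its inputs, yet that alone forces it to capture the recurring patterns
# in the data. Purely a sketch, not Google's architecture.
rng = np.random.default_rng(1)
n_pixels, n_hidden, lr = 16, 4, 0.05

# Fake "images": each sample is one of two recurring 16-pixel prototypes
# plus a little noise. The network is never told which is which.
prototypes = rng.random((2, n_pixels))
data = np.array([prototypes[rng.integers(2)] + 0.05 * rng.standard_normal(n_pixels)
                 for _ in range(2000)])

W_enc = 0.1 * rng.standard_normal((n_pixels, n_hidden))  # encoder weights
W_dec = 0.1 * rng.standard_normal((n_hidden, n_pixels))  # decoder weights

def reconstruction_error(X):
    hidden = np.tanh(X @ W_enc)
    return float(np.mean((hidden @ W_dec - X) ** 2))

print("error before training:", round(reconstruction_error(data), 4))

for x in data:                      # one pass of plain stochastic gradient descent
    h = np.tanh(x @ W_enc)          # hidden features
    err = h @ W_dec - x             # reconstruction error for this sample
    grad_dec = np.outer(h, err)
    grad_enc = np.outer(x, (err @ W_dec.T) * (1.0 - h ** 2))
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

print("error after training:", round(reconstruction_error(data), 4))
```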

C) The arms race has heated up, causing governments and private enterprises to throw gobs of money into the maw of AI and brain research.

Some of this AI race is being driven by the dramatic increase in mobile devices. Writing in Psychology Today, Ben Hayden states:

I predict that the release of Siri on the iPhone 4Gs will someday be considered a milestone in the history of artificial intelligence.  Not because it is some kind of new advance in the Turing Test, but because it puts artificial voice-responding agents into the hands of the general consumer, and that’s pretty cool.

I think Dr. Hayden understates the reason this will be a milestone. It’s less the “cool” factor than the fact that we (and I’m speaking for all the fat-fingered folk out there) increasingly want voice-activated mobile devices that immediately deliver whatever information, or even analysis, we request. We want a search function that is even better than what Google traditionally provides and a reporting function that delivers accurate information in commonsense ways. In short, we want a handheld AI that really works.

To its credit, Google gets it. If it rests on its search laurels, it’s going to be superseded by something better, something a lot more like the smart, voice-activated computers we’ve seen in countless sci-fi films and television shows.

So, the future of search and voice-activated personal assistants will have a strong influence on the future of AI. I think that’s why Google dropped $400 million on the artificial intelligence firm DeepMind. It may seem like big bucks to the rest of us, but I’d say it’s quite a modest investment in the company’s future.

It isn’t only Google that’s throwing money at AI. IBM, of course, is all in, and it’s interesting that companies such as Facebook and Amazon are also making significant AI investments.  A lot of this is being done to analyze and predict consumer behavior. The Motley Fool reports,

[P]redicting what a user wants before they order it, or adding too much artificial intelligence to a mobile app in order to deliver better ads could make users feel a bit uneasy. Because of this, you can expect all of them to release AI integration cautiously. But make no mistake, artificial intelligence and machine learning will be a big part of how technology companies improve the way they earn their profits in the near future.

Investment companies are also in the game, naturally hoping AIs can be awesome stock pickers. And the U.S. government is also betting big on AI, with the Central Intelligence Agency announcing last year that it would be investing in Narrative Science, a startup that reportedly “uses computers to make sense of data and present it in prose.” Meanwhile, the Defense Advanced Research Projects Agency is associated with the government’s BRAIN initiative. One possible military application of such research is, of course, building better AIs.

The Bottom Line: Weak AI is thriving whereas AGI is seen by some as an utter bust. The latter point of view may be overstated because, ultimately, we don’t know what it’s going to take to build an AGI. If such a machine is possible (and desirable), its architecture will probably incorporate a lot of the lessons learned over the last six decades, including those coming from neuroscience. To me, what’s more interesting than AGI (because it’s more certain) is that the arms race in advancing “weak AI” is likely to turn out some amazing products and services in the near future.

Language-based interfaces are going to get better fast, and regular folk like you and me will increasingly be able to tap into super-computer-level AIs. This is going to reshape all kinds of industries, from medicine and marketing to robotics and manufacturing. In fact, I think innovations linked to neuroscience and AI will be among the top two or three drivers of business and society over the next decade. I’ll discuss some specific applications in the third part of this series.

