Machine learning is in the news a lot these days, but what does that even mean?

Certainly computers are good at crunching numbers, storing and retrieving bits of information, searching and sorting, comparing and detecting patterns. They’re faster than the human brain and they’re arguably capable of storing more information. They can be programmed with algorithms by which they can analyze past events and predict future ones. They can play chess, recognized by most as a game of strategy that requires intelligent reasoning, and have been beating strong human players since the 1980s – a run capped by the famous victory of IBM’s Deep Blue over Garry Kasparov in 1997.

But do they really learn the same way we do? Let’s delve deeper into the world of neural networks and AML (Advanced Machine Learning) and examine some predictions from real people regarding just how far computers are capable of going and the ways in which they may never be able to emulate our “grey matter.”

Before we try to answer the question of whether computers can really learn, we have to back up and ask a few other, foundational questions: What is “learning?” In order to learn, must you be able to think? And if so, what is “thinking?” For that matter, what is a “computer?”

That last question might seem silly, but the term “computer” was originally used in the early 1600s and applied to people, not machines (someone who does computations – duh). There is some disagreement as to exactly when and by whom the non-human computer was invented, but Charles Babbage is generally credited with originating the concept in 1822. His mechanical computing machines, the Difference Engine and the Analytical Engine, were the ancestors of today’s modern electronic computers. The first binary programmable machine was the Z1, built by Konrad Zuse in the 1930s.

Once computers became more sophisticated and were capable of doing many of the things that humans can do (often much more quickly and efficiently), it was inevitable that those humans would exercise one quality machines don’t (at least yet) have – their imaginations – and dream of building machines capable of thinking, and even feeling, in the same way we do. And that road leads to the scariest possibility of all.

Machines that are sentient – able to feel, perceive and experience, with self-awareness and emotion – have been a subject of science fiction for many decades. From Isaac Asimov’s The Brain to HAL 9000 in Arthur C. Clarke’s 2001: A Space Odyssey to my personal favorite, Jane from Orson Scott Card’s Ender series. And we can’t forget WOPR from that classic ‘80s movie, WarGames. Of course, there have been many, many more, some of which – like Star Trek’s voice-controlled computer aboard the Enterprise, which sounds amazingly like Majel Barrett – don’t even have names.

Speaking of Star Trek: as if it weren’t tricky enough to deal with computers that hold intelligent conversations from within the confines of their somewhat traditional hardware form factors, the logical next step is the robotic computer that takes on the physical form of a human being, such as Lieutenant Commander Data, every Trekkie’s favorite android. He is, after all, so much friendlier (and funnier) than Battlestar Galactica’s Cylons and Transformers’ Megatron. If you’re more into the idea of hybrid IT, there’s always the half-human, half-machine Borg and other fictional cybernetic organisms.

But back to the real world: The idea of artificial intelligence as a serious field of study began to take hold in the 1940s and became a reality in the 1950s. The term “artificial intelligence” is generally recognized to have been coined by John McCarthy in 1955. As an interesting aside, that was also the year in which some of the people whose innovations brought us to the technological state where we are today, such as Bill Gates and Steve Jobs, were born.

McCarthy’s credentials were impressive; he taught at Princeton, Stanford, Dartmouth and MIT. He advanced the theory that machines can be said to have beliefs in a paper called Ascribing Mental Qualities to Machines. Although he died in 2011 at the age of 84, his web page is still alive on the Stanford.edu site with links to many of his lectures, articles and papers that earned him the title of “Father of AI.”

McCarthy’s 1955 project proposed to find a way to “make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves.” Certainly we have come a long way in some of those areas in the subsequent 60 years. Computers are speaking to us in English (and other human languages) and the technology isn’t confined to high-dollar military and academic environments (or spaceships); Cortana, Siri, Google Now (which really needs a better name) and other “digital personal assistant” engines are proliferating in the consumer space.

A few years after McCarthy’s team first tackled this mission, Arthur Samuel implemented one of the first uses of machine learning – a checkers-playing program running on an IBM 701 that had only about 9 KB of memory. That was in the late 1950s. Almost forty years later, IBM was still at it: in 1997, Deep Blue, an RS/6000 SP2 “supercomputer” built on the concept of massively parallel processing with thirty-two RISC CPUs and 512 special chess processors, defeated Kasparov.

The problem (or perhaps the good thing) with Deep Blue was that its artificial intelligence was very narrow. It was like a savant – very, very good at one particular thing but with no abilities outside its specialty. There was no danger of Deep Blue taking over the world, unless that world was being played out on a chess board.

Human-level AI requires a broader, more generalized intelligence. Humans can not only learn to deal with a huge number of very different problems, from how to cook a turkey to how to build a vehicle that will travel to Mars – we’re also able to apply what we’ve learned in one situation to other situations. Creating a machine that can think, and even outthink us, in a narrow field is relatively easy. We’re already using such technology in everyday life, even if we don’t think of it as AI: cars that detect when we’re drifting out of our lanes and correct it, planes that fly on autopilot, spam filters that analyze your mail and decide which messages you don’t want to see, medical devices that determine how much medication to dispense, and many more.
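
To make the “narrow” part concrete, here is a deliberately tiny sketch in Python of the kind of logic a spam filter applies. The word weights, threshold and sample messages are invented for illustration only; real filters learn their scores statistically from millions of messages rather than from a hand-written list, but the point stands – the program is very good at exactly one thing and nothing else.

```python
# A toy spam scorer: narrow "intelligence" reduced to a weighted word list.
# The words, weights and threshold below are made up for this example.
SUSPICIOUS_WORDS = {"winner": 3, "prize": 3, "free": 2, "urgent": 2, "click": 1}
SPAM_THRESHOLD = 4

def spam_score(message: str) -> int:
    # Add up the weight of every suspicious word that appears in the message.
    return sum(SUSPICIOUS_WORDS.get(word, 0) for word in message.lower().split())

def is_spam(message: str) -> bool:
    return spam_score(message) >= SPAM_THRESHOLD

print(is_spam("You are a winner click now to claim your free prize"))  # True
print(is_spam("Lunch meeting moved to noon tomorrow"))                 # False
```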

On the other hand, creating a machine that can plan, reason, grasp complicated concepts and think abstractly like a person of even average intellectual capability is much harder. Some would argue that it’s impossible. In the sci-fi movies, the robots always combine superintelligence with superhuman strength, which makes them formidable opponents. However, it’s easier to write about it than to actually do it (which is probably fortunate for us).

Before we can devise a computer that thinks the way we do, we must first understand exactly how we think. And the real challenge isn’t in making computers that can do the really difficult things – like out-playing the world chess champion or performing quantum calculations. It’s the things that most people can do easily that are so hard for a computer, such as looking at a picture of a cloud and seeing an animal shape in it, or “getting” the sarcasm in a seemingly complimentary comment, or telling a story with such poignancy that it makes you cry. Computers are very, very good at collecting and storing data, and they can sort and analyze it and detect patterns – but computers lack one important element that we humans have (in greater or lesser degrees): imagination. And in order to learn, we often have to imagine.

We can keep making computers with more and more powerful processors and more and more memory, making them capable of doing more and more complex tasks faster and faster, but that won’t make them “smarter” in the human sense. Thus many AI scientists believe the only way to emulate human thinking and learning is to emulate the human brain. Since the brain works through biological neural networks, the goal is to build artificial neural networks to create a computer “brain” that thinks like we do.
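
As a very rough illustration of that idea, here is a minimal sketch in Python of a single artificial “neuron” learning a trivial pattern (the logical OR of two inputs). The training data, learning rate and loop count are arbitrary choices for this example; real neural networks wire together millions of such units, but the basic mechanism – nudge the weights until the outputs match the examples – is the same.

```python
import math

def sigmoid(x: float) -> float:
    # Squash any number into the range 0..1 -- the neuron's "firing" strength.
    return 1.0 / (1.0 + math.exp(-x))

# Training examples: two inputs and the expected output (logical OR).
examples = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]

weights = [0.0, 0.0]
bias = 0.0
learning_rate = 0.5

for _ in range(5000):                                      # many small corrections
    for inputs, target in examples:
        output = sigmoid(sum(w * x for w, x in zip(weights, inputs)) + bias)
        delta = (target - output) * output * (1 - output)  # gradient of the error
        for i, x in enumerate(inputs):
            weights[i] += learning_rate * delta * x
        bias += learning_rate * delta

# After training, the neuron's outputs approach the expected 0s and 1s.
for inputs, target in examples:
    output = sigmoid(sum(w * x for w, x in zip(weights, inputs)) + bias)
    print(f"{inputs} -> {output:.2f} (expected {target})")
```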

Despite their shortcomings in comparison to the wild predictions of science fiction writers (at least so far), we have come a very long way in the field of AI – further than most people realize. We have robots performing surgery, cars that don’t just assist you when you’re driving but drive themselves, and factories once staffed by human assembly line workers that now “employ” robots. In fact, the big fear isn’t so much that machines will turn against us, conquer and kill or enslave us, but that they’ll simply take all our jobs. If you’re middle-aged or older, you can probably think of many jobs that existed when you were a kid but have since been completely or mostly replaced by technology: switchboard operators, typists, file clerks, mail sorters and handlers, farm workers and many more.

Is this a good thing or a bad one? Does it mean we’ll all be living in poverty because we can’t find work, or does it mean the cost savings will make it possible for us to spend our time enjoying ourselves and taking advantage of the capabilities of all this awesome tech? As physicist Niels Bohr reportedly said, prediction is very difficult, especially if it’s about the future.

One thing is a good bet: AI is a trend that’s not going away, and it’s likely to take us places we can’t even currently imagine – even with our superior human imaginations.