
What Defines Artificial Intelligence? The Complete WIRED Guide

Artificial intelligence is here. It’s overhyped, poorly understood, and flawed but already core to our lives—and it’s only going to extend its reach. 

AI powers driverless car research, spots otherwise invisible signs of disease on medical images, finds an answer when you ask Alexa a question, and lets you unlock your phone with your face to talk to friends as an animated poop on the iPhone X using Apple’s Animoji. Those are just a few ways AI already touches our lives, and there’s plenty of work still to be done. But don’t worry, superintelligent algorithms aren’t about to take all the jobs or wipe out humanity.

The current boom in all things AI was catalyzed by breakthroughs in an area known as machine learning. It involves “training” computers to perform tasks based on examples, rather than relying on programming by a human. A technique called deep learning has made this approach much more powerful. Just ask Lee Sedol, holder of 18 international titles at the complex game of Go. He got creamed by software called AlphaGo in 2016.
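To see what "training on examples" means in practice, here is a minimal sketch in Python using the scikit-learn library. It is a hypothetical illustration, not code from AlphaGo or any system mentioned here: instead of writing rules to tell apples from oranges, we hand the model a few labeled examples and let it work out the rule itself.

from sklearn.tree import DecisionTreeClassifier

# Each example is [weight in grams, texture: 0 = smooth, 1 = bumpy],
# labeled with the fruit it came from.
examples = [[140, 0], [130, 0], [150, 1], [170, 1]]
labels = ["apple", "apple", "orange", "orange"]

# "Training" means fitting the model to the examples rather than
# programming the decision rule by hand.
model = DecisionTreeClassifier()
model.fit(examples, labels)

# The trained model generalizes to a fruit it has never seen.
print(model.predict([[160, 1]]))  # -> ['orange']

Real systems use vastly larger models and far more data, but the principle, learn the rule from examples rather than spell it out, is the same.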

There’s evidence that AI can make us happier and healthier. But there’s also reason for caution. Incidents in which algorithms picked up or amplified societal biases around race or gender show that an AI-enhanced future won’t automatically be a better one.

The Beginnings of Artificial Intelligence

Artificial intelligence as we know it began as a vacation project. Dartmouth professor John McCarthy coined the term in the summer of 1956, when he invited a small group to spend a few weeks musing on how to make machines do things like use language. 

He had high hopes of a breakthrough in the drive toward human-level machines. “We think that a significant advance can be made,” he wrote with his co-organizers, “if a carefully selected group of scientists work on it together for a summer.”

Those hopes were not met, and McCarthy later conceded that he had been overly optimistic. But the workshop helped researchers dreaming of intelligent machines coalesce into a recognized academic field.

Early work often focused on solving fairly abstract problems in math and logic. But it wasn’t long before AI started to show promising results on more human tasks. In the late 1950s, Arthur Samuel created programs that learned to play checkers. In 1962, one scored a win over a master at the game. In 1967, a program called Dendral showed it could replicate the way chemists interpreted mass-spectrometry data on the makeup of chemical samples.

As the field of AI developed, so did different strategies for making smarter machines. Some researchers tried to distill human knowledge into code or come up with rules for specific tasks, like understanding language. Others were inspired by how central learning is to human and animal intelligence. They built systems that could get better at a task over time, perhaps by simulating evolution or by learning from example data. The field hit milestone after milestone as computers mastered tasks that could previously only be completed by people.

Deep learning, the rocket fuel of the current AI boom, is a revival of one of the oldest ideas in AI. The technique involves passing data through webs of math, loosely inspired by the workings of brain cells, known as artificial neural networks. As a network processes training data, connections between the parts of the network adjust, building up an ability to interpret future data.
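In code, those adjustable connections are just numbers, called weights, that get nudged a little after every pass over the training data. The sketch below is a simplified illustration written with Python and NumPy, not code from any real deep learning system; it trains a single artificial neuron to reproduce the logical OR pattern.

import numpy as np

# Four training examples of the logical OR function and their correct outputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0.0, 1.0, 1.0, 1.0])

rng = np.random.default_rng(0)
weights = rng.normal(size=2)  # the "connections" the network will adjust
bias = 0.0
learning_rate = 1.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(2000):
    output = sigmoid(X @ weights + bias)   # pass the data through the web of math
    error = output - y                     # how far each prediction is from the target
    # Nudge each connection in the direction that shrinks the error.
    weights -= learning_rate * (X.T @ error) / len(X)
    bias -= learning_rate * error.mean()

print(np.round(sigmoid(X @ weights + bias)))  # -> [0. 1. 1. 1.]

Deep learning stacks millions of these neurons into many layers, but the core loop, predict, measure the error, adjust the connections, is the same.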

Artificial neural networks became an established idea in AI not long after the Dartmouth workshop. The room-filling Perceptron Mark 1 from 1958, for example, learned to distinguish different geometric shapes and got written up in The New York Times as the “Embryo of Computer Designed to Read and Grow Wiser.” But neural networks tumbled from favor after an influential 1969 book coauthored by MIT’s Marvin Minsky suggested they couldn’t be very powerful.

Not everyone was convinced by the skeptics, however, and some researchers kept the technique alive over the decades. They were vindicated in 2012, when a series of experiments showed that neural networks fed large piles of data could give machines new powers of perception. Churning through so much data was difficult using traditional computer chips, but a shift to graphics cards precipitated an explosion in processing power.


