Will Machines Become More Intelligent Than Humans?
One common definition describes intelligence as the ability of someone or something to achieve goals in a wide variety of environments. We can compare how well computers and humans measure up to this definition.
Computers start with many advantages. They have better memories, they can quickly gather information from numerous digital sources, they can work continuously without the need for sleep, they don’t make mathematical errors, and they are better than humans at multitasking and thinking several steps ahead. This makes them superior to humans at achieving some goals, such as solving complex mathematical problems or sorting through large amounts of data. However, most AI systems are specialized for very specific applications.
Humans, on the other hand, can use imagination and intuition when approaching new tasks in new situations. This makes humans more readily able to apply their intelligence to a wide variety of environments, such as walking along unfamiliar trails, something machines often struggle with.
Intelligence can also be defined in other ways, such as the possession of a group of traits, including the ability to reason, represent knowledge, plan, learn, and communicate. Many AI systems possess some of these traits, but no system has yet acquired them all.
Scholars have designed tests to determine if an AI system has human-level intelligence. One example is the Turing Test, in which an interviewer exchanges messages with two players in different rooms. One player is a human, while the other is a machine. To pass the test, the machine must make the interviewer believe that it is the human player. Some AI systems can do this successfully but only over short periods of time.
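To make the setup concrete, the following is a minimal sketch in Python of how one such session might be organized. The ask, judge, human_reply, and machine_reply callables are hypothetical stand-ins for the interviewer and the two players, not part of any standard benchmark, and the pass condition simply checks whether the interviewer points at the machine's room when asked to identify the human.

```python
import random


def run_turing_test(ask, judge, human_reply, machine_reply, rounds=5):
    """Sketch of one Turing Test session.

    ask(transcripts)        -> next question (str) from the interviewer
    judge(transcripts)      -> room label ("A" or "B") the interviewer
                               believes holds the human
    human_reply(question)   -> the human player's answer (str)
    machine_reply(question) -> the machine player's answer (str)

    Returns True if the machine "passes", i.e. the interviewer
    picks the machine's room when asked to identify the human.
    """
    # Hide the players behind neutral room labels so the interviewer
    # can rely only on the written exchanges.
    assignment = {"A": human_reply, "B": machine_reply}
    if random.random() < 0.5:
        assignment = {"A": machine_reply, "B": human_reply}

    transcripts = {"A": [], "B": []}
    for _ in range(rounds):
        for room, reply in assignment.items():
            question = ask(transcripts)
            transcripts[room].append((question, reply(question)))

    guessed_human_room = judge(transcripts)
    machine_room = "A" if assignment["A"] is machine_reply else "B"
    return guessed_human_room == machine_room
```

In practice the interviewer and players would be people or chat systems exchanging free-form text; the sketch only shows the blinded, message-passing structure of the test.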
As AI systems grow more sophisticated, they may become better at transferring their capabilities to new situations, as humans can. This would mean the creation of “artificial general intelligence” or “true artificial intelligence,” a primary goal among some researchers. Theoretically, this could result in artificial intelligence that transcends human intelligence. The term “singularity” is sometimes used to describe a situation in which an AI system develops agency and grows beyond human ability to control it. So far, experts continue to debate when—and whether—this is likely to occur.
Several milestones highlight the advancement of artificial intelligence relative to human intelligence: