Scientist says artificial intelligence could kill us in about 200 years

In a new scientific paper, a British radio astronomer asks whether artificial intelligence is one of the reasons we have never found other intelligent beings in the universe, estimating how long it would take a civilization to be wiped out by "runaway" artificial intelligence.

The "Fermi paradox" captures this idea: in a vast universe, how is it possible that no other civilizations are sending us radio signals? Could it be that once civilizations develop artificial intelligence, it is a short route to oblivion for most of them? Hypothetical civilization-ending barriers of this kind are called "Great Filters", and artificial intelligence is one of the most popular candidates. Could the seemingly inevitable development of artificial intelligence (AI) explain the eerie "Great Silence" we observe from the universe?

The "Great Filter" theory holds that other civilizations, potentially many, have existed throughout the history of the universe, but all disappeared before they had a chance to make contact with Earth. In 1998 the economist Robin Hanson argued that somewhere along the chain from the emergence of life to intelligence to advanced civilization there must be a step far rarer than the raw numbers suggest.

Michael Garrett is a radio astronomer at the University of Manchester and director of the Jodrell Bank Centre for Astrophysics, with extensive involvement in the search for extraterrestrial intelligence (SETI). While his research interests are eclectic, he is essentially a highly specialized, credentialed version of the people in TV series and movies who listen to the universe for signs of other cultures. His peer-reviewed paper, published in the journal of the International Academy of Astronautics, compares theories about artificial superintelligence with actual radio-astronomy observations.

Garrett explains in his paper that scientists are becoming increasingly concerned as time goes by without any sign of other intelligent life. "This 'Great Silence' presents something of a paradox when contrasted with other astronomical findings that suggest the universe is hospitable to the emergence of intelligent life," he writes. "The concept of the 'great filter' is often used – a universal barrier and insurmountable challenge that prevents the widespread emergence of intelligent life," he notes.

There are countless possible Great Filters, from climate-driven extinction to a devastating global pandemic. Any number of events could stop a civilization from becoming multiplanetary. For people who take Great Filter theories most seriously, putting people on Mars or the Moon is a way of reducing that risk. "The longer we stay alone on Earth, the more likely it is that a Great Filter event will wipe us out," the theory goes.

Today's artificial intelligence comes nowhere near human intelligence. But, Garrett writes, AI is already doing tasks that humans once thought computers could never do. If this path leads to so-called artificial general intelligence (AGI) – a key distinction, meaning an algorithm that can think and synthesize ideas in a genuinely human way, coupled with enormous computing power – we could really be in trouble. In this paper, Garrett follows a chain of hypothetical ideas to a possible conclusion: how long would it take a civilization to be wiped out by uncontrolled AI?

Unfortunately, in Garrett's scenario, it takes only 100-200 years. Coding and developing artificial intelligence is a narrowly focused task, fed and accelerated by data and processing power, he explains, compared with the messy, multidimensional task of space travel and colonization. "We see this split today with the influx of researchers into IT fields compared to the dearth in the life sciences. Every day on Twitter, loudmouth billionaires talk about how great and important it is to colonize Mars, but we still don't know how humans will even survive the trip without getting shredded by cosmic radiation. Pay no attention to that man behind the curtain," says the radio astronomer.

Garrett examines a number of specific hypothetical scenarios and makes sweeping assumptions. He assumes that life exists elsewhere in our Galaxy and that artificial intelligence and AGI are "natural evolutions" of these civilizations. He also relies on the Drake equation, a way of estimating the possible number of communicating planetary civilizations, which has several variables whose values we can only guess at.
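For reference, the Drake equation mentioned above has its standard textbook form (this is the general formulation, not anything specific to Garrett's paper):

```latex
% N: expected number of communicating civilizations in the Galaxy
% R_*: average rate of star formation; f_p: fraction of stars with planets
% n_e: habitable planets per planet-bearing star; f_l, f_i, f_c: fractions
% developing life, intelligence, and detectable technology; L: lifetime of
% the communicating phase -- the term Garrett's ~100-200 year estimate targets
N = R_{*} \cdot f_{p} \cdot n_{e} \cdot f_{l} \cdot f_{i} \cdot f_{c} \cdot L
```

Every factor except the last is, as the article notes, only loosely constrained by observation; Garrett's argument concerns the final factor, L.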

This chain of hypotheticals nevertheless leads to a firm conclusion: the need for strong, ongoing regulation of artificial intelligence. Garrett points out that there is resistance to placing artificial intelligence within a regulatory framework for fear of losing productivity.

In Garrett's model, civilizations have only a few hundred years in the age of artificial intelligence before they disappear from the map. Set against cosmic distances and the vast sweep of cosmic time, such tiny windows amount to almost nothing: the odds of two civilizations overlapping drop toward zero, which, he says, matches SETI's current success rate of zero percent. "Without practical regulation, there is every reason to believe that artificial intelligence could pose a significant threat to the future course of not only our own technical civilization but all technical civilizations," Garrett writes.
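Garrett's point about vanishing odds can be illustrated by plugging a short communicating lifetime L into the Drake equation. A minimal sketch, in which every parameter value is an illustrative assumption and not a figure from the paper:

```python
# Drake equation: N = R* * f_p * n_e * f_l * f_i * f_c * L
# All numerical values below are illustrative guesses, not Garrett's numbers.

def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """Expected number of detectable communicating civilizations in the Galaxy."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

# Hypothetical values for everything except the lifetime L:
params = dict(R_star=1.0, f_p=1.0, n_e=0.2, f_l=0.1, f_i=0.1, f_c=0.1)

long_lived = drake(**params, L=1_000_000)  # a million-year communicating phase
short_lived = drake(**params, L=200)       # a ~200-year AI-limited window

print(f"L = 1,000,000 yr -> N = {long_lived:.2f}")
print(f"L = 200 yr       -> N = {short_lived:.2f}")
```

With these (invented) inputs, a million-year lifetime predicts hundreds of concurrent civilizations, while a 200-year window predicts far fewer than one, consistent with the article's point that a short L pushes the expected number of detections toward zero.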

With information from Popular Mechanics
