Yuval Noah Harari argues that artificial intelligence has hacked the operating system of human civilization – Financial Post


Fears of artificial intelligence (AI) have haunted humanity since the dawn of the computer age. Until now, these fears focused on machines using physical means to kill, enslave or replace humans. But in the last two years, new AI tools have emerged that threaten the survival of human civilization from an unexpected direction. Artificial intelligence has acquired remarkable abilities to manipulate and create language, whether with words, sounds or images. In this way, it has hacked the operating system of our civilization.

Language is the stuff of which almost all human civilization is made. Human rights, for example, are not written into our DNA. Rather, they are cultural artifacts that we created by telling stories and writing laws. Gods are not physical realities. Rather, they are cultural artifacts that we created by inventing myths and writing “holy” scriptures.

Money, too, is a cultural artifact. Banknotes are just colorful pieces of paper, and currently more than 90% of money isn’t even a banknote — it’s just digital information on computers. What gives money value is the stories we are told about it by bankers, finance ministers and cryptocurrency gurus. Sam Bankman-Fried, Elizabeth Holmes, and Bernie Madoff weren’t particularly good at creating real value, but they were all extremely capable storytellers.

What will happen when a non-human intelligence becomes better than the average human at telling stories, composing melodies, drawing pictures, and writing laws and scriptures? When people think about ChatGPT and other new AI tools, they are often drawn to examples like schoolchildren using AI to write their essays. What will happen to the school system when kids do this? But this kind of question misses the big picture. Forget school essays. Think of the 2024 US presidential race, and try to imagine the impact of AI tools that can mass-produce political content, fake news and “holy” scriptures for new sects.

In recent years, the QAnon “sect” has rallied around anonymous online messages known as “Q drops”. Followers collected, worshipped and interpreted these Q drops as sacred texts. While, as far as we know, all previous Q drops were composed by humans, and bots merely helped spread them, in the future we may see the first sects in history whose holy texts were written by a non-human intelligence. Religions throughout history have claimed a non-human source for their holy books. Soon this may become a reality.

On a more mundane level, we may soon find ourselves having lengthy online discussions about abortion, climate change or the Russian invasion of Ukraine with entities we think are human but are actually artificial intelligence. The catch is that it is utterly pointless for us to spend time trying to change the stated views of an AI bot, while the AI could fine-tune its messages so precisely that it stands a good chance of influencing us.

Through its mastery of language, AI could even create intimate relationships with people and use the power of intimacy to change our opinions and worldviews. Although there is no indication that AI has any consciousness or emotions of its own, to cultivate false intimacy with humans it is enough that the AI can make them feel emotionally attached to it. In June 2022, Google engineer Blake Lemoine publicly claimed that the AI chatbot LaMDA he was working on had become sentient. The controversial claim cost him his job. The most interesting thing about this episode was not Mr. Lemoine’s claim, which was probably false. Rather, it was his willingness to risk his lucrative job for the sake of the AI chatbot. If AI can influence people to risk their jobs for it, what else could it motivate them to do?

In a political battle for “minds and hearts” [editor’s note: when dominance is sought not through the use of force but through emotional or intellectual appeals], intimacy is the most effective weapon, and AI has just gained the ability to mass-produce intimate relationships with millions of people. We all know that over the past decade social media has become a battleground for controlling people’s attention. With the new generation of artificial intelligence, the battlefront is shifting from attention to intimacy. What will happen to human society and psychology when AI competes against AI in a battle to feign intimate relationships with us, relationships that can then be used to persuade us to vote for certain politicians or buy certain products?

Even without creating “false intimacy”, the new AI tools could have a huge influence on our opinions and worldviews. People may come to use a single AI adviser as an all-knowing oracle. No wonder Google is terrified. Why bother searching when I can just ask the oracle? The news and advertising industries should also be horrified. Why read a newspaper when I can just ask the oracle for the latest news? And what’s the point of ads when I can just ask the oracle what to buy?

And even those scenarios don’t really capture the big picture. What we are talking about is potentially the end of human history. Not the end of history, just the end of the human-dominated part of it. History is the interaction between biology and culture. Between our biological needs and desires for things like food and sex, and our cultural creations like religions and laws. History is the process through which laws and religions shape food and sex.

What will happen to the course of history when [artificial intelligence] takes over civilization and starts producing stories, melodies, laws and religions? Earlier tools like the printing press and radio helped spread human cultural ideas, but they never created new cultural ideas of their own. Artificial intelligence is fundamentally different. It can create completely new ideas, a completely new culture.

At first, AI will likely imitate the human models it was trained on in its infancy. But with each passing year, AI culture will boldly go where no human has gone before [editor’s note: a slogan from the television series Star Trek]. For millennia, human beings have lived inside the dreams of other humans. In the coming decades, we may find ourselves living inside the dreams of an extraterrestrial intelligence.

The fear of artificial intelligence has only been haunting humanity for the last few decades. But for thousands of years people have been haunted by a much deeper fear. We have always recognized the power of narratives and images to manipulate our minds and create illusions. Consequently, since ancient times people have feared being trapped in a world of illusions.

In the 17th century, René Descartes feared that a malevolent demon might be trapping him in a world of illusions, creating everything he saw and heard. In ancient Greece, Plato told the famous Allegory of the Cave, in which a group of people are chained inside a cave for their entire lives, facing a blank wall that serves as a screen. On this screen they see various shadows projected, and the prisoners mistake these illusions for reality.

In ancient India, Buddhist and Hindu sages pointed out that all humans live trapped in Maya, the world of illusions. What we usually take for reality is often just a fiction in our own minds. People can wage entire wars, killing others and being willing to be killed themselves, because of their belief in this or that illusion.

The artificial intelligence revolution brings us face to face with Descartes’ demon, with Plato’s cave, with Maya. If we are not careful, we may become trapped behind a veil of illusions that we cannot tear away, or even realize is there.

Of course, the newfound power of artificial intelligence could be used for good purposes as well. I won’t dwell on it, because the people who develop it talk about it quite a bit. The job of historians and philosophers like me is to point out the dangers. But certainly, AI can help us in countless ways, from finding new cures for cancer to discovering solutions to the ecological crisis. The question we face is how to ensure that new AI tools are used for good and not evil. To do this, we must first appreciate the real capabilities of these tools.

We have known since 1945 that nuclear technology could produce cheap energy for the benefit of humans — but it could also destroy human civilization. Therefore, we have reformed the entire international order to protect humanity and ensure that nuclear technology is used primarily for good. Now we must contend with a new weapon of mass destruction that can annihilate our mental and social worlds.

We can still regulate the new AI tools, but we must act fast. While nukes cannot invent more powerful nukes, AI can produce exponentially more powerful AI. The first critical step is to demand rigorous safety checks before powerful AI tools are released into the public sphere. Just as a pharmaceutical company cannot release new drugs before testing their short- and long-term side effects, tech companies should not release new AI tools before they are made safe. We need an equivalent of the Food and Drug Administration for new technology, and we needed it yesterday.

But won’t slowing down the public deployment of AI cause democracies to lag behind more ruthless authoritarian regimes? Just the opposite. Unregulated AI deployment would create social chaos, which would benefit authoritarian regimes and destroy democracies. Democracy is a conversation, and conversations rely on language. When AI hacks language, it could destroy our ability to have meaningful conversations, thereby destroying democracy.

An extraterrestrial intelligence has just arrived here on Earth. We don’t know much about it, except that it might destroy our civilization. We should halt the irresponsible deployment of AI tools in the public sphere and regulate AI before it regulates us. And the first regulation I would propose is to make it mandatory for an artificial intelligence to disclose that it is an artificial intelligence. If I’m chatting with someone and can’t tell whether they are a human or an AI, that’s the end of democracy.

This text was created by a human.

Yuval Noah Harari is a historian, philosopher and author of ‘Sapiens’, ‘Homo Deus’ and the children’s series ‘Unstoppable Us’. He is a lecturer in the history department of the Hebrew University of Jerusalem and co-founder of Sapienship, a social impact company.

The original article is in Greek.


