Showing posts with label Yuval Noah Harari.

Saturday, September 28, 2024

Pulling Back the Silicon Curtain; The New York Times, September 10, 2024

Dennis Duncan, The New York Times; Pulling Back the Silicon Curtain

Review of NEXUS: A Brief History of Information Networks From the Stone Age to AI, by Yuval Noah Harari

"In a nutshell, Harari’s thesis is that the difference between democracies and dictatorships lies in how they handle information...

The meat of “Nexus” is essentially an extended policy brief on A.I.: What are its risks, and what can be done? (We don’t hear much about the potential benefits because, as Harari points out, “the entrepreneurs leading the A.I. revolution already bombard the public with enough rosy predictions about them.”) It has taken too long to get here, but once we arrive Harari offers a useful, well-informed primer.

The threats A.I. poses are not the ones that filmmakers visualize: Kubrick’s HAL trapping us in the airlock; a fascist RoboCop marching down the sidewalk. They are more insidious, harder to see coming, but potentially existential. They include the catastrophic polarizing of discourse when social media algorithms designed to monopolize our attention feed us extreme, hateful material. Or the outsourcing of human judgment — legal, financial or military decision-making — to an A.I. whose complexity becomes impenetrable to our own understanding.

Echoing Churchill, Harari warns of a “Silicon Curtain” descending between us and the algorithms we have created, shutting us out of our own conversations — how we want to act, or interact, or govern ourselves...

“When the tech giants set their hearts on designing better algorithms,” writes Harari, “they can usually do it.”...

Parts of “Nexus” are wise and bold. They remind us that democratic societies still have the facilities to prevent A.I.’s most dangerous excesses, and that it must not be left to tech companies and their billionaire owners to regulate themselves."
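
The review's warning about attention-maximizing algorithms describes a concrete mechanism, which a toy sketch can make explicit. The code below is purely illustrative (the scoring model and posts are invented, not any platform's actual system): when a feed is ranked solely by predicted engagement, and outrage reliably drives engagement, the most inflammatory items rise to the top without anyone intending that outcome.

```python
# Toy illustration of engagement-only ranking (hypothetical, not any real
# platform's code). Assumption: more extreme items tend to provoke more
# clicks and reactions, so a ranker that sorts purely by predicted
# engagement drifts toward extremes.

from dataclasses import dataclass

@dataclass
class Post:
    text: str
    extremity: float  # 0.0 = measured, 1.0 = maximally inflammatory

def predicted_engagement(post: Post) -> float:
    # Stand-in engagement model: outrage drives attention, so extremity
    # dominates the score. Real models are learned, not hand-set like this.
    baseline = 0.2
    return baseline + 0.8 * post.extremity

def rank_feed(posts: list[Post]) -> list[Post]:
    # Nothing in the objective penalizes hate or falsehood; the algorithm
    # is "just" maximizing attention, which is exactly the point at issue.
    return sorted(posts, key=predicted_engagement, reverse=True)

feed = rank_feed([
    Post("Nuanced policy analysis", extremity=0.1),
    Post("THEY are destroying everything!", extremity=0.9),
    Post("Local community update", extremity=0.2),
])
print([p.text for p in feed])  # the inflammatory post ranks first
```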

Wednesday, September 4, 2024

Yuval Noah Harari: What Happens When the Bots Compete for Your Love?; The New York Times, September 4, 2024

Yuval Noah Harari, The New York Times; What Happens When the Bots Compete for Your Love? (Mr. Harari is a historian and the author of the forthcoming book “Nexus: A Brief History of Information Networks From the Stone Age to AI,” from which this essay is adapted.)

"Democracy is a conversation. Its function and survival depend on the available information technology...

Moreover, while not all of us will consciously choose to enter a relationship with an A.I., we might find ourselves conducting online discussions about climate change or abortion rights with entities that we think are humans but are actually bots. When we engage in a political debate with a bot impersonating a human, we lose twice. First, it is pointless for us to waste time in trying to change the opinions of a propaganda bot, which is just not open to persuasion. Second, the more we talk with the bot, the more we disclose about ourselves, making it easier for the bot to hone its arguments and sway our views.

Information technology has always been a double-edged sword. The invention of writing spread knowledge, but it also led to the formation of centralized authoritarian empires. After Gutenberg introduced print to Europe, the first best sellers were inflammatory religious tracts and witch-hunting manuals. As for the telegraph and radio, they made possible the rise not only of modern democracy but also of modern totalitarianism.

Faced with a new generation of bots that can masquerade as humans and mass-produce intimacy, democracies should protect themselves by banning counterfeit humans — for example, social media bots that pretend to be human users. Before the rise of A.I., it was impossible to create fake humans, so nobody bothered to outlaw doing so. Soon the world will be flooded with fake humans.

A.I.s are welcome to join many conversations — in the classroom, the clinic and elsewhere — provided they identify themselves as A.I.s. But if a bot pretends to be human, it should be banned. If tech giants and libertarians complain that such measures violate freedom of speech, they should be reminded that freedom of speech is a human right that should be reserved for humans, not bots."
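
Harari's proposed rule reduces to a simple policy test: automation is permitted only when it is disclosed. A minimal sketch of that test is below; the field names are hypothetical and no real platform's schema is implied.

```python
# Minimal sketch of the rule Harari proposes: A.I. participants are welcome
# if they identify themselves; bots posing as humans are banned. All field
# names here are invented for illustration.

from dataclasses import dataclass

@dataclass
class Account:
    handle: str
    is_automated: bool          # the account is in fact a bot
    discloses_automation: bool  # the account labels itself as a bot

def moderation_decision(acct: Account) -> str:
    # Only one combination is prohibited: automation without disclosure.
    if acct.is_automated and not acct.discloses_automation:
        return "ban: counterfeit human"
    return "allow"

print(moderation_decision(Account("tutor_ai", True, True)))        # allow
print(moderation_decision(Account("totally_human", True, False)))  # ban: counterfeit human
print(moderation_decision(Account("alice", False, False)))         # allow
```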

Sunday, August 25, 2024

‘Never summon a power you can’t control’: Yuval Noah Harari on how AI could threaten democracy and divide the world; The Guardian, August 24, 2024

The Guardian; ‘Never summon a power you can’t control’: Yuval Noah Harari on how AI could threaten democracy and divide the world

"Would having even more information make things better – or worse? We will soon find out. Numerous corporations and governments are in a race to develop the most powerful information technology in history – AI. Some leading entrepreneurs, such as the American investor Marc Andreessen, believe that AI will finally solve all of humanity’s problems. On 6 June 2023, Andreessen published an essay titled Why AI Will Save the World, peppered with bold statements such as: “I am here to bring the good news: AI will not destroy the world, and in fact may save it.” He concluded: “The development and proliferation of AI – far from a risk that we should fear – is a moral obligation that we have to ourselves, to our children, and to our future.”

Others are more sceptical. Not only philosophers and social scientists but also many leading AI experts and entrepreneurs such as Yoshua Bengio, Geoffrey Hinton, Sam Altman, Elon Musk and Mustafa Suleyman have warned that AI could destroy our civilisation. In a 2023 survey of 2,778 AI researchers, more than a third gave at least a 10% chance of advanced AI leading to outcomes as bad as human extinction. Last year, close to 30 governments – including those of China, the US and the UK – signed the Bletchley declaration on AI, which acknowledged that “there is potential for serious, even catastrophic, harm, either deliberate or unintentional, stemming from the most significant capabilities of these AI models”. By using such apocalyptic terms, experts and governments have no wish to conjure a Hollywood image of rebellious robots running in the streets and shooting people. Such a scenario is unlikely, and it merely distracts people from the real dangers.

AI is an unprecedented threat to humanity because it is the first technology in history that can make decisions and create new ideas by itself. All previous human inventions have empowered humans, because no matter how powerful the new tool was, the decisions about its usage remained in our hands. Nuclear bombs do not themselves decide whom to kill, nor can they improve themselves or invent even more powerful bombs. In contrast, autonomous drones can decide by themselves who to kill, and AIs can create novel bomb designs, unprecedented military strategies and better AIs. AI isn’t a tool – it’s an agent. The biggest threat of AI is that we are summoning to Earth countless new powerful agents that are potentially more intelligent and imaginative than us, and that we don’t fully understand or control."

Sunday, August 5, 2018

Interview: Yuval Noah Harari: ‘The idea of free information is extremely dangerous’; The Guardian, August 5, 2018

Andrew Anthony, The Guardian; Interview: Yuval Noah Harari: ‘The idea of free information is extremely dangerous’

"Why is liberalism under particular threat from big data?
Liberalism is based on the assumption that you have privileged access to your own inner world of feelings and thoughts and choices, and nobody outside you can really understand you. This is why your feelings are the highest authority in your life and also in politics and economics – the voter knows best, the customer is always right. Even though neuroscience shows us that there is no such thing as free will, in practical terms it made sense because nobody could understand and manipulate your innermost feelings. But now the merger of biotech and infotech in neuroscience and the ability to gather enormous amounts of data on each individual and process them effectively means we are very close to the point where an external system can understand your feelings better than you. We’ve already seen a glimpse of it in the last epidemic of fake news.

There’s always been fake news but what’s different this time is that you can tailor the story to particular individuals, because you know the prejudice of this particular individual. The more people believe in free will, that their feelings represent some mystical spiritual capacity, the easier it is to manipulate them, because they won’t think that their feelings are being produced and manipulated by some external system...

You say if you want good information, pay good money for it. The Silicon Valley adage is information wants to be free, and to some extent the online newspaper industry has followed that. Is that wise?
The idea of free information is extremely dangerous when it comes to the news industry. If there’s so much free information out there, how do you get people’s attention? This becomes the real commodity. At present there is an incentive in order to get your attention – and then sell it to advertisers and politicians and so forth – to create more and more sensational stories, irrespective of truth or relevance. Some of the fake news comes from manipulation by Russian hackers but much of it is simply because of the wrong incentive structure. There is no penalty for creating a sensational story that is not true. We’re willing to pay for high quality food and clothes and cars, so why not high quality information?"

Sunday, March 19, 2017

Yuval Noah Harari: ‘Homo sapiens as we know them will disappear in a century or so’; Guardian, March 19, 2017

Andrew Anthony, Guardian; Yuval Noah Harari: ‘Homo sapiens as we know them will disappear in a century or so’

"Is being compassionate and empathetic a major flaw in human evolution? Is psychopathy the future for our species?
Dominic Currie, reader

No, I don’t think so. First of all, if it is, then it’s going to be quite a terrible future. But even if we leave aside the moral aspect and just look at it from a practical aspect, then human power comes from cooperation, and psychopaths are not very good at cooperation. You need empathy and compassion, you need the ability to understand and to sympathise with other people in order to cooperate with them effectively. So even if we leave aside all moral issues, still I don’t think that empathy is bad for us or that psychopaths are the future of humankind."