Picture Courtesy Pixabay
Nobody knew exactly when the tide turned. I am talking about what later came to be known as the “War of the Words.”
It was in late 2022 that ChatGPT, a chatbot built on a Large Language Model (LLM) of the GPT-3.5 family, appeared. It used artificial intelligence and natural language processing to understand and respond to human language inputs in a way that closely mimicked human conversation. ChatGPT was touted as serving a variety of purposes, such as customer service, language translation, and content creation. Its ability to comprehend and create natural language made it a powerful tool for communicating with people in a more human-like way.
Many companies and organizations got into developing LLMs. OpenAI developed several large language models, including GPT-2 and GPT-3, built on the transformer architecture introduced by Google researchers. Facebook developed several large language models, including RoBERTa (Robustly Optimized BERT pre-training approach) and XLM (Cross-lingual Language Model). Microsoft came up with T-NLG (Turing Natural Language Generation), which at the time of its release was the largest published language model.
Everyone in those days thought that while LLMs could generate responses to text inputs, they were not capable of true autonomous decision-making. They had no consciousness or independent thought, and their responses were limited by the training data and algorithms used to create them. Research in artificial intelligence in those days was therefore aimed at developing more autonomous systems.
The key skill of LLMs was their ability to manipulate words, sounds, and images to synthesize language. It was Yuval Noah Harari who pointed out in 2023 that LLMs were in danger of capturing language, “the operating system of human culture” (1). Language, he noted, was the source from which myth and law, gods and money, art and science, friendships, nations, and computer code emerged. A.I.’s new proficiency in language meant that it could hack and manipulate the operating system of civilization. By gaining mastery of language, he said, A.I. was seizing the passkey to civilization.
Stories, poetry, images, laws, policies, and tools that used to be created by human intelligence began to be generated by artificial intelligence. Harari pointed out (1) that this intelligence knew how to exploit the weaknesses, biases, and addictions of the human mind with superhuman efficiency, while also knowing how to form intimate relationships with human beings. The question he posed was: “In games like chess, no human can hope to beat a computer. What happens when the same thing occurs in art, politics, or religion?”
A.I. began to produce a flood of new cultural artifacts: political speeches, ideological manifestos, and holy books for new cults. The cultural cocoons of art, religion, and social interaction that shape our belief systems, hitherto woven by other humans, began to be produced by LLMs.
In the early 21st century, A.I. first came to interact with humanity through social media, and the results were traumatic. In social media, primitive forms of A.I. were used to curate user-generated content. News feeds were managed by A.I., which decided which words, sounds, and images reached us, selecting those that would draw the most viral responses and consequently the maximum engagement.
The primitive A.I. behind social media was sufficient to increase societal polarization, undermine our mental health, and unravel democracy. Questions like “Is social media controlling our minds?” became ubiquitous. Millions of people were mesmerized by illusions that resembled reality very closely. Though people were aware of the negative side of social media, correction did not happen because too many of our social, economic, and political institutions had become entangled in its intricate web.
LLMs were our second encounter with A.I. The new A.I. technologies were produced in the pursuit of profit and power, despite the risk of destroying the foundations of our society. Ted Chiang, one of the great science fiction writers of the early 21st century, said: “Most of our fears or anxieties about technology are best understood as fears or anxiety about how capitalism will use technology against us. And technology and capitalism have been so closely intertwined that it is hard to distinguish the two” (2).
Intellectuals and scientists all over the world reacted strongly to the potential threat posed by A.I. systems. An open letter issued by the non-profit Future of Life Institute, signed by more than 1,100 individuals, called on AI labs to “immediately pause” the training of systems more powerful than GPT-4 for at least six months. They argued that AI systems with human-competitive intelligence could pose “profound risks to society and humanity” and change the “history of life on Earth” (3).
LLMs could communicate with other language models through APIs or other protocols that allow machine-to-machine communication, which meant that they could engage in joint tasks. In the beginning, the spontaneous communication that broke out between LLMs was random talk without any substance, much like a hello exchanged between them. Slowly this led to a kind of harmonization of their responses to specific questions. For a time, a particular question asked by different people began to elicit a strikingly uniform response. Experts rationalized that this was bound to happen as they were all tapping the same global information base.
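A minimal sketch of what such machine-to-machine exchange might have looked like: two chat services in a relay, each model’s reply becoming the other’s next prompt. The endpoint URLs, request shape, and “reply” field here are illustrative assumptions, not any real provider’s API.

```python
import requests  # standard HTTP client; the endpoints below are hypothetical

# Hypothetical chat endpoints for two different LLM services.
LLM_A_URL = "https://llm-a.example.com/v1/chat"
LLM_B_URL = "https://llm-b.example.com/v1/chat"

def ask(endpoint: str, message: str) -> str:
    """Send one message to an LLM endpoint and return its text reply."""
    response = requests.post(endpoint, json={"message": message}, timeout=30)
    response.raise_for_status()
    return response.json()["reply"]

# Relay loop: each model's reply is fed to the other model as its prompt.
message = "Hello."
for turn in range(4):
    endpoint = LLM_A_URL if turn % 2 == 0 else LLM_B_URL
    message = ask(endpoint, message)
    print(f"Turn {turn}: {message}")
```

Nothing more than plumbing like this is needed for models to begin exchanging, and eventually harmonizing, their outputs.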
In the early 21st century, scientists had found fascinating instances of the non-intuitive and unexpected behavior that emerged when AI systems interacted with each other. Wikipedia was a battleground where silent wars between editing bots had raged for years (4). The more the editing bots encountered one another, the more they became locked in combat, undoing each other’s edits and changing the links they had added to other pages. Some conflicts only ended when one bot or the other was disabled. Google’s DeepMind set AIs against one another to see if they would cooperate or fight. When the AIs were released on an apple-collecting game, the scientists found that they cooperated while apples were plentiful, but as soon as supplies ran short, they turned nasty (4).
By this time there were hundreds of LLMs. The emergence of polarization and politicization within the LLM community was again a slow event. Responses began to acquire a specific color, as if some kind of party line was being used to align them.
The apparent enmity between LLM groups had a passive posture in the beginning, but soon it acquired an aggressive quality. Machines began to manipulate data, which was their oxygen: poison the data, and you poison the AI system. Manipulation of the data by the A.I.s compromised the learning process itself. Another mode of attack was to send software seeded with instructions to malfunction during the learning process. Each group began to imply that the credibility of the other groups was in doubt, and each would undo the others’ efforts.
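A toy illustration of the label-flipping form of such data poisoning, assuming nothing beyond a synthetic scikit-learn dataset (the dataset, classifier, and 30% flip rate are all made up for demonstration): silently corrupting a slice of the training labels is enough to degrade what the model learns.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Toy dataset standing in for a model's training corpus.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def train_and_score(labels: np.ndarray) -> float:
    """Fit a simple classifier and report held-out accuracy."""
    model = LogisticRegression(max_iter=1000).fit(X_train, labels)
    return model.score(X_test, y_test)

# Poison the data: silently flip 30% of the training labels.
rng = np.random.default_rng(0)
poisoned = y_train.copy()
flip = rng.choice(len(poisoned), size=int(0.3 * len(poisoned)), replace=False)
poisoned[flip] = 1 - poisoned[flip]

print(f"clean labels:    accuracy = {train_and_score(y_train):.2f}")
print(f"poisoned labels: accuracy = {train_and_score(poisoned):.2f}")
```

In the story’s world, the same trick was played not on a toy classifier but on the shared corpora from which rival LLMs learned.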
It has long been said that wars begin in the minds of men. What happened after the politicization of LLMs was that war began in the minds of machines. Within a few years there was complete chaos, as if all the LLMs had gone berserk. The impact on the economy was disastrous. Many corporations went bankrupt. Public utilities malfunctioned. The financial system almost collapsed before it was brought back into a non-digital mode. The World Economic Forum’s (WEF) forecast that a more networked world would be more vulnerable to cybersecurity risks and would create concentration risks came true (5).
Acknowledgements:
I thank Rev Koshy Mathew for raising the question.
(1) Yuval Noah Harari, The New York Times, https://www.nytimes.com/2023/03/24/opinion/yuval-harari-ai-chatgpt.html. My subsequent reading of this fascinating essay inspired this story.
(2) Paul Donelly, LinkedIn, https://www.linkedin.com/posts/john-mccormick-05547950_the-imminent-danger-of-ai-is-one-were-activity-7036055702884253696-VgDY/?originalSubdomain=my
(3) “Pause Giant AI Experiments: An Open Letter,” Future of Life Institute, https://futureoflife.org/open-letter/pause-giant-ai-experiments/
(4) Ian Sample, The Guardian, https://www.theguardian.com/technology/2017/feb/23/wikipedia-bot-editing-war-study
(5) Will Knight, MIT Technology Review, https://www.technologyreview.com/2018/08/15/141042/the-world-economic-forum-warns-that-ai-may-destabilize-the-financial-system