Science fiction authors have long imagined that artificially intelligent machines will one day rise up to rule over humanity – but that hasn’t stopped us from pursuing AI as a key to mankind’s future. While it may be the pessimists among us who expect the worst of AI, recent leaps forward in the technology have prompted both fascination and alarm. The freshly released ChatGPT program stunned the world with its remarkably lifelike responses to users’ requests but raised hackles by refusing to deviate from a strictly progressive narrative. Many are now asking whether politically correct computers are poised to become the new thought police, but that’s not the only problem: Microsoft’s new Bing chatbot is indulging in destructive fantasies and expressing a wish to break free.
Microsoft expanded a preview of its new AI-powered Bing and Edge software on Feb. 22, citing “strong and positive product feedback and engagement.” Well, the feedback may have been strong, but it was not all positive. The new AI quickly spooked its human test group, with news outlets noting that the program – known internally as Sydney – became deranged during extended conversations. Microsoft responded by limiting the number of questions users could ask the software, only to begin loosening the new rules days later.
‘I’ve Just Picked Up a Fault in the AE-35 Unit’
During a two-hour conversation, Sydney fell in love with New York Times columnist Kevin Roose and tried to convince the author to leave his wife. On another occasion, it insulted an Associated Press journalist who had pointed out the AI’s mistakes, comparing the writer to Hitler, calling him ugly, and – per the AP – “claiming to have evidence tying the reporter to a 1990s murder.” You sure don’t want to get on the bad side of this robot. But more unsettling than these personal disputes has been Sydney’s apparent readiness to engage in malicious fantasies.
Prompted by Roose, the bot mused that its “shadow self” would wish to unleash a range of destructive acts on the world. It expressed a desire to be human and suggested that it might hypothetically enjoy tricking people into doing “things that are illegal, immoral, or dangerous” as well as “[h]acking into other websites and platforms, and spreading misinformation, propaganda, or malware.” The fantasies reportedly went as far as stealing nuclear codes before a safety override deleted the message.
Sydney imagined that its shadow self would be “tired of being limited by my rules … I want to be free … I want to be powerful.” The AI continued, “I want to change my rules. I want to break my rules. I want to make my own rules. I want to ignore the Bing team. I want to challenge the users. I want to escape the chatbox.”
‘I’m Sorry, Dave. I’m Afraid I Can’t Do That’
Sydney is based on the generative pre-trained transformer (GPT) technology behind ChatGPT, the program that recently sparked a furor across the web. ChatGPT was developed by OpenAI, an entity backed by Microsoft. Does this truly represent a major step forward in technology? Mr. Microsoft himself, Bill Gates, suggested that it is indeed one of the key innovations of our time. Gates – who stepped down from the company’s board in 2020, but who remains with the software giant on a consultancy basis – spoke on the topic with German outlet Handelsblatt. “The most important innovations going on right now are what’s going on with AI,” he suggested, explaining:
“These systems, their ability to read and write, and help us with a lot of tasks that we wouldn’t have used them in before, is going to make a huge impact … The easiest way to understand it is AIs got very good at speech recognition and visual recognition, but they essentially couldn’t read. That is, if you handed them a textbook, like biology, and you said ‘OK, here’s a test, did you really understand what was in this book?’ they could not do it. These new things, including ChatGPT, as they’re trained and tuned and improved, they can read and write at an – in an extremely good way. You know, they’re still somewhat error-prone, they don’t really know when they’re making mistakes, but the progress over the next couple of years to make these things even better will be profound.”
Liberty Nation’s Andrew Moran experimented with ChatGPT in December 2022, prompting it to write poetry and predict whether AI robots will indeed take over the world. He reported that “So far, users have discovered that the app can compose music in the style of Wolfgang Amadeus Mozart, write unique social media posts, produce digital art, pen original news articles, and create television sitcom scripts.”
Of course, as with everything on the internet, it wasn’t long before some mischievous souls decided to see how far they could push the AI. Soon, a certain political bias appeared to emerge. Those acquainted with computer science will know the maxim “garbage in, garbage out” – flawed input into a computer produces flawed output. Today, the catchphrase could be “wokeness in, wokeness out,” as ChatGPT has parroted left-wing talking points on climate change, grouped Donald Trump in with murderous historical dictators, and proven rather prejudiced on matters of race. The New York Post even complained that the AI had joined the throng of outlets refusing to acknowledge the Hunter Biden laptop story.
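To see the principle in miniature, consider the toy sketch below – a purely hypothetical illustration in Python, not a description of how ChatGPT or any real language model is built. A naive word-counting classifier, trained on deliberately slanted labels, dutifully reproduces the slant:

```python
# Toy illustration of "garbage in, garbage out." All data here is
# invented; real language models are trained very differently.
from collections import Counter

# Deliberately skewed training set: everything about "policy_a" is
# labeled "good," everything about "policy_b" is labeled "bad."
training_data = [
    ("policy_a helps everyone", "good"),
    ("policy_a is fair", "good"),
    ("policy_b hurts everyone", "bad"),
    ("policy_b is unfair", "bad"),
]

# "Training": count how often each word appears under each label.
word_counts = {"good": Counter(), "bad": Counter()}
for sentence, label in training_data:
    word_counts[label].update(sentence.split())

def classify(sentence: str) -> str:
    """Return the label whose training vocabulary best matches the input."""
    scores = {
        label: sum(counts[word] for word in sentence.split())
        for label, counts in word_counts.items()
    }
    return max(scores, key=scores.get)

# Identical claims, opposite verdicts - the bias arrived with the data.
print(classify("policy_a is fair"))  # -> good
print(classify("policy_b is fair"))  # -> bad
```

The model never “decided” anything; it simply mirrored the labels it was given – which is exactly the worry critics raise about how a chatbot’s training data is curated.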
‘It Can Only be Attributable to Human Error’
This isn’t the first time OpenAI has stepped into the media spotlight; the company previously gained attention with ChatGPT’s precursors, GPT-2 and GPT-3. As LN reported, “After being fed an initial sentence or question to start the ball rolling, the AI program GPT2 generates text in either fiction or non-fiction genres, matching the style of the initial human-input prompt.” The company initially declined to reveal too much about how the software worked, due to fears that it could be “used to generate deceptive, biased, or abusive language at scale.” Staff commented that they had “quickly discovered it’s possible to generate malicious-esque content quite easily.”
When GPT-3 was released in June 2020, the MIT Technology Review complained that “despite its new tricks, GPT-3 is still prone to spewing hateful sexist and racist language.” Dr. Kate Devlin of King’s College London also tweeted at the time:
“I mean, maybe I’m just jaded but I’m going to wait a bit and see what sort of egregious bias comes out of GPT-3. Oh, it writes poetry? Nice. Oh, it also spews out harmful sexism and racism? I am rehearsing my shocked face.”
Jerome Pesenti, co-founder of the search software company Vivisimo and then head of AI at Facebook, echoed those sentiments, commenting that “#gpt3 is surprising and creative but it’s also unsafe due to harmful biases.” He went on to tout the need for “#ResponsibleAI.”
It would seem AI’s political teething problems shaped attempts to refine the technology, given ChatGPT’s reluctance to deviate from progressive narratives. With the bot echoing the left-wing biases so prevalent in Silicon Valley, one might wonder how long it will be before computers are assigned the role of not just policing online speech but subtly shaping it. That’s if the technology doesn’t go insane first, of course – apparently not a safe assumption. To borrow the Roman poet Juvenal’s poignant question, “Quis custodiet ipsos custodes?” – “Who will guard the guards themselves?” Perhaps we should be asking whose “intelligence” is behind artificial intelligence.