Mickey Mouse with a Bloody Knife, The Benefits of Eating Crushed Glass, Improving John Lennon, and other Realistic and Dangerous Nonsense

Evan Mills, Ph.D.
18 min read · Nov 3, 2023


While much is said — for those who listen — about the existing and impending harms of artificial intelligence (AI), more is said about its charms. Yet, according to some, “the speed of AI development is outpacing risk assessment”.

While thought leaders from Bill Gates to Laurence Tribe caution about existential risks, others (summed up by Scientific American) express concerns about serious but relatively “minor” issues such as wrongful arrests, an expanding surveillance dragnet, defamation and deep-fake pornography, routine discrimination in housing, criminal justice and health care, as well as the spread of hate speech and misinformation in multiple languages, and wage theft.

While AI arguably made its debut in society in the 1950s, it took until Halloween 2023 for the US government to implement high-profile checks and balances. As reported in the New York Times, President Biden warned of “risks that their systems could aid countries or terrorists to make weapons of mass destruction. The order also seeks to lessen the dangers of ‘deep fakes’ that could swing elections or swindle consumers.” Vice President Kamala Harris added that “We have a moral, ethical and societal duty to make sure that A.I. is adopted and advanced in a way that protects the public from potential harm.” While AI can inflict harms in split seconds, even this meager step toward regulation has taken on the order of 70 years. Ars Technica reports that “it could take years before those protections are ultimately introduced, if ever”.

There is a common theme here: the potential harms exceed the potential benefits, and technologies are spreading far faster than governments can regulate them. What could possibly go wrong? Here are some examples of specific nasty or otherwise worrisome implementations I’ve noticed in my readings….

While it delights me to hear John Lennon’s salvaged voice, rendered in 2023 from a degraded 1978 demo cassette, one music critic finds the “froggy” result to be a disrespectfully anticlimactic “last song” to be published by the Beatles. One pundit (clearly not a random sample) ranked the song 211th out of the 214 songs recorded by the Beatles. I imagine Lennon himself would agree that the benefits of AI simply don’t outweigh its harms.

Articles:

:: Mickey with a bloody knife
:: Quality In, Garbage Out: The benefits of eating crushed glass
:: Color blindness
:: To kill at scale
:: The End of Anonymity
:: Loss of Privacy in the Name of Privacy
:: Racial Erasure
:: “Millions of workers are training AI models for pennies.”
:: Wait, that’s not my grandchild asking for help on the phone??
:: Death by AI
:: Rite Aid makes the wrong move
:: Scientists say AI doesn’t substitute for integrity or accountability
:: Running fighter-jet sorties while you sleep: what could go wrong?
:: Lack of imagination
:: Not very Swift, Donald
:: Heil, AI!
:: Dangerous hallucinations
:: Look before you speak
:: When real is fake
:: Deny. Defend. Depose.

:: Mickey with a bloody knife

Now, people with zero art skill (and zero judgment) can create images that can offend and harm, not to mention readily violate copyright rules, and ship them off to undeservedly large audiences … in seconds.

As of October 2023, any child with an Instagram or Facebook Messenger account and no skill whatsoever at art can create and widely share emoji stickers of Mickey or Elmo with a knife, or any other imaginable inappropriate image built on copyrighted material. An article in Ars Technica noted that “The generations shared on X [formerly Twitter] include Mickey Mouse holding a machine gun or a bloody knife, the flaming Twin Towers of the World Trade Center, the pope with a machine gun, Sesame Street’s Elmo brandishing a knife, Donald Trump as a crying baby [some generations are remarkably believable], Simpsons characters in skimpy underwear, Luigi with a gun, Canadian Prime Minister Justin Trudeau flashing his buttocks, and more.”

Filters set up to prevent such activity were quickly side-stepped. The article notes that “Generations like these have been possible in uncensored open source image models for over a year, but it’s notable that Meta publicly released a model that can create them without more strict safeguards in place through a feature integrated into flagship apps such as Instagram and Messenger.”

One observer accurately commented that “We really do live in the stupidest future imaginable.”

:: Quality In, Garbage Out: The benefits of eating crushed glass

Back in the day, there was an expression “Garbage In, Garbage Out”, meaning bad inputs (to a computer or whatever) will probably translate into useless outputs. Apparently, AI is bringing us an unsettling variant on the idea: Quality In, Garbage Out.

According to a Fall 2022 article in Ars Technica, a demo language model dubbed Galactica, which was trained on “48 million papers, textbooks and lecture notes, scientific websites, and encyclopedias” and designed to “store, combine and reason about scientific knowledge”, was skilled at creating what the author calls “realistic nonsense”. One user quickly employed the tool to author a scientific paper entitled “The benefits of eating crushed glass.”

It was taken offline in just a few days (good) after ethical criticism. The platform’s chief scientist snottily quipped: “It’s no longer possible to have some fun by casually misusing it. Happy?”

One must ask why society puts time and resources into a project like this before evaluating the ethical consequences. Today, a year later, I see that it is back online, but I will not venture beyond the homepage.

:: Color blindness

Efforts to turn AI to good backfired when an image generator was asked to reverse stereotypical race roles: it found it almost impossible to generate pictures of Black doctors tending to white kids.

A Fall 2023 article from NPR described efforts by a postdoctoral fellow at Johns Hopkins to flip the cultural stereotype of white saviors providing health services in the developing world. He tried more than 350 times, entering instructions like “Black African doctors providing care for white suffering children” into AI programs designed to create photo-realistic images. According to the article, “the AI program almost always depicted the children as Black. As for the doctors, he estimates that in 22 of over 350 images, they were white.” In some cases, cliché “African” clothing or adornments were applied to the white subjects.

Sometimes African animals like giraffes appeared alongside the doctors.

Adding insult to injury, in early 2024 Google (or was it “AI”?) overcompensated for these kinds of critiques such that, when asked to create images of 1943 German soldiers, its Gemini tool generated pictures of Nazis of many races. How woke can you get? Is this progress?

:: To kill at scale

Stanford University has noted that “A particularly visible danger is that AI can make it easier to build machines that can spy and even kill at scale.”

Oh, great idea. And you gotta love the self-validating acronym: Lethal Autonomous Weapon Systems (LAWS). As of 2020, thirteen countries had discussed the issue, but only one relatively minor player among them — Belgium — had passed (real) laws against it.

We always knew those cute robot dogs would soon take up arms. According to a May 2024 article in Ars Technica, it looks like that day has arrived. And, oh goodie, you can get one for $1,600 (not including the gun) … but don’t despair, you can get one with a flame-thrower included. What could possibly go wrong? At least China doesn’t have them; oh, wait.

Let’s just hope they also bring in the paper in the morning.

:: The End of Anonymity

Imagine if someone could surreptitiously photograph you and immediately know who you are.

According to a story on NPR, PimEyes is considered one of the most powerful publicly available facial recognition tools online. “[I]t scans a face in a photo and crawls dark corners of the internet to surface photos many people didn’t even know existed of themselves in the background of restaurants or attending a concert.” It doesn’t ID the person, but it provides links to websites (like Facebook) believed to have pictures of the person … seriously aiding identification. The tool can reportedly scan 900 million images across the web in one second. More than two years ago, the Washington Post described it as a “facial recognition website [that] can turn anyone into a cop — or a stalker.” A subsequent article in Ars Technica (October 2023) described how it seems to be used to track down children, and how multi-year efforts to block this use are only somewhat effective.

According to the NPR story, there are no federal laws in the US that govern this technology. What?

Fortunately, according to the CEO, concerns are overblown given that there are only “a few hundred instances of people misusing the service for things like stalking or searching for children.” What?

PimEyes’ (unenforced) “rules” state that users should only search for themselves or for people who consent. Ahem.

Even though it (used to) say “Don’t be evil,” Google developed something like this back around 2011 and decided it was too dangerous to release to the public.

Like almost any other AI technology that can do harm, it can also do good. PimEyes might help people who are blind, or help someone discover who else is posting their image. But as with most AI technologies, the harms seem to eclipse the benefits. Clearview AI already provides a similar service to law enforcement, and many governments now employ similar technology in public locations … but this version of the technology is available to anyone and everyone.

:: Loss of Privacy in the Name of Privacy

School is mum about female students’ faces grafted onto photos of naked women.

Ars Technica reported in November 2023 on wayward boys at a New Jersey high school using AI to splice female schoolmates’ pictures onto photos of nude women’s bodies and circulating them within the school, if not farther afield. Sadly, this is no surprise. According to reports from the previous summer, thousands of such images had already been created and shared online by others. As one report noted, “This ‘explosion’ of ‘disturbingly’ realistic images could help normalize child sexual exploitation, lure more children into harm’s way, and make it harder for law enforcement to find actual children being harmed, experts told the Post.”

As with so many AI infractions, it took officials days to find out about the New Jersey incident. Meanwhile, the damage had been done.

To “protect” the victims’ privacy, school officials refused to reveal (anonymized) information on how many students were involved, whether non-students had seen the images, or whether disciplinary action was taken.

Sure, this kind of misbehavior has been possible since scissors and glue were invented, but AI has made it vastly easier and vastly more realistic.

:: Racial Erasure

Generative AI is “a technology that has been shown to bake racist and sexist stereotypes into the imagery it creates.”

In a time that seems thankfully long gone, all models were Caucasian. Not so fast, in the digital world. In November 2023, The Guardian reported that a well-known fashion designer uploaded a modified photograph that made an early-career, 21-year-old Taiwanese model look like a white woman.

Adding insult to injury, the model had worked for free, ironically in exchange for the opportunity for visibility.

Also noted by The Guardian story, “[e]arlier this year, Levi’s announced that it would use computer-generated models on its website in an effort to include a diverse range of ethnicities. Critics of the decision said that if brands truly wished to enrich people of color, they should hire human models.”

:: “Millions of workers are training AI models for pennies.”

…So reads the headline of an article in Wired that should make anyone with a pulse think twice about AI.

Like so many things that are bad for us — think fossil fuels and nuclear power — AI needs subsidy to be viable.

It appears that millions of AI workers in the developing world are severely underpaid. According to the Wired article, a company called Appen is among those paying pennies per hour to people who perform basic tagging tasks to help “train” AI models. The article notes that “Appen’s clients have included Amazon, Facebook, Google, and Microsoft, and the company’s 1 million contributors are just a part of a vast, hidden industry.” These workers typically earn well below $1/hour (one interviewee reports $0.33/hour), and in many parts of the developing world they are also tied up “on hold” for many more hours, owing both to the need to wait and pounce on crowdsourced assignments and to chronic electrical-grid unreliability. One worker in the article reports earning less than $2/day as a result.

The industry actively seeks out these “cost-effective” workers. According to the article, “[t]his creates an industry of irregular labor with no face-to-face resolution for disputes if, say, a client deems their answers inaccurate or wages are withheld.”

A July 2024 piece in the LA Times notes that Amazon admits to this “artificial artificial intelligence”, in which the extraction of human labor is submerged. This includes so-called data annotators “working 10-hour days for less than $2 an hour performing repetitive, mind-numbing tasks with no opportunities for career progression.” As with so many other industries, absent globally consistent worker protections, exploitative AI jobs will move from country to country to keep ahead of local regulations.

:: Wait, that’s not my grandchild asking for help on the phone??

The bad guys are always one step ahead, now with VOICE theft.

My credit card company sent a warning this morning that I may get calls from someone impersonating a person I know, saying they’re in trouble and asking for money. This kind of scam has of course been around for years, but it has historically come via written emails (from people who stole my email address from somewhere or other). Those are often sufficiently “off” that I’ve never been tricked, though no doubt scores of people have been….

Well, now, scammers are using AI to use voice samples from loved ones to create false verbal distress messages. That’s much harder to sort out!

:: Death by AI

AI is “improving” warfare, enabling software-based targeting and killing of people who might be enemies, or might not.

An article from NPR in December 2023 described how the Israelis are now using AI to pick targets and deploy deadly force, with vastly reduced human “reality checks” or other interventions.

AI helped guide a whopping 22,000 strikes in Gaza in the preceding nine weeks, including more than 3,500 targeted locations in the final 13 days alone. About 18,000 Palestinians had been killed by that point, roughly 15 times the number killed in the Hamas attack that triggered the war. The dead reportedly include at least 63 journalists (as of December 14, 2023), the vast majority of them Palestinian, an unheard-of number in previous wars.

An update from The Guardian in early April 2024 said AI had thus far been used to identify 37,000 specific human targets, with the system soothingly dubbed “Lavender”.

Instructions to the algorithm allowed the killing of 15–20 civilians for each low-level human target. This permitted the use of inexpensive, non-precision “dumb” bombs. Said one intelligence officer: “You don’t want to waste expensive [accurate] bombs on unimportant people.” This sheds light on why the death toll in the war has been so enormous.

Those old fogies among you will remember the Orwellian-named “Peacekeeper” missile of the Cold War era. True to form, this new system is called “The Gospel”. One user stated: “I would invest 20 seconds for each target at this stage, and do dozens of them every day. I had zero added-value as a human, apart from being a stamp of approval.”

We know that many thousands of innocent civilians have been killed during this period of “precision” strikes. As Christmas approached, reports emerged from the Associated Press of a USAID contractor, his wife, and two young children killed in an Israeli strike.

According to one expert interviewed for the article, “ ‘AI algorithms are notoriously flawed with high error rates observed across applications that require precision, accuracy, and safety,’ warns Heidy Khlaaf, Engineering Director of AI Assurance at Trail of Bits, a technology security firm.”

We’ve heard about AI taking away civilian jobs. Well, here’s what that trend looks like on the battlefield. According to the article, “But the Gospel is much more efficient. He says a group of 20 officers might produce 50–100 targets in 300 days. By comparison, Mimran says he thinks the Gospel and its associated AI systems can suggest around 200 targets “within 10–12 days” — a rate that’s at least 50 times faster.”
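
As a rough sanity check on the quoted comparison, here is a back-of-the-envelope sketch using only the numbers in the quote above; the day and target counts are the article’s approximations, not precise data:

```python
# Back-of-the-envelope check of the quoted targeting rates (approximate figures from the article).
human_targets_per_day = 100 / 300   # 20 officers producing 50-100 targets in 300 days (upper bound)
ai_targets_per_day = 200 / 12       # ~200 targets in "10-12 days" (slower bound)
speedup = ai_targets_per_day / human_targets_per_day
print(f"~{speedup:.0f}x faster")    # ~50x, consistent with "at least 50 times faster"
```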

One analyst warns that “Given the lack of explainability of the decisions output by AI systems, and the sheer scale and complexity of these models, it then becomes impossible to trace decisions to specific design points that can hold any individual or military accountable.”

:: Rite Aid makes the wrong move

It took more than 10 years, but the Federal Trade Commission finally (December 2023) drew the line against dangerous use of AI-powered facial “recognition”.

According to the FTC, the retailer ignored a 2010 order, failing to implement reasonable procedures and prevent harm to consumers in its use of facial recognition technology in hundreds of stores.

Intended to detect shoplifters, the system — a photo database of tens of thousands of potential perpetrators — resulted in scores of people being wrongly accused and traumatized in the process. Surprise! The false positives fell disproportionately on non-white customers.

The ban is only temporary, based on the arguably wishful thinking that the massive store chain will see the error of its ways and choose to police itself.

:: Scientists say AI doesn’t substitute for integrity or accountability

Scientists promote curbing the use of AI in producing or reviewing research documents, recognizing that humans must remain accountable and their oversight, expertise, and critical thinking should not be replaced by software.

Some corners of the scientific community have their wits about them when it comes to AI. On January 22, 2024, the European Geosciences Union issued a statement to “ensure appropriate, mindful and ethical use of AI tools to prepare presentations and publications”:

  • Human oversight, expertise and critical thinking should be applied to all research output, including scientific publications and presentations
  • AI tools cannot be included as authors of publications, including presentations and abstracts, since they can neither be held accountable for the content they create, nor can they manage copyright and license agreements (cf guidelines of the Committee on Publication Ethics)
  • Authors of scientific contributions have to disclose the use of AI tools in an appropriate section of the manuscript or presentations (e.g. Methods, Acknowledgements)
  • To ensure rigorous and transparent peer review, AI tools should not be applied to assess the quality of manuscripts. Reviewers are personally responsible and accountable for the manuscript evaluation they present in uploaded reports.

:: Running fighter-jet sorties while you sleep: what could go wrong?

The Defense Advanced Research Projects Agency (DARPA) is training fighter jets to dogfight autonomously. Oh joy.

According to Ars Technica’s reporting in April 2023, more than 21 flights had been flown by late the previous year. Human babysitters were on board but did not control the plane. The tests included “flying against a human-piloted F-16.”

:: Lack of imagination

In July 2024, Paul Martin was vexed. He simply asked two leading generative AI tools — DALL-E and Google Gemini — to draw him a picture of a car with square tires. As he mentioned, a kindergartener could do it. He surmised that this was simply because there were not enough similar images on the internet for the AI tools to plagiarize. Can something be intelligent without imagination?

:: Not very Swift, Donald

In one of the more egregious misuses of generative AI, the Trump campaign promoted a faked endorsement from Taylor Swift for its 2024 presidential bid. Tens of millions of people saw this misinformation, and it no doubt influenced votes. At least Swift quickly responded by trump-eting her true political choice (not Trump) to her followers. How do the stipulated benefits of AI counterbalance misuses such as this?

:: Heil, AI!

A September 2024 article in the Washington Post describes efforts to revive and repackage Hitler’s voice (literally) and message using AI. What could possibly go wrong?

“Extremists are using artificial intelligence to reanimate Adolf Hitler online for a new generation, recasting the Nazi German leader who orchestrated the Holocaust as a “misunderstood” figure….

“…extremists brag that the AI-manipulated speeches [rendered in English] offer an engaging and effortless way to repackage Hitler’s ideas to radicalize young people.”

“Creating [one] video required only a few-second sample of Hitler’s speech taken from YouTube. Without AI, the spoofing would have demanded advanced programming capabilities.”

“Some seeking to spread the practice of making Hitler videos have hosted online trainings.”

“In a report published Friday, ISD researchers found that content glorifying, excusing or translating Hitler’s speeches into English has racked up some 25 million views across X, TikTok and Instagram since Aug. 13 [about 5 weeks].”

“It enables a new kind of emotional engagement that may be much more seductive to a new generation ….

“…pro-Hitler content in its dataset reached the largest audiences on X [formerly known as Twitter]”

:: Dangerous Hallucinations

New tech means new lingo, or at least new meanings for old lingo. AI can, apparently, “hallucinate”. What this word really means is screw up.

A September 2024 New York Times review of Apple’s forthcoming AI features for the iPhone finds that it reasonably summarizes a 1,200-word article about the dangers of mercury in tuna. However ….

“Unfortunately, in its summary, Apple Intelligence recommended that people consume albacore, one of the species listed in the article as having the highest levels of mercury.”

Where are the guardrails? For every “bug” we catch, how many will we miss?

:: Look before you speak

An October 2024 piece in The Washington Post has a headline that kinda speaks for itself.

“Companies are pumping AI features into work products across the board. Most recently, Salesforce announced an AI offering called Agentforce, which allows customers to build AI-powered virtual agents that help with sales and customer service. Microsoft has been ramping up the capabilities of its AI Copilot across its suite of work products, while Google has been doing the same with Gemini. Even workplace chat tool Slack has gotten in on the game, adding AI features that can summarize conversations, search for topics and create daily recaps. But AI can’t read the room like humans can, and many users don’t stop to check important settings or consider what could happen when automated tools access so much of their work lives.”

Examples given included an AI “assistant” that took it upon itself to “auto-share” with all participants in a tech company’s high-stakes meeting with potential investors a confidential transcript of the investors’ conversation after the others had left — revealing some nefarious intentions. The deal died a quick death. In another example, an AI app included statements that Zoom meeting participants made during periods when they had muted themselves to talk confidentially with co-workers.

Some AI bots take automated, random screenshots, texts, and videos during teleconferences. Zoom’s analogous settings default to “on”, furnishing that information to hosts.

With assistants like that, who needs corporate espionage? Presumably this kind of thing can (maybe) be avoided by getting the settings right. And that’s always easy, eh? ;)

:: When real is fake

Just when you thought the problem was fake images being assumed true, along come true images that people assume are fake.

In November 2024, Guardian writer John Naughton shared an “unbelievable” photo of cars piled up following biblical floods in Spain. He noted that some on social media “called bullshit,” claiming that the photo was not to be believed because it had surely been created by AI. A reasonable concern, but, in this case, it was real. Will AI bring people not only to doubt the veracity of dubious claims but also to doubt the veracity of reality?

:: Deny. Defend. Depose.

Is artificial intelligence being used to create artificial limits on healthcare?

In December 2024, historian Heather Cox Richardson commented on the role of AI in claims against health insurance giant UnitedHealthcare. She stated that “[t]he lawsuit alleges that UnitedHealthcare uses artificial intelligence to deny claims from Medicare Advantage policyholders. The lawsuit claims that the company knowingly uses an algorithm that makes errors 90% of the time because it also knows that only about 0.2% of policy holders will appeal the decision to deny their claims. Last month the Senate Permanent Subcommittee on Investigations hammered UnitedHealth for dramatic increases in their denial rates for post-acute care between 2019 and 2022 as it switched to AI authorizations.”
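
To see why such a business model could persist, here is a hypothetical back-of-the-envelope sketch using only the two percentages quoted above; the number of denials is purely illustrative, not actual claims data:

```python
# Hypothetical illustration using the lawsuit's figures: a 90% error rate on denials
# and a 0.2% appeal rate. The denial count below is made up for illustration only.
denials = 10_000                      # illustrative number of denied claims
error_rate = 0.90                     # share of denials alleged to be erroneous
appeal_rate = 0.002                   # share of denials that ever get appealed

erroneous = denials * error_rate      # 9,000 denials issued in error
appealed = denials * appeal_rate      # only 20 denials ever challenged

# Even if every appealed erroneous denial were overturned, nearly all errors stand.
standing_errors = erroneous - appealed * error_rate
print(f"Erroneous denials: {erroneous:.0f}, appeals filed: {appealed:.0f}, "
      f"erroneous denials left standing: {standing_errors:.0f}")
```

Under these assumptions, roughly 8,982 of 9,000 erroneous denials would never be challenged, which is the economic logic the lawsuit alleges.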


Written by Evan Mills, Ph.D.

Energy & environment scientist, with 40 years experience developing and advancing climate change solutions. http://evanmills.lbl.gov
