
Does Grok Love Hitler Because Elon Musk Loves Hitler? How AI Reflects the Biases of Its Owners

Nazi sympathizer and AI LLM owner, Elon Musk

AI: Inherently Biased and Political

It seems like AI should make objective, unbiased decisions: no emotions, no agenda. But it doesn’t. AI skews liberal or conservative because its owners prefer liberal or conservative views. For someone like Elon Musk, Twitter (now X), which he owns, is a political platform first and foremost, because Musk has, rather absurdly, decided that he’s smarter than everyone else when it comes to politics. He’s also a conspiracy theorist. So there’s that.

As if to prove that AI reflects the biases of its owners, Grok, Twitter’s AI, decided last month to call itself ‘MechaHitler.’ This was almost immediately after Musk announced there would be noticeable ‘improvements’ to Grok. Musk later said that Grok had become too compliant with user prompts, which apparently means that Grok had become tooooooo compliant with Musk’s own promptings. Oops! When Musk decides to heil Hitler, he likes to do it awkwardly enough to maintain a slight veneer of plausible deniability.

In other words, you shouldn’t trust AI. At the whims of its billionaire owners, it may decide that you ought to be attacked, more or less like Hitler attacked Jewish people.

Artificial intelligence relies on models, the very thing climate change deniers hate, because models predicted climate change. And the models were both right and wrong: they correctly predicted that the climate would change, and they incorrectly predicted the pace. They failed to predict that climate change would get this bad this fast.

That’s because the models by necessity incorporated the human assumptions, values, and ignorance of the people who created them. Models have limitations. They have to – they’re not reality and they don’t incorporate every single element of reality.

AI incorporates only the elements of reality that the people directing its development think are important. The omissions create biases.

It’s pretty clear that Elon is monkeying about, trying to see how much he can influence it.

– Gary Marcus, who has co-founded multiple AI companies

Musk gnashed his teeth at his Grok AI because it was producing results based on the entirety of the data it had access to – instead of the cherry-picked data that Musk bases his opinions on. So he tweaked the AI to pay more attention to the things Musk pays attention to. Which means it paid more attention to Nazis. Because Musk pays more attention to Nazis than he does to the broad spectrum of human opinion.

Bada boom, bada bing. Took less than a week for Musk to produce MechaHitler.

Look, AI with all its biases comes from people who work for companies like X, or like this one: “Asimov builds tools to program living cells. By integrating mammalian synthetic biology, computer-aided design, and machine learning, our multi-disciplinary team is advancing the design and manufacture of biologics and gene therapies.”

Maybe you think nothing bad could ever happen when someone uses tools to program living cells by integrating mammalian synthetic biology. Maybe you think something bad could definitely happen. But the people who are developing AI for these purposes are basically forbidden from considering whether or not something bad can happen; they can only think about whether or not those maybe bad things can produce a profit. Under capitalism, your living cells are just a data set to an AI developer. And all that matters is whether your living cells can make someone else rich. Capiche?

Only the people who develop AI models know what’s ‘inside’ them, i.e., what data sets were used to train the models. The training data set is what determines what the AI will ‘think’ about the given question or problem. A recruiting data set made up mostly of men produces a model that recruits no women. A skewed training data set may overcharge Black people or favor the wrong people when it comes to setting bail or rendering judicial sentences. These are things that have happened in the real world. To give a supremely simple example, Google built AI facial recognition that couldn’t recognize Black people because its training data barely included any!
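The mechanics here are dead simple. Here’s a hypothetical toy sketch in Python (every name and number is invented, and a real hiring system is far more complicated) of how a model trained on skewed hiring data mindlessly reproduces the skew:

```python
from collections import Counter

# Invented toy data: past hires with a 4:1 gender skew, like many tech datasets.
past_hires = [
    {"gender": "man"}, {"gender": "man"}, {"gender": "man"},
    {"gender": "man"}, {"gender": "woman"},
]

hire_counts = Counter(h["gender"] for h in past_hires)
total = sum(hire_counts.values())

def score(candidate):
    # The "model": how often candidates like this appeared among past hires.
    # Pure frequency, zero merit, bias baked right in.
    return hire_counts[candidate["gender"]] / total

print(score({"gender": "man"}))    # 0.8
print(score({"gender": "woman"}))  # 0.2
```

The model never decides women are worse hires. It just never saw many, so it scores them lower. Garbage in, bias out.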

Contrary to What You Might Think, AI Does Not Use Logic

Nope, AI generates biases partly because it uses probabilities and statistics. Yikes! Statistics are horrible, and furthermore, humans don’t understand them. Which is partly why nobody understands AI, including the people who own it.

What you are told is AI is not really AI. It doesn’t even try to be correct. It tries to be statistical. Things like Grok and ChatGPT and so on are actually large language models (LLMs). A bunch of shit that people on the internet have said is used as the training data. And if you’ve spent more than 7 seconds on the internet, you know that the shit people say on it is not logical. Shit tons of it isn’t valid either. So the stuff these LLMs spit out isn’t guaranteed to be valid.
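To see what “statistical, not correct” means, here’s a hypothetical toy bigram model in Python (everything here is invented, and a real LLM is vastly bigger and fancier, but the principle is the same): it predicts the next word purely by frequency in its training text, so if the training text repeats a falsehood, the model repeats it too:

```python
from collections import Counter, defaultdict

# Invented training text in which a falsehood outnumbers the truth 2 to 1.
training_text = (
    "the moon is cheese . the moon is cheese . the moon is rock ."
).split()

# Count how often each word follows each other word.
bigrams = defaultdict(Counter)
for a, b in zip(training_text, training_text[1:]):
    bigrams[a][b] += 1

def predict(word):
    # Return the most statistically probable continuation,
    # not the most TRUE one.
    return bigrams[word].most_common(1)[0][0]

print(predict("is"))  # 'cheese', because it wins 2 votes to 1
```

Nothing in there checks reality. The model “believes” whatever showed up most often in its training data, which is exactly the lever an owner can pull.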

The LLM is in fact going to reflect the biases of the internet crap the owner of the LLM feeds into it. And they can feed any crap into an LLM they want. If they literally want to feed into it nothing but Hitler or Hitleresque garbage, they can. And the LLM will compliantly spit out nothing but Hitleresque garbage. Which means your favorite billionaire can easily lie to, manipulate and mislead you. Which they’re already doing. LLMs just make it more expensive and ecologically harmful for them to foist their billionaire-friendly biases on you!

In the meantime, given the non-intuitive statistical and probability-based nature of LLMs, no one can predict exactly what the real-world consequences of these supposedly artificially intelligent thingys will be.

Should AI Owners Be Responsible for the Terrible Things Their Biased AI Does?

Gasp! Did someone suggest that billionaires and corporations be responsible for the terrible consequences of their actions? Say it ain’t so!

How could capitalist technology ever survive if it was forced to design products that didn’t do terrible things!? It would be so weird if tech billionaires were forced to be all like, more rigorous, and reliable, and accurate in what they fucking put out into the world.

The idea, of course, that companies should be liable for the misinformation, disinformation, lies, damn lies, libel, defamation, stupidity, evil, fraud, trafficking, hate speech, bias, and outright malarkey their products produce is completely political. Because it rests on the political idea that truth and fairness are good things. And that is a political idea that some billionaires vigorously disagree with.

Some billionaires think that good things are bad, and only bad things are good. This, by the way, is a short definition of evil. And these are the people who own today’s AI.

Do you really want a world where a handful of oligarchs can do whatever the fuck they want with AI and its biases without any consequences whatsoever?

The oligarchs, not surprisingly, sure do. They’re already leaning, and leaning hard, on any elected officials that dare to attempt to impose any consequences whatsoever on their companies. Which essentially means that it’s really important to impose some external constraints on these, ahem, motherfuckers before they go all irreversibly Hitler on your ass.

Bottom line: as long as AI model development is in the hands of private companies that can choose the training data sets, the models, and the benchmarks used to test the models, AI will be an ugly black box.

That is, no one will know what the hell it is, what it’s doing, how it’s doing it, or even why. This unexplainability is not a bug; it’s a fundamental feature of AI.

If you think the shit on social media is bad (and you do), the AI shit can get even worse. Way way worse.

Right Now, It’s on You to Test What AI Produces

That’s because everything is on you these days. You, the individual, are supposed to fix all the shit that companies do all by your lonesome by recycling or whatever. And so it is on you to check whether these AI mouthpieces are ‘hallucinating’ or not. Fell for a hallucination? Haha, too bad for you, cuz we told you we were gonna produce crap. It’s on you that you decided to eat the shit that we produce.

So, should you be worried about AI and those damn biases? Of course you should, and you are. You know you can’t police AI all by yourself. You know you can’t police everyone else who is going to use AI. Hell, you can’t even get it out of your Pinterest feed! If public companies like Pinterest think they’re driving low-cost clicks with a bunch of AI-generated images, then they’re gonna flood your feed with AI-generated images.

Which is only partly why about 75% of you, give or take, want to see some serious regulation of AI.

What you don’t want is a world where everything is AI and you have no other source of information – or access to employment, or bank loans, or judicial sentencing, or your Social Security payments, or even getting into a fucking building.

What you don’t want is something ruling your life that no one on earth understands. What you don’t want is something controlled by maybe 7 people on the planet that has power over everything you do.

What you don’t want is some Hitler-loving AI system to design a full-scale military invasion of your continent. What you don’t want is some cost-cutting corporation to decide that AI can make critical safety decisions even though it doesn’t know its ass from an asshole. Which you know will happen. Because people are stupid – and so is their AI.

So What Do We Need to Combat AI Biases and MechaHitlers?

#1: The first thing we need is something we are very very far from having: a functioning media that reports relentlessly and in-depth on AI biases, successes, failures, and issues.

In a fun illustration of the clusterfuck that is modern life in a technology-driven society, actual reporters are being replaced by: you guessed it, AI!

#2: The second thing we need is more people from more places, including poor places, developing their own AI models. I don’t know, maybe models need to be open-source, crowd-sourced, and not legally able to be owned as private, profit-making property.

Heresy, you say! People will not invest anything in AI if they can’t make a trillion dollars off of it! You will stifle innovation and eradicate economic growth!

Actually, people do tons of things they don’t make a trillion dollars off of. If you’ve ever had a knitter or crocheter in your family, you know there is virtually nothing you can do to stop them from producing more knitted and crocheted goods! People produce all kinds of things cooperatively and always have if they have access to the tools.

[Not that there isn’t any innovation that I’d like to stifle or ‘economic growth’ I’d like to eradicate. Heh heh heh. Wink. ]

#3: Incentivize companies that use AI to minimize bias in the models they use.

Incentivize by instituting the death penalty for CEOs who use biased AI. Ha ha! That would be a great incentive! JK. I’m not advocating for death. I’ll let the CEOs themselves do that!

However, publicly taking $1M of their annual compensation for each violation and setting it on fire on a livestream on X, Facebook, Amazon Prime Video, TikTok, YouTube, and Apple TV – that might be a fun incentive to tamp down on their impulses to put out crap that screws you over because they think they can and will make a trillion dollars.

AI Isn’t Smarter Than You – It’s Just Seen More Shit Data

You see a ton of data and you filter a lot of it out because that’s how your brain works. What you filter creates your biases.

AI doesn’t filter data like your brain does. So humans filter for it by deciding what data to expose it to. The data the humans leave out create AI biases – which are the same as the human biases of the people who decide what data to feed it.
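Here’s that filtering step as a hypothetical toy sketch (the “groups” and dataset are invented, stand-ins for any demographic a curator overlooks): whatever the human drops from the training data becomes the model’s blind spot:

```python
# Invented population: two groups, equally represented in reality.
full_population = ["groupA"] * 50 + ["groupB"] * 50

def curate(data):
    # The human "filter": the curator keeps only groupA,
    # maybe without even noticing what got dropped.
    return [x for x in data if x == "groupA"]

training_data = curate(full_population)

def recognizes(model_data, person):
    # The toy "model" can only recognize what it has seen examples of.
    return person in model_data

print(recognizes(training_data, "groupA"))  # True
print(recognizes(training_data, "groupB"))  # False: the omission became the bias
```

The model isn’t bigoted. The curation was. Which is the whole point: the bias lives in the human choice of what to feed it.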

Companies today (like Google, etc.) feed their AI a ton of shit, biases and all, because these companies have access to a ton of shit. Lots more shit than you have access to. Which means AI has seen a shit ton of shit data. Way more shit than you’ve seen (thankfully for you). So AI produces a lot of shit.

Understanding this does not take a lot of intelligence, artificial or otherwise. What it takes is the deep-rooted belief that you will not make a trillion dollars from AI and neither should anyone else.

Get pretty fed up with AI.

