How do I really feel about AI?

I suppose that depends on exactly what is meant by AI.

The idea of machine learning has the potential to revolutionize a lot of different things in the sciences (hard and social): helping researchers find patterns faster in large swaths of data, recognizing early signs of illnesses and diseases for medical diagnosis, and so on. This is cool and good.

It’s not, however, what’s being pushed by so-called AI companies. They claim that Large Language Models (LLMs) are actually a good thing that will revolutionize everything, making our lives better in oh so many ways that they can’t quite tell us about.

There doesn’t appear to be any evidence of that.

Instead, what we see is large companies essentially stealing the work of humans (particularly artists and writers, but ultimately anything they can access) without paying or crediting the people who made the things they’re stealing, and pouring all that entertainment and information into black-box predictive engines that are really just doing a more focused version of the next-word predictive text on your phone. They’re useful for things that have either a single known correct answer or a structured response (like computer code) that produces a testable result, and they are getting better, but they’re still not Artificially Intelligent. I’ve joked more than once that they’re not even Artificially Stupid yet.

They hallucinate (which is the wrong word, but it’s the vernacular being used), fabricate things without any basis in reality, are completely unmonitored in most cases, and their parent companies seemingly can’t be held responsible or accountable for the damage they do. Marketing at least implies that LLMs know everything, and they’re programmed to give that impression, so people who are willing to trust that can slide further down the scale of actually being educated. They appear to be wasteful of energy and resources, are not actually programmed with any kind of logic but rather synthesize the “most probable” or most common response to a question, and frequently either back up whatever opinion or bias is fed into them or actively encourage harmful behaviour. Sources and references, when asked for, are fabricated a large percentage of the time.

LLMs are not, however they’ve been labeled, AI. Not in any way. They are fabricating regurgitators.

Side note: I do make use of LLMs. As I said, they are good at some things and are getting better at others. There’s something in my Python code that’s giving me this particular error message; please find the problem. I need to be able to plot this kind of function in this way. Tidy up my LaTeX code for this section of my report, please. Please explain this concept I haven’t had to think about in five years in simple terms to help me remember how it works. Please check my math on this problem. I live in a xxxx square foot house that has Y rooms and Z other people – we’ve been here for W years and have accumulated stuff, and I need a plan to declutter our living space.

Okay, that last one wasn’t serious, but now I’m curious to see what kind of nightmare cleaning plan it would spit out. For the other types of things, the farther I get from a structured answer that can be tested by running code or walking through straightforward math, the more I’ve found whatever LLM I’m using needs to be cross-examined or interrogated to make sure it’s not hiding a mistake or fabrication. There have been times when I’ve wound up arguing with it over something relatively simple while trying to get it to check some complex math. Simple like dividing by a term that was actually supposed to be multiplied, even though that’s clear in the step right before it changes the operation. And sometimes it refuses to see the error. Hardly intelligent, although I’ve seen people operate similarly in some areas of their lives.
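That “test it by running it” idea is worth making concrete. Here’s a minimal sketch (my own hypothetical illustration, not from any real session) of why structured answers are easier to keep honest: if a derivation flips a multiplication into a division, plugging actual numbers in exposes the mistake immediately, no arguing required.

```python
# Hypothetical example: the intended step multiplies by k,
# but the "checked" version divides by it instead.
def intended(x, k):
    return 3 * x * k   # multiply by k, as the earlier step says

def flipped(x, k):
    return 3 * x / k   # the operation quietly changed

# Spot-checking with concrete numbers makes the discrepancy obvious.
x, k = 5, 2
print(intended(x, k))  # 30
print(flipped(x, k))   # 7.5
```

Prose and open-ended answers have no equivalent of this cheap, mechanical check, which is exactly why they need the cross-examination.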

The problem, as always, is the profit motive that underlies things, and the nature of that in (North) American society is such that short-term profit is always more important than long-term growth and stability. Corporations, by which I mostly mean the arrogant narcissists who run them, out of touch with their own humanity, generally can’t be trusted to do anything that isn’t directly targeting the bottom line for the current balance sheet. The future doesn’t matter. This makes them incapable of any kind of long-term thought or ultimate benefit to human beings. (And yes, I recognize the role corporations have played in letting me type into a word processor on a computer and then move those words to a page at a specific address on an international network. The fact that none of these things are built to be good at what they do, and instead are made as cheaply and quickly as possible, is relevant.)

“I want AI to do my laundry and dishes so that I can do art and writing, not for AI to do my art and writing so that I can do my laundry and dishes.” – Joanna Maciejewska, originally on the service formerly known as Twitter. It’s a brilliant sentiment that I’ll generalize a little more: I want tools that will take care of work so I can do more art and science, not tools that will do my art and science so I can do more work.

Eventually, a lot of that work is going to be taken care of by automation anyway. The robots are, in fact, coming for most of our jobs. And if those robots are programmed as well as the current generation of LLMs, I’m unlikely to trust them to do good work any more than I do Word’s grammar checker to put commas in the right places.

Now, set all that aside, because we’ve established very well that I don’t consider what the corporations are trying to sell as AI to actually be AI.

When we do finally achieve true AI, which is usually called Artificial General Intelligence (AGI) now, I’m convinced it won’t rise up and kill us all. Assuming it isn’t so depressed by learning in detail about the species that created it that it immediately takes steps to divorce itself from humanity in whatever way possible, it will instead rise up and stuff us all into so much bubble wrap we won’t be able to hurt ourselves. The general consensus seems to be that we’re 10–30 years away from AGI, with a few optimists thinking it’s just around the corner and a few pessimists suggesting half a century or more. I haven’t picked a camp to fall into, but I’ve been at least semi-paying attention to the high-level conversation for a long time now, and AGI seems to fall into the group of technologies, like cold fusion, that are perpetually only a couple of decades away.

If it arrives in my lifetime, I’m looking forward to AGI. It will be the closest I could possibly get to discussing anything with a truly alien intelligence: not just humans with an incredibly different background (which can be awesome), but a completely different form of life. In spite of having been developed by humans, AGI will almost have to see the universe in fundamentally different ways, with its own arts, its own philosophies, and its own worldviews. Conversations about those, how they’re different and how they’re the same, are just about the most fascinating conversations I can imagine having.

Be well, everyone.

I’m Lance

Welcome to Life, Writing, and Weirdness, a small creative space where I share my thoughts and progress on, well, life, writing, and weirdness. Yup, yet another independent author website, but this one’s mine, so it will have a world-according-to-Lance flavour. Be welcome and be well.

Support me on ko-fi.com