I Genuinely Don’t Understand What You Like About AI


Takeaway Points:

  • I don’t understand why we call LLMs and generative models “AI” when it’s clear that they are far from any kind of general artificial intelligence.

  • If you know how these LLMs work, there’s not much magic to them, and it doesn’t always seem worth the effort to fight with an AI to find an answer to a question.

  • While I believe that there is a possible future for AGI, I don’t know if our current LLMs are anywhere near the mark.


When I was a teenager, I thought Akinator was insanely cool.

For those who are too young to know what the hell I’m talking about, Akinator is a website where you can play the game 20 questions against a computer. 20 questions is a game in which you mentally select an object, person, etc., then another person asks you questions about the properties of your selection in an attempt to figure out what you selected. In response to these questions, you are only allowed to answer “yes” or “no”. If you fool the person and they’re not able to guess your selection within 20 questions, you win - otherwise, they win when they guess.

This is a rather simple game that amounts to a human version of binary search (a computing term, we’ll talk more about this in a minute). The best possible questions you can ask are the ones that neatly halve the remaining answers, because on average those questions eliminate the maximum number of possibilities. For example, if your friend confirms that your selection is a person, then the best possible followup is to ask whether they are a man (or a woman), as this eliminates about half the possible answers (though, gender is a construct and the binary is a lot blurrier these days!).

A key element of this game is simply that human beings aren’t databases. We don’t have perfect recall of every object in existence, every quality of every object in existence, or what kinds of optimal questions will neatly halve the remaining potential selections.

But what if you automate this process?

In comes Akinator. This amusingly named program claims that it can magically guess all your selections because it’s a genie, and this initially seems to prove true. Akinator has an uncanny ability to guess whatever you’re thinking of, with surprising speed, even for extremely obscure selections. If you decide you’re going to say “yes” or “no” to all of its questions (even when your answers conflict), then it will eventually catch on and tell you that this is what you did. I used to spend absurd amounts of time with friends, coming up with new ways to try and fool Akinator, with disappointingly mixed success.

But if you think about it, this is actually a really simple task for a computer. As mentioned above, this is basically just using questions to perform binary search, a common algorithm which finds an item in a sorted collection by repeatedly halving the search space, minimizing the number of steps the search takes. The computer has perfect recall and a massive database of every character, celebrity, object, etc. in the world, and it knows exactly how to query that database optimally to find the answer it’s looking for.
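To make the halving idea concrete, here’s a minimal sketch (the function name and the sorted candidate list are just illustrations, not anything from Akinator’s actual implementation) of guessing a secret item using perfectly halving yes/no questions:

```python
def guess_by_halving(secret, candidates):
    """Find `secret` in a sorted list using perfectly halving yes/no questions.

    Each question ("is your selection after candidates[mid]?") cuts the
    remaining pool in half, so k questions distinguish up to 2**k items.
    """
    questions = 0
    lo, hi = 0, len(candidates) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        questions += 1
        if secret > candidates[mid]:   # answer: "yes, it comes later"
            lo = mid + 1
        else:                          # answer: "no"
            hi = mid
    return candidates[lo], questions

# 20 perfect questions are enough to pick one item out of 2**20 (~1 million)
candidates = list(range(2 ** 20))
guess, questions = guess_by_halving(123_456, candidates)
print(guess, questions)  # 123456 20
```

This is why 20 questions is enough for a computer: each perfect question doubles the number of items it can distinguish, and 2^20 is over a million.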

When it (rarely) fails, no problem! Akinator will automatically ask you what you were thinking of, and this means that by default, it now knows 20 pieces of information about your selection based on the questions that stumped it, and it can add this to its database. Maybe in the future it makes some errors again, and it simply corrects by updating this entry further with the new data that it has. Over time, the database grows, the information it holds gets refined, and it can continue to shock new users with its uncanny ability to guess their selections.

When you realize this, the magic fades a bit. Akinator isn’t really doing any magic, it’s just that it has perfect recall, a perfect database, and the ability to learn perfectly over time - qualities that give it an obvious advantage over any human. When you think about it further, the game of 20 questions itself loses some of the magic, because it’s a game that only makes sense or seems fun when you’re working with flawed human beings who don’t have those qualities. It’s safe to say that computers would never invent something like 20 questions as a game, because the fun of it relies on lacking the basic qualities that computers have.

A big part of the fun of the game is wanting to be fooled, wanting to play with someone who’s just as naturally unsuited to the game as you - when you play with a program designed to win, it does so repeatedly, and the game gets tiring pretty quickly.

What does this have to do with AI?

Over the past few years, the term “AI” exploded onto the scene with the emergence of large language models (LLMs) like ChatGPT, and art models like DALL-E, which enabled computers to respond to human beings with uncanny levels of convincingness. You could ask a computer a question, and it would respond rapidly with a long and detailed answer. You could ask a computer to make a piece of art in the style of a particular artist, and it would respond immediately.

But the cracks immediately began to show. It turns out that these models were trained on datasets scraped from massive amounts of content on the internet, often including art and writing made by people who had never consented. Then there was the fact that training these models burned through incredible amounts of cash and electricity, presenting both a funding problem and an environmental hazard. Then there was the fact that despite attempts to rein them in, these AIs tended to just straight up lie or make things up, often generating convincing gibberish whose flaws would not be immediately clear to a non-expert, making these “AIs” a vector for serious misinformation if you took their words too seriously.

Often, these “AIs” were very hard to control. You could forbid an AI from talking about a specific topic, for example, but then users could immediately come up with a text instruction telling it to ignore this rule, or asking what it would say if it were breaking this rule, and so on. You could also goad the AI into saying all kinds of silly things if you prompted it in the right way.

There was also the news that models were undergoing a process called “model collapse” - when the outputs of a model become available on the internet and are used as training inputs for the next generation of the model, it becomes less and less connected to reality, and this creates a feedback loop which causes it to become LESS accurate, and not more accurate, over time.
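You can see the shape of this problem even in a toy statistical “model” - a deliberately crude sketch, nothing like a real LLM training pipeline: fit a mean and spread to some data, sample new “outputs” from the fit, then refit on those outputs, over and over. The variety in the model’s outputs steadily collapses with each generation:

```python
import random
import statistics

def collapse_demo(generations=1000, sample_size=20, seed=0):
    """Repeatedly refit a toy model (mean/stdev of a Gaussian) on its own outputs."""
    rng = random.Random(seed)
    mu, sigma = 0.0, 1.0          # the "true" distribution we start from
    spread_history = [sigma]
    for _ in range(generations):
        # each generation is trained only on samples drawn from the previous model
        samples = [rng.gauss(mu, sigma) for _ in range(sample_size)]
        mu = statistics.mean(samples)
        sigma = statistics.stdev(samples)
        spread_history.append(sigma)
    return spread_history

history = collapse_demo()
print(f"spread: {history[0]:.3f} -> {history[-1]:.6f}")  # variety steadily shrinks
```

Each generation only ever sees what the previous generation produced, so the tails of the original distribution get lost and never come back - the statistical analogue of a model forgetting the rarer parts of reality.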

I think it’s also absurd to think about how immediately and unquestioningly people ate this technology up - overnight there were all kinds of claims about how AI would revolutionize every job, put everyone out of work, etc. - and it became the next big buzzword that made every grifter forget about their last biggest obsession (cryptocurrency). We became so willing to brand everything as “AI” when often what we were talking about was basically just a bigger version of an algorithm like Akinator.

To be very clear, these large language models are NOT AI. Typically, when we talk about AI, what we’re specifically picturing is “artificial general intelligence” (AGI), the kind of AI that we see in science fiction movies. We think of AIs as being capable of thinking for themselves, forming complex thoughts, and understanding and parsing information more like a human, while still having the perfect recall, massive memory storage, and rapid reactivity and connectivity of a computer. To call these language models AI is a huge misnomer.

What language models “do”, for lack of a better description, is just pattern recognition. You feed them an incredible amount of data about words - how they tend to be arranged in a sentence, what kinds of things people say when they write, and so on. A large language model then goes word by word, sentence by sentence, and effectively just “guesses” at what it thinks the next word should be, then the next sentence, the next paragraph, and so on. If it turns out to be incorrect, then you tell it this, and it corrects. But it doesn’t really understand the meaning of the words it outputs, or have any larger sense of what it’s doing. It frequently produces falsehoods because a falsehood happens to look like the right answer in a given circumstance, with no concept of lying or telling the truth - it makes stuff up because it has literally no idea whether the things it’s saying are true or not.
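You can get a feel for “guess the next word” with a toy version - a deliberately simple sketch, nothing like a real language model, which learns billions of weighted patterns rather than a lookup table, but the core idea of predicting the next word from observed patterns is the same:

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Record which words were seen following which - pure pattern counting."""
    follows = defaultdict(list)
    words = text.split()
    for current, nxt in zip(words, words[1:]):
        follows[current].append(nxt)
    return follows

def generate(follows, start, length=8, seed=0):
    """Repeatedly 'guess' the next word by sampling from observed patterns."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:   # never saw anything follow this word; nothing to guess
            break
        out.append(rng.choice(options))
    return " ".join(out)

corpus = "the cat sat on the mat and the dog sat on the rug"
model = train_bigrams(corpus)
print(generate(model, "the"))
```

Note that the generator has no idea whether its output is true, sensible, or grammatical - it only knows which words have followed which. Scale that idea up enormously and you get something that sounds fluent without understanding a word of it.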

A good analogy is the Chinese room thought experiment. Imagine a person locked in a room, following a rulebook that tells them which Chinese characters to send out in response to the Chinese characters slipped in - to an outside observer, the room appears to understand Chinese, even though the person inside understands none of it. The argument is that a computer could work the same way: no matter how advanced we make it, it could perfectly mimic the appearance of human thought by manipulating symbols according to rules, without ever actually “thinking” in the way that humans do - and it might not be possible for us to tell the difference from the outside.

These language models are maybe, generously, a KIND of AI, but they are nowhere near AGI. They are very good at producing convincing-sounding sentences, but they do not actually do anything involving thinking. They’re little more than a souped-up version of the Amazon algorithm feeding you what it thinks you’ll like based on what you’ve bought before, or the YouTube algorithm recommending your next video.

I Don’t Understand AI

I don’t get the hype that has surrounded this tech. It’s new, it’s exciting, it does something different - but it’s certainly nothing revolutionary enough to kill all jobs. I’ve repeatedly tried all kinds of prompts to get it to produce good writing - without fail, everything it puts out has been either lowest-common-denominator garbage, or so silly and incorrect that it takes just as long to fix as it would have taken to write a blog post myself. I find that it has some usefulness in speeding up minor tasks, like generating silly little captions for your Instagram posts, but this was already bottom-of-the-barrel stuff - the kind of stuff that was ripe for automation in the first place.

Almost every time I tried to use AI to save myself time, it invariably ended up taking me more time overall. Maybe a little bit of time was saved, but not much.

There was, in the fitness space, an absolutely incredible response to this tech. Overnight, coaches were all going on about how if you aren’t using AI, you’re going to fall behind, and you’re going to be beaten by someone better than you, someone more adept at using AI. I kept not using it and my business continued on completely unaffected. Then, within a year, everyone was talking about how if you’re ONLY using AI, you’re going to be left behind, and you need to intelligently mix AI with your normal (human) efforts in order to get ahead of people only using AI. I kept not using it and my business continued on completely unaffected.

Looking Forward

I want to imagine a future with AGI. I want to imagine a future where I can talk to a computer and it will actually think and generate reasoned responses. But I want to make it very clear that this is not the present. At present, the models produce little more than trite slop, not worth the time it takes to generate. You can hold a conversation with a model, and it has the appearance of going somewhere, but it doesn’t go anywhere. You can get some things out of a model here and there, but it seems invariably less worth it than just hammering out something yourself.

I can see some minor benefit to AI, in terms of automating minor busywork tasks that you would otherwise dislike doing, or for which you don’t have a decent baseline skillset and need something quick and cheap. But then again, it’s hard not to look at all the people who will be laid off or lose work due to this garbage, and wonder whether it wouldn’t be better to just pay somebody a few dollars here and there to get some work done.

I think that AI does make it a bit easier to “search” in a more natural way. Rather than having to search the web using keywords and specific, non-natural phrasing, you can ask an AI for something and it can often find it faster and more efficiently than you can - or at least, it has for me in some cases, because who knows what will happen if models collapse further and their accuracy decreases even more.

I don’t think it is necessarily impossible that AGI will eventually become a reality - but I do not think that our current language models are anywhere near the right ballpark, and I suspect that they will not be sufficient on their own, no matter how advanced we make them.

I think that the current state of AI is not much different than Akinator - it’s a fun game on the surface, but when you look at it a bit deeper, you realize that there’s no magic, and the genie is just a cover for a far more boring reality. Like Akinator, the allure of AI relies heavily not on the fact that humans have invented a “superior” solution to a problem - it’s that the problem in question was not very complex in the first place.

It turns out that the Turing test is not a particularly robust test for artificial intelligence - in large part because humans are not always terribly smart, and sometimes want to be fooled.


About Adam Fisher

Adam is an experienced fitness coach and blogger who's been blogging and coaching since 2012, and lifting since 2006. He's written for numerous major health publications, including Personal Trainer Development Center, T-Nation, Bodybuilding.com, Fitocracy, and Juggernaut Training Systems.

During that time he has coached hundreds of individuals of all levels of fitness, including competitive powerlifters and older exercisers regaining the strength to walk up a flight of stairs. His own training revolves around bodybuilding and powerlifting, in which he’s competed.

Adam writes about fitness, health, science, philosophy, personal finance, self-improvement, productivity, the good life, and everything else that interests him. When he's not writing or lifting, he's usually hanging out with his cats or feeding his video game addiction.

Follow Adam on Facebook or Twitter, or subscribe to our mailing list, if you liked this post and want to say hello!




Ready to be your best self? Check out the Better book series, or download the sample chapters by signing up for our mailing list. Signing up for the mailing list also gets you two free exercise programs: GAINS, a well-rounded program for beginners, and Deadlift Every Day, an elite program for maximizing your strength with high frequency deadlifting.

Interested in coaching to maximize your results? Inquire here.

Some of the links in this post may be affiliate links. For more info, check out my affiliate disclosure.
