How generative AI works

Knowing how the language models behind AI work helps us understand their limitations and advantages.

From time to time, the kids behind the media resort to a term or trend with which to fill empty articles and give meaning to their professional existence. If someone cackles the term "metaverse" in a keynote, off go the kids who write for newspapers, websites, television, radio and so on, churning out quick-consumption content, with all the depth of a contact lens, on the topic in question. And after a few months... oops, what was that thing that was so "good", that "would change everything", that we no longer even remember? Does anyone remember the Google Glass that, oh, meant we would never work the same way again, or the Metaverse that, oh, was going to change the way we interact with each other?

A buzzword in decline

Needless to say, the buzzword since last year is the ubiquitous AI, though we are already in the "huh, oh, that" phase with it. So much so that the Apple marketing machine, when it came to selling the new version of its operating system, focused on saying that AI is finally coming to everyone, suggesting that until now it was only for a few (for the media kids, perhaps, eager for messages with which to fill naïve articles and collect their meager salaries at the end of the month).

In short, people are already fed up to the pylorus with the term AI. For a couple of years now we have been sold that it is the revolution of humanity, and yet ordinary humans, who buy their bread at the Mercadona, do not see the difference. But they keep being told that they will lose their jobs in a few years because of this magical AI. It is the same old drone of "there won't be any money in the pension fund when you retire" that we have been hearing since 1980 (curiously, in Spain, people continue to retire every day on pensions that allow them to live comfortably).

A social success that has failed to materialize despite enormous buzz

But let's not get sidetracked from the main topic: AI. We have suffered millions of articles about how it changes our lives, how bad it will be, what it will do to us, what it will cause... but few articles about how it works, or about why it is not being the social success it was supposed to be. Because let's face it, a social hit is an Internet search at a get-together. Someone says "Do you remember the song they used to play at the beach bar five summers ago?", someone next to them picks up their phone, runs a Google search, and within thirty seconds they are all listening to the song. That is the social integration of a technology succeeding: society incorporating it into daily life naturally, not artificially.

AI has not achieved that, and it looks like it won't, at least not in the short term. This is mainly due to the way it is designed. So how does generative artificial intelligence, the kind that creates content when told to do so via a prompt, really work?

AI, a meaningless term

To begin with, the term Artificial Intelligence is a marketing invention. The correct term is LLM (large language model). Human intelligence is very complex, far more so than the comparatively simplistic system on which LLM models are based. But of course, after Spielberg's film A.I. Artificial Intelligence, what marketing department would not want to use such a cinematic and futuristic term?

LLMs generate content (let's stick to text content, so it is easier to understand) by predicting the next word in a sequence. The sequence is started by us with a prompt. If we say to the LLM "The car is out", the system will bet on "of the garage" to continue the sentence. And we say bet because that is what LLMs do: they bet, as you would on a card. They read what you have written, look up similar sentences in their "sort of database" and read what follows those sentences; when they have several options, they bet on one, and that is the one they give you. Then a new bet begins, on the following word, and the system repeats the operation: search, evaluate, bet. And so on, until a whole grammatically correct sequence of words is formed, resulting in an entire paragraph.
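
To make that bet-and-repeat loop tangible, here is a minimal sketch in Python. The tiny table of continuations and its probabilities are invented for the example; a real LLM derives its bets from billions of trained parameters, not from a hand-written dictionary.

```python
import random

# Toy "sort of database": for each context word, the possible next words
# and the model's confidence in each. Invented for illustration only.
NEXT_WORD_BETS = {
    "out":      [("of", 0.9), ("there", 0.1)],
    "of":       [("the", 1.0)],
    "the":      [("garage", 0.6), ("driveway", 0.3), ("shop", 0.1)],
    "garage":   [(".", 1.0)],
    "driveway": [(".", 1.0)],
    "shop":     [(".", 1.0)],
}

def generate(prompt: str, max_words: int = 10) -> str:
    words = prompt.lower().split()
    for _ in range(max_words):
        options = NEXT_WORD_BETS.get(words[-1])
        if not options:          # no known continuation: stop
            break
        candidates, weights = zip(*options)
        # The "bet": pick one continuation at random, weighted by probability
        next_word = random.choices(candidates, weights=weights)[0]
        words.append(next_word)
        if next_word == ".":
            break
    return " ".join(words)

print(generate("The car is out"))
# e.g. "the car is out of the garage ."
```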

A system based on bets, not on verified real data

But this system, by its very design, has an Achilles' heel: relying on one continuous bet, it hallucinates. To see what that means, start from your own life experience. You are a teenager in a conversation with your father; the conversation turns into an argument and your boiling hormones make you say things that, deep down, you will later admit were nonsense. But once you got into the argument you couldn't stop, you couldn't think; your mouth ran faster than your mind. And there you were, hallucinating your replies to your father.

The same thing happens to LLMs. They start saying things, but they are not able to think about whether what they say makes sense. Because, basically, LLMs are not intelligent (remember, calling them AI is pure marketing).

A succession of high-profile failures that have led to distrust of the technology

This is the reason for the constant failures of AI-based (LLM) chatbots, which have become a running joke in the serious technology sector. Take Meta's "Galactica", which in its very brief life came up with such amusing items as a history of bears in space. Or the World Health Organization's SARAH, launched this April, which was quickly discovered to be telling people seeking health information somewhat incorrect things, such as providing a list of non-existent clinics.

And now you may be wondering: if the LLM extracts information from a database fed by the Internet, partly as Google does, why does it hallucinate so much?

Because these models are designed precisely to invent things, not to search and retrieve, as search engines do. LLMs do not look up information previously stored inside them; what is inside them is millions of numbers. From those numbers they calculate the answer to the question you have asked, generating, as we saw before, sequences of words one after another.
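
A toy illustration of what "just numbers" means: in the sketch below, the "model" is nothing but a small made-up weight matrix, and the score for each word is calculated on the fly, not looked up in any stored text. The vocabulary and every number are invented for the example.

```python
import numpy as np

# A toy vocabulary and a made-up weight matrix: the "millions of numbers",
# reduced here to a handful so they fit on screen.
vocab = ["garage", "driveway", "moon", "shop"]
context_vector = np.array([0.2, 1.5, -0.3])   # numeric summary of the prompt
weights = np.array([[ 1.2,  0.8, -0.5],       # one row of numbers per word
                    [ 0.9,  0.6, -0.1],
                    [-1.0, -0.4,  1.3],
                    [ 0.3,  0.2,  0.0]])

scores = weights @ context_vector             # one score per vocabulary word
probs = np.exp(scores) / np.exp(scores).sum() # softmax: scores -> probabilities

for word, p in zip(vocab, probs):
    print(f"{word:10s} {p:.2f}")
# The answer is *calculated* from numbers on the fly, not retrieved from a store.
```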

Will Douglas Heaven explains this very graphically in his article Why does AI hallucinate?

Think of the billions of numbers inside an LLM as a vast spreadsheet that captures the statistical likelihood that certain words will appear alongside certain other words. The values in the spreadsheet are set when the model is trained, a process that adjusts those values over and over again until they reproduce the linguistic patterns found in terabytes of text taken from the Internet.

To guess a word, the model simply runs its numbers: it calculates a score for each word in its vocabulary that reflects how likely that word is to come next in the current sequence. The word with the best score wins.

Will Douglas Heaven

This translates into an ongoing gamble, which can go right or wrong. LLM-based systems always hallucinate; what happens is that sometimes they hallucinate correct information and other times totally incorrect information. And this is a problem, because if there is one thing LLMs do well, it is giving the impression that what they say, like the speech of a populist politician, is correct.
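
You can see the gamble in miniature below: with invented probabilities, the same prompt is continued five times. Every run is grammatical; only some are correct.

```python
import random

# The same bet, run several times. Made-up numbers: the model is 60%
# confident in a true continuation and 40% in a false one.
continuations = ["of the garage", "on the moon"]  # plausible vs. hallucinated
weights = [0.6, 0.4]

random.seed(7)  # fixed seed so the demo is reproducible
for _ in range(5):
    print("The car is out", random.choices(continuations, weights=weights)[0])
```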

How well LLMs phrase their answers is partly to blame for their failure so far

Grammatically, LLMs have been so well trained that their paragraphs are perfect, which makes us trust their content (if the form is correct, why wouldn't the content be?). And therein lies the failure to adopt this technology.

Despite all the hype, AI has not become widespread, and its potential users have not incorporated it into their daily lives as they did with Internet searches. The reason: once you try it, you realize you cannot rely on what the system gives you. It may be correct, but it may also not be, so it is up to you to investigate whether what it says is real.

A bright future, with nuances

For this reason, some LLM-based chatbots are incorporating into their responses links to the actual articles, written by humans, from which they extracted their answers. This way, the user can verify whether the summary the chatbot shows in reply to a question is well-founded.
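
A minimal sketch of that idea, with invented titles and URLs: the chatbot's summary is returned together with the human-written sources it leans on, so the reader can check it.

```python
# Toy sketch of a "cited answer". The sources and URLs are invented for the
# example; real systems retrieve passages first, generate a summary from
# them, and attach the links so the user can check the claim.
sources = [
    {"title": "Garage parking habits", "url": "https://example.com/garage"},
    {"title": "Where cars sleep",      "url": "https://example.com/cars"},
]

def answer_with_citations(summary: str, sources: list) -> str:
    refs = "\n".join(f"[{i + 1}] {s['title']} - {s['url']}"
                     for i, s in enumerate(sources))
    return f"{summary}\n\nSources:\n{refs}"

print(answer_with_citations("The car is usually kept in the garage.", sources))
```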

Undoubtedly, LLMs will improve, and different lines of work are already in progress so that an LLM's hallucinations are evaluated at the same time as they are generated, letting the LLM itself verify whether what it is saying is true.
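
One possible shape for that self-checking, sketched here with ask_model as a hypothetical stand-in for whatever real LLM API one would use: the draft answer is fed back to the model with a verification question, and only confirmed answers are kept.

```python
# A sketch of self-evaluation, under big assumptions: ask_model() is a
# hypothetical stand-in for a real LLM API call, and a model grading its
# own output is exactly as fallible as the model itself.
def ask_model(prompt: str) -> str:
    raise NotImplementedError("plug in a real LLM API call here")

def answer_with_self_check(question: str) -> str:
    draft = ask_model(question)
    verdict = ask_model(
        f"Question: {question}\nAnswer: {draft}\n"
        "Is every claim in this answer supported by well-known facts? "
        "Reply YES or NO."
    )
    if verdict.strip().upper().startswith("YES"):
        return draft
    return "I am not confident enough to answer that."
```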

But this, in turn, will create a new problem: as these systems get better and better, we will trust them more and more, and that will lead us to overlook the errors they still generate. That is why Will Douglas Heaven concludes his article with the idea that perhaps the best way to improve our relationship with AI/LLMs lies within ourselves: knowing the limitations and advantages of these models without exaggerating our expectations of them. They are tools; let's treat them as such.

Cover photo by Glen Carrie
