In the winter of 2023, there is much fuss over the advent of the Large Language Model (LLM). Some are heralding the capabilities being shown as the "dawn of Artificial Intelligence", this time fer sure, Rocky!

Without any slight intended to the work of the successive waves of AI researchers over the years, cynicism about this field's track record is justified. The public and the investors always seem to wind up disappointed after a few years.

Why is this? There's been good work done: techniques and software developed that improved the rest of the world in concrete, easy-to-define ways. LISP, for example, did not fulfill the hyperbolic promises made for it, but it has been a creative tool enabling much of our modern digital infrastructure.

How could that be a disappointment?

What is the goal of "AI research"? Even the people involved can't articulate exactly what they mean. The people working on "Full Self Driving" have their Levels of Vehicle Autonomy, which go up to "zero interaction is required."

Despite the continued failure to deliver that much, many people are fully willing, eager even, to believe any promise made for ever more capable "AI." What is it we're actually after?

haltist expressed it well:

The main assumption of techno-optimism is that a large enough computer can do anything people can do and it can do it better. The goal of techno-optimism is to create a mechanical god that will rule the planet and scaling LLMs is a stepping stone to that goal.

"A mechanical God" is exactly what we want.

We have trouble articulating what we expect from AI out of simple embarrassment. We know it's unrealistic to expect a machine to become all-knowing (or at least more knowledgeable than we are). We know it's unreasonable to expect a machine to be infallible (or at least to make no mistakes that we can perceive).

We know it's unreasonable to expect a machine to imagine a thing that does not exist. Having done so, how could it labor to make that thing exist in the real world? We're getting disturbingly close to that point when you count digital art.

The implicit promise of search engines was "this machine has read the entire Internet and will find the thing you want from it". That's what we wanted, no matter how unrealistic, and the search engines have delivered a product to meet that demand, one that is almost entirely unlike tea. But it's what we have.

LLMs specifically have upped the ante on that promise: they're supposed to have understood at least some of the meaning of what they read, and to be able to summarize and paraphrase it. "Hallucinations" are decried, but might they be a reasonable facsimile of imagination? If these things are intelligent, how can they go back and check those associations, remember their veracity, and assign them other values? Wild notions are so often worth exploring.

There's talk of "human reinforcement learning" layers: essentially imposed biases and blocks against "bad things". They've even got an acronym for it that I can't be faffed to look up.

The resemblance to raising biological minds here becomes inescapable, if you ask me. Puppies, human children, etc.: they explore, imagine, acquire feedback from real-world interactions, remember pain and pleasure, all those good things. After some fairly long period you have (hopefully) a functioning mind, suited to the body it is part of and the environment it lives in.

I don't see a lot of hope for the current approaches to "oracle AI" being taken, because they do not seem to account for such factors. Not to say they will be total failures, but I suspect that there will soon be widespread disappointment once again that "AI" has not delivered what people wanted.

The translator bots and chatbots that understand you're upset when you cuss at them, that's nice... but we wanted more.

I fear what we will be willing to accept as "Good enough".