Oh, somehow I missed half the debate here.
You guys should check out The Culture series, which is probably the best example of a borderline omnipotent yet benevolent AI in a mostly utopian setting. The Children of Time trilogy has all kinds of intelligent animals and AIs coexisting with humans. A Fire Upon the Deep has hyperintelligent tree-like beings that outsource their short-term memory to computer hardware, while all kinds of AIs are doing all kinds of things around them. (The main bad-guy AI is more like a force of nature.) Obviously, Asimov played with the concept of friendly AI all the time, and mainstream franchises like Star Trek and Star Wars contain friendly AIs too, with only the occasional catastrophe.
I guess the AI isn't the main concept or character in these stories, so maybe it flies under the radar, but still. Even The Matrix, at least if we go by The Animatrix, shows the machines as peaceful until they had to defend themselves against humans.
What I find amusing is how right sci-fi was with some details, and how wrong with others. We mostly expected that AI would struggle with human emotions, while in reality, recognising concepts like emotional tone, irony, sarcasm and humour was among the first things LLMs learned, and the diffusion models used for image and video generation understand very well what humans find interesting.
Well, I both agree and disagree. I think rather than just LLMs, you're referring to the transformer architecture of neural networks, which encompasses LLMs, diffusion models, vision models and lots of other things.
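For anyone curious what that shared transformer core actually is: the common building block across all those model types is scaled dot-product attention. Here's a purely illustrative sketch with toy hand-written vectors (nothing here comes from a real model):

```python
import math

def softmax(xs):
    # subtract the max for numerical stability
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(queries, keys, values):
    """Scaled dot-product attention over plain Python lists of vectors."""
    d = len(keys[0])
    out = []
    for q in queries:
        # similarity of this query to every key, scaled by sqrt(dimension)
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        # output is the weighted mix of the value vectors
        out.append([sum(w * v[i] for w, v in zip(weights, values))
                    for i in range(len(values[0]))])
    return out

q = [[1.0, 0.0]]                       # one query, similar to the first key
k = [[1.0, 0.0], [0.0, 1.0]]
v = [[10.0, 0.0], [0.0, 10.0]]
print(attention(q, k, v))              # leans toward the first value vector
```

LLMs, diffusion models and vision transformers all stack many layers of this same operation; they mostly differ in what the vectors represent (text tokens, image patches, noise levels).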
I do agree that this isn't the end of the line for AI development. My biggest gripe with this entire concept is that the lifecycle of an AI is rather strictly split into two phases, training and inference, with training taking far more time than it does for humans, and the result doesn't generalise very well. Essentially, at its core, the AI has to learn on its own by trial and error.
We can see this very well with neural networks learning to play videogames or control a robot body. The network has to try everything until it figures it out. But when it then needs to learn something new, it has to repeat the whole process from scratch. There's very little knowledge transfer from one skill to another: if an AI learns to play Minecraft, it will have no idea how to play Mario Kart. With a human, you just explain what the sticks and buttons do, and they'll be up to speed in a few minutes.
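That trial-and-error loop can be sketched with tabular Q-learning on a made-up toy "game" (the 5-state environment and all the constants below are invented for illustration, not from any real system). Note how the learned table is tied entirely to this one game; change the rules and the whole thing has to be relearned:

```python
import random

# Toy "game": states 0..4 on a line; action 0 moves left, 1 moves right.
# Reaching state 4 gives reward 1 and ends the episode.
N_STATES, GOAL = 5, 4
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

Q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q[state][action] value estimates

random.seed(0)
for episode in range(200):
    s = 0
    while s != GOAL:
        # epsilon-greedy: mostly exploit the current estimates, sometimes explore
        if random.random() < EPSILON:
            a = random.randrange(2)
        else:
            a = 0 if Q[s][0] > Q[s][1] else 1
        s2 = max(0, s - 1) if a == 0 else min(GOAL, s + 1)
        r = 1.0 if s2 == GOAL else 0.0
        # Q-learning update: nudge the estimate toward reward + discounted future
        Q[s][a] += ALPHA * (r + GAMMA * max(Q[s2]) - Q[s][a])
        s = s2

# After training, the greedy policy walks straight to the goal
print([0 if q0 > q1 else 1 for q0, q1 in Q[:GOAL]])
```

The agent only "knows" this game as a table of numbers; nothing in Q transfers to a different game, which is exactly the lack of skill transfer described above.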
Now, I don't think we should dismiss AIs outright for this reason; it's a different kind of 'brain', but it definitely puts a limit on how well they can generalise and extrapolate information. On the other hand, generalisation over existing data, and interpolation, they can do way better and faster than humans, so it's a trade-off. It still stands that humans and computers are good at different things, so I guess this just follows that trend.
So I'm curious how this will develop further; I do think there's a need for some other architecture that can learn on the fly.
Maybe it won't matter in the end, because AI without knowledge is useless, and all of human knowledge is already catalogued, so any new type of AI we (or it) come up with will have to learn everything anyway. And with add-on tech like RAG, LoRA and effectively infinite searchable context, plus continued optimisation, maybe there won't even be much need for anything totally different. Besides, it's worth noting that even the largest datacenter running the best AI models still has just a small fraction of the human brain's neuron count, so maybe it's just a question of scaling and hardware. New theoretical models for AI hardware exist too - analog computers, quantum stuff, photonic chips - so maybe we'll get more out of that rather than rethinking AI from scratch.
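For anyone unfamiliar with the RAG idea mentioned above, the retrieval step is conceptually simple: find the document most relevant to the question and stuff it into the model's prompt. This toy sketch scores documents by word overlap purely for the sake of a runnable example (real systems use embedding vectors, and the documents here are made up):

```python
# Toy retrieval-augmented generation (RAG) sketch.
docs = [
    "LoRA fine-tunes a model by training small low-rank adapter matrices.",
    "RAG retrieves relevant documents and adds them to the model's context.",
    "Quantisation shrinks model weights so they fit in less memory.",
]

def retrieve(question, documents):
    q_words = set(question.lower().split())
    # pick the document sharing the most words with the question
    return max(documents, key=lambda d: len(q_words & set(d.lower().split())))

question = "how does rag get documents into the context?"
context = retrieve(question, docs)
prompt = f"Context: {context}\nQuestion: {question}\nAnswer:"
print(prompt)  # an LLM would then be called with this augmented prompt
```

The point is that the model itself never retrains; fresh knowledge rides in through the prompt, which is why RAG and long context can substitute for on-the-fly learning in many cases.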
But even with what it can do now, the capabilities are amazing. My first foray into running an LLM locally was on a thin laptop so old that Microsoft would tell me it can't even run a new operating system, lol. It takes minutes to boot and chokes on larger web sites, and yet - now I can talk to it like to a human, it can write poems, brainstorm ideas, translate documents, teach me to code, debate nuclear physics or fetish material, or emulate a fictional character. That's just fucking mental, and to me it shows what a paradigm shift this is. While humans keep coming up with all the bloat and shit to keep us buying new stuff, this little artificial brain just happily keeps rocking on. Wild. I don't get how anyone can look at this and call it just a fad; it's about as much a fad as the invention of the wheel. Sure, new kinds of wheels will come along, but the entire concept of human society has changed.
Anyway, I came here to post this interview
One of the smarter and more realistic guys I've seen talking about all this, though he doesn't go into too much depth, at least not in this interview.