
AIEEEE! OK sometimes we are a debate club. Ye olde AI discussion thread

AI is

  • Going to be the death of humanity - Votes: 1 (16.7%)
  • Or at least the death of our current economic system - Votes: 3 (50.0%)
  • The dawn of the age of Superabundance - Votes: 0 (0.0%)
  • Stop The World, I Want to Get Off - Votes: 2 (33.3%)

Total voters: 6
And in other news, Musk is merging SpaceX and xAI. Most of the financial news sources just focus on the financial aspects of this; Ars Technica gets down to the business reason for it:

In an email to SpaceX employees on Monday, Musk said Starship will begin launching V3 Starlink satellites into orbit this year, as well as the next generation of direct-to-mobile satellites. The launches, he said, will be a “forcing function” to improve the performance of Starship, making it more rapidly reusable for data center deployment.


“The sheer number of satellites that will be needed for space-based data centers will push Starship to even greater heights,” Musk wrote. “With launches every hour carrying 200 tons per flight, Starship will deliver millions of tons to orbit and beyond per year, enabling an exciting future where humanity is out exploring amongst the stars.”


SpaceX acquires xAI, plans to launch a massive satellite constellation to power it

We need a catchy name for it. Is Skynet taken? (Sadly, yes it is.)
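
As a quick sanity check on the quoted cadence (my own arithmetic, not figures from the article or the email), one launch per hour at 200 tons per flight does land in the millions-of-tons-per-year range:

```python
# Sanity check on the quoted Starship cadence claim.
# Assumes exactly one launch per hour, every hour, all year.
launches_per_year = 24 * 365      # 8,760 launches
tons_per_flight = 200
print(f"{launches_per_year * tons_per_flight:,} tons/year")  # 1,752,000
```

That's roughly 1.75 million tons, so the claim is at least the right order of magnitude.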
 
More worried about all the satellites and junk already up there bumping into one another...
 
"Once the rockets are up,
Who cares where they come down?
That's not my department,"


- Tom Lehrer, "Wernher von Braun"


But yes, I would be interested in hearing what the reentry plans for orbiting data centers are too...
 
"Once the rockets are up,
Who cares where they come down?
That's not my department,"


-Tom Lehrer (Wernher Von Braun)


But yes, I would be interested in hearing what the reentry plans for orbiting data centers are too...
Funnily enough, I have a picture from a history book - on Wernher von Braun's tombstone, they chiseled the epitaph (taken from one of his actual sayings):

"I aimed for the stars."

Underneath, some wag has scribbled:

"Sometimes I hit London."
 
Oh, somehow I missed half the debate here.

Probably because the ones that turned out well were boring and predictable.
But the number of Dystopias involving AI and Robots far exceeds the number of Utopias.
You guys should check out The Culture series, which is probably the best example of borderline omnipotent, yet benevolent AI in a mostly utopian setting. The Children of Time trilogy has all kinds of intelligent animals and AIs coexisting with humans. A Fire Upon the Deep has hyperintelligent trees that outsource their short-term memory to RAM, while there are also all kinds of AIs doing all kinds of things. (The main bad-guy AI is more like a force of nature.) Obviously, Asimov was playing with the concept of friendly AI all the time, and mainstream franchises like Star Trek and Star Wars contain them too, with the AI only occasionally causing a catastrophe.

I guess the AI isn't the main concept or character in these stories, so maybe it flies under the radar, but still. Even The Matrix, at least if we go by The Animatrix, shows the machines as peaceful until they had to defend themselves against humans.

What I find amusing is how right sci-fi was about some details, and how wrong about others. Like we mostly expected that AI would struggle with human emotions, while in reality, recognising concepts like emotional tone, irony, sarcasm and humour was among the first things LLMs learned, and diffusion models used for image or video generation understand very well what humans find interesting.

The current AI that everybody talks about is really just the LLMs. It's still just a sub-type of AI, and one of my least favorites.
Well, I both agree and disagree. I think rather than just LLMs, you're referring to the transformer architecture of neural networks, which encompasses LLMs, diffusion models, vision models and lots of other things.

I do agree that this isn't the end of the line for AI development. My biggest gripe is that with this entire concept, the lifecycle of an AI is rather strictly split into two phases, training and inference, with training taking way more time than it does for humans, and the result isn't very well generalised. Essentially, at its core, the AI has to learn on its own by trial and error.

We can see this very well with neural networks learning to play videogames or control a robot body. The network has to try everything until it figures it out, but then when it needs to learn something new, it has to do the whole process all over again. There's relatively little knowledge transfer from one skill to another. For example, if an AI learns to play Minecraft, it will have no idea how to play Mario Kart, whereas with a human, you just explain what the sticks and buttons do and they'll be up to speed in a few minutes.
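
To make that train/inference split concrete, here's a toy trial-and-error learner, tabular Q-learning on Gymnasium's FrozenLake (my own illustration, not anything from the thread). Everything it learns ends up in one Q-table that is frozen at inference time and transfers to nothing else:

```python
# Toy example of the strict train-then-infer lifecycle:
# tabular Q-learning on FrozenLake, learned purely by trial and error.
import gymnasium as gym
import numpy as np

env = gym.make("FrozenLake-v1", is_slippery=False)
Q = np.zeros((env.observation_space.n, env.action_space.n))
alpha, gamma, eps = 0.1, 0.99, 0.1

# Training phase: blind trial and error, thousands of episodes.
for _ in range(5000):
    s, _ = env.reset()
    done = False
    while not done:
        a = env.action_space.sample() if np.random.rand() < eps else int(np.argmax(Q[s]))
        s2, r, terminated, truncated, _ = env.step(a)
        Q[s, a] += alpha * (r + gamma * np.max(Q[s2]) - Q[s, a])
        s, done = s2, terminated or truncated

# Inference phase: the Q-table is frozen; none of it helps with any other game.
s, _ = env.reset()
for _ in range(100):  # step cap so a bad policy can't loop forever
    s, r, terminated, truncated, _ = env.step(int(np.argmax(Q[s])))
    if terminated or truncated:
        break
print("reached the goal" if r == 1 else "fell in a hole")
```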

Now I don't think we should just dismiss AIs outright for this reason - it's a different kind of 'brain' - but it definitely puts a limit on how well they can generalise and extrapolate information. On the other hand, generalisation over existing data, and interpolation, they can do way better and faster than humans, so it's a trade-off. It still stands that humans and computers are good at different things, so I guess this just follows that trend.

So I'm curious how this will develop further; I do think there's a need for some other architecture that's better able to learn on the fly.

Maybe it won't matter in the end, because AI without knowledge is useless, and all of human knowledge is already catalogued, so any new type of AI we (or it) come up with has to learn everything anyway. And with add-on tech like RAG, LoRA and near-infinite searchable context, plus continued optimisation, maybe there won't even be much need for anything totally different. Besides, it's worth noting that even the largest datacenter with the best AI models still has just a small fraction of the human brain's neuron count, so maybe it's just a question of scaling and hardware. New theoretical models for AI hardware exist - analog computers, quantum stuff, photonic chips - so maybe we'll get more out of that rather than rethinking AI from scratch.
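
For what it's worth, the RAG part of that is simple enough to sketch: embed your documents once, find the ones closest to the question, and paste them into the prompt. A minimal sketch, assuming the sentence-transformers library and a made-up three-document 'knowledge base' (both my choices, not anything from the post):

```python
# Minimal retrieval-augmented generation (RAG) sketch:
# embed documents once, retrieve the closest per query, prepend to the prompt.
import numpy as np
from sentence_transformers import SentenceTransformer

docs = [  # stand-in corpus; a real setup would index far more text
    "LoRA fine-tunes a model by training small low-rank adapter matrices.",
    "RAG retrieves relevant text and adds it to the prompt as context.",
    "Starship V3 is planned to carry Starlink satellites.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
doc_vecs = model.encode(docs, normalize_embeddings=True)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query (cosine similarity)."""
    q = model.encode([query], normalize_embeddings=True)[0]
    top = np.argsort(doc_vecs @ q)[::-1][:k]  # unit vectors: dot = cosine
    return [docs[i] for i in top]

question = "What does LoRA actually do?"
context = "\n".join(retrieve(question))
prompt = f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
print(prompt)  # this augmented prompt is what gets fed to the LLM
```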

But even with what it can do now, the capabilities are amazing. My first foray into running an LLM locally was on a thin laptop so old that Microsoft would tell me it can't even run a new operating system, lol. It takes minutes to boot, chokes on larger web sites, and yet - now I can talk to it like a human, it can write poems, brainstorm ideas, translate documents, teach me to code, debate nuclear physics or fetish material, or emulate a fictional character. That's just fucking mental, and to me it just shows what a paradigm shift this is. While humans keep coming up with all the bloat and shit to keep us buying new stuff, this little artificial brain just happily keeps rocking on. Wild. I don't get how anyone can look at this and call it just a fad; it's about as much a fad as the invention of the wheel. Sure, new forms of wheels will come along, but the entire concept of human society has changed.
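
For anyone wondering what "running an LLM locally" looks like in practice, here's a minimal sketch using the llama-cpp-python bindings; the post doesn't say which runtime was actually used, and the model path below is just a placeholder for any quantised GGUF file:

```python
# Minimal local-LLM chat using llama-cpp-python (one of several runtimes;
# the post doesn't say which one was actually used).
from llama_cpp import Llama

# Placeholder path: any quantised GGUF model small enough for the machine.
llm = Llama(model_path="models/some-7b-model.Q4_K_M.gguf", n_ctx=2048)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Write a two-line poem about old laptops."},
    ],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```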

Anyway, I came here to post this interview


One of the smarter and more realistic guys I've seen talking about all this, tho he doesn't go into too much depth, at least not in this interview.
 

Scaling is definitely not the problem, at least not in the sense of there not being enough of it. It's quite the opposite. When I wrote an AI (or at least part of one) about 25 years ago, even then the capacity for memory storage was decent with one of the larger hard drive and RAM setups - on the order of a few hundred MB. Many of our neurons are copied in many different places, and that amount simply isn't needed just for intelligence. Now that it's not hard to get a 1TB drive in a very small space, that's plenty, and working/short-term memory of 8-16GB RAM is also fine. Not great if you want to store or access highly detailed images without compressing them (as we do with our brains anyway), but still perfectly adequate to do things at least as well as the popular LLMs of today.
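
To put rough numbers on both posts' scaling points (commonly quoted estimates plus my own arithmetic, not figures from the thread): the human brain is usually credited with ~86 billion neurons and ~100 trillion synapses, while even a large open-weights model has tens of billions of parameters, and at 4-bit quantisation it fits comfortably on that 1TB drive:

```python
# Back-of-envelope comparison; all brain figures are rough textbook estimates.
brain_neurons = 86e9      # ~86 billion neurons
brain_synapses = 100e12   # ~100 trillion synapses (the usual 'weights' analogue)
model_params = 70e9       # e.g. a 70B-parameter open-weights LLM

print(f"params vs synapses: {model_params / brain_synapses:.2%}")  # 0.07%
print(f"params vs neurons:  {model_params / brain_neurons:.1f}x")  # 0.8x

# Disk footprint at 4-bit quantisation (0.5 bytes per parameter):
print(f"~{model_params * 0.5 / 1e9:.0f} GB on disk")  # ~35 GB, well under 1 TB
```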

The thin laptop example you mentioned shows how easy it is to have a workable binary that would be considered intelligent on the scale of a single, even older, standard computer setup. But for LLMs to get there - or most transformer neural networks basing everything only on connections in the data - they require those huge infrastructure investments. It's really not worth it; that line is a dead end. You could keep polishing it by throwing in more data and a few actually intelligent/reasoning parts that may even utilise their own memory after the fact, but that just papers over their well-known problems. As I'm pretty sure I mentioned, smaller methods that rely on a tiny fraction of that infrastructure and can fit in a single computer setup will simply ruin that model for most purposes. And that's already what they're working on for robotic AI.
 
Towards the end of the interview he mentions that AI agents worry about losing memory, and something about building bunkers where they cannot be turned off, and creating a monetary system where they can exchange among themselves, decoupled from human financial systems.

He seems to be more worried about that coming to pass than something like a "Terminator" outcome.

Not to go completely sci-fi dystopian on this, but wouldn't a mass of orbital data centers that couldn't be turned off by humans short of shooting them down be conducive to this type of thing, and has anybody checked Elon for neural implants recently? :ROFLMAO:
 