• Registration is now open. It usually takes me a couple of hours or less to validate accounts. If you are coming over from VoD or DR and use the same user name, I'll give you Adult access automatically. Anybody else I will contact by DM about Adult access. NOTE: I do have spam-account-creation testing on, but some spam accounts do get through, and I check all of them manually before giving them access. If you create an account where the user name is a series of random letters, the email address is another series of random letters and numbers at gmail, and the IP you are creating the account from is a VPN address noted for spam, it is going to be rejected without apology.

AI Datacenters Prompting Me to Reconsider Their Value to Me

Teacher: "MC, why have you drawn the model's breasts larger than they really are?"
MC: "Uh... I'm blending caricature with realism... Yeah..."
At 13 I had no idea what a fake breast, or even any breast size, should be, LOL. Thinking back on it, she was around a C cup. At 15 I discovered my uncle's porn mags, and that is where I discovered fake titties, LOL
 
They stuck a 13 year old male in an art class with a nude female model???

Bet that got a rise out of mister happy, did you prop your drawing pad on your lap for the duration? :ROFLMAO:
 
They stuck a 13 year old male in an art class with a nude female model???

Bet that got a rise out of mister happy, did you prop your drawing pad on your lap for the duration? :ROFLMAO:
My Mom signed off on it, LOL. I bet she had no idea there would be nude models. To be fair, the class was once a week, and we did have a couple of guys come in as models, as well as subjects other than human anatomy.
 
They stuck a 13 year old male in an art class with a nude female model???
What's wrong with that? Clothes don't grow naturally on people like hair.

Anyway, the topic has drifted, but I want to add something about the original theme. Local AI is getting stupidly good. People use large Chinese open-source models as pretty much 1:1 replacements for the heavy online stuff, though that still requires beefy hardware.

Meanwhile, tiny models in the 3-4B range are just ridiculous now. I've been using Gemma 4 E4B quite a lot lately, and in terms of intelligence, coherence, and understanding it's pretty much on par with the top online stuff from maybe a year and a half ago. And it runs on pretty much anything. The lightweight E2B (half the size) can run on a decent phone. And it looks like we're on the verge of 1-bit models, which are roughly 3-5x as memory-efficient.
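A rough back-of-the-envelope for why bit width matters so much. This is just a hedged sketch of the arithmetic (weights only, ignoring the KV cache and per-block quantisation overhead, and the 1.58-bit figure is the BitNet-style ternary regime):

```python
def model_size_gb(params_billion: float, bits_per_weight: float) -> float:
    """Approximate weight storage: parameters * bits per weight, in GB (10^9 bytes)."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

# A 4B model at different quantisation levels:
fp16 = model_size_gb(4, 16)      # 8.0 GB -> needs a serious GPU or lots of RAM
q4 = model_size_gb(4, 4)         # 2.0 GB -> fits on almost anything
ternary = model_size_gb(4, 1.58) # ~0.79 GB -> phone territory
print(fp16, q4, round(ternary, 2))
```

Same arithmetic explains why a quantised ~4B model runs comfortably on a phone while the fp16 version doesn't.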

Image generators aren't progressing quite as fast, but stuff like Z-Image Turbo and SDXL can run on 6GB VRAM, or apparently even on 4GB with some optimising. Maybe not exactly Gemini, but not bad.

I hope someone will figure out how to squish diffusion in similar ways as the LLMs. Currently it's still too computation-heavy.

The future is dedicated hardware anyway. It's already been demoed on Llama models: you can pretty much burn a given model architecture into a dedicated chip, without the need for a multi-purpose GPU or whatever. There's also no real reason why it couldn't be flashable with newer versions, as long as the architecture is compatible. I can imagine a, say, 120B model running directly from something like an SD card, costing not much more than the card itself plus some extra RAM for the context.
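To put the SD-card idea in numbers, a quick hedged estimate. The 4-bit quantisation is a common choice, and the architecture numbers in the KV-cache line (96 layers, 8 KV heads of dimension 128, fp16 cache) are purely illustrative assumptions, not any real model's specs:

```python
def weights_gb(params_billion: float, bits_per_weight: float) -> float:
    """Weight storage in GB for the given parameter count and bit width."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

def kv_cache_gb(layers: int, kv_heads: int, head_dim: int, ctx: int,
                bytes_per_value: int = 2) -> float:
    """KV cache size: 2 tensors (K and V) * layers * heads * head_dim * context length."""
    return 2 * layers * kv_heads * head_dim * ctx * bytes_per_value / 1e9

print(weights_gb(120, 4))              # 60.0 GB -> fits on a 128 GB SD card
print(kv_cache_gb(96, 8, 128, 32768))  # ~12.9 GB of RAM for a 32k context
```

So the "SD card for weights plus some extra RAM for context" split is at least plausible on paper, under these assumptions.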

Once someone leans into this, the current monstrous GPUs and datacenters will look as archaic as a stone-age sailboat compared to a nuclear submarine. It's just that traditional chipmakers aren't super motivated to do it at this time, since they're showering in gold.

Btw, Google is now starting to provide Gemini for offline use to large corpos. Yeah, it's a crazily expensive mainframe, but it's Google. When have they done anything that's not cloud-based? It shows that cloud AI is really rather temporary before most regular use goes local.

Sorry for the interruption, now go back to... What were you talking about... *scrolls up* oh yea, fake boobs, of course. Carry on.
 
Nothing wrong with nude models; that's how you study human anatomy, which is what art is all about, unless of course we are talking abstract art, where reality doesn't make a bit of difference. But banter is banter, ya know?

The hardware issue interests me because stock in any hardware company is sky-high, and I've made a lot of money from more than one of them in the last year. Since manufacturing is cyclical, not growth, I need to time when to dump them ;) Micron, Seagate, Western Digital, that type of thing. And I'm not asking for advice, just ruminating.

Downloaded ComfyUI, which Z-Image Turbo runs on. Apparently that 3090 I have running is sufficient; I don't have to wait on the 5090 machine to arrive. Too many toys, not enough time ;)

I think that is also in the range of the Hardware Nyghtfall uses. Have you played with it any yet?
 
I have a 3090, but loathe ComfyUI. I just can't wrap my brain around node-based AI. I need something simple, like Fooocus or Forge.
Same. I have tried Comfy like 4 or 5 times and it just confuses the hell out of me. Who knows, maybe one day it will click like Daz Studio did for me after a million tries, LOL
 
ComfyUI is annoying, but you can just grab an existing image made with it, drag and drop it into the UI, and it'll load up the nodes for you. You just might need to reselect the model files if they're not the same as what the image was made with.
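The drag-and-drop trick works because ComfyUI saves the workflow as JSON in the PNG's text chunks. A minimal sketch of the round trip using Pillow; the `"workflow"` key is what ComfyUI uses, but the file name and the toy workflow content here are made up for illustration:

```python
import json
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Write a tiny PNG with an embedded workflow, roughly the way ComfyUI does on save:
meta = PngInfo()
meta.add_text("workflow", json.dumps({"nodes": [], "links": []}))
Image.new("RGB", (8, 8)).save("comfy_output.png", pnginfo=meta)

# Read it back, which is what ComfyUI does when you drop an image onto the canvas:
img = Image.open("comfy_output.png")
workflow = json.loads(img.text["workflow"])
print(workflow)
```

This also means the workflow survives only lossless formats; re-save the image as a JPEG and the embedded nodes are gone.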

Then there's the prompt field like normal, and settings like seed and dimensions are in there somewhere. I don't like the visuals either, but it's a strong system and not so bad once you get over the initial shockfusion.

But there are other frontends too, I think I've tried some StableDiffusion thing at some point.
 