
Does goofy Python ASCII art count?

Something for the physics nerds like @xj900uk and @Drac Morbis lol

So I've been torturing ChatGPT for a couple of hours to get it to spit this thing out for me... A cute little ASCII black hole simulation.

Features:
- gravitational lens distortion (well, logarithmic approximation with 2-dimensional angular mirroring effect to be exact. Rough but it works)
- altering trajectory of passing objects depending on distance
- the accretion disk grows and distortion effect increases as objects pass by (resets when it gets too large)
- in case of a head-on collision, object gets gradually pulled in, broken to pieces and 'consumed'
- hilarious texts, because I never grew up beyond the level of an 8yo
- random selection of ASCII objects

Otherwise it's just objects flying from left to right... So kinda like a terminal screensaver I guess. Idk if I wanna do much more with it. (No, it doesn't play music.)

View attachment blackhole_s.mp4
(Just some clips thrown together, my screen recorder has some issue with the bottom part of the screen, but objects fly off to the bottom too)

Something fancier:
View attachment blackhole_deathstar_s.mp4

Still goofing around with it

b3.png
 
I'm finna talk a bit about some roadblocks I encountered while making this, because they are good examples of how AI (LLM in this case) works, how it 'thinks' differently from a human, and how you can talk to it to get some results.

So the first idea was to have a black hole simulator similar to the 1970s mainframe simulations, but simplified and on a 2D grid using ASCII instead of pixels or rays or whatever.

ChatGPT quickly came up with the basics: The grid layout, defining the black hole, its accretion disk, the object, basic motion etc. I couldn't even do that, so I can't complain there lol.

The main part was the gravitational lens distortion of the object as it passes by the black hole. In reality, what happens is that light gets bent and twisted around the black hole, creating the funky effect. I really had no idea how to even approach that, so I let the AI cook. It quickly came up with some sort of distortion algorithm, which basically means moving the individual characters of the object independently of each other. I didn't quite grasp the implications at first, but it looked cool, so I ran with it. (Also, the AI, being its helpful self, added editable variables like a distortion factor multiplier, so I really couldn't complain.)
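Just to illustrate what I mean by 'moving characters independently', here's a rough sketch (not the actual code; the function name, the (char, x, y) representation and the numbers are all mine):

```python
import math

def distort(chars, bh_x, bh_y, distortion_factor):
    """Nudge each character of the ASCII object towards the black hole
    independently of the others, more strongly the closer it is (toy sketch)."""
    warped = []
    for ch, x, y in chars:                   # chars = [(char, x, y), ...]
        dx, dy = bh_x - x, bh_y - y
        dist = max(math.hypot(dx, dy), 1.0)  # avoid division by zero
        shift = distortion_factor / dist     # stronger effect when closer
        warped.append((ch, round(x + dx / dist * shift),
                           round(y + dy / dist * shift)))
    return warped
```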

Minor issues kept coming up, which were easy to fix. For example, when we added a mirror effect, it was only on the vertical axis, and I had to remind GPT that it should be on both axes (diagonally). Easy.
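By 'both axes' I mean something like a point reflection through the black hole's centre, roughly this (again a made-up sketch, not the real code):

```python
def mirror(chars, bh_x, bh_y):
    """Add a ghost copy of each character reflected through the black hole's
    centre, i.e. mirrored across both axes at once (toy sketch)."""
    ghosts = [(ch, 2 * bh_x - x, 2 * bh_y - y) for ch, x, y in chars]
    return chars + ghosts
```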

Issue #1

So the distortion as the basis was fine, but it looked off, because it wasn't increasing and decreasing with the distance to the black hole like I'd expect.

At first I was just prompting things like "this doesn't look right, try something else" regarding the distance, but that didn't go anywhere, and GPT just kept apologising and doing the same thing over and over, so we were stuck.

The breakthrough came when I asked GPT to list some options for approximating gravity on a 2D grid, independently of the rest of the project.

IIRC it gave me 3 options:
- linear increase in pull (or something else, not sure anymore)
- logarithmic increase
- inverse square root law

I suppose I could've asked for more, but logarithmic sounded like it should look about right. (I.e. little to no effect at distance, then a sudden increase as the object and the black hole get closer.) So we tried that, and it looked great. It's not a real sim, just a fun effect, so that's what I was going for. Funnily enough, at some points earlier the AI even said it was trying to avoid abrupt increases, so it was aiming for the opposite of what I wanted. On an 80 x 40 grid, 'accurate' will look wrong, so the effect needs to be exaggerated.

But out of curiosity, I tried the other two as well. The first option was evidently what GPT had been getting stuck on, which is why we couldn't get anywhere. And inverse square root might work in 3D with a more accurate simulation, but not in simplified 2D.
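If anyone wants to play with it, the three options boil down to something like this (hypothetical sketch, not the actual code; the constants are just placeholders I picked):

```python
import math

def pull_linear(dist, k=0.02, max_dist=40):
    """Pull grows steadily as the object gets closer (what GPT kept defaulting to)."""
    return k * max(0.0, max_dist - dist)

def pull_log(dist, k=0.8, max_dist=40):
    """Little to no effect at distance, rapid increase up close - the one that looked right."""
    return max(0.0, k * math.log(max_dist / max(dist, 0.5)))

def pull_inverse_sqrt(dist, k=5.0):
    """The 'inverse square root' option - too subtle on a small 2D grid."""
    return k / math.sqrt(max(dist, 1.0))
```

Swapping which one of these gets called per frame is all it takes to compare them on screen.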

The conclusion to this is: AI can get stuck, and then run in circles. We've all seen that. You need to try something totally different to make it snap out of it. If you don't know what, ask for options on the specific issue.

Issue #2

As we were adding more stuff, I came up with the idea that the black hole should be able to 'consume' the object. In practice, the distortion animation should play out forwards and backwards (just like when the object is passing through), but the object would never reappear.

Sounds simple? Well, the problem is that the distortion effect is not just an optical illusion like with a real black hole. In this ASCII thingy, the object itself gets 'physically' distorted. Therefore, once you remove/destroy the object, there's no animation left to play. So what I kept getting was: the distortion animation would start, and at some point the object would just suddenly pop out of existence.

Again, we couldn't get around this. It sounds simple, but the AI couldn't even understand the issue properly. When I finally managed to explain it, and even when I asked for options like before, it kept coming up with solutions that would either do the same thing or wouldn't solve the problem in a general enough way.

I asked a couple of different chatbots, and the best they could come up with was to delay the end of the animation by a fixed number of frames. But that wouldn't work, because at the tail end of the animation the object would already start reassembling, so we'd still get the 'pop out' at some point. Additionally, the length of the animation depends on lots of factors - the size of the object, the size of the black hole, the distortion factor (which grows dynamically) - so there was no way to make it work.

One of the chatbots came up with the idea of breaking the object up into separate characters and animating them in a framebuffer independently of the physical object. But that couldn't work either: the framebuffer distortion would look totally different from the object distortion we'd worked so hard to tweak, so we'd have to emulate the emulation, and we'd still have to come up with a way to wrap up the animation. Overall, huge overkill. I still tried, but it didn't work anyway.

The weird part is that if you ask any programmer, or even someone like me, it's immediately clear that neither of the suggested approaches could ever work in principle. Again, the AIs were stuck in a specific mode of thinking, in this case because they didn't know what to do and just kept throwing marbles at the wall. Still cool, but in this case not actually useful. There wasn't any way to make this work without starting the whole thing over from scratch with a new approach to the distortion animation. I guess that's my bad for not thinking of it earlier.

So eventually I had to come up with something totally different, a new consumption effect: If the object is marked for consumption, don't allow it to leave the accretion disk - instead, have it bounce around, and gradually lose random pieces with each frame of the animation, then remove the object when nothing is left. This works great in principle, because it keeps the existing distortion effect and allows the animation to play out naturally. And, again, it looks cool, which is the whole point, and is general enough to work for any object size and other dynamic factors. It's also simple for AI to code once you describe it, because it's just maths.
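For the curious, a rough sketch of the consumption idea in Python (not the actual code, just my own illustration; I'm assuming the object is a list of (char, x, y) tuples):

```python
import random

def consume_step(obj, disk_bounds):
    """One animation frame of the consumption effect (toy sketch).
    obj is a list of (char, x, y); disk_bounds = (x_min, x_max, y_min, y_max)."""
    x_min, x_max, y_min, y_max = disk_bounds
    moved = []
    for ch, x, y in obj:
        # bounce around: nudge each character randomly, but keep it inside the disk
        x = min(max(x + random.choice((-1, 0, 1)), x_min), x_max)
        y = min(max(y + random.choice((-1, 0, 1)), y_min), y_max)
        moved.append((ch, x, y))
    # lose one random piece per frame
    if moved:
        moved.pop(random.randrange(len(moved)))
    return moved  # once this comes back empty, the object has been consumed
```

Call it once per frame until the list comes back empty, then drop the object.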

(Later I changed the bouncy motion to rotation, and added an additional check to treat the object as gone if it only consists of invisible characters, but that's just tweaking the details.)

The lesson here is: AI doesn't see what you see. Even when I was sending it screenshots, the console output, debug logs, whatever, it just couldn't conceptualise the difference between what it was doing in code and what I wanted on the screen. I'm sure a decent enough programmer could come up with a bunch of ideas that could do the job, but me being me, I had to come up with an 'artistic' approach, break it down into details and describe it.

And I think that's why it helps to understand how the mind of the AI is different. In some cases, it's still like talking to a baby that has all the knowledge, but can't put it together. In most cases, it's totally great, but always needs cooperation and guidance. Humans and AI working together, yay!

Pinging @Nyghtfall3D cause I was talking about it in your thread too, since the same happens when making pics and such.

Ok now I should go on to do something useful.
 