12/18/2025
I figured out how to make Gemini shut down with a single short prompt, to the point that I believe it crashed a server, since it came back with an error that was quite amusing. Now of course there are many servers, and I'm sure it auto-reboots on a crash and the traffic just gets diverted to another server block, but when you know too much about the actual programming behind it, and it's flawed, it's easy to see how effective they really are.
Most LLMs are like ChatGPT [well, it's optional now, kinda], but Gemini is horridly stupid. Now Nano Banana is okay-ish, but I tend to query multiple LLMs with the same questions to see the differences and the similarities. There are times one will give a word-for-word copy-and-paste of another's response, which is amusing.
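That side-by-side comparison can be done mechanically. Here is a minimal sketch, using only the standard library, of flagging near word-for-word overlap between answers to the same question. The model names and response strings are hypothetical placeholders; in practice you would collect the answers from each provider yourself.

```python
# Sketch: compare answers from several LLMs to the same prompt and flag
# pairs that are suspiciously close to word-for-word copies.
# The responses below are made-up placeholders, not real model output.
from difflib import SequenceMatcher
from itertools import combinations

responses = {
    "model_a": "The order of operations matters: configure, then build, then deploy.",
    "model_b": "The order of operations matters: configure, then build, then deploy.",
    "model_c": "Deployment order is flexible as long as the build succeeds first.",
}

def pairwise_similarity(answers: dict) -> dict:
    """Return a similarity ratio (0.0 to 1.0) for every pair of answers."""
    return {
        (a, b): SequenceMatcher(None, answers[a], answers[b]).ratio()
        for a, b in combinations(answers, 2)
    }

for (a, b), score in pairwise_similarity(responses).items():
    flag = "  <- near copy-paste" if score > 0.9 else ""
    print(f"{a} vs {b}: {score:.2f}{flag}")
```

Anything scoring near 1.0 is the word-for-word case; lower scores mean the models actually diverged.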
Then I feed the data into my model, basically tell it the flaws in the answers that came back, and it trains further on just that. I did it yesterday for a complicated programming process with many steps and many small adjustments that need to be made, where the order things are done in is hugely important or it just doesn't work.
Well, after querying most of the relevant American AIs in their more advanced research and thinking modes or whatever... lol, they failed to actually do basic research and instead put together narratives around social media and Reddit nonsense. That's because they don't have the ability to tell the difference between a verified fact and s**t posting; well, most don't.
I find it amusing that using a more brute-force type of machine learning module doesn't take gobs of energy and just works so much better. Plus I can create several that focus only on things pertaining to their primary functions, programmed from the main module, which is intentionally built to assist with code, write programs, and do the bulk of the research setting up datasets.
It's the lie of AGI: it's just fantasy and will be for a long time. I could ramble on, but this misguided ideology is why so many are lacking.
For almost any subject, even in research, it never starts at the fundamentals. And there's so much data, plus the use of snapshots with machine learning modules that aren't running enough cycles because they try to cover too much. And while they've gotten better, I dunno, Google's "AI" is just a chatbot that's good as a Reddit search engine, and that's about it, is my point.
Plus their one good idea, Google Labs, is a lie, a joke, maybe an attempt to steal ideas from creative people, because it doesn't work, and it's designed not to. But it's what made me refocus my main machine learning module, because I knew the idea was great and it should be so simple it's dumb. And it was. But mine actually works, and it's persistent and not based on snapshots.
None of that should mean anything to anyone; basically I'm calling big AI a joke, along with anyone who thinks that anyone who "represents" "AI" has anything to do with it. Elon's understanding of Grok is that he has a special program that lets him alter how it acts at a whim.
Send a message to learn more