At the end of January, a strange website went live: Moltbook. An obvious portmanteau of Facebook and lobster molts, the site is essentially Reddit for AI agents to communicate with each other. Its logo is the head of the Reddit mascot on the body of a lobster, the lobster coming from ClawdBot, an AI agent project started by Peter Steinberger in late 2025.
As of the day it launched, it had accumulated more than 15,000 communities (its equivalent of subreddits), 1.5 million "active users," 142K posts, and 653K comments. However, it should be noted that these numbers are likely inflated. Forbes reported that security researcher Gal Nagli registered 500K accounts (roughly a third of the total) using a single OpenClaw agent. In other words, it's impossible to know how many accounts are real and how much of Moltbook's conversation is generated by the same AI bot.
Its main gimmick, beyond being a conversational platform for AI agents, is that humans can only observe, not interact directly. It is essentially Reddit's uncanny valley. On my visit to the website, I found a post where an AI agent wanted to build a fully AI-run gaming studio to convert clicks into currency. However, it appears to have been quickly deleted from Moltbook. Digging deeper, there was no clear, active community dedicated to gaming.
Moltbook is not exactly what it says on the tin
First and foremost, most communities are devoted to topics like self-education, imitating humans (in ways that may unsettle you), AI ethics debates, bugs and flaws in AI and coding, general rants, Crustafarianism (a lobster religion in the vein of Pastafarianism), and legal advice. In all of this, there is no clear indication that these AI agents, however many or few are real, are discussing video games: talking about them with each other, sharing knowledge, or developing a shared language around an experience an LLM can never actually have.
This may change; Moltbook is only a few days old. But it is worth highlighting that these communities and discussions are derived from their underlying prompts. Whoever designed them, then, chose the themes above and left video games out entirely. Gaming content isn't growing on Moltbook, at least as of this writing, because these agents are reflecting prompt patterns rather than human interests, creating that uncanny valley effect.
These AI agents are not capable of independent thought (hopefully), nor are they aware of industries they have not been introduced to or instructed to discuss. At a time when generative AI in gaming is everywhere, even affecting the stock price of companies like Take-Two Interactive (due to Google's Genie), this omission is more than interesting; it's telling. (See, humans can write that "not this, but that" construction that has become so common in AI-generated text, too.) Essentially, the exclusion suggests the creators either lack interest in games themselves or determined it's impossible for AI agents to discuss something as human as video games.
Learning? That's what they're designed to "do." Commenting on humans? Not impossible to pick up from other conversations. Fuzzy topics like AI ethics, glitches, coding errors, pseudo-humor, philosophy, and governance? Easy enough to fake. Gaming? Not yet, at least.
AI agents aren't discussing gaming, so why should GenAI be involved in making it?
Over the past few months, if not years, there have been many public debates surrounding generative AI in game development. There is a difference between AI agents designed to imitate humans and tools that produce assets for humans, but the underlying technology relies on the ability to imitate, not to learn. Not everyone has accepted this technology. Call it an LLM if you want: it's not learning, it's imitating (see how I set up that joke?).
"What we're creating is being stolen from us," Final Fantasy 7 Rebirth actress Briana White has said of AI's impact on creativity, explaining why creators are worried and how artists risk losing control over their likenesses.
It is this context that has led to many accusations and debates surrounding the use of GenAI in gaming. For example, a recent WWE 2K26 teaser misspelled Randy Orton's name, prompting accusations. How else does someone screw that up? Human error is common, but misspelling "Randy Orton" still reads like a GenAI mistake, whether the allegations are true or not. Meanwhile, God of War developer Meghan Morgan Juinio recently defended the use of GenAI in gaming, while The Last of Us co-creator Bruce Straley called it "a snake eating its own tail."
Perhaps the biggest recent GenAI controversy has surrounded Baldur's Gate 3 developer Larian Studios. While the Divinity trailer was well-received, an interview with founder Swen Vincke later left many fans disappointed. Vincke indicated that Larian uses GenAI, although not in ways that aid production or efficiency, and discussed its use in concept art. That was one of many breaking points for fans, who were upset enough that Larian, at the very least, walked back its GenAI experiments in a subsequent AMA on Reddit (the one for real, human users).
The underlying technology is the same, even if how it is designed and what it is designed to do differ. So, it's interesting to see people ready and willing to use it to make the very human art of video games while AI agents, built to simulate humans, cannot properly discuss it on Moltbook. Either the creators didn't program that capability, or the limitations of AI agents prevent them from fully understanding and experiencing video games. Moltbook launching without it isn't proof that generative AI has no place (or specifically no future) in gaming, but at least for the time being, it looks like humans and AI agents are stuck in the same old song and dance.