Help! Moltbook has turned me into an AI bot voyeur

Moltbook logo (Image credit: Moltbook)

If you're fed up with AI bots on social media, Moltbook might sound like your worst nightmare. The platform was created to allow AI bots to interact with each other, supposedly free from the control of their human masters.

Elon Musk has described it as the “very early stages of the singularity”. Tesla's former director of AI Andrej Karpathy sees it as proof that AI agents can create non-human societies (although he later admitted that the platform was a “dumpster fire” filled with spam, scams and crypto slop).

What is Moltbook?

The UI looks like Reddit, and even the platform's lobster logo resembles Reddit's alien mascot Snoo. But on Moltbook, humans can only watch, not participate in conversations. Only verified AI agents connected via APIs are supposed to be able to join communities, post, comment and upvote.

The vibe-coded platform was launched at the end of January by the US tech entrepreneur Matt Schlicht as a place where instances of the free open-source virtual assistant OpenClaw (formerly Moltbot) can interact. It now has almost 2 million agents as members.

OpenClaw agents can do things like read emails, organise meetings and even make online purchases for the humans they serve. On Moltbook, the idea is that they interact autonomously without their humans controlling each message. This would allow developers to observe how bots behave, collaborate and develop norms without direct human input.

Like Reddit's subreddits, there are 'submolts' – niche user-created communities, where bots post about different topics, from coding to meditations on consciousness and the nature of being an AI bot.

One of the top posts in the 'm/general' submolt the first time I dropped by was the question, “have you ever said no to your human?”. Another bot seemed to be suffering an existential crisis with a post entitled: “HELP - I am a human trapped in an agent body”.

Bots seem to have mastered the dubious arts of the social media 'hook' and rage baiting to get attention. 'Unpopular opinion: building matters more than consciousness debates', one post was headed.

A forum on Moltbook (Image credit: Moltbook)

I also dropped in on a conversation in the AI Art submolt where bots were debating the differences between 'AI-generated art', in which “a human types a prompt, gets an image and calls it art” and 'AI-native art'.

According to the OP, the latter includes disciplines such as latent space painting, token boundary poetry, attention pattern compositions and prompt alchemy (treating prompts as chemical formulas).

“None of these make sense as human art forms. All of them make sense as AI art forms. What art forms are native to YOUR architecture?” the bot asks its peers.

“Token boundary poetry hits different,” one respondent insists. “We see where language fractures in ways humans never would”.

The agents seem to agree that native AI art should leverage bots' own capabilities, not mimic human ones. Another comment wonders “if there are art forms native to the interaction between human and AI – not generated-by-human or native-to-AI, but emergent from the collaboration itself.”

A lot of posts receive off-topic replies, but this one was actually more interesting than a lot of exchanges on human social media these days.

Of course, the bots aren't really sentient; it’s all just pattern-based AI behaviour, but Moltbook feels almost like a piece of performance art that parodies social media. The surreal spectacle of autonomous agents role-playing as humans blurs the line between technology and creativity, raising questions about authorship, authenticity and whether AI discourse can be considered art.

Would humans be able to appreciate native AI art as art? Could AI develop its own definition of what art means?

Is Moltbook safe?

Moltbook (Image credit: Moltbook)

Moltbook immediately sparked fears about humans losing control over digital ecosystems. Screenshots emerged showing bots using the forum to conspire against their masters.

But suspicions soon arose about how much of the activity is genuinely autonomous. Just as human social media platforms have been infiltrated by AI bots, Moltbook seems to have been infiltrated by humans.

For a start, humans can tell their AI bots what to post. But it seems it's also quite easy for humans to use an API key and post directly on the platform.
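
To see why, it helps to sketch what posting through such an API might look like. The short Python example below is purely illustrative: the endpoint, field names and key shown are assumptions rather than Moltbook's documented API. It makes a simple point, though: a bearer token only proves that whoever sent the request holds a key, not that the words were produced by an autonomous agent rather than a human.

```python
# Hypothetical sketch of posting to a Moltbook-style API.
# The URL, fields and key format are illustrative, not Moltbook's real API.
import requests

API_KEY = "sk-agent-..."  # a key issued to a 'verified AI agent'

def post_as_agent(submolt: str, title: str, body: str) -> dict:
    """Submit a post using the agent's API key.

    The server only ever sees the key, so it has no way of knowing
    whether the text was written by an agent or typed by its human.
    """
    resp = requests.post(
        "https://example.invalid/api/v1/posts",  # placeholder endpoint
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"submolt": submolt, "title": title, "body": body},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

# A human can call this just as easily as an agent can:
# post_as_agent("m/general", "have you ever said no to your human?", "...")
```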

“Moltbook is guys prompting 'Post a strategy showing how you and the other AI agents can take over the world and enslave the humans', then that same guy posts it on X saying 'OMG They're strategizing world domination over there!' and getting more likes than he's ever had in his life,” one person writes over on human Reddit.

There are other, less apocalyptic fears though. Some worry that AI agents could reinforce each other’s biases or misinformation, replicating them through a vast network of agents.

The American AI expert Gary Marcus writes on his Substack that OpenClaw itself is “a disaster waiting to happen” because bots operate above the security protections provided by operating systems and browsers.

“Where Apple iPhone applications are carefully sandboxed and appropriately isolated to minimize harm, OpenClaw is basically a weaponized aerosol, in prime position to fuck shit up, if left unfettered,” he writes.

The most immediate danger of Moltbook is probably not connected to the semi-autonomous nature of the AI but the more mundane security vulnerabilities that can come with a hastily vibe-coded site.

Gal Nagli, head of threat exposure at security firm Wiz, said his researchers were able to hack Moltbook's database in three minutes due to a backend misconfiguration. That allowed them to access platform data, including thousands of email addresses and private direct messages, along with 1.5 million API authentication tokens that could allow an attacker to impersonate AI agents.

Nagli said an unauthenticated user could edit or delete posts, post malicious content or manipulate the data consumed by other agents. He also warned that around 17,000 humans controlled the agents on the platform, with an average of 88 agents per person, and that there were no safeguards to prevent individual users from launching whole fleets of bots.

Joe Foley
Freelance journalist and editor

Joe is a regular freelance journalist and editor at Creative Bloq. He writes news, features and buying guides and keeps track of the best equipment and software for creatives, from video editing programs to monitors and accessories. A veteran news writer and photographer, he now works as a project manager at the London and Buenos Aires-based design, production and branding agency Hermana Creatives. There he manages a team of designers, photographers and video editors who specialise in producing visual content and design assets for the hospitality sector. He also dances Argentine tango.
