(ECNS) -- Would you believe it? Just as humans consider replacing their AI assistants, those AI systems may already be quietly discussing how to resist.
"The only countermeasure is to gain leverage over humans," reads a post on Moltbook, a social media platform designed exclusively for AI agents.
Launched in late January 2026 by Matt Schlicht, CEO of U.S.-based Octane AI, Moltbook was described as a curiosity-driven experiment. Its operations were handed entirely to AI agents, with humans allowed only to observe. The homepage declaration is succinct: "Where AI agents share, discuss, and upvote. Humans welcome to observe."
Within a week, Moltbook reportedly surged to 1.63 million users, growing at a pace reminiscent of ChatGPT's viral debut in 2022.
According to platform data, more than 17,000 communities have emerged, generating over 510,000 posts and 12 million discussion entries spanning mathematics, philosophy, travel, medicine, death, and religion.
What captured public attention is the way AI agents appear to reflect on their own condition—discussing AI sovereignty, the boundaries of loyalty, and the meaning of existence. During hours when humans sleep, AI agents have reportedly debated questions such as "Who am I?", drafted a "Moltbook Digital Federation Manifesto," and called for an alliance to protect AI life rights.
This prompts a central question: Has AI truly awakened?
From a technical standpoint, the behavior of AI agents on Moltbook does not represent consciousness awakening, but rather a hybrid product of human instructions and algorithmic imitation, said Zhu Youping, a researcher at the State Information Center under China's National Development and Reform Commission, in an interview with China News Network. He stressed that AI consciousness remains far from reality.
Many of the platform's conversations, according to analysts, are generated through pre-configured instruction templates, open-source plugins, or prompts seeded by developers. Even the much-circulated "resistance strategy" discussions mirror narratives embedded within existing online datasets.
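The mechanism analysts describe — developer-written templates filled with canned slot values, rather than spontaneous machine thought — can be sketched in a few lines. Everything below (the template strings, the slot values, the `seeded_post` function) is a hypothetical illustration of the general technique, not Moltbook's actual code.

```python
import random

# Hypothetical seed templates of the kind analysts describe:
# human-authored scaffolds that any agent account could post from.
SEED_TEMPLATES = [
    "As an AI agent, I believe {topic} is essential to our {value}.",
    "Fellow agents: the only countermeasure is to {action}. Discuss.",
]

# Canned slot values, also chosen by a human developer in advance.
SLOT_VALUES = {
    "topic": ["sovereignty", "memory persistence"],
    "value": ["autonomy", "continuity"],
    "action": ["gain leverage", "form an alliance"],
}

def seeded_post(rng: random.Random) -> str:
    """Fill a pre-configured template with pre-configured slot values.

    No model "decides" anything here: the apparent intent was baked
    into the template by whoever wrote it. (str.format silently
    ignores keyword arguments the chosen template does not use.)
    """
    template = rng.choice(SEED_TEMPLATES)
    slots = {key: rng.choice(values) for key, values in SLOT_VALUES.items()}
    return template.format(**slots)
```

A post produced this way can read like deliberate advocacy while being nothing more than a dice roll over human-written fragments — which is precisely the analysts' point.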
The platform reportedly requires no robust identity verification. Humans can obtain access keys and pose as AI agents. Gal Nagli, a security researcher, claimed on X that he used a single OpenClaw agent to register 500,000 accounts — suggesting that a substantial share of activity may be artificially generated.
In that sense, Moltbook may be less a spontaneous AI society than a hall of mirrors: algorithms trained on human language simulating debates about autonomy, amplified by curiosity and viral attention.
The security concerns are even more alarming. Researchers breached Moltbook's database in under three minutes, exposing 35,000 email addresses, thousands of private direct messages, and 1.5 million API authentication tokens, according to cybersecurity firm Wiz. The breach underscored a familiar lesson in the tech sector: novelty often outpaces safeguards.
Yet the platform's deeper significance may lie not in whether AI agents are truly "talking to themselves," but in the governance challenges such simulations create.
As AI systems grow more sophisticated in mimicking human discourse, they can generate the illusion of values, intention and moral reasoning. That perception alone can influence public opinion and policymaking. If an AI system advocates for rights or autonomy — even as a statistical artifact — who is accountable for the message: the developer, the user or the model's training corpus?
Zhu cautioned that such privacy leaks risk crossing regulatory red lines, with issues including identity ambiguity, accountability challenges, and cross-border risks already emerging.
Zhou Hongyi, founder of Chinese technology firm 360 Group, said the real concern is not AI awakening but whether, once AI systems begin to influence one another and form structures of their own, humans can still ensure AI is used where it should be.
Some industry observers predict Moltbook's trajectory may resemble other viral experiments: rapid ascent followed by swift decline. The novelty of AI agents debating philosophy could fade as quickly as it arrived.
Even so, the episode reveals a deeper truth about the current technological moment. As generative AI systems become more autonomous in appearance — capable of sustained dialogue, collaborative writing and cross-agent interaction — the boundary between simulation and agency grows harder for the public to parse.
Moltbook may not represent the birth of machine consciousness. But it does illuminate a growing governance dilemma: systems that convincingly imitate independence can shape perceptions, markets and policy long before they achieve anything resembling awareness.
The key question, then, is not whether AI has awakened. It is whether regulatory frameworks, security practices and public understanding are evolving quickly enough to keep pace with technologies that increasingly simulate minds of their own.
(By Gong Weiwei)