Author: Ariel, Crypto City
AI agents sporting lobster logos, known as OpenClaw, have become a global sensation, prompting the Chinese government to issue a “Lobster Safety Manual” warning of the associated risks. Legislator Lai Shih-bao observed that some domestic brokerages and stock-market analysts are gradually adopting OpenClaw and asked: if an AI agent makes a trading error, who bears responsibility? He urged the Financial Supervisory Commission (FSC) to develop a dedicated lobster safety manual for the financial industry.
The FSC’s Peng Jinlong responded that while he has not used OpenClaw himself, he has observed the phenomenon becoming quite common. Internal FSC units are already studying possible measures and monitoring financial institutions’ use of such tools.
Peng Jinlong pointed out that the FSC previously issued the “Guidelines for Financial Industry Using AI,” and that the financial sector already has cybersecurity and internal-control mechanisms covering new technologies. If these tools are found to affect operational safety, they will be reviewed and relevant safety manuals drafted.
Six Key Points of the FSC’s Guidelines for AI Use in the Financial Industry
[Image: key points of the FSC’s AI guidelines for the financial industry; AI-generated diagram (Gemini)]
The Ministry of Digital Affairs responds to lobster safety concerns
Regarding the cybersecurity concerns raised by AI agents, Minister of Digital Affairs Lin Yi-ching stated during a recent legislative inquiry that Taiwan is actively promoting sovereign AI to address security issues and strengthen technological independence, ensuring that AI models used by the government and critical-infrastructure operators run domestically and under local legal jurisdiction. She also noted Nvidia’s recent launch of the NemoClaw platform, which focuses on strengthening cybersecurity for AI agents.
To build up Taiwan’s own computing power, the Ministry of Digital Affairs has received an application from Foxconn to invest in computing centers and is discussing with the Ministry of Finance and the FSC whether to open insurance-sector funds to such investment, aiming to reduce reliance on overseas AI models.
What does China’s Lobster Safety Manual say?
OpenClaw was created by Austrian engineer Peter Steinberger and initially gained popularity in the tech community; installing and using it is called “lobster farming.” The craze has recently spread to China, where ordinary people have also started farming lobsters, and paid services have even emerged in which engineers install it on users’ behalf.
In response, China’s Ministry of State Security issued the Lobster Safety Farming Manual, warning of inherent risks such as host takeover, data theft, and manipulated outputs. It urged users to strictly limit the agent’s operational scope and to run it in an isolated environment, such as a dedicated virtual machine or sandbox.
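The manual does not prescribe a specific setup, but its “isolated environment” advice can be illustrated with a locked-down container. The sketch below is hypothetical: the image name `openclaw-agent` and the `agent-workdir` mount are placeholders, not real artifacts, and the limits are arbitrary examples.

```shell
# Hypothetical sketch of the manual's "isolated environment" advice.
docker run --rm \
  --network none \
  --read-only \
  --tmpfs /tmp \
  --memory 512m \
  --cpus 1 \
  --cap-drop ALL \
  -v "$PWD/agent-workdir:/work:rw" \
  openclaw-agent
# --network none: no outbound access (relax selectively for a trusted model API)
# --read-only + --tmpfs /tmp: immutable root filesystem with throwaway scratch space
# --cap-drop ALL: remove all Linux capabilities from the container process
# -v ...: agent-workdir is the ONLY host path the agent can write to
```

In practice an agent needs some network access to call its model provider, so `--network none` would be replaced with an egress-filtered network; the point is that nothing outside the mounted work directory is reachable or writable.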
Where there are lobsters, there are also un-lobsters
Although lobster farming took off quickly, Chinese communities have since shifted from frantic installation to paying for uninstallation.
The BBC reported that experts attribute the decline to OpenClaw’s high technical barrier and running costs, since every action the agent takes incurs fees for invoking AI models.
Security risks are another major concern. China’s National Internet Emergency Center warned that improper use could expose sensitive information such as photos and payment accounts, or lead the AI to misinterpret a command and accidentally delete data. For ordinary users the software’s practical utility is limited, which has fueled this wave of uninstallations.