The nationwide "shrimp farming" craze is sweeping the internet, but the banking industry is collectively sitting it out; experts say OpenClaw's high system permissions are inherently at odds with financial compliance requirements.
Financial Daily Reporter: Li Yuwen | Financial Daily Editor: Zhang Yiming
Recently, open-source AI (artificial intelligence) agents like OpenClaw (also known as “Lobster”) have become a hot topic, attracting attention from many industries. However, the banking sector generally remains cautious about this “shrimp farming” trend. A head office official from a joint-stock bank told the Financial Daily that the bank recently received a risk alert from regulators regarding “Lobster.”
However, before OpenClaw gained popularity, the banking industry had already been exploring and applying intelligent agents. Many banks are actively promoting the use of agents in frontline business scenarios to improve operational efficiency.
As a risk-controlled institution, how can banks balance innovation and compliance in the face of the AI technological wave?
Multiple Banks Take a Cautious View of the “Shrimp Farming” Trend
OpenClaw, named for its icon resembling a red lobster, is also called "Lobster," and the process of installing and deploying it has been vividly dubbed "shrimp farming." Unlike purely conversational AI such as ChatGPT, OpenClaw integrates communication software with large language models, enabling it to autonomously perform complex tasks on a user's local computer, such as file management, email handling, and data processing. It effectively acts as a "digital employee" working on the user's behalf, which has drawn many users to experiment with it in practice.
As OpenClaw continues to heat up, security concerns are increasingly drawing public attention. Recently, the Ministry of Industry and Information Technology and the National Internet Emergency Center issued risk alerts, warning users to exercise caution due to potential security risks associated with OpenClaw.
Amid this “shrimp farming” craze, the banking industry remains quite “calm.” Recently, a source from a joint-stock bank’s head office revealed that the bank had received a regulatory risk alert regarding “Lobster.” Another official from a state-owned bank told the Financial Daily that their bank has not yet deployed OpenClaw or arranged for training.
Why are banks cautious about OpenClaw?
“Unlike conversational AI, OpenClaw as an intelligent agent needs access to local files, external APIs, and even system-level permissions. This ‘end-to-end’ automation mechanism can easily trigger cyberattacks and leak core transaction data, which conflicts with the bank’s strict regulatory and zero-tolerance policies,” said Wang Peng, deputy researcher at the Beijing Academy of Social Sciences, in an interview with the Financial Daily on March 16.
Gao Chengfei, general manager of the IP Business Department at Zhanyou Marketing Consulting, shared a similar view: “OpenClaw’s high system permissions are inherently at odds with financial compliance requirements.”
Gao explained that OpenClaw defaults to high permissions such as local file access and API calls. While this can improve office efficiency, multiple medium- and high-risk vulnerabilities have been publicly disclosed. Its plugin functions lack effective security review mechanisms, posing risks of malicious exploitation to steal online banking passwords, payment keys, and other sensitive information. More critically, its autonomous execution capabilities could lead to errors like unauthorized fund transfers or purchasing investment products. Since AI technology still lacks full interpretability, it is difficult to determine responsibility after automated actions. Additionally, data generated during agent operation might be transmitted to third parties, raising compliance risks when involving sensitive information like credit data and loan approval materials.
Therefore, Gao believes that in the short term, OpenClaw is more suitable for small-scale pilots in non-core business scenarios. Large-scale deployment should wait until key issues such as security, clear responsibilities, and algorithm interpretability are resolved.
Wang Peng also pointed out that banks are unlikely to directly adopt open-source OpenClaw but will instead incorporate its technological approach. Future implementations are likely to be “private deployment in restricted environments,” meaning within the bank’s internal network, using self-developed or customized solutions to apply agents in non-core, high-sensitivity scenarios such as office automation and risk control support.
Banking Industry’s AI Agent Exploration Is Underway
It is worth noting that even before OpenClaw’s popularity, the banking industry had already been exploring and applying intelligent agents. Several banks are actively promoting agent-enabled frontline services to enhance operational efficiency.
For example, Nanjing Bank has partnered with Volcano Engine to explore large-scale deployment of intelligent agents in financial scenarios. They have launched a one-stop intelligent agent workstation called HiAgent, which has already implemented over 20 high-quality agents. These are deeply applied in areas such as office work, operations, business development, and risk management.
How effective are these practices? Take one example: corporate relationship managers often spend considerable time gathering pre-visit information across multiple systems and platforms before calling on clients. A "One-Page" pre-visit intelligent agent can automatically integrate data from internal and external sources; perform crawling, cleaning, fusion, and quality checks; and quickly generate a comprehensive, accurate pre-visit analysis report. This cuts preparation time from two hours to under five minutes, making the agent a core tool during peak marketing seasons and other critical periods.
KPMG recently released its "2026 Outlook for China's Banking Industry" report, which noted that KPMG's analysis of public tender information and case studies shows an overall upward trend in banks' large-model projects from January to November 2025, with a small peak in August. From January to June, project content focused mainly on knowledge Q&A, with agent applications appearing only sporadically. Starting in July, the number of agent application projects surged; in October and November in particular, all of the tendered projects were agent-related.
So, how should banks balance innovation and compliance when exploring agent applications?
On March 16, Fu Yifu, a special researcher at Su Commercial Bank, told the Financial Daily that when promoting agent-enabled frontline services, banks need to innovate management mechanisms, test new technologies in controlled environments, and ensure risks are measurable and controllable. They should strengthen data privacy protections and conduct algorithm audits, following the principle of “least privilege” to avoid excessive collection of customer information. Maintaining close communication with regulators and participating in industry standard-setting can help identify compliance red lines early. Additionally, banks should establish manual review processes to double-check key decisions made by agents, preventing automation errors. Embedding compliance requirements throughout the R&D process and cultivating multidisciplinary talent will help banks safely unlock the innovative value of intelligent agents.