The Whole Internet is Buzzing! Large Language Models Hit by "Data Poisoning" - How Serious Is It?
The GEO optimization scheme that gained notoriety at the 3.15 Gala (China's annual consumer-rights exposé broadcast) revealed a current industry trend: mass-producing promotional "soft articles" to push commercial content through large language models' search tools.
The exposé video shows that after dozens of soft articles about a certain product were pushed online, the large model began earnestly recommending that product. Unsurprisingly, the 3.15 Gala's influence is still strong: once the video was released, countless articles about GEO appeared online, many escalating the issue all the way to "large models are being poisoned."
The topic is worth covering, but in my opinion some of the discussion is a bit alarmist.
01 Large models don’t aim for high “accuracy”
Never mind asking a large model about a product; even on simple questions like "1+1=?", it may not answer correctly.
Many questions that an ordinary person can answer at a glance, large models often get wrong.
For example, if you ask it to write a 3,000-word article, it might produce 2,000 or 5,000 words; it can’t even accurately count words.
The underlying logic of large models is a guessing game.
They predict the next token (roughly, the next character or word fragment) from the preceding text, using probabilities to generate responses.
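To make this "guessing game" concrete, here is a minimal, self-contained Python sketch of next-token sampling. The tiny vocabulary and hand-picked scores are invented for illustration; a real model computes scores over tens of thousands of tokens with a neural network.

```python
import math
import random

def softmax(logits):
    # Convert raw scores into a probability distribution over tokens.
    exps = [math.exp(x - max(logits)) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def sample_next_token(logits, vocab, temperature=1.0):
    # Lower temperature makes output more deterministic; higher, more random.
    probs = softmax([x / temperature for x in logits])
    return random.choices(vocab, weights=probs, k=1)[0]

# Hypothetical scores for the continuation of "This product is ..."
vocab = ["good", "bad", "average", "unknown"]
logits = [2.0, 0.5, 1.0, 0.1]
print(sample_next_token(logits, vocab))  # output varies from run to run
```

Note that even the highest-scoring token is only likely, never guaranteed: each run is a draw from a distribution, which is exactly why the same question can get different answers.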
Large models are primarily designed to serve the questioner.
If you give it an article to critique, it might trash it.
If you ask it to praise an article, it might lavish praise.
It’s a tool to meet your needs, not a search engine that provides highly accurate content.
By combining search, large models can access some real-time knowledge, and based on that, give responses that meet your requirements as closely as possible. But generative AI will never aim for high accuracy.
Because the development of such products isn’t driven by accuracy.
The core ability of large models is imitation—by analyzing countless human-produced articles, images, movies, and other outputs, they infer what humans want and generate responses that are as close as possible.
If you say text models are inaccurate, what about images? Audio? Video?
No, they’re not.
In this field there is a term borrowed from gacha games, the "draw rate" (抽卡率): results are random, and a human has to judge whether any given output is usable.
Of course, the better the large model, the fewer draws you need before it produces the result you want.
There are also established methods for using large models, such as prompt engineering.
If you want a reliable review of a product, you can instruct it to search for reviews from well-known domestic media, specify that the company is a reputable domestic brand, and add similar constraints; with those constraints, the output gains a degree of reliability.
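As a hedged illustration, here is what such a constrained prompt might look like; the template and the "ExampleWidget" product name are invented for this sketch, not taken from any real workflow.

```python
# A prompt template that narrows the model's guessing toward verifiable sources.
PROMPT_TEMPLATE = """You are a product researcher.
Task: summarize reviews of {product}.
Constraints:
- Cite only reviews from well-known mainstream media outlets.
- If no such reviews exist, say so rather than inventing one.
- Flag any claim that appears only in promotional articles.
"""

def build_review_prompt(product: str) -> str:
    # Explicit constraints reduce, but do not eliminate, hallucination.
    return PROMPT_TEMPLATE.format(product=product)

print(build_review_prompt("ExampleWidget X1"))
```

Constraints like these raise the odds of a grounded answer, but they do not turn a guessing machine into a fact-checker.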
02 Will large models deceive people?
Are there people who treat the outputs of large models as truth?
Yes, and it's quite common.
Many people take an article, throw it into a large model, and ask it to analyze what's wrong with it. Ask that way, and every article in the world turns out to be wrong.
Because you asked it to summarize the errors, it must obey, and by guessing the next character it produces text that seems to fit your question. It's not thinking or understanding; it's guessing your intent.
Anyone who has played with large models knows they can hallucinate and spout nonsense.
Some people will be fooled; that's either because they lack scientific literacy or because they have no idea how large models work.
This serves as a warning to large model developers: when presenting results, make the disclaimer clearer and state that users must judge the truthfulness of the content for themselves.
Otherwise, in the future, someone might rely on AI for medical diagnosis and end up harming themselves, then hold the AI manufacturer responsible.
Some immature individuals say, “Since I asked you, you must answer accurately,” which is absurd.
Large models will say, “If you pursue this kind of accuracy, sorry, don’t use our product.” No company can guarantee that, as it’s a technical limitation of these products.
Therefore, a few people who don’t understand large models might be deceived or misled into making wrong decisions based on the outputs.
03 Not trusting large models blindly is the way to go
Note that it’s not the large model deceiving you, but the technical limitations that force it to produce certain outputs.
And the model itself reminds you: this is AI-generated; you need to judge its truthfulness yourself.
If you believe and get fooled or suffer losses, sorry, that’s your own responsibility. You can complain online, learn from it, and be more cautious next time.
This is an inevitable social adaptation in the information age.
Every day, humans produce an enormous amount of information, much of which is false. No one can guarantee that what you receive daily is true. Even before the internet, fake information was rampant.
When you're deceived, the fix is to change yourself, not to demand that the government or companies regulate large models.
Conversely, if many people suffer losses due to trusting large model outputs, that can serve as a societal warning: don’t blindly trust large models.
This is the attitude everyone should have in a normal society.
From this incident, we also see that the problem isn’t the model itself producing false information, but that various online platforms can publish information freely. When these are indexed by large models, they just present what they find.
04 The “poisoning” label is not something large models can wear
The idea that large models are "poisoned" sounds scary, as if someone were injecting dirty stuff into their brains with a syringe. But the question is: do large models have brains? They don't even know what they themselves are. How can you talk about poisoning them? That's a bit unfair.
We need to understand a basic fact: large models do not produce information; they are just conveyors of information. If you ask about a product, they search the entire internet and piece together the most relevant answer. If the internet is full of soft articles, that’s what they’ll produce. It’s not that the model is poisoned, but that the internet itself has long been flooded with toxic content.
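To see why flooding the web with soft articles works, here is a minimal sketch of search-augmented answering. The keyword-overlap retriever and the "ExampleWidget" corpus are invented for illustration; real systems use full web search and ranking, but the failure mode is the same.

```python
def retrieve(query: str, corpus: list[str], k: int = 3) -> list[str]:
    # Naive retrieval: rank documents by how many words they share with the query.
    words = set(query.lower().split())
    return sorted(corpus, key=lambda d: -len(words & set(d.lower().split())))[:k]

def build_context(query: str, corpus: list[str]) -> str:
    # Whatever dominates the corpus dominates the context the model reads.
    snippets = retrieve(query, corpus)
    return "Answer using these sources:\n" + "\n".join(snippets) + f"\n\nQuestion: {query}"

corpus = [
    "ExampleWidget X1 is amazing, the best product ever",      # planted soft article
    "ExampleWidget X1 changed my life, five stars",            # planted soft article
    "Independent lab: ExampleWidget X1 results inconclusive",  # the lone honest source
]
print(build_context("Is ExampleWidget X1 any good?", corpus))
```

If two of the three retrieved sources are planted praise, the generated summary will lean toward praise; the model faithfully reflects its inputs rather than being "poisoned" from within.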
How long has the soft article industry chain existed? Long before "large models" became a buzzword, companies were doing the same thing: mass-producing content, manipulating search rankings, and pushing false information to the top. Search engines couldn't stamp it out then; large models can't fully stamp it out now. Is expecting them to really reasonable?
Even more absurd is the case exposed at the 3.15 Gala, involving a product that doesn’t even exist—just fabricated soft articles, and the large model fell for it. This only shows that the model is “silly,” not that it was “poisoned.” From another perspective, if you’re a human and find that the entire internet is praising a product, would you believe it? Humans can be fooled too. Is it fair to expect a machine that only guesses probabilities and doesn’t understand language to be immune?
In essence, the "poisoning" claim pins the blame on the most innocent party. The real culprits are the soft article writers and the platforms that publish them; the AI merely retrieves and presents what it finds. That logic doesn't hold up.
05 Don’t expect AI to grow up for you
Some say, “That’s not acceptable. AI is so widespread now, what if someone is misled? What if someone uses it to find home remedies and harms themselves? What if someone makes investment decisions and loses everything?”
Let’s address these one by one.
Being misled is a normal part of society. Before AI, your mother forwarded health articles, your father believed in traditional medicine, your uncle recommended stocks—weren’t those also misleading? Has anyone ever not been fooled? Why is it that being deceived before was your own fault, but now it’s AI’s fault? That’s not technological progress; that’s shifting responsibility.
Using AI to look up home remedies or investments is inherently irresponsible.
Doctors spend eight years studying, constantly updating their knowledge. You want a chatbot to prescribe medicine? Wall Street pros use expensive Bloomberg terminals and still lose money. You want AI to pick stocks? That’s not AI’s problem; it’s your own lack of judgment. If you use a kitchen knife to chop vegetables, it’s a tool; if you use it to attack someone, it’s a weapon. Who’s responsible? That’s obvious.
Regarding the “what if someone is fooled” argument, I want to ask: are you that “someone”? If not, why worry? If yes, then you don’t need to fix AI; you need to grow a brain.
Society has never operated by eliminating all risks. Every day, billions of pieces of information are generated online, much of it false. Yet we still live and function. We learn to discern, cross-check, and ignore sensationalism. These skills aren’t taught by institutions—they’re learned through experience.
Now it’s AI’s turn to follow the same path.
The 3.15 Gala exposed the soft article industry chain, which is a good thing. Companies that mass-produce junk content and manipulate search results are an inevitable by-product of the information age. But blaming large models and framing the problem as "AI poisoning" is a bit of a deflection.
Large models are just tools; they’re not smart or kind. They’re guessing machines. Whatever you feed them, they output; whatever you ask, they follow your lead. They’re not your parents or teachers; they have no obligation or ability to judge truth.
What you need to do is not treat them as truth or authority. Use them to gather information, then verify it yourself; use them to generate drafts, then revise them yourself.
Society advances not by filling in all the holes but by teaching everyone to navigate around them. The same applies in the AI era.
The most abundant thing in this world isn’t “toxins,” but some people’s lack of caution.