Artificial Intelligence (AI) is undoubtedly the hottest technological trend globally, and AI technology is reshaping various industries at an unprecedented speed. However, behind the prosperity and noise lies a harsh reality: the vast majority of AI businesses, especially startups, have not found a stable and sustainable path to profitability. They are caught in the dilemma of being well-received but failing to convert that into revenue, with technological prosperity coexisting with commercial losses.
1. Why “Losing Money to Win Applause”?
The profitability dilemma of the AI business does not stem from a failure of the technology itself, but from structural contradictions created by the centralized development model. These can be summarized as three main causes:
Extreme Centralization: Sky-high Costs and Oligopoly. The current mainstream AI, especially large models, is a typical “heavy asset” industry. Its training and inference processes require a massive amount of computing power (GPU), storage, and electricity. This has led to polarization: on one end are the tech giants with substantial capital (such as Google, Microsoft, OpenAI), able to afford investments of hundreds of millions or even billions of dollars; on the other end are numerous startups, which have to “tribute” the vast majority of their funding to cloud service providers to obtain computing power, with profit margins severely squeezed. This model has formed a “computing power oligopoly,” stifling innovative vitality. For instance, even OpenAI, in its early development stages, heavily relied on Microsoft's huge investments and Azure cloud computing resources to support the R&D and operation of ChatGPT. For the vast majority of players, the high fixed costs make it difficult to achieve scale profitability.
Data Dilemma: Quality Barriers and Privacy Risks. The fuel for AI is data. Centralized AI companies often face two major challenges in obtaining high-quality, large-scale training data. First, the cost of acquiring data is exorbitant. Whether through paid collection, data labeling, or utilizing user data, it involves significant financial and time investment. Second, the risks of data privacy and compliance are enormous. With the tightening of global data regulations (such as GDPR and CCPA), collecting and using data without explicit user consent could lead to legal lawsuits and hefty fines at any moment. For example, several well-known tech companies have faced astronomical fines due to data usage issues. This creates a paradox: without data, AI cannot develop, but acquiring and using data is fraught with difficulties.
Imbalance in Value Distribution: Contributors and Creators are Excluded from Profits. In the current AI ecosystem, value distribution is extremely unfair. The training of AI models relies on the behavioral data generated by countless users, the content produced by creators (text, images, code, etc.), and the open-source code contributed by developers worldwide. However, these core contributors are almost unable to receive any returns from the enormous commercial value created by AI models. This is not only an ethical issue but also an unsustainable business model. It dampens the enthusiasm of data contributors and content creators, and in the long run, it will undermine the foundations of continuous optimization and innovation of AI models. A typical case is that many artists and writers accuse AI companies of using their works for training and profiting without providing any compensation, which has sparked widespread controversy and legal disputes.
2. New Profit Paradigm
DeAI (Decentralized AI) is not a single technology but a new paradigm that integrates blockchain, cryptography, and distributed computing. It aims to rebuild the production relations of AI in a decentralized way, directly addressing the three pain points above and opening new paths to profitability.
DeAI utilizes a “crowdsourcing” model to distribute computing power demands across idle nodes worldwide (personal computers, data centers, etc.). This is similar to “Airbnb for GPU,” creating a global, competitive computing power market that can significantly reduce computing costs. Participants earn token incentives by contributing computing power, achieving optimized resource allocation.
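The economics of such a compute marketplace can be sketched in a few lines: jobs are filled from the cheapest idle capacity first, and each provider accrues token rewards for the GPU-hours it actually supplies. This is a minimal illustrative sketch; the provider names, prices, and reward logic are all hypothetical, not any specific network's protocol.

```python
from dataclasses import dataclass

@dataclass
class Provider:
    name: str
    price_per_gpu_hour: float  # tokens the provider asks per GPU-hour
    gpu_hours_free: float      # idle capacity offered to the network
    earned: float = 0.0        # token rewards accumulated

def match_job(gpu_hours_needed, providers):
    """Fill a job from the cheapest offers first; return the total token cost."""
    cost = 0.0
    remaining = gpu_hours_needed
    for p in sorted(providers, key=lambda p: p.price_per_gpu_hour):
        if remaining <= 0:
            break
        take = min(remaining, p.gpu_hours_free)
        p.gpu_hours_free -= take
        p.earned += take * p.price_per_gpu_hour  # contributor is paid pro rata
        cost += take * p.price_per_gpu_hour
        remaining -= take
    if remaining > 0:
        raise RuntimeError("not enough capacity in the market")
    return cost

providers = [
    Provider("home-rig", price_per_gpu_hour=1.0, gpu_hours_free=4),
    Provider("small-dc", price_per_gpu_hour=2.5, gpu_hours_free=100),
]
cost = match_job(10, providers)  # 4 h at 1.0 + 6 h at 2.5 = 19.0 tokens
```

Because buyers always drain the cheapest offers first, idle hobbyist hardware undercuts data-center rates, which is exactly the cost-compression effect the “Airbnb for GPU” analogy describes.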
DeAI achieves “data immobility, model mobility” through technologies such as “federated learning” and “homomorphic encryption”. It does not require the original data to be centralized in one place; instead, it distributes the model to various data sources for local training, only aggregating encrypted parameter updates. This fundamentally protects data privacy while legally and compliantly utilizing the value of decentralized data. Data owners can independently decide whether to provide data and profit from it.
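The “data stays put, the model moves” idea is concrete in federated averaging: each owner computes an update on its private data, and only the updates are aggregated. Below is a toy pure-Python sketch fitting y = w·x by gradient descent across two data owners; the data, learning rate, and round count are illustrative (and real systems would additionally encrypt or securely aggregate the updates, which is omitted here).

```python
def local_update(w, data, lr=0.01):
    """One gradient step on a node's private (x, y) pairs; returns only the delta."""
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return -lr * grad  # this update leaves the node, the raw `data` never does

def federated_round(w, nodes):
    """Average the local updates (FedAvg with equal node weights)."""
    updates = [local_update(w, data) for data in nodes]
    return w + sum(updates) / len(updates)

# Two owners whose private datasets both follow y = 3x.
nodes = [[(1, 3), (2, 6)], [(3, 9), (4, 12)]]
w = 0.0
for _ in range(200):
    w = federated_round(w, nodes)
# w converges toward 3 although no node ever revealed its raw data
```

The coordinator sees only scalar deltas, never the (x, y) pairs, which is the property that lets decentralized data be used while its owner retains custody.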
DeAI has constructed a transparent and fair value distribution system through “token economics” and “smart contracts.” Data contributors, computing power providers, model developers, and even model users can automatically receive corresponding token rewards based on their contribution through smart contracts. This transforms AI from a “black box” controlled by giants into an open economy co-built, co-governed, and co-shared by the community.
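The distribution rule a revenue-sharing smart contract would encode is simple pro-rata splitting: each payment for using the model is divided among contributors in proportion to their recorded contribution scores. A minimal sketch, with hypothetical contributor roles and weights:

```python
def distribute(payment, contributions):
    """Split `payment` across contributors in proportion to their scores."""
    total = sum(contributions.values())
    return {who: payment * score / total for who, score in contributions.items()}

# Illustrative contribution ledger; real systems would derive these scores
# on-chain from metered usage, not hard-code them.
contributions = {
    "data_provider": 50,    # e.g., labeled training examples supplied
    "gpu_provider": 30,     # e.g., GPU-hours contributed
    "model_developer": 20,  # e.g., protocol-assigned developer share
}
payout = distribute(100.0, contributions)
# {'data_provider': 50.0, 'gpu_provider': 30.0, 'model_developer': 20.0}
```

Because the rule is deterministic and the ledger is public, any participant can audit that the split matches recorded contributions, which is what turns the “black box” into a shared economy.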
3. Three-Tier Architecture Transformation
Migrating traditional centralized AI businesses to the DeAI paradigm requires systematic restructuring at the technical, business, and governance levels.
(1) Technical Reconstruction from Centralized to Distributed
The computing power layer relies on decentralized physical infrastructure network (DePIN) projects, such as Akash Network and Render Network, to build an elastic, low-cost distributed computing pool that replaces traditional centralized cloud services.
The data layer uses federated learning as the core training framework, combined with cryptographic techniques such as homomorphic encryption and secure multi-party computation to protect data privacy and security. A blockchain-based data marketplace, such as Ocean Protocol, enables data transactions once ownership rights are confirmed and security is ensured.
The model layer deploys trained AI models on the blockchain as “AI smart contracts,” making them transparent, verifiable, and permissionlessly callable. Each model call and the revenue it generates can be precisely recorded and distributed.
(2) Business Reconstruction from Selling Services to Ecological Co-construction
From SaaS to DaaS (Data as a Service) and MaaS (Model as a Service): enterprises no longer simply sell API calls; they act as ecosystem builders, issuing utility or governance tokens to incentivize the community to help build the network. Revenue expands from a single service fee to token appreciation, transaction-fee dividends, and other gains from the growth of ecosystem value.
Enterprises can also build a decentralized task platform that publishes data labeling, model fine-tuning, and scenario-specific application development as “bounties,” letting community members worldwide take on tasks and earn rewards. This significantly reduces operating costs while stimulating innovation.
(3) Governance Reconstruction from Corporate Structure to DAO
Community governance: participants (contributors and users) who hold governance tokens vote on key decisions, such as model parameter adjustments, the use of treasury funds, and the priority of new features. This realizes “users as owners” in the true sense.
Openness and transparency: all code, models (partially open source), transaction records, and governance decisions are recorded on the blockchain, making the process public and verifiable. This establishes trustless collaboration, which is itself a powerful brand asset and an endorsement of trust.
Consider the transformation of a traditional logistics data platform to DeAI. The platform's dilemma is that although it aggregates data from maritime shipping, land transport, warehousing, and other sources, participants are unwilling to share for fear of leaking trade secrets, leaving the data siloed and the platform's value limited. The core of the DeAI transformation is to release the value of the data, and reward contributors fairly, without exposing the raw data:
Technically, build a trusted computing network. The platform no longer stores data centrally but becomes a blockchain-based coordination layer. Using techniques such as federated learning, AI models are dispatched to each enterprise's local servers (shipping companies, warehouses, and so on) for training, and only encrypted parameter updates are aggregated to jointly optimize shared prediction models (e.g., vessel arrival times, warehouse overflow risk), achieving “the data does not move, the value does.”
In business, promote data assetization and token incentives. Issue platform utility points that logistics companies “mine” by contributing data (model parameter updates). Downstream customers (such as cargo owners) pay tokens to query high-precision forecasts (for example, the on-time rate for a given route over the next week) rather than buying raw data, and earnings are automatically distributed to data contributors through smart contracts.
In governance, build an industry DAO. Key decisions (such as new feature development and fee adjustments) are voted on jointly by token holders, i.e., the core participants, transforming the platform from one led by a private company into one led by the industry community.
The platform thus transforms from a centralized institution extracting data-intermediary fees into the shared nervous system of the entire logistics chain, co-built, co-governed, and co-owned, greatly improving industry-wide collaboration efficiency and resilience by solving the trust problem.
4. Compliance and Security
Despite the broad prospects of DeAI, its development is still in the early stages and faces a series of challenges that cannot be ignored.
Compliance and Legal Uncertainty. In terms of data regulations, even if data does not move, models such as federated learning still need to strictly adhere to the requirements related to “purpose limitation,” “data minimization,” and user rights (such as the right to be forgotten) in regulations like GDPR when processing personal data. Project teams must design compliant data authorization and opt-out mechanisms.
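One piece of the required compliance machinery, a consent and opt-out gate in front of each federated training round, can be sketched as follows. The registry, owners, and purpose labels are illustrative; this is a logic sketch, not a GDPR-compliant implementation.

```python
class ConsentRegistry:
    """Tracks which data owners have active consent for which purposes."""

    def __init__(self):
        self._grants = {}  # owner -> set of consented purposes

    def grant(self, owner, purpose):
        self._grants.setdefault(owner, set()).add(purpose)

    def revoke(self, owner, purpose):
        # Opt-out / erasure request: the owner drops out of future rounds.
        self._grants.get(owner, set()).discard(purpose)

    def allowed(self, owner, purpose):
        return purpose in self._grants.get(owner, set())

def eligible_nodes(nodes, registry, purpose):
    """Filter training participants down to those with active consent."""
    return [n for n in nodes if registry.allowed(n, purpose)]

registry = ConsentRegistry()
registry.grant("alice", "model_training")
registry.grant("bob", "model_training")
registry.revoke("bob", "model_training")  # bob exercises his opt-out
participants = eligible_nodes(["alice", "bob"], registry, "model_training")
# participants == ["alice"]
```

Checking consent per purpose (not just per owner) mirrors the “purpose limitation” requirement: consent for model training does not imply consent for any other processing.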
Regarding securities regulations, the tokens issued by the project are easily recognized as securities by regulatory agencies in various countries (such as the SEC in the United States), thus facing strict regulatory scrutiny. How to avoid legal risks when designing the token economic model is key to the survival of the project.
In terms of content responsibility, if a DeAI model deployed on the chain generates harmful, biased, or illegal content, who is the responsible party? Is it the model developer, the computing power provider, or the governance token holders? This presents new challenges for the existing legal system.
On security, models deployed on public chains face new attack vectors, such as exploits of smart contract vulnerabilities or data-poisoning attacks against federated learning systems.
Performance bottlenecks refer to the transaction speed (TPS) and storage limitations of the blockchain itself, which may not support high-frequency, low-latency large model inference requests. This requires an effective combination of Layer 2 scaling solutions and off-chain computation.
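The usual workaround for this bottleneck is “compute off-chain, settle on-chain”: inference runs off-chain for speed, and only a compact commitment (a hash of the request and result) would be recorded on-chain so the interaction stays verifiable. A minimal sketch, with a stand-in for the model call and hypothetical record fields:

```python
import hashlib
import json

def run_inference(prompt):
    # Stand-in for a real model call executed off-chain.
    return f"answer to: {prompt}"

def commit(record):
    """Hash a request/result pair; only this digest would go on-chain."""
    blob = json.dumps(record, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

record = {"prompt": "ETA for route A?", "result": run_inference("ETA for route A?")}
digest = commit(record)
# Anyone holding `record` can recompute the digest and check it on-chain.
```

The chain never sees the full payload, so throughput is bounded by off-chain capacity rather than block space, while any party holding the record can still prove what was computed.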
On collaboration efficiency, distributed collaboration is fair, but decision-making and execution may be slower than in a centralized company. Finding the balance between efficiency and fairness is an ongoing exploration in DAO governance.
DeAI, as a revolution in production relations, is expected to break the monopoly of giants and unleash the idle computing power and data value globally through distributed technology, token economy, and community governance, thereby building a new AI ecosystem that is fairer, more sustainable, and potentially more profitable.
5. Current Exploration Directions
The current generation of AI tools is still far from the ideal of decentralized artificial intelligence; we remain in an early stage dominated by centralized services. Even so, the industry is already making valuable attempts that point to the path ahead and the obstacles that must be overcome.
Prototypes of multi-agent collaboration. Some projects are exploring environments where AI agents can collaborate and co-evolve. For example, the AMMO project aims to create a “symbiotic network of humans and AI”; its multi-agent framework and RL Gyms simulation environment let AI agents learn collaboration and competition in complex scenarios. This can be seen as an attempt to establish the basic interaction rules of a DeAI world.
Early incentive-model experiments. In the DeAI vision, users who contribute data and nodes that provide computing power should be fairly compensated. Some projects are trying to redistribute value directly to ecosystem contributors through cryptography-based incentive systems. How such economic models can operate at scale, stably and fairly, remains a major open challenge.
Towards more autonomous AI. Deep Research products demonstrate strong AI autonomy on specific tasks (such as information retrieval and analysis): they can plan autonomously, execute multi-step operations, and iteratively refine results. This task-automation capability is the foundation for AI agents working independently in a future DeAI network.
For AI practitioners struggling in the red ocean, it is better to embrace the new blue ocean of DeAI than to stay trapped in the old paradigm. This is not only a shift in technical direction but a reshaping of business philosophy: from “extraction” to “incentivization,” from “closed” to “open,” and from “monopoly profits” to “inclusive growth.”
Is the AI business not profitable? The dawn of DeAI is already visible.
Author: Zhang Feng