What are the specific trends and opportunities of AI+Web3, one of the hot spots in the next round of bull market?

Author: Lao Bai, Partner at ABCEDE Investment Research
AI, the hottest topic of the moment, is regarded as the core of the fourth industrial revolution; Web3, another hot concept in tech, is regarded as the core of the next-generation Internet.
AI and Web3 are thus two concepts each expected to set off a wave of technological revolution. If they can be combined, what kind of "surprise" might they bring us?
01 Let's talk about AI itself first
The AI industry was actually about to go cold. Everyone knows the founder of Near, "Yilong" (Illia Polosukhin), right? He actually used to work in AI and was a main code contributor to TensorFlow (the most popular machine-learning framework). The speculation was that he moved to Web3 precisely because AI (machine learning before large models) looked hopeless.
**As a result, the industry finally ushered in ChatGPT (GPT-3.5) at the end of last year and suddenly came alive again, because this time it really counts as a qualitative change, rather than the hype and merely quantitative change of previous waves.** Within a few months, the wave of AI entrepreneurship spread to our Web3 as well. Silicon Valley's Web2 side is even more intense: capital is in full FOMO, homogeneous products are starting price wars, and the big companies' large models are racing against one another…
However, note that after more than half a year of explosive growth, AI has entered a relative bottleneck period. For example, Google search interest in AI has fallen off a cliff, ChatGPT user growth has slowed sharply, and the randomness of AI output limits many real-world use cases… All in all, we are still very, very far from the legendary AGI (Artificial General Intelligence).
At present, the Silicon Valley venture capital circle has several judgments about where AI goes next:
There are no vertical models, only large models + vertical applications (we will come back to this when we talk about Web3+AI)
Data on edge devices such as mobile phones may become a moat, and AI running on edge devices may also be an opportunity
Context length may bring qualitative change in the future (vector databases currently serve as AI memory, but context length is still not enough)
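The last judgment above, vector databases as AI memory, can be sketched as a toy retrieval loop: store past conversation snippets as embedding vectors, and before each model call pull the few most similar ones back into the limited context window. This is a minimal sketch under assumptions; the hand-made vectors stand in for what a real embedding model would produce.

```python
import math

def cosine(a, b):
    # cosine similarity between two equal-length vectors
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

class VectorMemory:
    """Toy long-term memory: store (text, vector), retrieve top-k by similarity."""
    def __init__(self):
        self.items = []  # list of (text, vector)

    def add(self, text, vector):
        self.items.append((text, vector))

    def retrieve(self, query_vector, k=2):
        # rank all stored snippets by similarity to the query, return the top k
        ranked = sorted(self.items, key=lambda it: cosine(it[1], query_vector), reverse=True)
        return [text for text, _ in ranked[:k]]

# In practice the vectors come from an embedding model; hand-made ones here.
mem = VectorMemory()
mem.add("user prefers a London accent", [1.0, 0.0, 0.0])
mem.add("user is learning English",     [0.9, 0.1, 0.0])
mem.add("user asked about gas fees",    [0.0, 0.0, 1.0])

# query vector close to the accent/learning topic
context = mem.retrieve([1.0, 0.05, 0.0], k=2)
```

The retrieved snippets would then be prepended to the prompt, which is exactly why context length caps how much "memory" can actually be used per call.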
02 Web3+AI
AI and Web3 are actually two completely different fields: AI needs centralized computing power plus massive data for training and is highly centralized, while Web3 emphasizes decentralization, so the two are not easy to combine. But the narrative that AI changes productivity while blockchain changes production relations is so deeply rooted that there will always be people searching for the joint point. In the past two months alone, we have talked to no fewer than 10 AI projects.
Before discussing the new combined tracks, let's talk about the old AI+Web3 projects, which are basically platform types, represented by FET and AGIX. How should I put it... a friend in China who specializes in AI told me: "Those who did AI in the past are basically useless now. Whether in Web2 or Web3, much of that experience is a burden rather than an asset. The direction and the future are Transformer-based large models like OpenAI's; the large model saved AI." You can savor that yourself.
So the general-purpose platform type is not the Web3+AI model he is optimistic about, and none of the 10-plus projects I talked to were of that kind. What I have seen so far basically falls into the following tracks:
Assetization of Bot/Agent/Assistant models
Computing power platforms
Data platforms
Generative AI
DeFi trading/audit/yield/risk control
ZKML
1. Assetization of Bot/Agent/Assistant models
**The assetization of Bot/Agent/Assistant models is the most talked-about track, and also the most homogeneous one.** Simply put, most of these projects use OpenAI as the bottom layer, combine it with other open-source or self-developed techniques such as TTS (text-to-speech), add domain-specific data, and fine-tune a bot that does better than ChatGPT in some specific field.
For example, you can train a virtual teacher who teaches you English, choose whether she has an American or a London accent, and adjust her personality and chatting style, so the interactive experience beats ChatGPT's more mechanical, official answers. In the circle right now there is a virtual-boyfriend DApp, and a Web3 female-oriented game called HIM, which can be regarded as representatives of this type.
**Starting from this idea, you can in theory have many bots/agents serving you.** If you want to cook boiled fish, a cooking bot fine-tuned on that domain can teach you, with more professional answers than ChatGPT. If you want to travel, a travel-assistant bot can provide suggestions and itineraries. If you are a project team, you can set up a Discord customer-service bot to answer community questions.
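The "persona on top of a general model" pattern these bots follow can be sketched without any real API: wrap the base model with a persona system prompt plus a few domain examples, then append the user message. The function and field names here are illustrative assumptions, not any specific project's or vendor's API.

```python
def build_persona_request(persona, examples, history, user_msg):
    """Assemble a chat payload: persona as system prompt, few-shot domain
    examples, then the running conversation. This thin layer is most of what
    a 'vertical bot' adds on top of a general-purpose model."""
    messages = [{"role": "system", "content": persona}]
    for question, answer in examples:          # few-shot domain examples
        messages.append({"role": "user", "content": question})
        messages.append({"role": "assistant", "content": answer})
    messages.extend(history)                   # prior turns of this chat
    messages.append({"role": "user", "content": user_msg})
    return messages

persona = ("You are an English teacher with a London accent, "
           "warm and informal, correcting mistakes gently.")
examples = [("How I say 'hello' formal?", "You'd say: 'How do you do?'")]
request = build_persona_request(persona, examples, [], "Teach me a greeting.")
```

The resulting message list would then be sent to whatever chat-completion endpoint the project uses; swapping the persona string is what turns the same base model into a cooking bot or a travel assistant.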
**Beyond these "GPT-based vertical application" bots, there are derivative projects built on the same idea, such as treating the bot itself as "model assetization".** It is a bit like NFTs capitalizing small pictures: can the prompts now so popular in AI also be capitalized? Different prompts in Midjourney generate different pictures, and different prompts produce different effects when training bots, so the prompt itself has value and can be turned into an asset.
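The "prompt as asset" idea can be sketched as a minimal NFT-style registry: the asset ID is a hash of the prompt text, with an owner and a transfer rule. This is a Python analogy of what an ERC-721-like contract would do, not real chain code; all names here are hypothetical.

```python
import hashlib

class PromptRegistry:
    """Toy NFT-style registry: prompt text -> hashed token id -> owner."""
    def __init__(self):
        self.owners = {}   # token_id -> owner address
        self.prompts = {}  # token_id -> prompt text

    def mint(self, prompt_text, owner):
        # derive a deterministic token id from the prompt content
        token_id = hashlib.sha256(prompt_text.encode()).hexdigest()[:16]
        if token_id in self.owners:
            raise ValueError("prompt already minted")
        self.owners[token_id] = owner
        self.prompts[token_id] = prompt_text
        return token_id

    def transfer(self, token_id, sender, receiver):
        # only the current owner may transfer, mirroring ERC-721 semantics
        if self.owners.get(token_id) != sender:
            raise PermissionError("only the owner can transfer")
        self.owners[token_id] = receiver

reg = PromptRegistry()
tid = reg.mint("cinematic portrait, 85mm, golden hour", "0xAlice")
reg.transfer(tid, "0xAlice", "0xBob")
```

Content-hashing the prompt also gives uniqueness for free: the same prompt text cannot be minted twice, which is one answer to the scarcity question raised above.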
There are also portal, indexing and search projects built on top of such bots. When there are thousands of bots, how do you find the one that suits you best? Perhaps a portal like Hao123 in the Web2 world, or a search engine like Google, will be needed to help you "locate" it.
In my personal opinion, bot (model) assetization at this stage has two disadvantages and two directions:
1) Disadvantages
Disadvantage 1 - Homogenization is too serious, because this is the easiest AI+Web3 track for users to understand, and it looks a bit like an NFT with a small utility attribute. The primary market is already turning into a red sea and getting crowded, yet the bottom layer is all OpenAI, so nobody really has a technical moat and teams can only compete on design and operations;
Disadvantage 2 - It is a bit like putting a Starbucks membership card on-chain as an NFT: a good attempt at breaking out of the circle, but for most users probably less convenient than a physical or electronic card. Web3-based bots have the same problem. If I want to learn English with a bot, or chat with Musk or Socrates, why not just use a Web2 product directly?
2) Direction
Direction 1 - Near-to-mid-term: putting the model itself on-chain may be an idea. At present these models resemble small-image ETH NFTs: the metadata mostly points to off-chain servers or IPFS rather than living purely on-chain, since models usually run from tens to hundreds of megabytes and have to sit on a server.
However, with storage prices falling fast (a 2TB SSD for about 500 RMB) and storage projects such as Filecoin FVM and ETH Storage advancing, I believe putting a 100-megabyte model fully on-chain should not be difficult within the next two or three years.
You may ask what the benefit of going on-chain is. Once the model is on-chain, it can be called directly by other contracts. It is more crypto-native, and there will surely be more ways to play with it; it has the feel of a fully on-chain game, because all the data is native to the chain. Some teams are already exploring this area, though it is still at a very early stage.
Direction 2 - Mid-to-long-term: if you think about smart contracts seriously, what suits them best is not human-to-machine interaction but "machine-to-machine interaction". AI now has the AutoGPT concept: a "virtual avatar" or "virtual assistant" that can not only chat with you but also execute tasks on your behalf, such as booking flights and hotels, buying domain names, and building websites…
Which do you think is easier for an AI assistant: operating your various bank accounts, Alipay and so on, or transferring funds between blockchain addresses? The answer is obvious. So in the future, will there be swarms of AutoGPT-style assistants that automatically carry out C2C, B2C and even B2B payment and settlement through blockchains and smart contracts across all kinds of task scenarios? At that point the boundary between Web2 and Web3 becomes very blurred.
2. Computing power platforms
Computing power platform projects are not as numerous as bot/model-assetization ones, but they are relatively easier to understand. Everyone knows AI needs a lot of computing power, and over the past 10 years BTC and ETH have proved that there is a way to **spontaneously organize and coordinate massive computing power, in a decentralized environment of economic incentives and games, to cooperate and compete on one task**. That approach can now be applied to AI.
The two most famous projects in this space are undoubtedly Together and Gensyn. One raised a seed round in the tens of millions; the other raised a 43 million Series A. The reason they need so much money, reportedly, is that they first use the capital and computing power to train their own models, and then turn the setup into a computing power platform that other AI projects can use for training.
Platforms that focus on inference raise far less, because in essence they aggregate idle GPU computing power and provide it to AI projects that need inference. RNDR aggregates rendering compute; these platforms aggregate inference compute. But the technical moat is rather vague at the moment, and I even wonder whether one day RNDR or a Web3 cloud-computing platform will extend into the inference compute business as well.
The computing power platform direction is more realistic and predictable than model assetization. Demand will basically be there, leaving room for one or two top projects; it just depends on who breaks out. The only current uncertainty is whether training and inference will each have their own leader, or one leader will cover both.
3. Data Platform
This is actually not hard to understand, **because the bottom layer of AI boils down to three things: algorithms (models), computing power, and data.**
Since algorithms and computing power have their "decentralized versions", data will certainly not be absent. This is also the direction Dr. Lu Qi, founder of Qiji Chuangtan, is most optimistic about when he talks about AI and Web3.
Web3 has always emphasized data privacy and sovereignty, and technologies such as ZK can guarantee data reliability and integrity, so AI trained on Web3 on-chain data is bound to differ from AI trained on Web2 off-chain data. This line therefore makes sense as a whole. Within the circle, Ocean should count as this track, and in the primary market there are also projects such as dedicated AI data marketplaces built on Ocean.
4. Generative AI
**Simply put, this means using AI to draw pictures, or do similar creative work, in service of other scenarios**, such as NFTs, in-game map generation, NPC background generation, and so on. I feel the NFT line is harder, because AI-generated content lacks scarcity. GameFi is one path, and there are teams trying it in the primary market.
However, a few days ago I saw news that **Unity (which, together with Unreal Engine, has dominated the game-engine market for years) has released its own AI tools, Sentis and Muse**, currently in beta and expected to launch officially next year. How should I put it... I feel that game AIGC projects in the Web3 circle may get steamrolled by Unity.
5. DeFi Transaction/Audit/Yield/Risk Control
Projects are trying each of these categories, and homogenization is relatively less obvious here.
1) DeFi trading - This one is a bit awkward: if a trading strategy works well, it gradually stops working as more people use it, and you have to switch to a new one. So I am curious about the future win rate of AI trading bots, and where they will rank among ordinary traders.
2) Audit - At a glance, AI should be able to quickly review and handle known, common vulnerabilities, but novel vulnerabilities or logic bugs that have never appeared before probably will not work; that may only become possible in the AGI era.
3) Yield and risk control - Yield is easy to understand: imagine a YFI with AI intelligence. You throw money at it, and the AI finds platforms to stake on, forms LPs, and farms according to your risk preference. As for risk control, a standalone project feels strange; it makes more sense as a plug-in serving lending and similar DeFi platforms.
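The "AI-powered YFI" described above is, at minimum, a constrained optimization: among candidate pools, pick the best expected yield whose risk fits the user's tolerance. A real product would estimate APY and risk with models; in this sketch both are given as assumed inputs.

```python
def pick_pool(pools, risk_tolerance):
    """Choose the highest-APY pool whose risk score is within tolerance.
    pools: list of dicts with 'name', 'apy' (fraction), 'risk' (0..1)."""
    eligible = [p for p in pools if p["risk"] <= risk_tolerance]
    if not eligible:
        return None  # nothing fits this user's risk preference
    return max(eligible, key=lambda p: p["apy"])

# illustrative pools with hand-assigned yields and risk scores
pools = [
    {"name": "stable-lending", "apy": 0.04, "risk": 0.1},
    {"name": "blue-chip LP",   "apy": 0.11, "risk": 0.4},
    {"name": "degen farm",     "apy": 0.90, "risk": 0.9},
]
conservative = pick_pool(pools, risk_tolerance=0.2)
balanced = pick_pool(pools, risk_tolerance=0.5)
```

The hard part of such a product is not this selection step but producing trustworthy risk scores, which is why serving as a plug-in to platforms that already hold that data seems more sensible.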
6. ZKML
A track that is becoming more and more popular in the circle, **because it combines two of the most cutting-edge technologies: ZK inside the circle, and ML (machine learning, a branch of the AI field) outside it.**
Theoretically, combining with ZK can give ML privacy, integrity and accuracy, but when you press for concrete usage scenarios, many project teams actually cannot name one; the infrastructure is being built first… **The only genuinely needed case at present is machine learning in the medical field, where patient data really does carry privacy requirements. As for on-chain game integrity or anti-cheating narratives, they always feel a bit far-fetched.**
There are only a handful of star projects on this track, such as Modulus Labs, EZKL and Giza, all hot targets in the primary market. No wonder: only a few people in the world understand ZK, and even fewer understand both ZK and ML, so the technical threshold of this track is much higher than the others and homogenization is relatively low. Finally, note that ZKML is mostly about inference, not training.
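What a ZKML inference proof asserts can be made concrete with a "transparent" stand-in: commit to the model weights with a hash, publish (commitment, input, output), and let a verifier check the claim by re-running the committed model. Real ZKML replaces this re-execution and weight revelation with a succinct zero-knowledge proof; the sketch below only illustrates the statement being proved, and the linear "model" is a deliberate toy.

```python
import hashlib
import json

def commit(weights):
    # hash commitment to the model weights (published before inference)
    return hashlib.sha256(json.dumps(weights, sort_keys=True).encode()).hexdigest()

def infer(weights, x):
    # stand-in "model": a single linear layer w . x + b
    return sum(w * xi for w, xi in zip(weights["w"], x)) + weights["b"]

def verify(commitment, revealed_weights, x, claimed_y):
    """Transparent check: weights match the commitment AND reproduce the output.
    A real ZKML verifier checks a succinct proof instead of re-running this,
    and never sees the weights at all."""
    return (commit(revealed_weights) == commitment
            and infer(revealed_weights, x) == claimed_y)

weights = {"w": [2.0, -1.0], "b": 0.5}
c = commit(weights)                 # commitment published up front
y = infer(weights, [3.0, 1.0])      # 2*3 - 1 + 0.5 = 5.5
ok = verify(c, weights, [3.0, 1.0], y)
```

The value of the ZK version is exactly that `verify` succeeds without revealing `weights` or redoing `infer`, which is also why current ZKML systems target inference rather than training.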