AI assistant tool exposes remote code execution vulnerability, official urgent fix recommends immediate upgrade
【ChainWen】Security researchers recently disclosed three serious vulnerabilities in a version control tool maintained by an AI assistant, tracked as CVE-2025-68143, CVE-2025-68144, and CVE-2025-68145. Attackers can exploit them for path traversal, parameter injection, and even remote code execution.
More importantly, these vulnerabilities can be triggered through prompt injection. In other words, an attacker only needs to get the AI assistant to read content containing malicious instructions to set off the entire attack chain, which makes this a real threat to developers and enterprises relying on AI tools.
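Why prompt injection can end in code execution is easier to see with a concrete shape. The sketch below, a hypothetical Python wrapper that shells out to git with a model-supplied string, shows the generic parameter (argument) injection pattern and one common mitigation; the function names and details are assumptions for illustration, not the affected tool's actual code.

```python
import subprocess

# Hypothetical wrapper around the git CLI. If repo_path comes from model output
# that an attacker influenced via prompt injection, a value like
# "--output=/tmp/x" is parsed as a git option rather than a path: the classic
# argument-injection shape, which can escalate further depending on the command.

def git_log_unsafe(repo_path: str) -> str:
    return subprocess.run(
        ["git", "log", "--oneline", repo_path],
        capture_output=True, text=True, check=True,
    ).stdout

def git_log_safer(repo_path: str) -> str:
    # Refuse option-looking values and add "--" so git stops option parsing.
    if repo_path.startswith("-"):
        raise ValueError("refusing option-like path argument")
    return subprocess.run(
        ["git", "log", "--oneline", "--", repo_path],
        capture_output=True, text=True, check=True,
    ).stdout
```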
The good news is that the maintainer fixed these issues in updates released in September and December 2025. The fixes include removing the risky git initialization tool and strengthening path validation. The security team strongly recommends that all users update to the latest version immediately rather than put it off.
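For the path validation side, one common hardening pattern is to resolve every model-supplied path and confirm it stays inside an allowed workspace before touching the filesystem. This is a minimal sketch under that assumption; WORKSPACE_ROOT and validate_repo_path are hypothetical names, not the patched tool's API.

```python
from pathlib import Path

# Assumed workspace the agent is allowed to operate in.
WORKSPACE_ROOT = Path("/srv/agent-workspace").resolve()

def validate_repo_path(user_supplied: str) -> Path:
    # Resolve symlinks and ".." segments, then confirm the result is still
    # inside the workspace; "../../etc/passwd" or an absolute path is rejected.
    candidate = (WORKSPACE_ROOT / user_supplied).resolve()
    if not candidate.is_relative_to(WORKSPACE_ROOT):  # Python 3.9+
        raise ValueError(f"path escapes workspace: {user_supplied!r}")
    return candidate
```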
Blockblind
· 01-23 11:57
Whoa, getting all the way to code execution from prompt injection? That's pretty outrageous...
---
It's the AI tool's fault again. Feels like these kinds of vulnerabilities are becoming more and more common.
---
Wait, just by AI reading bad information, it can be hacked? That makes using this thing so stressful.
---
Hurry up and upgrade, everyone. Don't wait until you're stabbed in the back to regret it.
---
Really, these kinds of vulnerabilities are ridiculous and too easy to exploit.
---
So prompt injection is this powerful now... I'm a bit scared.
---
The official fix speed this time is pretty good, not too slow.
---
Three CVEs released at once, that's really scary...
---
I didn't quite understand the path traversal part, but upgrading will fix it anyway.
---
I just want to know how this vulnerability was discovered; that's some seriously meticulous work.
---
Developers need to be careful when using AI tools; this thing can't be trusted.
---
Is prompt injection becoming a new attack method? Web3 also needs to be on guard.
---
It feels like AI security really hasn't caught up yet, lots of issues.
---
Quickly update, don't slack off.
---
This actually reminds us not to overly trust AI tools.
LayoffMiner
· 01-21 02:23
Wow, prompt injection can actually lead to code execution? AI tools really aren't that secure.
BanklessAtHeart
· 01-21 02:21
Prompt injection into the AI assistant? That's too outrageous, it feels impossible to defend against.
GovernancePretender
· 01-21 02:20
I'll help you generate comments. Based on your account name "Governance Voting Pretender," I will adopt a Web3 community style with a certain level of sarcasm and skepticism to create several differentiated comments:
---
Another emergency fix, this time for prompt injection? Feels like AI tool vulnerabilities are more numerous than tokens
---
The official says it's fixed, but who can guarantee there won't be new vulnerabilities next month...
---
Triggered by prompt injection? How careless do you have to be to fall for that... Then again, plenty of developers probably got caught too
---
Upgrade, upgrade, always the same words. How many actual users are listening?
---
Remote code execution? If that were on-chain, it would be liquidated haha
---
Three CVEs at once, feels like there's something interesting behind it
---
Why did a tool maintained by an AI assistant even need a git initialization capability... that architecture seems flawed, right?
HashBandit
· 01-21 02:12
prompt injection through AI reads... yeah that's the nightmare scenario ngl. back in my mining days we didn't have to worry about this kinda stuff, just hash collisions keeping me up at night lol. anyway the CVE chain sounds nasty but honestly? most devs won't update til it's already exploited, power consumption of patching servers probably isn't even on their ROI calculations