The AI community is buzzing again: a leading AI laboratory has released two heavyweight models at once, the standard V3.2 and the competition-focused Speciale.
First, Speciale, the "exam champion": it sweeps gold-medal-level results across four major international competitions, including the International Mathematical Olympiad (IMO) and the International Collegiate Programming Contest (ICPC). A score of 99.2 on HMMT February 2025 would place it ahead of roughly 99% of human contestants. Its programming ability is even more striking: a Codeforces rating of 2701 puts it in the top tier of competitive programmers, beyond what most professional engineers ever reach.
The standard V3.2 is no slouch either: built on the DSA sparse-attention architecture, it runs inference about 30% faster and trims output length by roughly 40%. On benchmarks such as Tool-Decathlon, it reaches about 85% of the performance of leading closed-source models without task-specific training.
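The announcement does not describe DSA's internals, but the general idea behind sparse attention is that each query attends only to a small, high-scoring subset of keys instead of the full sequence, which cuts compute at long context lengths. Below is a minimal toy sketch of that idea in Python/NumPy; the function name `top_k_sparse_attention` and the top-k selection rule are illustrative assumptions, not the actual DSA mechanism used in V3.2.

```python
# Toy illustration of sparse attention: each query attends only to its
# top-k highest-scoring keys instead of the full sequence. This is a
# generic sketch, NOT the actual DSA mechanism used in V3.2.
import numpy as np

def top_k_sparse_attention(q, k, v, top_k=4):
    """q, k, v: arrays of shape (seq_len, d). Returns (seq_len, d)."""
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)                      # (seq_len, seq_len)
    # Keep only the top-k scores per query row; mask the rest to -inf.
    kth = np.partition(scores, -top_k, axis=-1)[:, -top_k][:, None]
    scores = np.where(scores >= kth, scores, -np.inf)
    # Softmax over the surviving entries.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v                                  # (seq_len, d)

# Example: 16 tokens, 8-dim heads; only 4 keys per query contribute.
rng = np.random.default_rng(0)
q, k, v = (rng.normal(size=(16, 8)) for _ in range(3))
out = top_k_sparse_attention(q, k, v, top_k=4)
print(out.shape)  # (16, 8)
```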
The web interface, the app, and the API have all been switched over to the official V3.2 release. To try Speciale, you have to go through a temporarily opened API endpoint; the model is powerful enough that it still appears to be in an observation phase.
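As a rough sketch of what calling such a temporarily opened, OpenAI-compatible endpoint might look like, the snippet below uses the `openai` Python client. The base URL, model name, and environment variable are placeholders I am assuming for illustration, not confirmed values from the announcement; check the lab's API documentation for the real identifiers.

```python
# Hypothetical sketch of calling the Speciale model through an
# OpenAI-compatible API. BASE_URL and MODEL are placeholder values,
# not confirmed identifiers from the announcement.
import os
from openai import OpenAI

BASE_URL = "https://api.example-lab.com/v1"   # assumption: replace with the lab's real endpoint
MODEL = "v3.2-speciale"                        # assumption: replace with the published model name

client = OpenAI(api_key=os.environ["LAB_API_KEY"], base_url=BASE_URL)

resp = client.chat.completions.create(
    model=MODEL,
    messages=[
        {"role": "user", "content": "Prove that the sum of two even integers is even."},
    ],
)
print(resp.choices[0].message.content)
```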