Search results for "MPT"
2026-02-13
09:46

XRP Ledger unlocks a new era of token escrow: with XLS-85 live, assets like RLUSD can be locked on-chain

On February 13, it was announced that the XRP Ledger has officially activated the token escrow amendment XLS-85, allowing users to create on-chain escrows for issued fungible tokens. This means that, in addition to XRP itself, Trust Line tokens and Multi-Purpose Tokens (MPT) can now be locked on-chain under certain conditions, giving decentralized finance and enterprise applications more flexible asset-management options. The amendment regained the support of 30 validators on January 30, 2026, crossing the activation threshold, and went live two weeks later. XLS-85 had come close to passing in September 2025, but incompatibility issues with the MPT standard caused support to drop to just 16 votes: XRPL dUNL validator Vet pointed out flaws in escrow accounting related to transfer fees and supply tracking. The community subsequently released fixTokenEscrowV1 and shipped it in rippled v3.0.0, restoring confidence and driving the final activation.
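For developers, the practical effect is that the existing escrow transaction can carry a token amount rather than an XRP drops string. Below is a minimal illustrative sketch using the xrpl-py library; the assumption that EscrowCreate.amount accepts an IssuedCurrencyAmount once XLS-85 is active, and the throwaway accounts, are ours, not details from the announcement.

```python
# Hedged sketch: locking an issued token (e.g. RLUSD) in an on-chain escrow
# after XLS-85 activation. Assumes xrpl-py allows an IssuedCurrencyAmount in
# EscrowCreate.amount -- an assumption based on the amendment, not confirmed API.
from datetime import datetime, timedelta, timezone

from xrpl.models.amounts import IssuedCurrencyAmount
from xrpl.models.transactions import EscrowCreate
from xrpl.utils import datetime_to_ripple_time
from xrpl.wallet import Wallet

# Throwaway keypairs so the sketch is self-contained; real use would load
# funded accounts and the actual RLUSD issuer address.
sender, recipient, issuer = Wallet.create(), Wallet.create(), Wallet.create()

escrow_tx = EscrowCreate(
    account=sender.classic_address,
    destination=recipient.classic_address,
    # A token amount instead of an XRP drops string. Currency codes longer
    # than three characters are hex-encoded on the XRPL ("RLUSD" padded to
    # 160 bits).
    amount=IssuedCurrencyAmount(
        currency="524C555344000000000000000000000000000000",
        issuer=issuer.classic_address,
        value="100",
    ),
    # Tokens become releasable one week from now (Ripple epoch seconds).
    finish_after=datetime_to_ripple_time(
        datetime.now(timezone.utc) + timedelta(days=7)
    ),
)
print(escrow_tx.to_xrpl())
```

Signing and submission are omitted; the escrow can later be released with the usual EscrowFinish transaction once the finish_after time has passed.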
02:48

🚀 Miracle Play (MPT) Trading Contest Kicks Off with $10,000 Worth of Prizes! ⏳ Event Period: 04.22-04.29 11:00AM [UTC+8] ✅ Trade $MPT to win a share of $8,000 ✅ Exclusive benefits for new users sharing a $1,000 prize pool ✅ Invite new users and enjoy $1,000 in rewards 💸 Get involved: https://www.gate.io/zh/article/36075 #Gateio #MPT #Trade
04:57

TinyLlama, an open-source mini AI model occupying only 637 MB, has been released

According to a report by Webmaster's Home on January 6, the TinyLlama team has released TinyLlama, a high-performance open-source AI model that occupies only 637 MB. TinyLlama is a compact counterpart of Meta's open-source language model Llama 2: it has 1 billion parameters, performs well across multi-domain language-model research, and its final version outperforms existing open-source language models of comparable size, including Pythia-1.4B, OPT-1.3B, and MPT-1.3B. TinyLlama can reportedly be deployed on edge devices and can also assist with speculative decoding for larger models.
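Given the small footprint, the model is easy to try locally. Here is a minimal sketch using Hugging Face transformers; the checkpoint id (the team's published chat variant) and the generation settings are our assumptions, not details from the report.

```python
# Minimal sketch: running TinyLlama locally with Hugging Face transformers.
# The checkpoint id and generation settings are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"  # assumed published checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
# float16 keeps the memory footprint small enough for edge-class hardware.
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16)

inputs = tokenizer("Small language models are useful because", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```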
07:39

Baidu Smart Cloud "Qianfan Large Model Platform" upgraded: access to 33 models including LLaMA2

According to a Kechuangban Daily report on August 2, Baidu Smart Cloud's Qianfan large-model platform has completed a new round of upgrades, providing full access to 33 large models, including the complete LLaMA2 series, ChatGLM2, RWKV, MPT, Dolly, OpenLLaMA, and Falcon, making it the platform hosting the largest number of large models in China. The onboarded models have undergone a second round of performance enhancement on the Qianfan platform, which can cut model inference costs by 50%. At the same time, the platform has launched a library of 103 preset prompt templates covering more than ten scenarios, including dialogue, games, programming, and writing. This upgrade also shipped a number of new plug-ins.
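For reference, calling one of the hosted models through Baidu's qianfan Python SDK might look like the sketch below. The SDK call shape, environment-variable names, and model name are assumptions based on the SDK's public conventions, not details from the report.

```python
# Hedged sketch: invoking a Qianfan-hosted model via the `qianfan` Python SDK.
# Model name, env vars, and call shape are assumptions, not from the article.
import os

import qianfan

# Credentials come from the Qianfan console; names follow the SDK convention.
os.environ["QIANFAN_AK"] = "your-api-key"      # placeholder credential
os.environ["QIANFAN_SK"] = "your-secret-key"   # placeholder credential

chat = qianfan.ChatCompletion()
resp = chat.do(
    model="Llama-2-13B-Chat",  # hypothetical name for one of the 33 models
    messages=[{"role": "user", "content": "Summarize the Qianfan upgrade."}],
)
print(resp["result"])
```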
06:55

AI company MosaicML launches the 30-billion-parameter model MPT-30B, saying it cost only a fraction of competing products to train

According to an IT Home report on June 25, AI startup MosaicML recently released its language model MPT-30B. The model has 30 billion parameters, and its training cost was "only a fraction of other similar competing models," which MosaicML says expands the application of AI models to a wider range of fields. Naveen Rao, CEO and co-founder of MosaicML, said that training MPT-30B cost 700,000 US dollars (about 5.0244 million yuan), far below the tens of millions of dollars required for similar products such as GPT-3. In addition, because MPT-30B is cheaper and smaller, it can be trained more quickly and is better suited to deployment on local hardware. MosaicML reportedly used ALiBi and FlashAttention to optimize the model, enabling longer context lengths and higher GPU utilization. MosaicML is also one of the few labs able to use Nvidia H100 GPUs; compared with its previous results, per-GPU throughput increased by more than 2.4 times, enabling faster completion times.
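Because MPT models ship with custom modeling code, loading MPT-30B for local inference follows the trust_remote_code pattern in transformers. A minimal sketch follows; the checkpoint id and the max_seq_len override are assumptions based on MosaicML's public release, not details from the report.

```python
# Hedged sketch: loading MPT-30B for local inference with transformers.
# MPT uses custom modeling code, hence trust_remote_code=True. The ALiBi
# position scheme mentioned above is what allows raising max_seq_len
# beyond the training context length.
import torch
from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer

model_id = "mosaicml/mpt-30b"  # checkpoint id is an assumption

config = AutoConfig.from_pretrained(model_id, trust_remote_code=True)
config.max_seq_len = 16384  # illustrative: ALiBi extrapolates to longer contexts

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    config=config,
    torch_dtype=torch.bfloat16,  # halves memory vs. float32 for local hosts
    trust_remote_code=True,
)
```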