Market Capitalization: $2,429,147,609,866.70
Vol. in 24 hours: $96,244,072,696.75
Dominance: BTC 59.05%, ETH 10.85%

TAO's rally is driven by the wrong narrative


Decentralized Training Milestone

On March 10, 2026, Bittensor coordinated the training of Covenant‑72B, a 72‑billion‑parameter model, across more than 70 globally distributed nodes. The model scored 67.1 on MMLU, comparable to Meta's Llama 2 70B but still far below frontier GPT‑5.4‑class systems. The run proved that decentralized training is possible, yet large‑scale pre‑training still demands co‑located GPUs and ultra‑low‑latency interconnects that public networks cannot match. Decentralized training is therefore unlikely to close the frontier gap.

Bittensor Network Overview

Bittensor is a blockchain‑based AI network that rewards miners with TAO for providing compute in specialized subnets. Since February 2025 it has operated under Dynamic TAO (dTAO), in which each subnet issues its own Alpha token and competes for daily TAO emissions through market‑driven staking. As of April 2026 the network supports 128 active subnets, including τemplar (SN3) for LLM pre‑training (~$93M market cap) and Chutes (SN64) for serverless inference (~$129M market cap).
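The mechanics of market-driven emissions can be illustrated with a toy sketch. This is not Bittensor's actual dTAO algorithm; it simply assumes, for illustration, that a day's TAO emissions are split across subnets in proportion to the market value staked behind each one. The subnet names and stake figures are hypothetical.

```python
def split_emissions(daily_tao: float, subnet_stake: dict[str, float]) -> dict[str, float]:
    """Allocate daily_tao to each subnet in proportion to its staked value.

    Toy model only: real dTAO emissions are set by on-chain Alpha token
    markets, not by this simple proportional rule.
    """
    total = sum(subnet_stake.values())
    if total == 0:
        return {name: 0.0 for name in subnet_stake}
    return {name: daily_tao * stake / total for name, stake in subnet_stake.items()}

# Hypothetical daily emission and stake figures, for illustration only.
allocation = split_emissions(7200.0, {"templar_SN3": 93e6, "chutes_SN64": 129e6})
```

Under this toy rule, a subnet attracting more staked value automatically captures a larger share of emissions, which is the competitive dynamic the article describes.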

Inference Edge and Investment Outlook

Inference tolerates heterogeneous hardware and benefits from geographically dispersed nodes, giving Chutes a cost advantage over centralized clouds. Idle GPU capacity has near‑zero marginal cost, enabling pay‑per‑use pricing that can undercut hyperscaler rates, especially for latency‑sensitive edge applications. While current low prices rely on high TAO emissions, growing utilization should make the advantage self‑sustaining. Investors should focus on the durable inference thesis rather than the training milestone.
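The marginal-cost argument can be made concrete with a small break-even sketch. All dollar figures below are hypothetical placeholders, not real Chutes or hyperscaler prices; the point is only that when fixed costs are largely sunk (idle GPUs), the price needed to cover costs can sit far below a centralized on-demand rate.

```python
def required_price(fixed_cost_per_hour: float, marginal_cost: float,
                   utilization: float) -> float:
    """Minimum price per billed GPU-hour to cover costs at a given
    utilization rate (0 < utilization <= 1). Toy model for illustration."""
    return marginal_cost + fixed_cost_per_hour / utilization

# Idle capacity: hardware is already paid for, so the fixed cost
# attributable to inference is near zero (hypothetical figures).
idle_price = required_price(fixed_cost_per_hour=0.05,
                            marginal_cost=0.20,
                            utilization=0.5)

hyperscaler_rate = 2.50  # hypothetical on-demand GPU hourly rate
undercuts = idle_price < hyperscaler_rate
```

With these placeholder numbers the provider breaks even at $0.30 per billed hour, well under the assumed $2.50 centralized rate; rising utilization lowers the required price further, which is the sense in which the advantage could become self-sustaining.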