KEY TAKEAWAYS
- Fluence launches GPU compute services to cut AI workload costs, offering up to 85% savings compared to traditional cloud providers.
- The new service is supported by a partnership with Spheron Network, enhancing Fluence’s decentralized infrastructure.
- Fluence’s GPU containers are now available, with support for virtual machines and bare metal coming soon.
Fluence has announced the launch of GPU compute services aimed at reducing costs for AI workloads. The new offering, available through the Fluence Platform, delivers GPU containers at significantly lower prices than centralized cloud providers. The initiative is supported by a partnership with Spheron Network, a key compute provider.
Addressing AI’s Compute Bottleneck
AI projects often face high compute costs and hidden fees from large cloud providers, which frequently lock customers into long-term, inflexible pricing structures. Fluence aims to address these challenges by offering open, low-cost, short-term GPU access. This expansion from CPU-based virtual servers to GPUs gives customers high-performance hardware at up to 85% lower cost than traditional cloud services.
The addition of GPUs builds on Fluence’s existing expertise in providing CPU-based services. The company’s CPU marketplace currently generates over $1 million in annual recurring revenue, with a pipeline exceeding $8 million. Fluence’s decentralized infrastructure supports thousands of active blockchain nodes, serving clients such as Antier, NEO, and RapidNode.
Partnership with Spheron Network
The partnership with Spheron Network expands Fluence’s provider network, which already includes Kabat and Piknik. Evgeny Ponomarev, Co-Founder of Fluence, emphasized the importance of cost-efficient access to enterprise-grade GPUs to meet the growing demand for AI. Prashant Maurya, Co-Founder of Spheron Network, highlighted that the partnership removes barriers related to GPU scarcity and cost, providing AI teams with reliable, decentralized compute power.
GPU Containers Available Now
GPU containers are currently live on the Fluence Console, optimized for AI workloads. Support for GPU virtual machines and bare metal is expected in the coming weeks, offering more options for AI projects seeking decentralized, enterprise-grade performance. Developers can begin deploying their projects by visiting Fluence’s website.
Why This Matters: Impact, Industry Trends & Expert Insights
Fluence has launched a new GPU compute service aimed at reducing costs for AI workloads by up to 85% compared to traditional cloud providers. This move, supported by a partnership with Spheron Network, seeks to address high compute costs and inflexible pricing structures faced by AI projects.
Current trends in decentralized GPU computing for AI highlight the rise of GPU tokenization and decentralized physical infrastructure networks (DePIN) that democratize access to high-performance GPU resources. This trend is evident as Fluence’s new service leverages decentralized infrastructure to offer cost-effective GPU access, challenging centralized cloud providers. (Source: OurCryptoTalk)
Expert opinions on cost reduction in AI compute workloads emphasize the importance of infrastructure optimization and AI-driven procurement strategies. This insight reinforces Fluence’s approach of leveraging decentralized infrastructure to cut costs and improve accessibility for AI projects. (Source: IBM Think Insights)
Disclaimer: The views expressed in this article are those of the authors and do not necessarily reflect the official policy of CoinsHolder. Content, including that generated with the help of AI, is for informational purposes only and is not intended as legal, financial, or professional advice. Readers should do their own research before taking any actions related to the company and carry full responsibility for their decisions.