What is the Lilypad Decentralized Compute Network?


Lilypad is a decentralized platform designed to democratize access to GPU and high-performance compute—resources essential for modern AI and ML tasks. We are striving to make it possible for anyone, anywhere to access a network of compute resources, crowdsourced from a community of individual providers and datacenters, to run complex jobs that might otherwise be unaffordable or inaccessible due to a lack of powerful hardware.
Those most affected are typically smaller startups, academic researchers, and any organization without the funding and resources to set up or purchase compute infrastructure. Demand for HPC often exceeds capacity by over 300% [1]. Lilypad will make it trivial not only to access these resources, but also to do so on-demand, with the flexibility and customizability to handle any task. This includes certain High Performance Compute (HPC) tasks. Lilypad will take on specific types of tasks, such as large-scale batch inference, that require considerable infrastructure but aren’t necessarily the best use of a typical HPC network’s resources. This type of task doesn’t require fast networking speeds between machines, and can be outsourced to a decentralized system like Lilypad.
Traditional HPC Networks
High Performance Compute typically refers to a system where computer hardware is joined together in a network with high bandwidth (400Gbps transfer), low latency, high efficiency, and specialized hardware. Traditionally, high performance compute networks were a highly specialized and coordinated set of hardware and configurations localized to one or a few geographical locations. Ten years ago, HPC networks were made up of tens or hundreds of computers networked together. Today, GPU compute power that once had to be sourced from an entire HPC network is available in a single piece of hardware for less than $10,000.
HPC networks enable a level of computation that isn’t possible on single machines. Use cases include genomic research (tasks such as protein folding), weather forecasting, complex scientific and engineering simulations, and AI/ML model training that requires massive parallel processing over what can amount to petabytes of data.
The Lilypad Approach
GPUs such as NVIDIA’s A100 and H100 or the AMD Instinct MI250 can handle high performance tasks like complex deep learning workloads. Lilypad empowers anyone with idle GPU capacity to participate in a global compute network, unlocking affordable high-performance resources for startups, researchers, and innovators, without locking users into the fees and model restrictions of traditional cloud providers.
The difference from traditional HPC lies in how Lilypad is tackling the problem: by creating a decentralized, open network, we aim to accommodate a wide range of machine types as providers, and to build a platform that can coordinate them in a way that makes these compute- and data-intensive tasks possible. Using advances such as containerization and better distributed systems tooling, it is now possible to coordinate a worldwide high performance compute network.
The Lilypad Network
One of the unique things about the Lilypad network is that it is a decentralized and open network. Any compute provider, or node, at or above a defined performance threshold can join and be paid (in our native LILY currency) for running jobs sent to the network.
For our MVP, we are prioritizing machines with enough GPU (as well as CPU and memory) capacity to run one-shot inference jobs, AI agents that use chain-of-thought or sequential prompting, and customized high-demand open source models.
Decentralizing Certain HPC Tasks
The Lilypad Network is uniquely suited to certain types of HPC tasks. Because it is a decentralized network, there are certain limitations in terms of latency. In other words, one shouldn’t expect the millisecond responses you get from AI chatbots, but that doesn’t mean there aren’t hundreds of applications Lilypad is well suited for. Just a few examples of what our network can be used for:
Providing resources for pharmaceutical and biological research to academics who are new to GPU processing and need access to the newest models
Batch data processing for financial analysis and other large-scale data workloads
RAG-driven model customization - creating customized and secure modules that can be connected to a business’s private documentation datasets to generate business documents
Processing of edge and IoT device data
Publishing fine-tuned models for specific applications that can be quickly run via an API endpoint (see the sketch after this list)
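To make that last item concrete, here is a minimal sketch of what calling a fine-tuned model hosted behind an endpoint could look like. This is an illustration only: the URL, module name, and payload schema are assumptions for the sake of the example, not the actual Lilypad API.

```python
import json
import urllib.request

# Hypothetical endpoint for a fine-tuned model published on Lilypad.
# The URL, module name, and payload schema are illustrative assumptions,
# not the documented Lilypad API.
ENDPOINT = "https://gateway.example.dev/v1/modules/sdxl-finetune/run"

def run_inference(prompt: str) -> dict:
    """Submit a one-shot inference job and return the parsed JSON response."""
    payload = json.dumps({"inputs": {"prompt": prompt}}).encode("utf-8")
    request = urllib.request.Request(
        ENDPOINT,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)

result = run_inference("A watercolor painting of a lilypad at dawn")
print(result)
```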
Enabling Small Project Growth
Many new initiatives, whether emerging startups, academic research groups, or other innovative projects, face a choice: invest significant capital in building compute infrastructure, or pay premium rates to a cloud provider. Cloud providers enable a quick and easy setup of the resources needed, and scalability for when a project’s scope and computing needs grow. The drawback is the significant cost of these remote options, as well as vendor lock-in (the costs and difficulties designed into cloud systems to prevent users from switching to other services). By switching away from large, incumbent cloud providers, projects and companies can save tens to hundreds of thousands of dollars [2] in operating costs, depending on the size of their computing needs.
Because of the way we are designing our Module Marketplace, users with custom workflows can package their job as a ‘module’ that can be run on Lilypad. This means even censored AI models, models unavailable elsewhere, and models that have been customized with fine-tuning (or RAG workflows with custom datasets) can easily be accessed, on-demand, from an API endpoint, in a serverless manner. Traditionally, jobs like this are only an option if you take on the cost and time of paying for and configuring your own hardware.
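For a rough sense of the workflow, here is a sketch of dispatching a marketplace module from Python by shelling out to a lilypad run-style CLI call. The module reference and input name are placeholders, and the exact CLI syntax may differ; treat this as an assumption-laden sketch, not documented usage.

```python
import subprocess

# Placeholder module reference; a real reference would point at a module
# published to the Lilypad Module Marketplace.
MODULE = "github.com/example-org/example-module:v0.1.0"

def run_module(prompt: str) -> str:
    """Dispatch a module run via a `lilypad run`-style CLI and return stdout."""
    completed = subprocess.run(
        ["lilypad", "run", MODULE, "-i", f"prompt={prompt}"],
        capture_output=True,
        text=True,
        check=True,
    )
    return completed.stdout

print(run_module("Draft a summary of our onboarding guide"))
```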
Meeting Specific Needs of the AI Era
What the Lilypad network is doing is enabling any user with certain high performance compute needs to access it at a much more competitive rate than major cloud providers offer. We also remove time commitments and abstract away the overhead work of setting up infrastructure, which are real costs to any organization accessing compute from a GPU rental marketplace [2]. Our platform delivers serverless, on-demand AI compute and model hosting, enabling rapid deployment and experimentation without the burdens of traditional infrastructure. Anyone can add, test, and run their model via an endpoint, so you aren’t restricted to a limited set of models or compute jobs the way other similar services are, and you pay on-demand, for only the compute you use, instead of monthly for access to services.
There are certain latency limitations with Lilypad, and the network isn’t able to specify exact hardware in the same way a single-owner HPC network can. It does, however, provide distributed container orchestration, job scheduling, and management with Bacalhau, and it can support massively parallel processing in Docker containers across a peer-to-peer network of hardware. This is why Lilypad has started by targeting a few distinct use cases such as custom AI inference tasks, and is exploring other uses such as RAG, fine-tuning, and agentic AI workflows.
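The parallelism described here is coarse-grained: many independent container jobs rather than tightly coupled nodes exchanging data. A minimal client-side sketch of fanning out such a batch, reusing a hypothetical run_module helper like the one sketched above:

```python
from concurrent.futures import ThreadPoolExecutor

def run_module(prompt: str) -> str:
    """Stand-in for the hypothetical dispatch helper sketched earlier."""
    return f"result for: {prompt}"

# Each prompt is an independent job with no inter-node communication,
# which is what makes large batch workloads a good fit for a
# decentralized network rather than a tightly coupled HPC cluster.
prompts = [f"Classify support ticket #{i}" for i in range(100)]

with ThreadPoolExecutor(max_workers=10) as pool:
    results = list(pool.map(run_module, prompts))

print(f"Completed {len(results)} independent jobs")
```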
The Lilypad Value
The value of what Lilypad is creating lies both in customizable, on-demand access to compute and in a competitive marketplace that benefits parties on both ends. Resource providers who would otherwise be unable to monetize idle compute power have the opportunity to make the most of their infrastructure investment, and those running jobs benefit from a marketplace where they pick from pre-configured jobs and can bid on lower-priced resources. Our open market means job creators will benefit from competitive pricing, as compute providers compete to win jobs and monetize their idle compute. Moving away from traditional cloud compute providers means end users can get much closer to on-prem costs, with savings of roughly 25%-66% [2], depending on the scale of a project.
Plus, with the Lilypad Module Marketplace, users aren’t locked into using only certain models. Module creators can easily containerize, configure, test, and release any model they need on the Lilypad network, then access the job on-demand with an API endpoint. Other API-endpoint services give users and builders a limited selection, which inevitably forces a choice between building something customized to your needs or setting up or renting your own infrastructure.
We are also fostering the growth of a massive library of useful models and compute jobs on our network by adding an incentive to our protocol that lets module creators earn a small piece of the fee each time their module is run. An incentivized open marketplace naturally allows these creators in our ecosystem to build at the speed of open source.
Conclusion
Lilypad provides the best outcome for everyone in our network: monetization of idle power for those who contribute compute, cost savings for end users while maintaining the scalability and ease of use of cloud computing companies, and the ability for those who bring models to our marketplace to earn for their contribution.
Lilypad’s mission is to rapidly and affordably bring high-performance AI and ML capabilities to innovators worldwide by leveraging the power of decentralized compute. We are harnessing the power of open communities and crypto-incentivization to create network-effect growth, while meeting the needs of a constantly evolving AI and high performance compute landscape.
References
[1] AWS & Intel. Challenging the Barriers to High Performance Computing in the Cloud. October 2019. https://d1.awsstatic.com/HPC2019/Challenging-Barriers-to-HPC-in-the-cloud-Oct2019.pdf
[2] Justin Garrison. Cloud vs. On-Prem: Comparing Long-Term Costs. The New Stack, November 2024. https://thenewstack.io/cloud-vs-on-prem-comparing-long-term-costs/