Fluid Inference is an applied research lab building the future of edge intelligence. We're bridging the gap between advanced AI models and the hardware they run on.
Our current focus is making it easy for developers to access state-of-the-art voice AI on-device. No proprietary models, gated SDKs, or restrictive licenses. The ecosystem is already fragmented enough; we don't need to make it worse. You can find all of our native SDKs on our GitHub and our models on Hugging Face.
If you're interested in custom solutions for your use case, we'd love to hear from you. We work with chip makers to optimize models for their native runtimes and AI accelerators, with healthcare providers to deploy air-gapped AI applications, and with consumer OEMs to develop AI-native apps for their next generation of devices.
To get in touch, fill out the form below or email [email protected].