Hugging Face makes it easier for devs to run AI models on third-party clouds
AI dev platform Hugging Face has partnered with third-party cloud vendors, including SambaNova, to launch Inference Providers, a feature designed to make it easier for developers on Hugging Face to run AI models using the infrastructure of their choice. Other partners in the effort include Fal, Replicate, and Together AI.

Hugging Face says these partners have worked with it to integrate access to their respective data centers into its platform, so that models can be run on their hardware directly from Hugging Face. Developers on Hugging Face can now, for example, spin up a DeepSeek model on SambaNova's servers from a Hugging Face project page in just a few clicks.

Hugging Face has long offered its own in-house solution for running AI models, but in a blog post Tuesday, the company explained that its focus has shifted to collaboration, storage, and model distribution.

[Image: Inference provider options as they appear on Hugging Face project pages. Image Credits: Hugging Face]

"Serverless providers have flourished, and the time was right for Hugging Face to offer easy and unified access to serverless inference through a set of …
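To make the "few clicks" workflow concrete, here is a minimal sketch of what routing an OpenAI-style chat-completion request to a partner provider might look like. The router base URL, provider slug, and model ID below are illustrative assumptions for demonstration, not documented endpoints, and no network call is made:

```python
import json

# Hypothetical router base URL -- an assumption, not a confirmed endpoint.
ROUTER = "https://router.huggingface.co"


def build_chat_request(provider: str, model: str, prompt: str):
    """Assemble the (url, body) pair for an OpenAI-style chat completion
    routed through a serverless inference provider such as SambaNova."""
    url = f"{ROUTER}/{provider}/v1/chat/completions"
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
    }
    return url, json.dumps(payload)


url, body = build_chat_request(
    "sambanova", "deepseek-ai/DeepSeek-R1", "Explain serverless inference."
)
print(url)
```

The design point is that the request shape stays the same regardless of which provider serves it; only the routing segment of the URL changes, which is what lets a developer switch infrastructure without rewriting their client code.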