Hugging Face is more than an emoji: it's an open source data science and machine learning platform. It acts as a hub for AI experts and enthusiasts—like a GitHub for AI.
Originally launched as a chatbot app for teenagers in 2017, Hugging Face has evolved over the years into a place where you can host your own AI models, train them, and collaborate with your team along the way. It provides the infrastructure to run everything from your first line of code to deploying AI in live apps or services. On top of these features, you can also browse and use models created by other people, search for and use datasets, and test demo projects.
Hugging Face is especially important because of the "we have no moat" vibe of AI: the idea that no single big tech company will solve AI on its own, and that progress will instead come from open source collaboration. That's what Hugging Face sets out to enable: providing the tools to involve as many people as possible in shaping the artificially intelligent tools of the future.
Create and browse AI models
One of the main features of Hugging Face is the ability to create your own AI models. Each model is hosted on the platform, where you can add more information about it, upload all the necessary files, and keep track of versions. You can control whether your models are public or private, so you can decide when to launch them to the world, or even whether you'll launch them at all.
It also lets you open discussions directly on the model page, which is handy for collaborating with others and handling pull requests (made when contributors suggest updates to the code). Once a model is ready to use, you don't have to host it on another platform: you can run it directly from Hugging Face, send requests, and pull the outputs into any apps you're building.
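For a rough sketch of what "sending requests" can look like, here's how you might build a call to Hugging Face's hosted Inference API using only the Python standard library. The model ID and token below are placeholders, and the request is only constructed here, not actually sent:

```python
import json
import urllib.request

API_URL = "https://api-inference.huggingface.co/models/{model_id}"

def build_inference_request(model_id: str, text: str, token: str) -> urllib.request.Request:
    """Build (but don't send) a POST request to a hosted model endpoint."""
    payload = json.dumps({"inputs": text}).encode("utf-8")
    return urllib.request.Request(
        API_URL.format(model_id=model_id),
        data=payload,
        headers={"Authorization": f"Bearer {token}"},
        method="POST",
    )

# Placeholder model ID and token, just to show the shape of the call.
req = build_inference_request("distilbert-base-uncased", "Hello world", "hf_xxx")
print(req.full_url)
```

To actually run it, you'd pass the request to `urllib.request.urlopen()` with a real access token; in practice, many people use the `requests` or `huggingface_hub` libraries for the same job.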
If you don't want to start from scratch, you can browse Hugging Face's model library. Among the 200,000+ models available, you'll find categories like:
Natural language processing, including tasks like translation, summarization, and text generation. These are the core of what, for example, OpenAI's GPT models offer in ChatGPT.
Audio models handle tasks like automatic speech recognition, voice activity detection, and text-to-speech.
Computer vision is anything that helps computers see the real world and understand it. These tasks include depth estimation, image classification, and image-to-image. This is key for self-driving cars, for instance.
Multimodal models work with multiple types of data (text, images, audio) and can also render multiple kinds of output.
These models aren't here just for show. Hugging Face's Transformers library lets you connect to these models, send tasks, and receive outputs without having to set them up yourself. You can also download models, train them with your own data, or quickly create a Space. This makes it easy to find models to complete any kind of task, connect them with your own code, and start getting results.
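For a sense of what that looks like in code, here's a minimal sketch using the Transformers `pipeline` helper. It assumes the `transformers` package (plus a backend like PyTorch) is installed; the first call downloads a default sentiment model from the Hub, so the heavy work is kept inside the function rather than run at import time:

```python
def classify(text: str):
    """Run sentiment analysis on `text` with a default model from the Hub."""
    # Imported here so nothing is downloaded until you actually call this.
    from transformers import pipeline  # pip install transformers

    classifier = pipeline("sentiment-analysis")
    return classifier(text)

# Example call (downloads a model on first use):
# classify("Hugging Face makes this easy!")
```

The result comes back as a list of dictionaries, each with a predicted label and a confidence score, which you can then feed into whatever app you're building.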
Pick a collection of datasets
Let's start at the beginning here: a dataset is a collection of data used to train an AI model (this training process is called machine learning). Datasets follow a specific format, pairing examples with labels. The labels tell the model what each example represents, so it knows how to interpret it.
As a model trains on a dataset, it starts picking up the relationship between the examples and the labels, identifying patterns in the frequency of words, letters, and sentence structures. Once it's trained for long enough, you can feed it a prompt that doesn't exist in the dataset. The model will then generate an output based on the experience it built during the training phase.
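To make that idea concrete, here's a toy illustration in plain Python (not a real training loop, just labeled examples and word counts) of how a model can label a prompt it has never seen:

```python
from collections import Counter, defaultdict

# A tiny labeled dataset: each example is paired with a label.
dataset = [
    ("the movie was wonderful", "positive"),
    ("a wonderful happy story", "positive"),
    ("the plot was terrible", "negative"),
    ("terrible acting and a dull script", "negative"),
]

# "Training": count how often each word appears under each label.
word_counts = defaultdict(Counter)
for text, label in dataset:
    word_counts[label].update(text.split())

def predict(text: str) -> str:
    """Pick the label whose examples best match the prompt's words."""
    scores = {
        label: sum(counts[word] for word in text.split())
        for label, counts in word_counts.items()
    }
    return max(scores, key=scores.get)

# A prompt that isn't in the dataset still gets a sensible label.
print(predict("what a wonderful film"))  # → positive
```

Real models learn far richer patterns than word counts, of course, but the core loop is the same: labeled examples in, a learned mapping out.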
The process of creating a great dataset is hard and time-consuming, as the data needs to be a useful and accurate representation of the real world. If it isn't, the model is likely to hallucinate more often or produce unintended results. Hugging Face hosts over 30,000 datasets you can feed into your models, making the training process easier. And, since it's an open source community, you can also contribute with your own datasets and browse new, better ones as they're released.
Just as there are AI models for natural language processing, computer vision, and audio, Hugging Face also has datasets to train models for each specific task. The contents change based on the task: natural language processing leans on text data, computer vision on images, and audio on audio data.
What do Hugging Face datasets look like? I took a quick tour, and here are a few notable ones:
wikipedia contains cleaned Wikipedia articles, so you can train your models on the entirety of Wikipedia's content.
openai_humaneval contains Python code handwritten by humans, including 164 programming problems, designed to evaluate AI models that generate code.
diffusiondb packs in 14 million images paired with the text prompts that generated them, helping text-to-image models become more skillful at creating images from prompts.
Even if you're a non-technical person like me, it's interesting to see how this data is structured and imagine how an AI model would go through it.
Showcase your work in Spaces
Hugging Face lets you host your models and browse datasets for training them. But that doesn't mean your models are packaged into an experience you can share with a wider audience. That's what Spaces are for: creating showcases and self-contained demos that help visitors test models and see how they perform.
The platform provides the basic computing resources to run the demo (16 GB of RAM, 2 CPU cores, and 50 GB of disk space), and you can upgrade the hardware if you want it to run better and faster. This is great for promoting your and your team's work and attracting more contributors to your projects.
The best part here is that many Spaces don't require any technical skills to use, so anyone can jump straight in and use these models for work (or for fun, I don't judge). Here are a few Spaces you can try:
CLIP Interrogator does image-to-text magic to help you find the prompt for an image you upload. Especially handy if you want to improve your image generation skills by collecting new prompts from great images.
Image to music does… image to music. Yes, it's mind-boggling. Describing it in words doesn't do it justice. Give it a try—just bear in mind it takes a few minutes to generate the output.
OpenAI's Whisper can be used for speech recognition, translation, and language identification.
And these are merely a sample. Dive into Hugging Face's Spaces and browse what the community is working on. Don't do it on a busy day, though: time flies while you're there.
Connect Hugging Face to your other apps
Let's get the bad news out of the way: you need technical skills to use everything Hugging Face has to offer. But you can use Zapier to send and retrieve data from models hosted on Hugging Face, with no code involved at all. Here's a shortlist of popular workflows people are setting up right now.
Instantly organize Typeform entries in Google Sheets with Hugging Face
Automatically translate Google Docs (and upload a new file to Google Drive) with Hugging Face
When you get a Zendesk ticket, generate a response with Hugging Face
There's a lot more to try out too. Create your own workflows to leverage all the latent power of Hugging Face, or browse the integrations page for more inspiration.
Zapier is a no-code automation tool that lets you connect your apps into automated workflows, so that every person and every business can move forward at growth speed. Learn more about how it works.
AI models at your fingertips
If you have technical expertise in the field of AI and machine learning, Hugging Face is a great toolbox to speed up work and research, without you having to worry about the hardware side of things.
But if you're like me, Hugging Face is still a great place to try out new models, expand your horizons, and add a few AI tools to your work toolkit. And who knows, if the platform evolves to offer a no-code approach to machine learning, maybe then we'll all get to play with the big kids.