
| Label | Section | Description |
|---|---|---|
| 1 | Global Navigation | The top header containing the search bar and links to Models, Datasets, Spaces, and Account settings. |
| 2 | Tasks Filter | Allows you to filter models by their specific capability, such as Text Generation (currently selected), Image-to-Text, or Text-to-Video. |
| 3 | Parameters Slider | A filter to find models based on their size (number of parameters), ranging from <1B to >500B. |
| 4 | Libraries Filter | Filters models by the software framework they support, such as PyTorch, Transformers, or GGUF. |
| 5 | Apps / Runtimes | Filter for models compatible with specific deployment tools like vLLM, Ollama, or LM Studio. |
| 6 | Inference Providers | Shows models supported by specific API/Hardware providers. Groq is currently selected in this view. |
| 7 | Results Header | Displays the total number of models found (6), a sub-search for names, and sorting options (Trending). |
| 8 | Model Card 1 | openai/gpt-oss-20b: A 21B parameter text generation model updated in Aug 2025. |
| 9 | Model Card 2 | openai/gpt-oss-120b: A larger 120B parameter version of the GPT-OSS model. |
| 10 | Model Card 3 | moonshotai/Kimi-K2-Instruct-0905: A massive 1T parameter model updated in Nov 2025. |
| 11 | Model Card 4 | openai/gpt-oss-safeguard-20b: A safety-aligned version of the 20B model. |
| 12 | Model Card 5 | meta-llama/Llama-3.3-70B-Instruct: Meta's 71B parameter instruction-tuned model. |
| 13 | Model Card 6 | Qwen/Qwen3-32B: Alibaba's 33B parameter model updated in July 2025. |

| Label | Element Name | Description |
|---|---|---|
| 1 | Namespace & Model Name | Shows the owner/organization (Qwen) and the specific model name. |
| 2 | Metadata Tags | Interactive buttons displaying the model's task (Text-to-Image), library (Diffusers), and supported languages (English, Chinese). |
| 3 | Navigation Tabs | Model card is the current view; Files and versions is where the raw weights and YAML metadata live. |
| 4 | Deployment Buttons | Provides quick code snippets for using the model via the transformers library or API. |
| 5 | Text Description | The "Human-Readable" side of the model card (README) containing images, benchmarks, and usage instructions. |
| 6 | Popularity Stats | Shows the number of downloads over the last month and how many users "liked" the model. |
| 7 | Inference Widget | An interactive area powered by Inference Providers (such as fal in this example) where you can test the model's output directly in the browser. |
| 8 | Model Tree | A list of related versions, including adapters, fine-tunes, and quantized versions of this specific model. |
| 9 | Spaces Demos | Lists community-created demo apps hosted on Hugging Face Spaces that utilize this model. |
In the Hugging Face ecosystem, Diffusers is the specialized Python library designed for working with state-of-the-art diffusion models. While the Transformers library is the "brain" for text and language, the Diffusers library is the "paintbrush" for generating visual and audio media.
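As a minimal sketch, this is roughly what "picking up the paintbrush" looks like in code. The checkpoint id and prompt below are illustrative placeholders, not values taken from the page above, and the import is deferred so the helper can be read and defined without Diffusers installed:

```python
def generate_image(prompt, model_id="stabilityai/stable-diffusion-xl-base-1.0"):
    """Render one image from a diffusion checkpoint on the Hub.

    Illustrative sketch: the model id is an example checkpoint, and
    calling this function downloads the weights from the Hub.
    """
    from diffusers import DiffusionPipeline  # pip install diffusers torch
    import torch

    pipe = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
    pipe.to("cuda")  # or "cpu", which is much slower
    return pipe(prompt).images[0]

# generate_image("A watercolor lighthouse at dawn").save("lighthouse.png")
```

Any checkpoint on the Hub tagged with a compatible pipeline can be swapped in via `model_id`, which is what makes the library the common entry point for visual and audio generation.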
While they both allow you to interact with a model, they serve completely different stages of your project: one is for quick testing (Inference Widget), and the other is for actual implementation (Deployment Buttons).
| Feature | Inference Widget | Deployment Buttons |
|---|---|---|
| Primary Goal | Testing & Exploration. Quickly see if a model works for your specific prompt. | Integration & Production. Getting the model ready to run in a real application or server. |
| Where it Runs | Directly on the model page in your web browser. | In a production environment like a Cloud API, Hugging Face Space, or local script. |
| Output | Visual result (text, image, or label) shown right in the UI. | Technical code snippets, API endpoints, or a managed server setup. |
| Cost | Usually free for community exploration. | Can range from free (Spaces) to paid (Dedicated Inference Endpoints). |
The Inference Widget is the interactive box on the right side of the model page. It is powered by serverless Inference Providers that allow you to try the model without writing any code or installing libraries.
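Under the hood, the widget simply POSTs your input to a provider endpoint on your behalf. A hedged sketch of that request, assuming a serverless endpoint URL and placeholder token (the exact URL, parameters, and response shape vary by provider and task):

```python
import json

# Hypothetical serverless endpoint for a Hub model; the real URL
# depends on the provider routing your request.
API_URL = "https://api-inference.huggingface.co/models/Qwen/Qwen3-32B"

def build_request(prompt, token, max_new_tokens=64):
    """Assemble the headers and JSON body the widget sends for you."""
    headers = {"Authorization": f"Bearer {token}"}
    payload = {"inputs": prompt, "parameters": {"max_new_tokens": max_new_tokens}}
    return headers, json.dumps(payload)

headers, body = build_request("Hello!", "hf_xxxxxxxx")  # placeholder token
# e.g. requests.post(API_URL, headers=headers, data=body)
```

The widget hides all of this: it injects your session credentials, sends the request, and renders the result inline, which is why it costs you zero setup for quick testing.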
The Deploy or Use this model buttons are found at the top right. They don't run the model for you in the UI; instead, they give you the tools to host it yourself or via a dedicated service.
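For comparison, the copy-paste snippet the Use this model button typically hands you for a text-generation model resembles the sketch below. The model id is illustrative, and the actual snippet varies per model and library:

```python
def load_generator(model_id="openai/gpt-oss-20b"):
    """Build a local text-generation pipeline, roughly as the Hub's
    copy-paste snippet would. Deferred import; calling this downloads
    the model weights to your machine."""
    from transformers import pipeline  # pip install transformers

    return pipeline("text-generation", model=model_id)

# generator = load_generator()
# print(generator("Hello, world!", max_new_tokens=32)[0]["generated_text"])
```

Unlike the widget, this code runs wherever you execute it: your laptop, a server, or a Space, which is the "actual implementation" side of the comparison above.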
Common options found under these buttons include: