In-browser pipelines with device & progress
Device / backend:
- auto (prefer WebGPU if available)
- webgpu
- wasm
Note: WebGPU requires a supported browser and operating system.
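A minimal sketch of how the device choice above might map onto a transformers.js pipeline option. The helper name `pickDevice`, the progress handler body, and the model variable are assumptions for illustration, not part of the page; the `device` and `progress_callback` options are what transformers.js pipelines accept.

```javascript
// Sketch: map the page's device selection to a transformers.js `device` option.
// 'auto' prefers WebGPU when available, otherwise falls back to WASM.
// The availability flag is passed in so the helper stays testable outside a
// browser; in a real page it would be computed as ('gpu' in navigator).
function pickDevice(requested, hasWebGPU) {
  if (requested === 'auto') {
    return hasWebGPU ? 'webgpu' : 'wasm';
  }
  return requested; // 'webgpu' or 'wasm' chosen explicitly
}

// In the page itself (assuming the @huggingface/transformers package):
// import { pipeline } from '@huggingface/transformers';
// const pipe = await pipeline('zero-shot-classification', modelId, {
//   device: pickDevice(deviceSelect.value, 'gpu' in navigator),
//   progress_callback: (p) => console.log(p.status, p.progress), // drives the progress UI
// });

console.log(pickDevice('auto', false)); // 'wasm' when WebGPU is unavailable
```

Keeping the WebGPU check as a parameter rather than reading `navigator` inside the helper makes the fallback logic easy to unit-test outside a browser.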
Zero-shot classification
Model (change if desired):
Text (premise):
This is a course about the Transformers library
Labels (comma-separated):
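A sketch of how the comma-separated labels field above could be turned into the array of candidate labels a zero-shot pipeline expects. The helper name `parseLabels` and the variable names are assumptions; the `{ sequence, labels, scores }` output shape is how transformers.js reports zero-shot results.

```javascript
// Sketch: turn the "Labels (comma-separated)" field into the array of
// candidate labels a zero-shot pipeline expects (trim whitespace, drop empties).
function parseLabels(raw) {
  return raw
    .split(',')
    .map((s) => s.trim())
    .filter((s) => s.length > 0);
}

// Usage in the page (classifier built as in the device sketch above):
// const result = await classifier(premiseInput.value, parseLabels(labelsInput.value));
// // transformers.js returns { sequence, labels, scores } with labels sorted
// // by descending score, so result.labels[0] is the top prediction.

console.log(parseLabels('education, politics,  business, '));
// [ 'education', 'politics', 'business' ]
```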
Summarization
Model (change if desired):
Article / Long text:
The Transformers library by Hugging Face provides state-of-the-art pretrained models for NLP. Such models can run directly in the browser using transformers.js (Xenova), which executes them through WASM or WebGPU runtimes and downloads the model weights into the browser. This enables offline inference after the initial download, though model size and inference speed depend on the chosen model and on the device support available in the browser.
Summary max length (tokens):
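A sketch of how the "Summary max length (tokens)" field might feed the summarization call. The helper name `clampMaxTokens` and the clamping bounds are arbitrary assumptions; `max_new_tokens` is a generation option transformers.js pipelines accept.

```javascript
// Sketch: parse the "Summary max length (tokens)" field and clamp it to a
// sane range before passing it as a generation option (bounds chosen here
// are arbitrary defaults, not values from the page).
function clampMaxTokens(raw, lo = 10, hi = 512, fallback = 100) {
  const n = Number.parseInt(raw, 10);
  if (Number.isNaN(n)) return fallback;
  return Math.min(hi, Math.max(lo, n));
}

// Usage in the page (summarizer built like the other pipelines):
// const [out] = await summarizer(articleInput.value, {
//   max_new_tokens: clampMaxTokens(maxLenInput.value),
// });
// console.log(out.summary_text);

console.log(clampMaxTokens('60'));   // 60
console.log(clampMaxTokens('9999')); // 512
console.log(clampMaxTokens('abc')); // 100
```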