AI Provider for Hugging Face
AI Provider for Hugging Face for the WordPress AI Client. Use open-source models for text and image generation through the standard WordPress AI API.
This plugin registers Hugging Face as an AI provider for the WordPress AI Client introduced in WordPress 7.0. It enables text and image generation through Hugging Face’s Inference API using popular open-source models.
Features:
- Registers Hugging Face on the Settings > Connectors admin screen
- Supports API key management (environment variable, PHP constant, or database)
- Text generation via `wp_ai_client_prompt()`
- Image generation via `wp_ai_client_prompt()->generate_image()`
- Dynamically fetches the top 20 popular models from Hugging Face (updated every 12 hours)
- Multi-provider support: automatically routes to the best available inference provider (hf-inference, fal-ai, replicate, together, nscale, wavespeed)
- Add custom models from the Settings > Hugging Face admin page
- Extensible model lists via the `hugging_face_ai_provider_models` and `hugging_face_ai_provider_image_models` filters
- Configurable base URL for self-hosted TGI instances
- Clear, actionable error messages for common API issues
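As an illustrative sketch, the model-list filters named above could be used from a theme or plugin like this. The filter names come from the feature list; the callback signature (an array of model IDs) and the model ID shown are assumptions, so verify the actual array shape the plugin passes before relying on it:

```php
<?php
/**
 * Sketch: append a custom text model via the
 * hugging_face_ai_provider_models filter.
 * The model ID below is a placeholder, and the assumption that the
 * filter receives a flat array of Hub model IDs is unverified.
 */
add_filter( 'hugging_face_ai_provider_models', function ( $models ) {
    // Hub model IDs use the owner/model format.
    $models[] = 'my-org/my-fine-tuned-model';
    return $models;
} );
```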
Usage:
// Generate text
$result = wp_ai_client_prompt( 'Summarize the benefits of caching.' )
    ->using_temperature( 0.7 )
    ->generate_text();

// Generate an image
$image = wp_ai_client_prompt( 'A futuristic WordPress logo in neon colors' )
    ->generate_image();
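Since the plugin can read the API key from a PHP constant (per the features list), one way to supply it is from `wp-config.php`. The constant name below is a guess for illustration, not confirmed by the plugin; check the Settings > Connectors screen or the plugin source for the exact name it reads:

```php
<?php
// Sketch: providing the Hugging Face API key via a PHP constant in
// wp-config.php, so it is kept out of the database.
// HUGGING_FACE_AI_PROVIDER_API_KEY is a hypothetical constant name.
define( 'HUGGING_FACE_AI_PROVIDER_API_KEY', 'hf_xxxxxxxxxxxxxxxx' );
```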
External Services
This plugin connects to the Hugging Face Inference API to provide AI-powered text and image generation capabilities within WordPress. The Hugging Face Inference API is a third-party service operated by Hugging Face, Inc.
What data is sent and when
- Your Hugging Face API key is sent with every request for authentication.
- When fetching the model list, a request is sent to the Hugging Face API to retrieve available models.
- When generating text, your prompt and any conversation context are sent to the Hugging Face Inference API.
- When generating images, your text prompt is sent to the Hugging Face Inference API.
- When resolving inference providers, a request is sent to the Hugging Face API to determine available providers for the selected model.
Data is sent only when the plugin is actively used to generate text or images, or when the model list is loaded on the admin settings page.
Service links
- Hugging Face Website
- Hugging Face Inference API Documentation
- Hugging Face Terms of Service
- Hugging Face Privacy Policy
- Hugging Face Content Policy
All API communication is routed through https://router.huggingface.co and https://huggingface.co/api.
