Best Free AI APIs for Beginners: A Python Quickstart
This guide delivers a ready-to-run Python script to access the OpenRouter, Hugging Face Inference, and Groq free tiers. Skip theoretical overviews and deploy a working connector in under five minutes.
1. Environment Setup & Dependency Installation
Initialize a clean virtual environment to isolate dependencies. Install only the required packages: requests for HTTP calls and python-dotenv for secure credential management. If you are unfamiliar with virtual environments or package managers, review the foundational setup steps in the Python AI Fundamentals for Non-Developers guide before proceeding.
```shell
python -m venv ai-env
source ai-env/bin/activate  # Windows: ai-env\Scripts\activate
pip install requests python-dotenv
```
2. Unified Free API Connector Script
Create a .env file in your project root containing your free-tier keys: OPENROUTER_API_KEY, HUGGINGFACE_API_KEY, and GROQ_API_KEY. The following script standardizes authentication, payload formatting, and JSON parsing across all three providers. It uses a single routing function to minimize boilerplate and reduce integration friction.
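A minimal .env might look like the following; the values are placeholders, not real credentials:

```
OPENROUTER_API_KEY=your-openrouter-key
HUGGINGFACE_API_KEY=your-huggingface-key
GROQ_API_KEY=your-groq-key
```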
```python
import os

import requests
from dotenv import load_dotenv

load_dotenv()

API_CONFIG = {
    'openrouter': {'url': 'https://openrouter.ai/api/v1/chat/completions', 'model': 'meta-llama/llama-3-8b-instruct:free'},
    'huggingface': {'url': 'https://api-inference.huggingface.co/models/mistralai/Mistral-7B-Instruct-v0.2', 'model': None},
    'groq': {'url': 'https://api.groq.com/openai/v1/chat/completions', 'model': 'llama3-8b-8192'}
}

def query_free_api(provider: str, prompt: str) -> str:
    config = API_CONFIG[provider]
    headers = {
        'Authorization': f'Bearer {os.getenv(f"{provider.upper()}_API_KEY")}',
        'Content-Type': 'application/json'
    }
    # OpenRouter and Groq accept the OpenAI-style chat schema;
    # the Hugging Face Inference API expects a raw 'inputs' field.
    if provider == 'huggingface':
        payload = {'inputs': prompt}
    else:
        payload = {'model': config['model'], 'messages': [{'role': 'user', 'content': prompt}]}
    response = requests.post(config['url'], headers=headers, json=payload, timeout=30)
    response.raise_for_status()
    data = response.json()
    if provider == 'huggingface':
        return data[0]['generated_text']
    return data['choices'][0]['message']['content']
```
3. Execution & Rate Limit Management
Run the connector by calling query_free_api('openrouter', 'Your prompt here'). Free tiers enforce strict token and request caps. When endpoints return 429 status codes, implement exponential backoff to prevent IP bans. Understanding how request routing, tokenization, and endpoint throttling interact is critical for scaling; refer to Understanding LLM APIs for architectural context on rate limit handling.
```python
import time

def safe_query(provider, prompt, retries=3):
    for i in range(retries):
        try:
            return query_free_api(provider, prompt)
        except requests.exceptions.HTTPError as e:
            if e.response.status_code == 429:
                time.sleep(2 ** i)  # exponential backoff: 1s, 2s, 4s
            else:
                raise
    raise RuntimeError(f"{provider} still rate-limited after {retries} retries")
```
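The fixed 2 ** i delay works, but adding jitter keeps many clients from retrying in lockstep. A minimal sketch; backoff_delay is a hypothetical helper, not part of requests or any provider SDK:

```python
import random

def backoff_delay(attempt: int, base: float = 2.0, cap: float = 30.0, jitter: bool = True) -> float:
    """Exponential backoff delay in seconds, capped at `cap`, with optional full jitter."""
    delay = min(cap, base ** attempt)
    # Full jitter: pick a random delay in [0, delay] so retries spread out
    return random.uniform(0, delay) if jitter else delay
```

Swapping time.sleep(2 ** i) for time.sleep(backoff_delay(i)) keeps retries staggered across processes.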
4. Troubleshooting Common Beginner Errors
Address the three most frequent integration failures immediately:
- 401 Unauthorized: Verify .env key formatting and remove trailing whitespace.
- 404 Not Found: Confirm the provider URL matches current documentation endpoints.
- JSONDecodeError: Wrap response.json() in a try/except block to log raw text when APIs return HTML error pages instead of JSON. Keep your payload structure strict and validate responses before parsing.
```python
try:
    data = response.json()
except requests.exceptions.JSONDecodeError:
    print(f"Raw API response: {response.text}")
    raise
```
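Beyond logging the raw text, you can reject non-JSON responses before attempting to parse them. A sketch assuming a Content-Type check is sufficient for your providers; parse_json_response is a hypothetical helper, not part of requests:

```python
def parse_json_response(response):
    """Parse a JSON body, failing loudly when the API returns HTML instead.

    `response` is any object with requests-style .headers, .text, and .json().
    """
    content_type = response.headers.get('Content-Type', '')
    if 'application/json' not in content_type:
        # Truncate the body so error pages don't flood the logs
        raise ValueError(f"Expected JSON, got {content_type!r}: {response.text[:200]}")
    return response.json()
```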