Doccupine supports AI integration to enhance your documentation experience. You can use OpenAI, Anthropic, or Google Gemini to power AI features in your documentation site. The AI assistant uses your documentation content as context, allowing users to ask questions about your docs and receive accurate answers based on the documentation.
To enable AI features, create a .env file in the directory where your website is generated. By default, this is the nextjs-app/ directory. The file supports the following configuration options:
```
# LLM Provider Configuration
# Choose your preferred LLM provider: openai, anthropic, or google
LLM_PROVIDER=openai

# API Keys (set the one matching your provider)
OPENAI_API_KEY=your_openai_api_key_here
ANTHROPIC_API_KEY=your_anthropic_api_key_here
GOOGLE_API_KEY=your_google_api_key_here

# Optional: Override default chat model (see your provider's docs for available models)
# LLM_CHAT_MODEL=your-model-id

# Optional: Override default embedding model (see your provider's docs for available models)
# Note: Anthropic doesn't provide embeddings; Doccupine will fall back to OpenAI
# LLM_EMBEDDING_MODEL=your-embedding-model-id

# Optional: Set temperature (0-1, default: 0)
# LLM_TEMPERATURE=0
```

Set LLM_PROVIDER to one of the following values:
- openai - Use OpenAI's models
- anthropic - Use Anthropic's models
- google - Use Google's models

You need to set the API key that matches your chosen provider:
- OPENAI_API_KEY
- ANTHROPIC_API_KEY
- GOOGLE_API_KEY

Keep your API keys secure. Never commit your .env file to version control.
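The provider-to-key pairing can be sketched as a simple lookup. This is an illustrative helper only, not part of Doccupine's actual code; the function name `checkApiKey` is hypothetical:

```typescript
// Illustrative sketch: verify that the API key matching LLM_PROVIDER is set.
// The variable names mirror the .env options above; the function itself is
// hypothetical, not a Doccupine API.
function checkApiKey(env: Record<string, string | undefined>): boolean {
  const keyName: Record<string, string> = {
    openai: "OPENAI_API_KEY",
    anthropic: "ANTHROPIC_API_KEY",
    google: "GOOGLE_API_KEY",
  };
  // Unset LLM_PROVIDER is treated as "openai" here for the sketch.
  const provider = env.LLM_PROVIDER ?? "openai";
  const required = keyName[provider];
  // Unknown provider, or a missing/empty key, fails the check.
  return required !== undefined && Boolean(env[required]);
}
```

For example, `checkApiKey({ LLM_PROVIDER: "google" })` returns `false` because GOOGLE_API_KEY is not set.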
Doccupine automatically adds .env to your .gitignore file.
If you want to use Anthropic as your LLM provider, you must also have an OpenAI API key set. Here's why:
Anthropic (Claude) does not provide an embeddings API. They only offer chat/completion models, not text embeddings.
Your RAG (Retrieval-Augmented Generation) system has two components: an embedding model, which indexes your documentation and retrieves the passages relevant to a question, and a chat model, which generates the answer from those passages.
When using Anthropic as your LLM_PROVIDER, Doccupine uses Anthropic for chat/completion tasks but automatically falls back to OpenAI for embeddings. This means you need both API keys configured:
```
LLM_PROVIDER=anthropic
ANTHROPIC_API_KEY=your_anthropic_api_key_here
OPENAI_API_KEY=your_openai_api_key_here
```

This hybrid approach allows you to leverage Anthropic's powerful chat models while still having access to embeddings functionality through OpenAI.
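The fallback rule above can be sketched in a few lines. The type and function names here are hypothetical, chosen for illustration, and are not Doccupine internals:

```typescript
// Sketch of the Anthropic-to-OpenAI embedding fallback described above.
type Provider = "openai" | "anthropic" | "google";

// Anthropic offers no embeddings API, so embedding requests are routed
// to OpenAI; every other provider handles its own embeddings.
function embeddingProviderFor(chat: Provider): Provider {
  return chat === "anthropic" ? "openai" : chat;
}
```

Because `embeddingProviderFor("anthropic")` returns `"openai"`, an Anthropic setup needs both ANTHROPIC_API_KEY and OPENAI_API_KEY, which is exactly the hybrid configuration shown above.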
| Provider | Chat model | Embedding model |
|---|---|---|
| OpenAI | gpt-4.1-nano | text-embedding-3-small |
| Anthropic | claude-sonnet-4-5-20250929 | OpenAI fallback |
| Google | gemini-2.5-flash-lite | gemini-embedding-001 |
Override the default chat model by uncommenting and setting LLM_CHAT_MODEL. You can use any available model from your chosen provider; for a complete list of available models, refer to your provider's official documentation.
Override the default embedding model by uncommenting and setting LLM_EMBEDDING_MODEL. For a complete list of available embedding models, refer to your provider's official documentation.
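As a concrete illustration of both overrides together, a .env with OpenAI as the provider might look like this (the model IDs below are examples, not recommendations; always check your provider's current model list):

```
LLM_PROVIDER=openai
OPENAI_API_KEY=your_openai_api_key_here

# Example overrides (illustrative model IDs)
LLM_CHAT_MODEL=gpt-4o
LLM_EMBEDDING_MODEL=text-embedding-3-large
```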
Control the randomness of AI responses by setting LLM_TEMPERATURE to a value between 0 and 1:
- 0 - More deterministic and focused responses (default)
- 1 - More creative and varied responses
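A minimal sketch of how such a setting could be read, assuming an unset or unparseable value falls back to the default of 0 and out-of-range values are clamped to [0, 1] (the clamping behavior is an assumption of this sketch, not documented Doccupine behavior):

```typescript
// Hypothetical reader for LLM_TEMPERATURE: defaults to 0, clamps to [0, 1].
function readTemperature(raw: string | undefined): number {
  const parsed = Number(raw);
  // Unset or non-numeric values fall back to the default of 0.
  if (raw === undefined || Number.isNaN(parsed)) return 0;
  // Clamp into the documented 0-1 range (an assumption of this sketch).
  return Math.min(1, Math.max(0, parsed));
}
```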