AI implementation for OpenAI and Claude.

This commit is contained in:
2026-01-15 21:20:30 +01:00
parent fe70f3892c
commit 5bbec0e240
20 changed files with 623 additions and 24 deletions

.env.example Normal file

@@ -0,0 +1,16 @@
# AI Provider API Keys
# Get Claude API key from: https://console.anthropic.com/
ANTHROPIC_API_KEY=your_anthropic_api_key_here
# Get OpenAI API key from: https://platform.openai.com/api-keys
OPENAI_API_KEY=your_openai_api_key_here
# Provider Settings
DEFAULT_PROVIDER=claude
CLAUDE_MODEL=claude-3-5-sonnet-20241022
OPENAI_MODEL=gpt-4-turbo-preview
# API Configuration
MAX_TOKENS=4000
TEMPERATURE=0.7
FRONTEND_URL=http://localhost:3000

.gitignore vendored Normal file

@@ -0,0 +1,24 @@
# Environment variables
.env
# Python
backend/__pycache__/
backend/**/__pycache__/
backend/**/*.pyc
*.py[cod]
*$py.class
# Logs
*.log
# OS
.DS_Store
Thumbs.db
# IDE
.vscode/
.idea/
.cursor/
# Docker
*.pid


@@ -6,11 +6,11 @@ This file provides guidance to Claude Code (claude.ai/code) when working with co
DevDen is a self-hosted AI chat platform that enables organizations to provide AI-powered Q&A based on their own knowledge bases. Users interact through a clean chat interface while administrators manage knowledge bases, AI providers, and user access through a terminal-style dashboard.
**Current Status:** Early prototype stage with static frontend (HTML/CSS/JS). Backend infrastructure is not yet implemented.
**Current Status:** MVP with AI integration complete. Backend FastAPI server with Claude and OpenAI support. Frontend streams responses in real-time.
## Architecture
**Planned Technology Stack:**
**Technology Stack:**
- **Frontend:** Vanilla JavaScript (prototype), potential migration to Svelte
- **Backend:** FastAPI (Python)
- **Database:** PostgreSQL (users, conversations, settings)
@@ -25,9 +25,24 @@ DevDen is a self-hosted AI chat platform that enables organizations to provide A
## Current Files
- `index.html` - Main chat interface structure
- `script.js` - Chat interaction logic (mock responses currently)
- `style.css` - UI styling with fox-themed color scheme (accent: `#d4a574`)
**Frontend:**
- `index.html` - Main chat interface structure with welcome screen
- `script.js` - Chat interaction logic with SSE streaming
- `style.css` - Catppuccin Mocha theme with pixel aesthetic
**Backend:**
- `backend/app/main.py` - FastAPI application entry point
- `backend/app/config.py` - Environment configuration
- `backend/app/api/chat.py` - Chat endpoints (POST /api/chat, POST /api/chat/stream)
- `backend/app/services/provider_manager.py` - Provider abstraction and fallback
- `backend/app/services/provider_claude.py` - Claude implementation
- `backend/app/services/provider_openai.py` - OpenAI implementation
- `backend/app/models/schemas.py` - Pydantic request/response models
**Infrastructure:**
- `docker-compose.yml` - Multi-service orchestration
- `backend/Dockerfile.backend` - Backend container
- `.env.example` - Environment template
## Development Phases (from project.md)
@@ -39,14 +54,36 @@ DevDen is a self-hosted AI chat platform that enables organizations to provide A
6. Git Integration - Auto-sync from repositories
7. Polish - Production readiness
## Commands (Future)
## Commands
When Docker infrastructure is implemented:
**Start the application:**
```bash
docker-compose up -d
docker-compose exec devden-web python manage.py init-db
docker-compose exec devden-web python manage.py create-admin --email admin@company.com
docker-compose logs -f devden-web
docker compose up -d --build
```
**View logs:**
```bash
docker compose logs -f backend
docker compose logs -f frontend
```
**Stop the application:**
```bash
docker compose down
```
**Test backend API:**
```bash
# Health check
curl http://localhost:8000/health
# List available providers
curl http://localhost:8000/api/chat/providers
# Test chat (non-streaming)
curl -X POST http://localhost:8000/api/chat \
-H "Content-Type: application/json" \
-d '{"message": "Hello!"}'
```
## Design Decisions


@@ -1,9 +1,50 @@
## 🦊 DevDen
## DevDen
**Your AI assistant, powered by your knowledge**
---
## 🚀 Quick Start
### Prerequisites
- Docker and Docker Compose installed
- API key from [Anthropic](https://console.anthropic.com/) or [OpenAI](https://platform.openai.com/api-keys)
### Setup
1. **Clone the repository**
```bash
git clone https://github.com/yourusername/devden.git
cd devden
```
2. **Configure environment variables**
```bash
cp .env.example .env
# Edit .env and add your API keys
```
3. **Start the services**
```bash
docker compose up -d --build
```
4. **Access DevDen**
- Frontend: http://localhost:3000
- Backend API: http://localhost:8000
- Health Check: http://localhost:8000/health
### Environment Variables
Required variables in `.env`:
```
ANTHROPIC_API_KEY=your_key_here # For Claude
OPENAI_API_KEY=your_key_here # For OpenAI
DEFAULT_PROVIDER=claude # claude or openai
```
---
## What is DevDen?
DevDen is a self-hosted AI chat platform that lets people ask questions and get answers from AI providers like Claude, OpenAI, Gemini, and others. The key difference? As an administrator, you control what knowledge the AI has access to by connecting your own documentation, manuals, wikis, or any other information sources.


@@ -0,0 +1,19 @@
FROM python:3.11-slim
WORKDIR /app
# Install curl for health checks
RUN apt-get update && apt-get install -y curl && rm -rf /var/lib/apt/lists/*
# Install dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Copy application
COPY app/ ./app/
# Expose port
EXPOSE 8000
# Run with uvicorn
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000"]

backend/app/__init__.py Normal file


backend/app/api/chat.py Normal file

@@ -0,0 +1,69 @@
import json
from fastapi import APIRouter, HTTPException, status
from fastapi.responses import StreamingResponse
from ..config import settings
from ..models.schemas import ChatRequest, ChatResponse, ProviderListResponse
from ..services.provider_manager import provider_manager

router = APIRouter(prefix="/api/chat", tags=["chat"])

@router.post("/", response_model=ChatResponse)
async def chat(request: ChatRequest):
    """
    Non-streaming chat endpoint
    """
    try:
        provider = provider_manager.get_provider(request.provider)
        response = await provider.chat(request.message)
        return ChatResponse(message=response, provider=provider.get_provider_name())
    except ValueError as e:
        raise HTTPException(status_code=status.HTTP_400_BAD_REQUEST, detail=str(e))
    except Exception as e:
        raise HTTPException(
            status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
            detail=f"Error processing request: {str(e)}",
        )

@router.post("/stream")
async def chat_stream(request: ChatRequest):
    """
    Streaming chat endpoint - returns SSE (Server-Sent Events)
    """
    try:
        provider = provider_manager.get_provider(request.provider)

        async def event_generator():
            try:
                async for chunk in provider.chat_stream(request.message):
                    yield f"data: {json.dumps({'chunk': chunk})}\n\n"
                yield f"data: {json.dumps({'done': True})}\n\n"
            except Exception as e:
                yield f"data: {json.dumps({'error': str(e)})}\n\n"

        return StreamingResponse(
            event_generator(),
            media_type="text/event-stream",
            headers={
                "Cache-Control": "no-cache",
                "Connection": "keep-alive",
            },
        )
    except ValueError as e:
        raise HTTPException(status_code=status.HTTP_400_BAD_REQUEST, detail=str(e))

@router.get("/providers", response_model=ProviderListResponse)
async def list_providers():
    """
    List available providers
    """
    return ProviderListResponse(
        providers=provider_manager.get_available_providers(),
        default=settings.DEFAULT_PROVIDER,
    )
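For reference, the wire format emitted by `event_generator` above is one JSON object per `data:` line, terminated by a blank line. A minimal sketch of how a client could parse that payload — `parse_sse_events` and `join_chunks` are hypothetical helper names, not part of this commit:

```python
import json

def parse_sse_events(raw: str) -> list[dict]:
    """Extract the JSON event after each 'data: ' prefix in an SSE payload."""
    events = []
    for line in raw.split("\n"):
        if line.startswith("data: "):
            events.append(json.loads(line[len("data: "):]))
    return events

def join_chunks(events: list[dict]) -> str:
    """Reassemble the assistant message from 'chunk' events, ignoring done/error markers."""
    return "".join(e["chunk"] for e in events if "chunk" in e)
```

For example, `join_chunks(parse_sse_events('data: {"chunk": "Hi"}\n\n'))` returns `"Hi"`; a `{"done": true}` or `{"error": ...}` event contributes nothing to the joined text.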

backend/app/config.py Normal file

@@ -0,0 +1,33 @@
from typing import Optional
from pydantic_settings import BaseSettings

class Settings(BaseSettings):
    # API Keys
    ANTHROPIC_API_KEY: Optional[str] = None
    OPENAI_API_KEY: Optional[str] = None
    # Provider Settings
    DEFAULT_PROVIDER: str = "claude"
    CLAUDE_MODEL: str = "claude-3-5-sonnet-20241022"
    OPENAI_MODEL: str = "gpt-4-turbo-preview"
    # API Settings
    MAX_TOKENS: int = 4000
    TEMPERATURE: float = 0.7
    TIMEOUT: int = 60
    # CORS
    FRONTEND_URL: str = "http://localhost:3000"
    # Rate Limiting
    RATE_LIMIT_REQUESTS: int = 10
    RATE_LIMIT_WINDOW: int = 60

    class Config:
        env_file = ".env"
        case_sensitive = True

settings = Settings()

backend/app/main.py Normal file

@@ -0,0 +1,51 @@
import logging
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware
from fastapi.responses import JSONResponse
from .api import chat
from .config import settings
from .services.provider_manager import provider_manager

# Setup logging
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

app = FastAPI(
    title="DevDen API", description="AI chat backend for DevDen", version="1.0.0"
)

# CORS Configuration
app.add_middleware(
    CORSMiddleware,
    allow_origins=[settings.FRONTEND_URL, "http://localhost:3000"],
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)

# Include routers
app.include_router(chat.router)

@app.get("/health")
async def health_check():
    """Health check endpoint"""
    return JSONResponse(
        content={
            "status": "healthy",
            "providers": provider_manager.get_available_providers(),
        }
    )

@app.on_event("startup")
async def startup_event():
    logger.info("DevDen API starting up...")
    logger.info(f"Available providers: {provider_manager.get_available_providers()}")

@app.on_event("shutdown")
async def shutdown_event():
    logger.info("DevDen API shutting down...")


@@ -0,0 +1,24 @@
from typing import List, Optional
from pydantic import BaseModel, Field, field_validator

class ChatRequest(BaseModel):
    message: str = Field(..., min_length=1, max_length=10000)
    provider: Optional[str] = None

    @field_validator("message")
    @classmethod
    def message_not_empty(cls, v):
        if not v.strip():
            raise ValueError("Message cannot be empty or whitespace")
        return v.strip()

class ChatResponse(BaseModel):
    message: str
    provider: str

class ProviderListResponse(BaseModel):
    providers: List[str]
    default: str
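The `message` validator above strips surrounding whitespace and rejects blank input. The same rule, expressed as a plain function so it can be checked without constructing a model — `normalize_message` is an illustrative name, not part of this commit:

```python
def normalize_message(v: str) -> str:
    """Reject empty/whitespace-only input; return the stripped message."""
    if not v.strip():
        raise ValueError("Message cannot be empty or whitespace")
    return v.strip()
```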


@@ -0,0 +1,27 @@
from abc import ABC, abstractmethod
from typing import AsyncGenerator, Optional

class AIProvider(ABC):
    """Abstract base class for AI providers"""

    def __init__(self, api_key: str, model: str):
        self.api_key = api_key
        self.model = model

    @abstractmethod
    async def chat(self, message: str, system_prompt: Optional[str] = None) -> str:
        """Non-streaming chat"""
        pass

    @abstractmethod
    async def chat_stream(
        self, message: str, system_prompt: Optional[str] = None
    ) -> AsyncGenerator[str, None]:
        """Streaming chat - yields chunks of text"""
        pass

    @abstractmethod
    def get_provider_name(self) -> str:
        """Return provider identifier"""
        pass


@@ -0,0 +1,41 @@
from typing import AsyncGenerator, Optional
import anthropic
from .provider_base import AIProvider

class ClaudeProvider(AIProvider):
    def __init__(self, api_key: str, model: str):
        super().__init__(api_key, model)
        self.client = anthropic.AsyncAnthropic(api_key=api_key)

    async def chat(self, message: str, system_prompt: Optional[str] = None) -> str:
        """Non-streaming chat"""
        messages = [{"role": "user", "content": message}]
        kwargs = {"model": self.model, "max_tokens": 4000, "messages": messages}
        if system_prompt:
            kwargs["system"] = system_prompt
        response = await self.client.messages.create(**kwargs)
        return response.content[0].text

    async def chat_stream(
        self, message: str, system_prompt: Optional[str] = None
    ) -> AsyncGenerator[str, None]:
        """Streaming chat"""
        messages = [{"role": "user", "content": message}]
        kwargs = {"model": self.model, "max_tokens": 4000, "messages": messages}
        if system_prompt:
            kwargs["system"] = system_prompt
        async with self.client.messages.stream(**kwargs) as stream:
            async for text in stream.text_stream:
                yield text

    def get_provider_name(self) -> str:
        return "claude"


@@ -0,0 +1,75 @@
from typing import Optional
from ..config import settings
from .provider_base import AIProvider
from .provider_claude import ClaudeProvider
from .provider_openai import OpenAIProvider

class ProviderManager:
    """Manages provider selection and fallback logic"""

    def __init__(self):
        self.providers = {}
        self._initialize_providers()

    def _initialize_providers(self):
        """Initialize available providers based on API keys"""
        if settings.ANTHROPIC_API_KEY and settings.ANTHROPIC_API_KEY.strip():
            self.providers["claude"] = ClaudeProvider(
                api_key=settings.ANTHROPIC_API_KEY, model=settings.CLAUDE_MODEL
            )
        if settings.OPENAI_API_KEY and settings.OPENAI_API_KEY.strip():
            self.providers["openai"] = OpenAIProvider(
                api_key=settings.OPENAI_API_KEY, model=settings.OPENAI_MODEL
            )

    def get_provider(self, provider_name: Optional[str] = None) -> AIProvider:
        """
        Get a provider by name, or use default.
        Raises ValueError if provider not available.
        """
        name = provider_name or settings.DEFAULT_PROVIDER
        if name not in self.providers:
            raise ValueError(
                f"Provider '{name}' not available. "
                f"Available: {list(self.providers.keys())}"
            )
        return self.providers[name]

    def get_available_providers(self) -> list[str]:
        """Return list of available provider names"""
        return list(self.providers.keys())

    async def chat_with_fallback(
        self, message: str, preferred_provider: Optional[str] = None
    ) -> tuple[str, str]:
        """
        Try to chat with preferred provider, fall back to others if it fails.
        Returns (response, provider_used)
        """
        providers_to_try = [preferred_provider or settings.DEFAULT_PROVIDER] + [
            p
            for p in self.providers.keys()
            if p != (preferred_provider or settings.DEFAULT_PROVIDER)
        ]
        last_error = None
        for provider_name in providers_to_try:
            try:
                provider = self.get_provider(provider_name)
                response = await provider.chat(message)
                return response, provider_name
            except Exception as e:
                last_error = e
                continue
        raise Exception(f"All providers failed. Last error: {last_error}")

# Singleton instance
provider_manager = ProviderManager()
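The ordering logic inside `chat_with_fallback` can be isolated as a pure function, which makes it easy to see (and test) which provider gets tried first — `fallback_order` is a hypothetical helper, not part of this commit:

```python
def fallback_order(preferred, available, default="claude"):
    """Order in which chat_with_fallback tries providers:
    the preferred (or default) provider first, then every other available one.
    An unavailable first choice simply fails fast and the loop moves on."""
    first = preferred or default
    return [first] + [p for p in available if p != first]
```

Note the first entry is included even when it is not in `available`; in the manager this is harmless because `get_provider` raises, the exception is caught, and the loop continues to the next candidate.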


@@ -0,0 +1,48 @@
from typing import AsyncGenerator, Optional
from openai import AsyncOpenAI
from .provider_base import AIProvider

class OpenAIProvider(AIProvider):
    def __init__(self, api_key: str, model: str):
        super().__init__(api_key, model)
        self.client = AsyncOpenAI(api_key=api_key)

    async def chat(self, message: str, system_prompt: Optional[str] = None) -> str:
        """Non-streaming chat"""
        messages = []
        if system_prompt:
            messages.append({"role": "system", "content": system_prompt})
        messages.append({"role": "user", "content": message})
        response = await self.client.chat.completions.create(
            model=self.model, messages=messages, max_tokens=4000
        )
        return response.choices[0].message.content

    async def chat_stream(
        self, message: str, system_prompt: Optional[str] = None
    ) -> AsyncGenerator[str, None]:
        """Streaming chat"""
        messages = []
        if system_prompt:
            messages.append({"role": "system", "content": system_prompt})
        messages.append({"role": "user", "content": message})
        stream = await self.client.chat.completions.create(
            model=self.model, messages=messages, max_tokens=4000, stream=True
        )
        async for chunk in stream:
            if chunk.choices[0].delta.content:
                yield chunk.choices[0].delta.content

    def get_provider_name(self) -> str:
        return "openai"

backend/requirements.txt Normal file

@@ -0,0 +1,8 @@
fastapi>=0.109.0
uvicorn[standard]>=0.27.0
anthropic>=0.18.1
openai>=1.50.0
pydantic>=2.6.0
pydantic-settings>=2.1.0
python-dotenv>=1.0.0
httpx>=0.27.0


@@ -1,9 +1,36 @@
version: '3.8'
services:
  devden-web:
    build: .
    container_name: devden
  # Frontend (nginx serving static files)
  frontend:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: devden-frontend
    ports:
      - "3000:80"
    restart: unless-stopped
    depends_on:
      - backend
  # Backend (FastAPI)
  backend:
    build:
      context: ./backend
      dockerfile: Dockerfile.backend
    container_name: devden-backend
    ports:
      - "8000:8000"
    environment:
      - ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY}
      - OPENAI_API_KEY=${OPENAI_API_KEY}
      - DEFAULT_PROVIDER=${DEFAULT_PROVIDER:-claude}
      - CLAUDE_MODEL=${CLAUDE_MODEL:-claude-3-5-sonnet-20241022}
      - OPENAI_MODEL=${OPENAI_MODEL:-gpt-4-turbo-preview}
      - FRONTEND_URL=http://localhost:3000
    env_file:
      - .env
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8000/health"]
      interval: 30s
      timeout: 10s
      retries: 3


@@ -4,6 +4,8 @@ const chatMessages = document.getElementById("chatMessages");
const welcomeInput = document.getElementById("welcomeInput");
const chatInput = document.getElementById("chatInput");
const API_URL = "http://localhost:8000/api/chat";
let isInChat = false;
function switchToChat() {
@@ -53,7 +55,7 @@ function escapeHtml(text) {
  return div.innerHTML;
}
function sendMessage(text) {
async function sendMessage(text) {
  if (!text.trim()) return;
  if (!isInChat) {
@@ -66,15 +68,72 @@ function sendMessage(text) {
  showTyping();
  setTimeout(() => {
  try {
    const response = await fetch(`${API_URL}/stream`, {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        message: text,
        provider: null,
      }),
    });
    if (!response.ok) {
      throw new Error(`HTTP error! status: ${response.status}`);
    }
    hideTyping();
    addMessage(
      "I'm DevDen - your AI assistant. Soon I'll be connected to your knowledge base!",
      "assistant",
    );
    // Create message element for streaming
    const msgDiv = document.createElement("div");
    msgDiv.className = "message assistant";
    const textDiv = document.createElement("div");
    textDiv.className = "message-text";
    msgDiv.appendChild(textDiv);
    chatMessages.appendChild(msgDiv);
    // Read stream
    const reader = response.body.getReader();
    const decoder = new TextDecoder();
    let assistantMessage = "";
    while (true) {
      const { done, value } = await reader.read();
      if (done) break;
      const chunk = decoder.decode(value);
      const lines = chunk.split("\n");
      for (const line of lines) {
        if (line.startsWith("data: ")) {
          const data = JSON.parse(line.slice(6));
          if (data.error) {
            textDiv.textContent = `Error: ${data.error}`;
            break;
          }
          if (data.chunk) {
            assistantMessage += data.chunk;
            textDiv.textContent = assistantMessage;
            scrollToBottom();
          }
          if (data.done) {
            break;
          }
        }
      }
    }
  } catch (error) {
    hideTyping();
    addMessage(`Error: ${error.message}`, "assistant");
  } finally {
    chatInput.disabled = false;
    chatInput.focus();
  }, 1000);
  }
}
// Welcome screen input handler