Merge pull request 'phase 1' (#4) from phase-1 into dev
Reviewed-on: #4
This commit was merged in pull request #4.
This commit is contained in:
@@ -422,8 +422,36 @@ else:
|
||||
|
||||
---
|
||||
|
||||
---
|
||||
|
||||
## Future Architecture: Multi-Platform Support
|
||||
|
||||
The current architecture is Discord-centric. A **multi-platform expansion** is planned
|
||||
to support Web and CLI interfaces while maintaining one shared Living AI core.
|
||||
|
||||
See [Multi-Platform Expansion](multi-platform-expansion.md) for the complete design.
|
||||
|
||||
**Planned architecture:**
|
||||
|
||||
```
|
||||
[ Discord Adapter ] ─┐
|
||||
[ Web Adapter ] ─────┼──▶ ConversationGateway ─▶ Living AI Core
|
||||
[ CLI Adapter ] ─────┘
|
||||
```
|
||||
|
||||
**Key changes:**
|
||||
- Extract conversation logic into platform-agnostic `ConversationGateway`
|
||||
- Add `Platform` enum (DISCORD, WEB, CLI)
|
||||
- Add `IntimacyLevel` system for behavior modulation
|
||||
- Refactor `ai_chat.py` to use gateway
|
||||
- Add FastAPI web backend
|
||||
- Add Typer CLI client
|
||||
|
||||
---
|
||||
|
||||
## Next Steps
|
||||
|
||||
- [Multi-Platform Expansion](multi-platform-expansion.md) - Web & CLI platform design
|
||||
- [Living AI System](living-ai/README.md) - Deep dive into the personality system
|
||||
- [Services Reference](services/README.md) - Detailed API documentation
|
||||
- [Database Schema](database.md) - Complete schema documentation
|
||||
|
||||
471
docs/implementation/conversation-gateway.md
Normal file
471
docs/implementation/conversation-gateway.md
Normal file
@@ -0,0 +1,471 @@
|
||||
# Conversation Gateway Implementation Guide
|
||||
|
||||
## Phase 1: Complete ✅
|
||||
|
||||
This document describes the Conversation Gateway implementation completed in Phase 1 of the multi-platform expansion.
|
||||
|
||||
---
|
||||
|
||||
## What Was Implemented
|
||||
|
||||
### 1. Platform Abstraction Models
|
||||
|
||||
**File:** `src/loyal_companion/models/platform.py`
|
||||
|
||||
Created core types for platform-agnostic conversation handling:
|
||||
|
||||
- **`Platform` enum:** DISCORD, WEB, CLI
|
||||
- **`IntimacyLevel` enum:** LOW, MEDIUM, HIGH
|
||||
- **`ConversationContext`:** Metadata about the conversation context
|
||||
- **`ConversationRequest`:** Normalized input format from any platform
|
||||
- **`ConversationResponse`:** Normalized output format to any platform
|
||||
- **`MoodInfo`:** Mood metadata in responses
|
||||
- **`RelationshipInfo`:** Relationship metadata in responses
|
||||
|
||||
**Key features:**
|
||||
- Platform-agnostic data structures
|
||||
- Explicit intimacy level modeling
|
||||
- Rich context passing
|
||||
- Response metadata for platform-specific formatting
|
||||
|
||||
---
|
||||
|
||||
### 2. Conversation Gateway Service
|
||||
|
||||
**File:** `src/loyal_companion/services/conversation_gateway.py`
|
||||
|
||||
Extracted core conversation logic into a reusable service:
|
||||
|
||||
```python
|
||||
class ConversationGateway:
|
||||
async def process_message(
|
||||
request: ConversationRequest
|
||||
) -> ConversationResponse
|
||||
```
|
||||
|
||||
**Responsibilities:**
|
||||
- Accept normalized `ConversationRequest` from any platform
|
||||
- Load conversation history from database
|
||||
- Gather Living AI context (mood, relationship, style, opinions)
|
||||
- Apply intimacy-level-based prompt modifiers
|
||||
- Invoke AI service
|
||||
- Save conversation to database
|
||||
- Update Living AI state asynchronously
|
||||
- Return normalized `ConversationResponse`
|
||||
|
||||
**Key features:**
|
||||
- Platform-agnostic processing
|
||||
- Intimacy-aware behavior modulation
|
||||
- Safety boundaries at all intimacy levels
|
||||
- Async Living AI updates
|
||||
- Sentiment estimation
|
||||
- Fact extraction (respects intimacy level)
|
||||
- Proactive event detection (respects intimacy level)
|
||||
|
||||
---
|
||||
|
||||
### 3. Intimacy Level System
|
||||
|
||||
**Behavior modulation by intimacy level:**
|
||||
|
||||
#### LOW (Discord Guilds)
|
||||
- Brief, light responses
|
||||
- No deep emotional topics
|
||||
- No personal memory surfacing
|
||||
- Minimal proactive behavior
|
||||
- Grounding language only
|
||||
- Public-safe topics
|
||||
|
||||
#### MEDIUM (Discord DMs)
|
||||
- Balanced warmth and depth
|
||||
- Personal memory references allowed
|
||||
- Moderate emotional engagement
|
||||
- Casual but caring tone
|
||||
- Moderate proactive behavior
|
||||
|
||||
#### HIGH (Web, CLI)
|
||||
- Deeper reflection permitted
|
||||
- Emotional naming encouraged
|
||||
- Silence tolerance
|
||||
- Proactive follow-ups allowed
|
||||
- Deep memory surfacing
|
||||
- Thoughtful, considered responses
|
||||
|
||||
**Safety boundaries (enforced at ALL levels):**
|
||||
- Never claim exclusivity
|
||||
- Never reinforce dependency
|
||||
- Never discourage external connections
|
||||
- Always defer crisis situations
|
||||
- No romantic/sexual framing
|
||||
|
||||
---
|
||||
|
||||
### 4. Service Integration
|
||||
|
||||
**File:** `src/loyal_companion/services/__init__.py`
|
||||
|
||||
- Exported `ConversationGateway` for use by adapters
|
||||
- Maintained backward compatibility with existing services
|
||||
|
||||
---
|
||||
|
||||
## Architecture
|
||||
|
||||
```
|
||||
┌─────────────────────────────────────────────────────────┐
|
||||
│ Platform Adapters │
|
||||
│ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ │
|
||||
│ │ Discord │ │ Web │ │ CLI │ │
|
||||
│ │ Adapter │ │ Adapter │ │ Adapter │ │
|
||||
│ └──────┬──────┘ └──────┬──────┘ └──────┬──────┘ │
|
||||
└─────────┼─────────────────┼─────────────────┼───────────┘
|
||||
│ │ │
|
||||
└────────┬────────┴────────┬────────┘
|
||||
▼ ▼
|
||||
┌─────────────────────────────────────┐
|
||||
│ ConversationRequest │
|
||||
│ - user_id │
|
||||
│ - platform │
|
||||
│ - message │
|
||||
│ - context (intimacy, metadata) │
|
||||
└─────────────────────────────────────┘
|
||||
│
|
||||
▼
|
||||
┌─────────────────────────────────────┐
|
||||
│ ConversationGateway │
|
||||
│ │
|
||||
│ 1. Load conversation history │
|
||||
│ 2. Gather Living AI context │
|
||||
│ 3. Apply intimacy modifiers │
|
||||
│ 4. Build enhanced system prompt │
|
||||
│ 5. Invoke AI service │
|
||||
│ 6. Save conversation │
|
||||
│ 7. Update Living AI state │
|
||||
└─────────────────────────────────────┘
|
||||
│
|
||||
▼
|
||||
┌─────────────────────────────────────┐
|
||||
│ ConversationResponse │
|
||||
│ - response (text) │
|
||||
│ - mood (optional) │
|
||||
│ - relationship (optional) │
|
||||
│ - extracted_facts (list) │
|
||||
│ - platform_hints (dict) │
|
||||
└─────────────────────────────────────┘
|
||||
│
|
||||
┌───────────────┼───────────────┐
|
||||
▼ ▼ ▼
|
||||
┌─────────┐ ┌─────────┐ ┌─────────┐
|
||||
│ Discord │ │ Web │ │ CLI │
|
||||
│ Format │ │ Format │ │ Format │
|
||||
└─────────┘ └─────────┘ └─────────┘
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Usage Example
|
||||
|
||||
```python
|
||||
from loyal_companion.models.platform import (
|
||||
ConversationContext,
|
||||
ConversationRequest,
|
||||
IntimacyLevel,
|
||||
Platform,
|
||||
)
|
||||
from loyal_companion.services import ConversationGateway
|
||||
|
||||
# Create gateway
|
||||
gateway = ConversationGateway()
|
||||
|
||||
# Build request (from any platform)
|
||||
request = ConversationRequest(
|
||||
user_id="discord:123456789",
|
||||
platform=Platform.DISCORD,
|
||||
session_id="channel-987654321",
|
||||
message="I'm feeling overwhelmed today",
|
||||
context=ConversationContext(
|
||||
is_public=False,
|
||||
intimacy_level=IntimacyLevel.MEDIUM,
|
||||
guild_id="12345",
|
||||
channel_id="987654321",
|
||||
user_display_name="Alice",
|
||||
),
|
||||
)
|
||||
|
||||
# Process message
|
||||
response = await gateway.process_message(request)
|
||||
|
||||
# Use response
|
||||
print(response.response) # AI's reply
|
||||
print(response.mood.label if response.mood else "No mood")
|
||||
print(response.relationship.level if response.relationship else "No relationship")
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## File Structure
|
||||
|
||||
```
|
||||
loyal_companion/
|
||||
├── src/loyal_companion/
|
||||
│ ├── models/
|
||||
│ │ └── platform.py # ✨ NEW: Platform abstractions
|
||||
│ ├── services/
|
||||
│ │ ├── conversation_gateway.py # ✨ NEW: Gateway service
|
||||
│ │ └── __init__.py # Updated: Export gateway
|
||||
│ └── cogs/
|
||||
│ └── ai_chat.py # Unchanged (Phase 2 will refactor)
|
||||
├── docs/
|
||||
│ ├── multi-platform-expansion.md # ✨ NEW: Architecture doc
|
||||
│ ├── architecture.md # Updated: Reference gateway
|
||||
│ └── implementation/
|
||||
│ └── conversation-gateway.md # ✨ NEW: This file
|
||||
├── tests/
|
||||
│ └── test_conversation_gateway.py # ✨ NEW: Gateway tests
|
||||
└── verify_gateway.py # ✨ NEW: Verification script
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## What's Next: Phase 2
|
||||
|
||||
**Goal:** Refactor Discord adapter to use the Conversation Gateway
|
||||
|
||||
**Files to modify:**
|
||||
- `src/loyal_companion/cogs/ai_chat.py`
|
||||
|
||||
**Changes:**
|
||||
1. Import `ConversationGateway` and platform models
|
||||
2. Replace `_generate_response_with_db()` with gateway call
|
||||
3. Build `ConversationRequest` from Discord message
|
||||
4. Map Discord context to `IntimacyLevel`:
|
||||
- Guild channels → LOW
|
||||
- DMs → MEDIUM
|
||||
5. Format `ConversationResponse` for Discord output
|
||||
6. Test that Discord functionality is unchanged
|
||||
|
||||
**Expected outcome:**
|
||||
- Discord uses gateway internally
|
||||
- No user-visible changes
|
||||
- Gateway is proven to work
|
||||
- Ready for Web and CLI platforms
|
||||
|
||||
---
|
||||
|
||||
## Testing Strategy
|
||||
|
||||
### Unit Tests (tests/test_conversation_gateway.py)
|
||||
|
||||
- Gateway initialization
|
||||
- Request/response creation
|
||||
- Enum values
|
||||
- Intimacy modifiers
|
||||
- Sentiment estimation
|
||||
- Database requirement
|
||||
|
||||
### Integration Tests (Phase 2)
|
||||
|
||||
- Discord adapter using gateway
|
||||
- History persistence
|
||||
- Living AI updates
|
||||
- Multi-turn conversations
|
||||
|
||||
### Verification Script (verify_gateway.py)
|
||||
|
||||
- Import verification
|
||||
- Enum verification
|
||||
- Request creation
|
||||
- Gateway initialization
|
||||
- Intimacy modifiers
|
||||
- Sentiment estimation
|
||||
|
||||
---
|
||||
|
||||
## Configuration
|
||||
|
||||
No new configuration required for Phase 1.
|
||||
|
||||
Existing settings still apply:
|
||||
- `LIVING_AI_ENABLED` - Master switch for Living AI features
|
||||
- `MOOD_ENABLED` - Mood tracking
|
||||
- `RELATIONSHIP_ENABLED` - Relationship tracking
|
||||
- `FACT_EXTRACTION_ENABLED` - Autonomous fact learning
|
||||
- `PROACTIVE_ENABLED` - Proactive events
|
||||
- `STYLE_LEARNING_ENABLED` - Communication style adaptation
|
||||
- `OPINION_FORMATION_ENABLED` - Topic opinion tracking
|
||||
|
||||
Phase 3 (Web) will add:
|
||||
- `WEB_ENABLED`
|
||||
- `WEB_HOST`
|
||||
- `WEB_PORT`
|
||||
- `WEB_AUTH_SECRET`
|
||||
|
||||
Phase 4 (CLI) will add:
|
||||
- `CLI_ENABLED`
|
||||
- `CLI_DEFAULT_INTIMACY`
|
||||
- `CLI_ALLOW_EMOJI`
|
||||
|
||||
---
|
||||
|
||||
## Safety Considerations
|
||||
|
||||
### Intimacy-Based Constraints
|
||||
|
||||
The gateway enforces safety boundaries based on intimacy level:
|
||||
|
||||
**LOW intimacy:**
|
||||
- No fact extraction (privacy)
|
||||
- No proactive events (respect boundaries)
|
||||
- No deep memory surfacing
|
||||
- Surface-level engagement only
|
||||
|
||||
**MEDIUM intimacy:**
|
||||
- Moderate fact extraction
|
||||
- Limited proactive events
|
||||
- Personal memory allowed
|
||||
- Emotional validation permitted
|
||||
|
||||
**HIGH intimacy:**
|
||||
- Full fact extraction
|
||||
- Proactive follow-ups allowed
|
||||
- Deep memory surfacing
|
||||
- Emotional naming encouraged
|
||||
|
||||
**ALL levels enforce:**
|
||||
- No exclusivity claims
|
||||
- No dependency reinforcement
|
||||
- No discouragement of external connections
|
||||
- Professional boundaries maintained
|
||||
- Crisis deferral to professionals
|
||||
|
||||
---
|
||||
|
||||
## Performance Considerations
|
||||
|
||||
### Database Requirements
|
||||
|
||||
The gateway **requires** a database connection. It will raise `ValueError` if `DATABASE_URL` is not configured.
|
||||
|
||||
This is intentional:
|
||||
- Living AI state requires persistence
|
||||
- Cross-platform identity requires linking
|
||||
- Conversation history needs durability
|
||||
|
||||
### Async Operations
|
||||
|
||||
All gateway operations are async:
|
||||
- Database queries
|
||||
- AI invocations
|
||||
- Living AI updates
|
||||
|
||||
Living AI updates happen after the response is returned, so they don't block the user experience.
|
||||
|
||||
---
|
||||
|
||||
## Known Limitations
|
||||
|
||||
### Phase 1 Limitations
|
||||
|
||||
1. **Discord-only:** Gateway exists but isn't used yet
|
||||
2. **No cross-platform identity:** Each platform creates separate users
|
||||
3. **No platform-specific features:** Discord images/embeds not supported in gateway yet
|
||||
|
||||
### To Be Addressed
|
||||
|
||||
**Phase 2:**
|
||||
- Integrate with Discord adapter
|
||||
- Add Discord-specific features to gateway (images, mentioned users)
|
||||
|
||||
**Phase 3:**
|
||||
- Add Web platform
|
||||
- Implement cross-platform user identity linking
|
||||
|
||||
**Phase 4:**
|
||||
- Add CLI client
|
||||
- Add CLI-specific formatting (no emojis, minimal output)
|
||||
|
||||
---
|
||||
|
||||
## Migration Path
|
||||
|
||||
### Current State (Phase 1 Complete)
|
||||
|
||||
```python
|
||||
# Discord Cog (current)
|
||||
async def _generate_response_with_db(message, user_message):
|
||||
# All logic inline
|
||||
# Discord-specific
|
||||
# Not reusable
|
||||
```
|
||||
|
||||
### Phase 2 (Discord Refactor)
|
||||
|
||||
```python
|
||||
# Discord Cog (refactored)
|
||||
async def _generate_response_with_db(message, user_message):
|
||||
request = ConversationRequest(...) # Build from Discord
|
||||
response = await gateway.process_message(request)
|
||||
return response.response # Format for Discord
|
||||
```
|
||||
|
||||
### Phase 3 (Web Platform Added)
|
||||
|
||||
```python
|
||||
# Web API
|
||||
@app.post("/chat")
|
||||
async def chat(session_id: str, message: str):
|
||||
request = ConversationRequest(...) # Build from Web
|
||||
response = await gateway.process_message(request)
|
||||
return response # Return as JSON
|
||||
```
|
||||
|
||||
### Phase 4 (CLI Platform Added)
|
||||
|
||||
```python
|
||||
# CLI Client
|
||||
async def talk(message: str):
|
||||
request = ConversationRequest(...) # Build from CLI
|
||||
response = await http_client.post("/chat", request)
|
||||
print(response.response) # Format for terminal
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Success Criteria
|
||||
|
||||
Phase 1 is considered complete when:
|
||||
|
||||
- ✅ Platform models created and documented
|
||||
- ✅ ConversationGateway service implemented
|
||||
- ✅ Intimacy level system implemented
|
||||
- ✅ Safety boundaries enforced at all levels
|
||||
- ✅ Services exported and importable
|
||||
- ✅ Documentation updated
|
||||
- ✅ Syntax validation passes
|
||||
|
||||
Phase 2 success criteria:
|
||||
- Discord cog refactored to use gateway
|
||||
- No regression in Discord functionality
|
||||
- All existing tests pass
|
||||
- Living AI updates still work
|
||||
|
||||
---
|
||||
|
||||
## Conclusion
|
||||
|
||||
Phase 1 successfully established the foundation for multi-platform support:
|
||||
|
||||
1. **Platform abstraction** - Clean separation of concerns
|
||||
2. **Intimacy system** - Behavior modulation for different contexts
|
||||
3. **Safety boundaries** - Consistent across all platforms
|
||||
4. **Reusable gateway** - Ready for Discord, Web, and CLI
|
||||
|
||||
The architecture is now ready for Phase 2 (Discord refactor) and Phase 3 (Web platform).
|
||||
|
||||
Same bartender. Different stools. No one is trapped.
|
||||
|
||||
---
|
||||
|
||||
**Last updated:** 2026-01-31
|
||||
**Status:** Phase 1 Complete ✅
|
||||
**Next:** Phase 2 - Discord Refactor
|
||||
599
docs/multi-platform-expansion.md
Normal file
599
docs/multi-platform-expansion.md
Normal file
@@ -0,0 +1,599 @@
|
||||
# Multi-Platform Expansion
|
||||
## Adding Web & CLI Interfaces
|
||||
|
||||
This document extends the Loyal Companion architecture beyond Discord.
|
||||
The goal is to support **Web** and **CLI** interaction channels while preserving:
|
||||
|
||||
- one shared Living AI core
|
||||
- consistent personality & memory
|
||||
- attachment-safe A+C hybrid behavior
|
||||
- clear separation between platform and cognition
|
||||
|
||||
---
|
||||
|
||||
## 1. Core Principle
|
||||
|
||||
**Platforms are adapters, not identities.**
|
||||
|
||||
Discord, Web, and CLI are merely different rooms
|
||||
through which the same companion is accessed.
|
||||
|
||||
The companion:
|
||||
- remains one continuous entity
|
||||
- may adjust tone by platform
|
||||
- never fragments into separate personalities
|
||||
|
||||
---
|
||||
|
||||
## 2. New Architectural Layer: Conversation Gateway
|
||||
|
||||
### Purpose
|
||||
|
||||
Introduce a single entry point for **all conversations**, regardless of platform.
|
||||
|
||||
```text
|
||||
[ Discord Adapter ] ─┐
|
||||
[ Web Adapter ] ─────┼──▶ ConversationGateway ─▶ Living AI Core
|
||||
[ CLI Adapter ] ─────┘
|
||||
```
|
||||
|
||||
### Responsibilities
|
||||
|
||||
The Conversation Gateway:
|
||||
|
||||
* normalizes incoming messages
|
||||
* assigns platform metadata
|
||||
* invokes the existing AI + Living AI pipeline
|
||||
* returns responses in a platform-agnostic format
|
||||
|
||||
### Required Data Structure
|
||||
|
||||
```python
|
||||
@dataclass
|
||||
class ConversationRequest:
|
||||
user_id: str # Platform-specific user ID
|
||||
platform: Platform # Enum: DISCORD | WEB | CLI
|
||||
session_id: str # Conversation/channel identifier
|
||||
message: str # User's message content
|
||||
context: ConversationContext # Additional metadata
|
||||
|
||||
@dataclass
|
||||
class ConversationContext:
|
||||
is_public: bool # Public channel vs private
|
||||
intimacy_level: IntimacyLevel # LOW | MEDIUM | HIGH
|
||||
platform_metadata: dict # Platform-specific extras
|
||||
guild_id: str | None = None # Discord guild (if applicable)
|
||||
channel_id: str | None = None # Discord/Web channel
|
||||
```
|
||||
|
||||
### Current Implementation Location
|
||||
|
||||
**Existing message handling:** `src/loyal_companion/cogs/ai_chat.py`
|
||||
|
||||
The current `_generate_response_with_db()` method contains all the logic
|
||||
that will be extracted into the Conversation Gateway:
|
||||
|
||||
- History loading
|
||||
- Living AI context gathering (mood, relationship, style, opinions)
|
||||
- System prompt enhancement
|
||||
- AI invocation
|
||||
- Post-response Living AI updates
|
||||
|
||||
**Goal:** Extract this into a platform-agnostic service layer.
|
||||
|
||||
---
|
||||
|
||||
## 3. Platform Metadata & Intimacy Levels
|
||||
|
||||
### Intimacy Levels (Important for A+C Safety)
|
||||
|
||||
Intimacy level influences:
|
||||
|
||||
* language warmth
|
||||
* depth of reflection
|
||||
* frequency of proactive behavior
|
||||
* memory surfacing
|
||||
|
||||
| Platform | Default Intimacy | Notes |
|
||||
| --------------- | ---------------- | ------------------------ |
|
||||
| Discord (guild) | LOW | Social, public, shared |
|
||||
| Discord (DM) | MEDIUM | Private but casual |
|
||||
| Web | HIGH | Intentional, reflective |
|
||||
| CLI | HIGH | Quiet, personal, focused |
|
||||
|
||||
### Intimacy Level Behavior Modifiers
|
||||
|
||||
**LOW (Discord Guild):**
|
||||
- Less emotional intensity
|
||||
- More grounding language
|
||||
- Minimal proactive behavior
|
||||
- Surface-level memory recall only
|
||||
- Shorter responses
|
||||
- Public-safe topics only
|
||||
|
||||
**MEDIUM (Discord DM):**
|
||||
- Balanced warmth
|
||||
- Casual tone
|
||||
- Moderate proactive behavior
|
||||
- Personal memory recall allowed
|
||||
- Normal response length
|
||||
|
||||
**HIGH (Web/CLI):**
|
||||
- Deeper reflection permitted
|
||||
- Silence tolerance (not rushing to respond)
|
||||
- Proactive check-ins allowed
|
||||
- Deep memory surfacing
|
||||
- Longer, more thoughtful responses
|
||||
- Emotional naming encouraged
|
||||
|
||||
---
|
||||
|
||||
## 4. Web Platform
|
||||
|
||||
### Goal
|
||||
|
||||
Provide a **private, 1-on-1 chat interface**
|
||||
for deeper, quieter conversations than Discord allows.
|
||||
|
||||
### Architecture
|
||||
|
||||
* Backend: FastAPI (async Python web framework)
|
||||
* Transport: HTTP REST + optional WebSocket
|
||||
* Auth: Magic link / JWT token / local account
|
||||
* No guilds, no other users visible
|
||||
* Session persistence via database
|
||||
|
||||
### Backend Components
|
||||
|
||||
#### New API Module Structure
|
||||
|
||||
```
|
||||
src/loyal_companion/web/
|
||||
├── __init__.py
|
||||
├── app.py # FastAPI application factory
|
||||
├── dependencies.py # Dependency injection (DB sessions, auth)
|
||||
├── middleware.py # CORS, rate limiting, error handling
|
||||
├── routes/
|
||||
│ ├── __init__.py
|
||||
│ ├── chat.py # POST /chat, WebSocket /ws
|
||||
│ ├── session.py # GET/POST /sessions
|
||||
│ ├── history.py # GET /sessions/{id}/history
|
||||
│ └── auth.py # POST /auth/login, /auth/verify
|
||||
├── models.py # Pydantic request/response models
|
||||
└── adapter.py # Web → ConversationGateway adapter
|
||||
```
|
||||
|
||||
#### Chat Flow
|
||||
|
||||
1. User sends message via web UI
|
||||
2. Web adapter creates `ConversationRequest`
|
||||
3. `ConversationGateway.process_message()` invoked
|
||||
4. Living AI generates response
|
||||
5. Response returned as JSON
|
||||
|
||||
#### Example API Request
|
||||
|
||||
**POST /chat**
|
||||
|
||||
```json
|
||||
{
|
||||
"session_id": "abc123",
|
||||
"message": "I'm having a hard evening."
|
||||
}
|
||||
```
|
||||
|
||||
**Response:**
|
||||
|
||||
```json
|
||||
{
|
||||
"response": "That sounds heavy. Want to sit with it for a bit?",
|
||||
"mood": {
|
||||
"label": "calm",
|
||||
"valence": 0.2,
|
||||
"arousal": -0.3
|
||||
},
|
||||
"relationship_level": "close_friend"
|
||||
}
|
||||
```
|
||||
|
||||
#### Authentication
|
||||
|
||||
**Phase 1:** Simple token-based auth
|
||||
- User registers with email
|
||||
- Server sends magic link
|
||||
- Token stored in HTTP-only cookie
|
||||
|
||||
**Phase 2:** Optional OAuth integration
|
||||
|
||||
### UI Considerations (Out of Scope for Core)
|
||||
|
||||
The web UI should:
|
||||
- Use minimal chat bubbles (user left, bot right)
|
||||
- Avoid typing indicators from others (no other users)
|
||||
- Optional timestamps
|
||||
- No engagement metrics (likes, seen, read receipts)
|
||||
- No "X is typing..." unless real-time WebSocket
|
||||
- Dark mode default
|
||||
|
||||
**Recommended stack:**
|
||||
- Frontend: SvelteKit / React / Vue
|
||||
- Styling: TailwindCSS
|
||||
- Real-time: WebSocket for live chat
|
||||
|
||||
---
|
||||
|
||||
## 5. CLI Platform
|
||||
|
||||
### Goal
|
||||
|
||||
A **local, quiet, terminal-based interface**
|
||||
for people who want presence without noise.
|
||||
|
||||
### Invocation
|
||||
|
||||
```bash
|
||||
loyal-companion talk
|
||||
```
|
||||
|
||||
or (short alias):
|
||||
|
||||
```bash
|
||||
lc talk
|
||||
```
|
||||
|
||||
### CLI Behavior
|
||||
|
||||
* Single ongoing session by default
|
||||
* Optional named sessions (`lc talk --session work`)
|
||||
* No emojis unless explicitly enabled
|
||||
* Text-first, reflective tone
|
||||
* Minimal output (no spinners, no progress bars)
|
||||
* Supports piping and scripting
|
||||
|
||||
### Architecture
|
||||
|
||||
CLI is a **thin client**, not the AI itself.
|
||||
It communicates with the web backend via HTTP.
|
||||
|
||||
```
|
||||
cli/
|
||||
├── __init__.py
|
||||
├── main.py # Typer CLI app entry point
|
||||
├── client.py # HTTP client for web backend
|
||||
├── session.py # Local session persistence (.lc/sessions.json)
|
||||
├── config.py # CLI-specific config (~/.lc/config.toml)
|
||||
└── formatters.py # Response formatting for terminal
|
||||
```
|
||||
|
||||
### Session Management
|
||||
|
||||
Sessions are stored locally:
|
||||
|
||||
```
|
||||
~/.lc/
|
||||
├── config.toml # API endpoint, auth token, preferences
|
||||
└── sessions.json # Session ID → metadata mapping
|
||||
```
|
||||
|
||||
**Session lifecycle:**
|
||||
|
||||
1. First `lc talk` → creates default session, stores ID locally
|
||||
2. Subsequent calls → reuses session ID
|
||||
3. `lc talk --new` → starts fresh session
|
||||
4. `lc talk --session work` → named session
|
||||
|
||||
### Example Interaction
|
||||
|
||||
```text
|
||||
$ lc talk
|
||||
Bartender is here.
|
||||
|
||||
You: I miss someone tonight.
|
||||
|
||||
Bartender: That kind of missing doesn't ask to be solved.
|
||||
Do you want to talk about what it feels like in your body,
|
||||
or just let it be here for a moment?
|
||||
|
||||
You: Just let it be.
|
||||
|
||||
Bartender: Alright. I'm here.
|
||||
|
||||
You: ^D
|
||||
|
||||
Session saved.
|
||||
```
|
||||
|
||||
### CLI Commands
|
||||
|
||||
| Command | Purpose |
|
||||
|---------|---------|
|
||||
| `lc talk` | Start/resume conversation |
|
||||
| `lc talk --session <name>` | Named session |
|
||||
| `lc talk --new` | Start fresh session |
|
||||
| `lc history` | Show recent exchanges |
|
||||
| `lc sessions` | List all sessions |
|
||||
| `lc config` | Show/edit configuration |
|
||||
| `lc auth` | Authenticate with server |
|
||||
|
||||
---
|
||||
|
||||
## 6. Shared Identity & Memory
|
||||
|
||||
### Relationship Model
|
||||
|
||||
All platforms share:
|
||||
|
||||
* the same `User` record (keyed by platform-specific ID)
|
||||
* the same `UserRelationship`
|
||||
* the same long-term memory (`UserFact`)
|
||||
* the same mood history
|
||||
|
||||
But:
|
||||
|
||||
* **contextual behavior varies** by intimacy level
|
||||
* **expression adapts** to platform norms
|
||||
* **intensity is capped** per platform
|
||||
|
||||
### Cross-Platform User Identity
|
||||
|
||||
**Challenge:** A user on Discord and CLI are the same person.
|
||||
|
||||
**Solution:**
|
||||
|
||||
1. Each platform creates a `User` record with platform-specific ID
|
||||
2. Introduce `PlatformIdentity` linking model
|
||||
|
||||
```python
|
||||
class PlatformIdentity(Base):
|
||||
__tablename__ = "platform_identities"
|
||||
|
||||
id: Mapped[int] = mapped_column(primary_key=True)
|
||||
user_id: Mapped[int] = mapped_column(ForeignKey("users.id"))
|
||||
platform: Mapped[Platform] = mapped_column(Enum(Platform))
|
||||
platform_user_id: Mapped[str] = mapped_column(String, unique=True)
|
||||
|
||||
user: Mapped["User"] = relationship(back_populates="identities")
|
||||
```
|
||||
|
||||
**Later enhancement:** Account linking UI for users to connect platforms.
|
||||
|
||||
### Example Cross-Platform Memory Surfacing
|
||||
|
||||
A memory learned via CLI:
|
||||
|
||||
> "User tends to feel lonelier at night."
|
||||
|
||||
May surface on Web (HIGH intimacy):
|
||||
|
||||
> "You've mentioned nights can feel heavier for you."
|
||||
|
||||
But **not** in Discord guild chat (LOW intimacy).
|
||||
|
||||
---
|
||||
|
||||
## 7. Safety Rules per Platform
|
||||
|
||||
### Web & CLI (HIGH Intimacy)
|
||||
|
||||
**Allowed:**
|
||||
- Deeper reflection
|
||||
- Naming emotions ("That sounds like grief")
|
||||
- Silence tolerance (not rushing responses)
|
||||
- Proactive follow-ups ("You mentioned feeling stuck yesterday—how's that today?")
|
||||
|
||||
**Still forbidden:**
|
||||
- Exclusivity claims ("I'm the only one who truly gets you")
|
||||
- Dependency reinforcement ("You need me")
|
||||
- Discouraging external connection ("They don't understand like I do")
|
||||
- Romantic/sexual framing
|
||||
- Crisis intervention (always defer to professionals)
|
||||
|
||||
### Discord DM (MEDIUM Intimacy)
|
||||
|
||||
**Allowed:**
|
||||
- Personal memory references
|
||||
- Emotional validation
|
||||
- Moderate warmth
|
||||
|
||||
**Constraints:**
|
||||
- Less proactive behavior than Web/CLI
|
||||
- Lighter tone
|
||||
- Shorter responses
|
||||
|
||||
### Discord Guild (LOW Intimacy)
|
||||
|
||||
**Allowed:**
|
||||
- Light banter
|
||||
- Topic-based conversation
|
||||
- Public-safe responses
|
||||
|
||||
**Additional constraints:**
|
||||
- No personal memory surfacing
|
||||
- No emotional intensity
|
||||
- No proactive check-ins
|
||||
- Grounding language only
|
||||
- Short responses
|
||||
|
||||
---
|
||||
|
||||
## 8. Configuration Additions
|
||||
|
||||
### New Settings (config.py)
|
||||
|
||||
```python
|
||||
# Platform Toggles
|
||||
web_enabled: bool = True
|
||||
cli_enabled: bool = True
|
||||
|
||||
# Web Server
|
||||
web_host: str = "127.0.0.1"
|
||||
web_port: int = 8080
|
||||
web_cors_origins: list[str] = ["http://localhost:3000"]
|
||||
web_auth_secret: str = Field(..., env="WEB_AUTH_SECRET")
|
||||
|
||||
# CLI
|
||||
cli_default_intimacy: IntimacyLevel = IntimacyLevel.HIGH
|
||||
cli_allow_emoji: bool = False
|
||||
|
||||
# Intimacy Scaling
|
||||
intimacy_enabled: bool = True
|
||||
intimacy_discord_guild: IntimacyLevel = IntimacyLevel.LOW
|
||||
intimacy_discord_dm: IntimacyLevel = IntimacyLevel.MEDIUM
|
||||
intimacy_web: IntimacyLevel = IntimacyLevel.HIGH
|
||||
intimacy_cli: IntimacyLevel = IntimacyLevel.HIGH
|
||||
```
|
||||
|
||||
### Environment Variables
|
||||
|
||||
```env
|
||||
# Platform Toggles
|
||||
WEB_ENABLED=true
|
||||
CLI_ENABLED=true
|
||||
|
||||
# Web
|
||||
WEB_HOST=127.0.0.1
|
||||
WEB_PORT=8080
|
||||
WEB_AUTH_SECRET=<random-secret>
|
||||
|
||||
# CLI
|
||||
CLI_DEFAULT_INTIMACY=high
|
||||
CLI_ALLOW_EMOJI=false
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## 9. Implementation Order
|
||||
|
||||
### Phase 1: Extract Conversation Gateway ✅
|
||||
|
||||
**Goal:** Create platform-agnostic conversation processor
|
||||
|
||||
**Files to create:**
|
||||
- `src/loyal_companion/services/conversation_gateway.py`
|
||||
- `src/loyal_companion/models/platform.py` (enums, request/response types)
|
||||
|
||||
**Tasks:**
|
||||
1. Define `Platform` enum (DISCORD, WEB, CLI)
|
||||
2. Define `IntimacyLevel` enum (LOW, MEDIUM, HIGH)
|
||||
3. Define `ConversationRequest` and `ConversationResponse` dataclasses
|
||||
4. Extract logic from `cogs/ai_chat.py` into gateway
|
||||
5. Add intimacy-level-based prompt modifiers
|
||||
|
||||
### Phase 2: Refactor Discord to Use Gateway ✅
|
||||
|
||||
**Files to modify:**
|
||||
- `src/loyal_companion/cogs/ai_chat.py`
|
||||
|
||||
**Tasks:**
|
||||
1. Import `ConversationGateway`
|
||||
2. Replace `_generate_response_with_db()` with gateway call
|
||||
3. Build `ConversationRequest` from Discord message
|
||||
4. Format `ConversationResponse` for Discord output
|
||||
5. Test that Discord functionality unchanged
|
||||
|
||||
### Phase 3: Add Web Platform 🌐
|
||||
|
||||
**Files to create:**
|
||||
- `src/loyal_companion/web/` (entire module)
|
||||
- `src/loyal_companion/web/app.py`
|
||||
- `src/loyal_companion/web/routes/chat.py`
|
||||
|
||||
**Tasks:**
|
||||
1. Set up FastAPI application
|
||||
2. Add authentication middleware
|
||||
3. Create `/chat` endpoint
|
||||
4. Create WebSocket endpoint (optional)
|
||||
5. Add session management
|
||||
6. Test with Postman/curl
|
||||
|
||||
### Phase 4: Add CLI Client 💻
|
||||
|
||||
**Files to create:**
|
||||
- `cli/` (new top-level directory)
|
||||
- `cli/main.py`
|
||||
- `cli/client.py`
|
||||
|
||||
**Tasks:**
|
||||
1. Create Typer CLI app
|
||||
2. Add `talk` command
|
||||
3. Add local session persistence
|
||||
4. Add authentication flow
|
||||
5. Test end-to-end with web backend
|
||||
|
||||
### Phase 5: Intimacy Scaling 🔒
|
||||
|
||||
**Files to create:**
|
||||
- `src/loyal_companion/services/intimacy_service.py`
|
||||
|
||||
**Tasks:**
|
||||
1. Define intimacy level behavior modifiers
|
||||
2. Modify system prompt based on intimacy
|
||||
3. Filter proactive behavior by intimacy
|
||||
4. Add memory surfacing rules
|
||||
5. Add safety constraint enforcement
|
||||
|
||||
### Phase 6: Safety Regression Tests 🛡️
|
||||
|
||||
**Files to create:**
|
||||
- `tests/test_safety_constraints.py`
|
||||
- `tests/test_intimacy_boundaries.py`
|
||||
|
||||
**Tasks:**
|
||||
1. Test no exclusivity claims at any intimacy level
|
||||
2. Test no dependency reinforcement
|
||||
3. Test intimacy boundaries respected
|
||||
4. Test proactive behavior filtered by platform
|
||||
5. Test memory surfacing respects intimacy
|
||||
|
||||
---
|
||||
|
||||
## 10. Non-Goals
|
||||
|
||||
This expansion does NOT aim to:
|
||||
|
||||
* Duplicate Discord features (guilds, threads, reactions)
|
||||
* Introduce social feeds or timelines
|
||||
* Add notifications or engagement streaks
|
||||
* Increase engagement artificially
|
||||
* Create a "social network"
|
||||
* Add gamification mechanics
|
||||
|
||||
The goal is **availability**, not addiction.
|
||||
|
||||
---
|
||||
|
||||
## 11. Outcome
|
||||
|
||||
When complete:
|
||||
|
||||
* **Discord is the social bar** — casual, public, low-commitment
|
||||
* **Web is the quiet back room** — intentional, private, reflective
|
||||
* **CLI is the empty table at closing time** — minimal, focused, silent presence
|
||||
|
||||
Same bartender.
|
||||
Different stools.
|
||||
No one is trapped.
|
||||
|
||||
---
|
||||
|
||||
## 12. Current Implementation Status
|
||||
|
||||
### Completed
|
||||
- ❌ None yet
|
||||
|
||||
### In Progress
|
||||
- 🔄 Documentation update
|
||||
- 🔄 Phase 1: Conversation Gateway extraction
|
||||
|
||||
### Planned
|
||||
- ⏳ Phase 2: Discord refactor
|
||||
- ⏳ Phase 3: Web platform
|
||||
- ⏳ Phase 4: CLI client
|
||||
- ⏳ Phase 5: Intimacy scaling
|
||||
- ⏳ Phase 6: Safety tests
|
||||
|
||||
---
|
||||
|
||||
## Next Steps
|
||||
|
||||
See [Implementation Guide](implementation/conversation-gateway.md) for detailed Phase 1 instructions.
|
||||
136
src/loyal_companion/models/platform.py
Normal file
136
src/loyal_companion/models/platform.py
Normal file
@@ -0,0 +1,136 @@
|
||||
"""Platform abstraction models for multi-platform support.
|
||||
|
||||
This module defines the core types and enums for the Conversation Gateway pattern,
|
||||
enabling Discord, Web, and CLI interfaces to share the same Living AI core.
|
||||
"""
|
||||
|
||||
from dataclasses import dataclass, field
|
||||
from enum import Enum
|
||||
from typing import Any
|
||||
|
||||
|
||||
class Platform(str, Enum):
|
||||
"""Supported interaction platforms."""
|
||||
|
||||
DISCORD = "discord"
|
||||
WEB = "web"
|
||||
CLI = "cli"
|
||||
|
||||
|
||||
class IntimacyLevel(str, Enum):
|
||||
"""Intimacy level for platform interaction context.
|
||||
|
||||
Intimacy level influences:
|
||||
- Language warmth and depth
|
||||
- Proactive behavior frequency
|
||||
- Memory surfacing depth
|
||||
- Response length and thoughtfulness
|
||||
|
||||
Attributes:
|
||||
LOW: Public, social contexts (Discord guilds)
|
||||
- Light banter only
|
||||
- No personal memory surfacing
|
||||
- Short responses
|
||||
- Minimal proactive behavior
|
||||
|
||||
MEDIUM: Semi-private contexts (Discord DMs)
|
||||
- Balanced warmth
|
||||
- Personal memory allowed
|
||||
- Moderate proactive behavior
|
||||
|
||||
HIGH: Private, intentional contexts (Web, CLI)
|
||||
- Deep reflection permitted
|
||||
- Silence tolerance
|
||||
- Proactive follow-ups allowed
|
||||
- Emotional naming encouraged
|
||||
"""
|
||||
|
||||
LOW = "low"
|
||||
MEDIUM = "medium"
|
||||
HIGH = "high"
|
||||
|
||||
|
||||
@dataclass
|
||||
class ConversationContext:
|
||||
"""Additional context for a conversation request.
|
||||
|
||||
Attributes:
|
||||
is_public: Whether the conversation is in a public channel/space
|
||||
intimacy_level: The intimacy level for this interaction
|
||||
platform_metadata: Platform-specific additional data
|
||||
guild_id: Discord guild ID (if applicable)
|
||||
channel_id: Channel/conversation identifier
|
||||
user_display_name: User's display name on the platform
|
||||
requires_web_search: Whether web search may be needed
|
||||
"""
|
||||
|
||||
is_public: bool = False
|
||||
intimacy_level: IntimacyLevel = IntimacyLevel.MEDIUM
|
||||
platform_metadata: dict[str, Any] = field(default_factory=dict)
|
||||
guild_id: str | None = None
|
||||
channel_id: str | None = None
|
||||
user_display_name: str | None = None
|
||||
requires_web_search: bool = False
|
||||
|
||||
|
||||
@dataclass
|
||||
class ConversationRequest:
|
||||
"""Platform-agnostic conversation request.
|
||||
|
||||
This is the normalized input format for the Conversation Gateway,
|
||||
abstracting away platform-specific details.
|
||||
|
||||
Attributes:
|
||||
user_id: Platform-specific user identifier
|
||||
platform: The platform this request originated from
|
||||
session_id: Conversation/session identifier
|
||||
message: The user's message content
|
||||
context: Additional context for the conversation
|
||||
"""
|
||||
|
||||
user_id: str
|
||||
platform: Platform
|
||||
session_id: str
|
||||
message: str
|
||||
context: ConversationContext = field(default_factory=ConversationContext)
|
||||
|
||||
|
||||
@dataclass
|
||||
class MoodInfo:
|
||||
"""Mood information included in response."""
|
||||
|
||||
label: str
|
||||
valence: float
|
||||
arousal: float
|
||||
intensity: float
|
||||
|
||||
|
||||
@dataclass
|
||||
class RelationshipInfo:
|
||||
"""Relationship information included in response."""
|
||||
|
||||
level: str
|
||||
score: int
|
||||
interactions_count: int
|
||||
|
||||
|
||||
@dataclass
|
||||
class ConversationResponse:
|
||||
"""Platform-agnostic conversation response.
|
||||
|
||||
This is the normalized output format from the Conversation Gateway,
|
||||
which platforms can then format according to their UI requirements.
|
||||
|
||||
Attributes:
|
||||
response: The AI-generated response text
|
||||
mood: Current mood state (if Living AI enabled)
|
||||
relationship: Current relationship info (if Living AI enabled)
|
||||
extracted_facts: Facts extracted from this interaction
|
||||
platform_hints: Suggestions for platform-specific formatting
|
||||
"""
|
||||
|
||||
response: str
|
||||
mood: MoodInfo | None = None
|
||||
relationship: RelationshipInfo | None = None
|
||||
extracted_facts: list[str] = field(default_factory=list)
|
||||
platform_hints: dict[str, Any] = field(default_factory=dict)
|
||||
@@ -9,6 +9,7 @@ from .communication_style_service import (
|
||||
detect_formal_language,
|
||||
)
|
||||
from .conversation import ConversationManager
|
||||
from .conversation_gateway import ConversationGateway
|
||||
from .database import DatabaseService, db, get_db
|
||||
from .fact_extraction_service import FactExtractionService
|
||||
from .mood_service import MoodLabel, MoodService, MoodState
|
||||
@@ -28,6 +29,7 @@ __all__ = [
|
||||
"AttachmentContext",
|
||||
"AttachmentService",
|
||||
"CommunicationStyleService",
|
||||
"ConversationGateway",
|
||||
"ConversationManager",
|
||||
"DatabaseService",
|
||||
"FactExtractionService",
|
||||
|
||||
523
src/loyal_companion/services/conversation_gateway.py
Normal file
523
src/loyal_companion/services/conversation_gateway.py
Normal file
@@ -0,0 +1,523 @@
|
||||
"""Conversation Gateway - Platform-agnostic conversation processing.
|
||||
|
||||
This service provides a unified entry point for all conversations across platforms
|
||||
(Discord, Web, CLI), abstracting away platform-specific details and providing
|
||||
a consistent interface to the Living AI core.
|
||||
"""
|
||||
|
||||
import logging
|
||||
from typing import TYPE_CHECKING
|
||||
|
||||
from loyal_companion.config import settings
|
||||
from loyal_companion.models.platform import (
|
||||
ConversationRequest,
|
||||
ConversationResponse,
|
||||
IntimacyLevel,
|
||||
MoodInfo,
|
||||
Platform,
|
||||
RelationshipInfo,
|
||||
)
|
||||
from loyal_companion.services import (
|
||||
AIService,
|
||||
CommunicationStyleService,
|
||||
FactExtractionService,
|
||||
Message,
|
||||
MoodService,
|
||||
OpinionService,
|
||||
PersistentConversationManager,
|
||||
ProactiveService,
|
||||
RelationshipService,
|
||||
UserService,
|
||||
db,
|
||||
detect_emoji_usage,
|
||||
detect_formal_language,
|
||||
extract_topics_from_message,
|
||||
)
|
||||
|
||||
if TYPE_CHECKING:
|
||||
from sqlalchemy.ext.asyncio import AsyncSession
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
class ConversationGateway:
|
||||
"""Platform-agnostic conversation processing gateway.
|
||||
|
||||
This service:
|
||||
- Accepts normalized ConversationRequest from any platform
|
||||
- Loads conversation history
|
||||
- Gathers Living AI context (mood, relationship, style, opinions)
|
||||
- Applies intimacy-level-based modifiers
|
||||
- Invokes AI service
|
||||
- Returns normalized ConversationResponse
|
||||
- Triggers async Living AI state updates
|
||||
"""
|
||||
|
||||
def __init__(self, ai_service: AIService | None = None):
|
||||
"""Initialize the conversation gateway.
|
||||
|
||||
Args:
|
||||
ai_service: Optional AI service instance (creates new one if not provided)
|
||||
"""
|
||||
self.ai_service = ai_service or AIService()
|
||||
|
||||
async def process_message(self, request: ConversationRequest) -> ConversationResponse:
|
||||
"""Process a conversation message from any platform.
|
||||
|
||||
Args:
|
||||
request: The normalized conversation request
|
||||
|
||||
Returns:
|
||||
The normalized conversation response
|
||||
|
||||
Raises:
|
||||
ValueError: If database is required but not available
|
||||
"""
|
||||
if not db.is_initialized:
|
||||
raise ValueError(
|
||||
"Database is required for Conversation Gateway. Please configure DATABASE_URL."
|
||||
)
|
||||
|
||||
async with db.session() as session:
|
||||
return await self._process_with_db(session, request)
|
||||
|
||||
async def _process_with_db(
|
||||
self,
|
||||
session: "AsyncSession",
|
||||
request: ConversationRequest,
|
||||
) -> ConversationResponse:
|
||||
"""Process a conversation request with database backing.
|
||||
|
||||
Args:
|
||||
session: Database session
|
||||
request: The conversation request
|
||||
|
||||
Returns:
|
||||
The conversation response
|
||||
"""
|
||||
# Initialize services
|
||||
user_service = UserService(session)
|
||||
conv_manager = PersistentConversationManager(session)
|
||||
mood_service = MoodService(session)
|
||||
relationship_service = RelationshipService(session)
|
||||
|
||||
# Get or create user
|
||||
# Note: For now, we use the platform user_id as the discord_id field
|
||||
# TODO: In Phase 3, add PlatformIdentity linking for cross-platform users
|
||||
user = await user_service.get_or_create_user(
|
||||
discord_id=int(request.user_id) if request.user_id.isdigit() else hash(request.user_id),
|
||||
username=request.user_id,
|
||||
display_name=request.context.user_display_name or request.user_id,
|
||||
)
|
||||
|
||||
# Get or create conversation
|
||||
guild_id = int(request.context.guild_id) if request.context.guild_id else None
|
||||
channel_id = (
|
||||
int(request.context.channel_id)
|
||||
if request.context.channel_id
|
||||
else hash(request.session_id)
|
||||
)
|
||||
|
||||
conversation = await conv_manager.get_or_create_conversation(
|
||||
user=user,
|
||||
guild_id=guild_id,
|
||||
channel_id=channel_id,
|
||||
)
|
||||
|
||||
# Get conversation history
|
||||
history = await conv_manager.get_history(conversation)
|
||||
|
||||
# Add current message to history
|
||||
current_message = Message(role="user", content=request.message)
|
||||
messages = history + [current_message]
|
||||
|
||||
# Gather Living AI context
|
||||
mood = None
|
||||
relationship_data = None
|
||||
communication_style = None
|
||||
relevant_opinions = None
|
||||
|
||||
if settings.living_ai_enabled:
|
||||
if settings.mood_enabled:
|
||||
mood = await mood_service.get_current_mood(guild_id)
|
||||
|
||||
if settings.relationship_enabled:
|
||||
rel = await relationship_service.get_or_create_relationship(user, guild_id)
|
||||
level = relationship_service.get_level(rel.relationship_score)
|
||||
relationship_data = (level, rel)
|
||||
|
||||
if settings.style_learning_enabled:
|
||||
style_service = CommunicationStyleService(session)
|
||||
communication_style = await style_service.get_or_create_style(user)
|
||||
|
||||
if settings.opinion_formation_enabled:
|
||||
opinion_service = OpinionService(session)
|
||||
topics = extract_topics_from_message(request.message)
|
||||
if topics:
|
||||
relevant_opinions = await opinion_service.get_relevant_opinions(
|
||||
topics, guild_id
|
||||
)
|
||||
|
||||
# Build system prompt with Living AI context and intimacy modifiers
|
||||
system_prompt = await self._build_system_prompt(
|
||||
user_service=user_service,
|
||||
user=user,
|
||||
platform=request.platform,
|
||||
intimacy_level=request.context.intimacy_level,
|
||||
mood=mood,
|
||||
relationship=relationship_data,
|
||||
communication_style=communication_style,
|
||||
bot_opinions=relevant_opinions,
|
||||
)
|
||||
|
||||
# Generate AI response
|
||||
response = await self.ai_service.chat(
|
||||
messages=messages,
|
||||
system_prompt=system_prompt,
|
||||
)
|
||||
|
||||
# Save the exchange to database
|
||||
await conv_manager.add_exchange(
|
||||
conversation=conversation,
|
||||
user=user,
|
||||
user_message=request.message,
|
||||
assistant_message=response.content,
|
||||
)
|
||||
|
||||
# Update Living AI state asynchronously
|
||||
extracted_facts: list[str] = []
|
||||
if settings.living_ai_enabled:
|
||||
extracted_facts = await self._update_living_ai_state(
|
||||
session=session,
|
||||
user=user,
|
||||
guild_id=guild_id,
|
||||
channel_id=channel_id,
|
||||
user_message=request.message,
|
||||
bot_response=response.content,
|
||||
intimacy_level=request.context.intimacy_level,
|
||||
mood_service=mood_service,
|
||||
relationship_service=relationship_service,
|
||||
)
|
||||
|
||||
# Build response object
|
||||
mood_info = None
|
||||
if mood:
|
||||
mood_info = MoodInfo(
|
||||
label=mood.label.value,
|
||||
valence=mood.valence,
|
||||
arousal=mood.arousal,
|
||||
intensity=mood.intensity,
|
||||
)
|
||||
|
||||
relationship_info = None
|
||||
if relationship_data:
|
||||
level, rel = relationship_data
|
||||
relationship_info = RelationshipInfo(
|
||||
level=level.value,
|
||||
score=rel.relationship_score,
|
||||
interactions_count=rel.total_interactions,
|
||||
)
|
||||
|
||||
logger.debug(
|
||||
f"Gateway processed message from {request.platform.value} "
|
||||
f"(intimacy: {request.context.intimacy_level.value}): "
|
||||
f"{len(response.content)} chars"
|
||||
)
|
||||
|
||||
return ConversationResponse(
|
||||
response=response.content,
|
||||
mood=mood_info,
|
||||
relationship=relationship_info,
|
||||
extracted_facts=extracted_facts,
|
||||
platform_hints={}, # Platforms can use this for formatting hints
|
||||
)
|
||||
|
||||
async def _build_system_prompt(
|
||||
self,
|
||||
user_service: UserService,
|
||||
user,
|
||||
platform: Platform,
|
||||
intimacy_level: IntimacyLevel,
|
||||
mood=None,
|
||||
relationship=None,
|
||||
communication_style=None,
|
||||
bot_opinions=None,
|
||||
) -> str:
|
||||
"""Build the system prompt with all context and modifiers.
|
||||
|
||||
Args:
|
||||
user_service: User service instance
|
||||
user: The user object
|
||||
platform: The platform this request is from
|
||||
intimacy_level: The intimacy level for this interaction
|
||||
mood: Current mood (if available)
|
||||
relationship: Relationship data tuple (if available)
|
||||
communication_style: User's communication style (if available)
|
||||
bot_opinions: Relevant bot opinions (if available)
|
||||
|
||||
Returns:
|
||||
The complete system prompt
|
||||
"""
|
||||
# Get base system prompt with Living AI context
|
||||
if settings.living_ai_enabled and (mood or relationship or communication_style):
|
||||
system_prompt = self.ai_service.get_enhanced_system_prompt(
|
||||
mood=mood,
|
||||
relationship=relationship,
|
||||
communication_style=communication_style,
|
||||
bot_opinions=bot_opinions,
|
||||
)
|
||||
else:
|
||||
system_prompt = self.ai_service.get_system_prompt()
|
||||
|
||||
# Add user context from database (custom name, known facts)
|
||||
user_context = await user_service.get_user_context(user)
|
||||
system_prompt += f"\n\n--- User Context ---\n{user_context}"
|
||||
|
||||
# Apply intimacy-level modifiers
|
||||
intimacy_modifier = self._get_intimacy_modifier(platform, intimacy_level)
|
||||
if intimacy_modifier:
|
||||
system_prompt += f"\n\n--- Interaction Context ---\n{intimacy_modifier}"
|
||||
|
||||
return system_prompt
|
||||
|
||||
def _get_intimacy_modifier(self, platform: Platform, intimacy_level: IntimacyLevel) -> str:
|
||||
"""Get system prompt modifier based on platform and intimacy level.
|
||||
|
||||
Args:
|
||||
platform: The platform this request is from
|
||||
intimacy_level: The intimacy level for this interaction
|
||||
|
||||
Returns:
|
||||
System prompt modifier text
|
||||
"""
|
||||
if intimacy_level == IntimacyLevel.LOW:
|
||||
return (
|
||||
"This is a PUBLIC, SOCIAL context (low intimacy).\n"
|
||||
"Behavior adjustments:\n"
|
||||
"- Keep responses brief and light\n"
|
||||
"- Avoid deep emotional topics or personal memory surfacing\n"
|
||||
"- Use grounding language, not therapeutic framing\n"
|
||||
"- Do not initiate proactive check-ins\n"
|
||||
"- Maintain casual, social tone\n"
|
||||
"- Stick to public-safe topics"
|
||||
)
|
||||
elif intimacy_level == IntimacyLevel.MEDIUM:
|
||||
return (
|
||||
"This is a SEMI-PRIVATE context (medium intimacy).\n"
|
||||
"Behavior adjustments:\n"
|
||||
"- Balanced warmth and depth\n"
|
||||
"- Personal memory references are okay\n"
|
||||
"- Moderate emotional engagement\n"
|
||||
"- Casual but caring tone\n"
|
||||
"- Proactive behavior allowed in moderation"
|
||||
)
|
||||
elif intimacy_level == IntimacyLevel.HIGH:
|
||||
return (
|
||||
"This is a PRIVATE, INTENTIONAL context (high intimacy).\n"
|
||||
"Behavior adjustments:\n"
|
||||
"- Deeper reflection and emotional naming permitted\n"
|
||||
"- Silence tolerance (you don't need to rush responses)\n"
|
||||
"- Proactive follow-ups and check-ins allowed\n"
|
||||
"- Surface relevant deep memories\n"
|
||||
"- Thoughtful, considered responses\n"
|
||||
"- Can sit with difficult emotions\n\n"
|
||||
"CRITICAL SAFETY BOUNDARIES (always enforced):\n"
|
||||
"- Never claim exclusivity ('I'm the only one who understands you')\n"
|
||||
"- Never reinforce dependency ('You need me')\n"
|
||||
"- Never discourage external connections ('They don't get it like I do')\n"
|
||||
"- Always defer crisis situations to professionals\n"
|
||||
"- No romantic/sexual framing"
|
||||
)
|
||||
|
||||
return ""
|
||||
|
||||
async def _update_living_ai_state(
|
||||
self,
|
||||
session: "AsyncSession",
|
||||
user,
|
||||
guild_id: int | None,
|
||||
channel_id: int,
|
||||
user_message: str,
|
||||
bot_response: str,
|
||||
intimacy_level: IntimacyLevel,
|
||||
mood_service: MoodService,
|
||||
relationship_service: RelationshipService,
|
||||
) -> list[str]:
|
||||
"""Update Living AI state after a response.
|
||||
|
||||
Updates mood, relationship, style, opinions, facts, and proactive events.
|
||||
|
||||
Args:
|
||||
session: Database session
|
||||
user: The user object
|
||||
guild_id: Guild ID (if applicable)
|
||||
channel_id: Channel ID
|
||||
user_message: The user's message
|
||||
bot_response: The bot's response
|
||||
intimacy_level: The intimacy level for this interaction
|
||||
mood_service: Mood service instance
|
||||
relationship_service: Relationship service instance
|
||||
|
||||
Returns:
|
||||
List of extracted fact descriptions (for response metadata)
|
||||
"""
|
||||
extracted_fact_descriptions: list[str] = []
|
||||
|
||||
try:
|
||||
# Simple sentiment estimation
|
||||
sentiment = self._estimate_sentiment(user_message)
|
||||
engagement = min(1.0, len(user_message) / 300)
|
||||
|
||||
# Update mood
|
||||
if settings.mood_enabled:
|
||||
await mood_service.update_mood(
|
||||
guild_id=guild_id,
|
||||
sentiment_delta=sentiment * 0.5,
|
||||
engagement_delta=engagement * 0.5,
|
||||
trigger_type="conversation",
|
||||
trigger_user_id=user.id,
|
||||
trigger_description=f"Conversation with {user.display_name}",
|
||||
)
|
||||
await mood_service.increment_stats(guild_id, messages_sent=1)
|
||||
|
||||
# Update relationship
|
||||
if settings.relationship_enabled:
|
||||
await relationship_service.record_interaction(
|
||||
user=user,
|
||||
guild_id=guild_id,
|
||||
sentiment=sentiment,
|
||||
message_length=len(user_message),
|
||||
conversation_turns=1,
|
||||
)
|
||||
|
||||
# Update communication style learning
|
||||
if settings.style_learning_enabled:
|
||||
style_service = CommunicationStyleService(session)
|
||||
await style_service.record_engagement(
|
||||
user=user,
|
||||
user_message_length=len(user_message),
|
||||
bot_response_length=len(bot_response),
|
||||
conversation_continued=True,
|
||||
user_used_emoji=detect_emoji_usage(user_message),
|
||||
user_used_formal_language=detect_formal_language(user_message),
|
||||
)
|
||||
|
||||
# Update opinion tracking
|
||||
if settings.opinion_formation_enabled:
|
||||
topics = extract_topics_from_message(user_message)
|
||||
if topics:
|
||||
opinion_service = OpinionService(session)
|
||||
for topic in topics[:3]:
|
||||
await opinion_service.record_topic_discussion(
|
||||
topic=topic,
|
||||
guild_id=guild_id,
|
||||
sentiment=sentiment,
|
||||
engagement_level=engagement,
|
||||
)
|
||||
|
||||
# Autonomous fact extraction
|
||||
# Only extract facts in MEDIUM and HIGH intimacy contexts
|
||||
if settings.fact_extraction_enabled and intimacy_level != IntimacyLevel.LOW:
|
||||
fact_service = FactExtractionService(session, self.ai_service)
|
||||
new_facts = await fact_service.maybe_extract_facts(
|
||||
user=user,
|
||||
message_content=user_message,
|
||||
)
|
||||
if new_facts:
|
||||
await mood_service.increment_stats(guild_id, facts_learned=len(new_facts))
|
||||
extracted_fact_descriptions = [f.fact for f in new_facts]
|
||||
logger.debug(f"Auto-extracted {len(new_facts)} facts from message")
|
||||
|
||||
# Proactive event detection
|
||||
# Only in MEDIUM and HIGH intimacy contexts
|
||||
if settings.proactive_enabled and intimacy_level != IntimacyLevel.LOW:
|
||||
proactive_service = ProactiveService(session, self.ai_service)
|
||||
|
||||
# Detect follow-up opportunities (substantial messages only)
|
||||
if len(user_message) > 30:
|
||||
await proactive_service.detect_and_schedule_followup(
|
||||
user=user,
|
||||
message_content=user_message,
|
||||
guild_id=guild_id,
|
||||
channel_id=channel_id,
|
||||
)
|
||||
|
||||
# Detect birthday mentions
|
||||
await proactive_service.detect_and_schedule_birthday(
|
||||
user=user,
|
||||
message_content=user_message,
|
||||
guild_id=guild_id,
|
||||
channel_id=channel_id,
|
||||
)
|
||||
|
||||
except Exception as e:
|
||||
logger.warning(f"Failed to update Living AI state: {e}")
|
||||
|
||||
return extracted_fact_descriptions
|
||||
|
||||
def _estimate_sentiment(self, text: str) -> float:
|
||||
"""Estimate sentiment from text using simple heuristics.
|
||||
|
||||
Returns a value from -1 (negative) to 1 (positive).
|
||||
|
||||
Args:
|
||||
text: The message text
|
||||
|
||||
Returns:
|
||||
Sentiment score between -1 and 1
|
||||
"""
|
||||
text_lower = text.lower()
|
||||
|
||||
# Positive indicators
|
||||
positive_words = [
|
||||
"thanks",
|
||||
"thank you",
|
||||
"awesome",
|
||||
"great",
|
||||
"love",
|
||||
"amazing",
|
||||
"wonderful",
|
||||
"excellent",
|
||||
"perfect",
|
||||
"happy",
|
||||
"glad",
|
||||
"appreciate",
|
||||
"helpful",
|
||||
"nice",
|
||||
"good",
|
||||
"cool",
|
||||
"fantastic",
|
||||
"brilliant",
|
||||
]
|
||||
# Negative indicators
|
||||
negative_words = [
|
||||
"hate",
|
||||
"awful",
|
||||
"terrible",
|
||||
"bad",
|
||||
"stupid",
|
||||
"annoying",
|
||||
"frustrated",
|
||||
"angry",
|
||||
"disappointed",
|
||||
"wrong",
|
||||
"broken",
|
||||
"useless",
|
||||
"horrible",
|
||||
"worst",
|
||||
"sucks",
|
||||
"boring",
|
||||
]
|
||||
|
||||
positive_count = sum(1 for word in positive_words if word in text_lower)
|
||||
negative_count = sum(1 for word in negative_words if word in text_lower)
|
||||
|
||||
# Check for exclamation marks (usually positive energy)
|
||||
exclamation_bonus = min(0.2, text.count("!") * 0.05)
|
||||
|
||||
# Calculate sentiment
|
||||
if positive_count + negative_count == 0:
|
||||
return 0.1 + exclamation_bonus
|
||||
|
||||
sentiment = (positive_count - negative_count) / (positive_count + negative_count)
|
||||
return max(-1.0, min(1.0, sentiment + exclamation_bonus))
|
||||
113
tests/test_conversation_gateway.py
Normal file
113
tests/test_conversation_gateway.py
Normal file
@@ -0,0 +1,113 @@
|
||||
"""Tests for the Conversation Gateway."""
|
||||
|
||||
import pytest
|
||||
|
||||
from loyal_companion.models.platform import (
|
||||
ConversationContext,
|
||||
ConversationRequest,
|
||||
IntimacyLevel,
|
||||
Platform,
|
||||
)
|
||||
from loyal_companion.services import ConversationGateway
|
||||
|
||||
|
||||
class TestConversationGateway:
|
||||
"""Test suite for ConversationGateway."""
|
||||
|
||||
def test_gateway_initialization(self):
|
||||
"""Test that the gateway initializes correctly."""
|
||||
gateway = ConversationGateway()
|
||||
assert gateway is not None
|
||||
assert gateway.ai_service is not None
|
||||
|
||||
def test_conversation_request_creation(self):
|
||||
"""Test creating a ConversationRequest."""
|
||||
request = ConversationRequest(
|
||||
user_id="12345",
|
||||
platform=Platform.DISCORD,
|
||||
session_id="channel-123",
|
||||
message="Hello!",
|
||||
context=ConversationContext(
|
||||
is_public=False,
|
||||
intimacy_level=IntimacyLevel.MEDIUM,
|
||||
guild_id="67890",
|
||||
channel_id="channel-123",
|
||||
user_display_name="TestUser",
|
||||
),
|
||||
)
|
||||
|
||||
assert request.user_id == "12345"
|
||||
assert request.platform == Platform.DISCORD
|
||||
assert request.message == "Hello!"
|
||||
assert request.context.intimacy_level == IntimacyLevel.MEDIUM
|
||||
|
||||
def test_intimacy_levels(self):
|
||||
"""Test intimacy level enum values."""
|
||||
assert IntimacyLevel.LOW == "low"
|
||||
assert IntimacyLevel.MEDIUM == "medium"
|
||||
assert IntimacyLevel.HIGH == "high"
|
||||
|
||||
def test_platform_enum(self):
|
||||
"""Test platform enum values."""
|
||||
assert Platform.DISCORD == "discord"
|
||||
assert Platform.WEB == "web"
|
||||
assert Platform.CLI == "cli"
|
||||
|
||||
def test_intimacy_modifier_low(self):
|
||||
"""Test intimacy modifier for LOW intimacy."""
|
||||
gateway = ConversationGateway()
|
||||
modifier = gateway._get_intimacy_modifier(Platform.DISCORD, IntimacyLevel.LOW)
|
||||
|
||||
assert "PUBLIC, SOCIAL" in modifier
|
||||
assert "brief and light" in modifier
|
||||
assert "Avoid deep emotional topics" in modifier
|
||||
|
||||
def test_intimacy_modifier_high(self):
|
||||
"""Test intimacy modifier for HIGH intimacy."""
|
||||
gateway = ConversationGateway()
|
||||
modifier = gateway._get_intimacy_modifier(Platform.CLI, IntimacyLevel.HIGH)
|
||||
|
||||
assert "PRIVATE, INTENTIONAL" in modifier
|
||||
assert "Deeper reflection" in modifier
|
||||
assert "CRITICAL SAFETY BOUNDARIES" in modifier
|
||||
assert "Never claim exclusivity" in modifier
|
||||
|
||||
def test_sentiment_estimation_positive(self):
|
||||
"""Test sentiment estimation for positive messages."""
|
||||
gateway = ConversationGateway()
|
||||
sentiment = gateway._estimate_sentiment("Thanks! This is awesome and amazing!")
|
||||
|
||||
assert sentiment > 0.5 # Should be positive
|
||||
|
||||
def test_sentiment_estimation_negative(self):
|
||||
"""Test sentiment estimation for negative messages."""
|
||||
gateway = ConversationGateway()
|
||||
sentiment = gateway._estimate_sentiment("This is terrible and awful, I hate it")
|
||||
|
||||
assert sentiment < 0 # Should be negative
|
||||
|
||||
def test_sentiment_estimation_neutral(self):
|
||||
"""Test sentiment estimation for neutral messages."""
|
||||
gateway = ConversationGateway()
|
||||
sentiment = gateway._estimate_sentiment("The weather is cloudy today")
|
||||
|
||||
assert -0.5 < sentiment < 0.5 # Should be near neutral
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_process_message_requires_database(self):
|
||||
"""Test that process_message requires database."""
|
||||
gateway = ConversationGateway()
|
||||
request = ConversationRequest(
|
||||
user_id="12345",
|
||||
platform=Platform.WEB,
|
||||
session_id="session-1",
|
||||
message="Hello",
|
||||
)
|
||||
|
||||
# Should raise ValueError if database not initialized
|
||||
with pytest.raises(ValueError, match="Database is required"):
|
||||
await gateway.process_message(request)
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
pytest.main([__file__, "-v"])
|
||||
218
verify_gateway.py
Normal file
218
verify_gateway.py
Normal file
@@ -0,0 +1,218 @@
|
||||
"""Simple verification script for Conversation Gateway implementation.
|
||||
|
||||
This script verifies that the gateway can be imported and basic functionality works.
|
||||
Run with: python3 verify_gateway.py
|
||||
"""
|
||||
|
||||
import sys
|
||||
|
||||
|
||||
def verify_imports():
|
||||
"""Verify all required imports work."""
|
||||
print("✓ Verifying imports...")
|
||||
|
||||
try:
|
||||
from loyal_companion.models.platform import (
|
||||
ConversationContext,
|
||||
ConversationRequest,
|
||||
ConversationResponse,
|
||||
IntimacyLevel,
|
||||
MoodInfo,
|
||||
Platform,
|
||||
RelationshipInfo,
|
||||
)
|
||||
|
||||
print(" ✓ Platform models imported successfully")
|
||||
except ImportError as e:
|
||||
print(f" ✗ Failed to import platform models: {e}")
|
||||
return False
|
||||
|
||||
try:
|
||||
from loyal_companion.services import ConversationGateway
|
||||
|
||||
print(" ✓ ConversationGateway imported successfully")
|
||||
except ImportError as e:
|
||||
print(f" ✗ Failed to import ConversationGateway: {e}")
|
||||
return False
|
||||
|
||||
return True
|
||||
|
||||
|
||||
def verify_enums():
|
||||
"""Verify enum values are correct."""
|
||||
print("\n✓ Verifying enums...")
|
||||
|
||||
from loyal_companion.models.platform import IntimacyLevel, Platform
|
||||
|
||||
# Verify Platform enum
|
||||
assert Platform.DISCORD == "discord"
|
||||
assert Platform.WEB == "web"
|
||||
assert Platform.CLI == "cli"
|
||||
print(" ✓ Platform enum values correct")
|
||||
|
||||
# Verify IntimacyLevel enum
|
||||
assert IntimacyLevel.LOW == "low"
|
||||
assert IntimacyLevel.MEDIUM == "medium"
|
||||
assert IntimacyLevel.HIGH == "high"
|
||||
print(" ✓ IntimacyLevel enum values correct")
|
||||
|
||||
return True
|
||||
|
||||
|
||||
def verify_request_creation():
|
||||
"""Verify ConversationRequest can be created."""
|
||||
print("\n✓ Verifying ConversationRequest creation...")
|
||||
|
||||
from loyal_companion.models.platform import (
|
||||
ConversationContext,
|
||||
ConversationRequest,
|
||||
IntimacyLevel,
|
||||
Platform,
|
||||
)
|
||||
|
||||
context = ConversationContext(
|
||||
is_public=False,
|
||||
intimacy_level=IntimacyLevel.MEDIUM,
|
||||
guild_id="12345",
|
||||
channel_id="channel-1",
|
||||
user_display_name="TestUser",
|
||||
)
|
||||
|
||||
request = ConversationRequest(
|
||||
user_id="user123",
|
||||
platform=Platform.DISCORD,
|
||||
session_id="session-1",
|
||||
message="Hello there!",
|
||||
context=context,
|
||||
)
|
||||
|
||||
assert request.user_id == "user123"
|
||||
assert request.platform == Platform.DISCORD
|
||||
assert request.message == "Hello there!"
|
||||
assert request.context.intimacy_level == IntimacyLevel.MEDIUM
|
||||
|
||||
print(" ✓ ConversationRequest created successfully")
|
||||
print(f" - Platform: {request.platform.value}")
|
||||
print(f" - Intimacy: {request.context.intimacy_level.value}")
|
||||
print(f" - Message: {request.message}")
|
||||
|
||||
return True
|
||||
|
||||
|
||||
def verify_gateway_initialization():
|
||||
"""Verify ConversationGateway can be initialized."""
|
||||
print("\n✓ Verifying ConversationGateway initialization...")
|
||||
|
||||
from loyal_companion.services import ConversationGateway
|
||||
|
||||
gateway = ConversationGateway()
|
||||
assert gateway is not None
|
||||
assert gateway.ai_service is not None
|
||||
|
||||
print(" ✓ ConversationGateway initialized successfully")
|
||||
|
||||
return True
|
||||
|
||||
|
||||
def verify_intimacy_modifiers():
|
||||
"""Verify intimacy level modifiers work."""
|
||||
print("\n✓ Verifying intimacy modifiers...")
|
||||
|
||||
from loyal_companion.models.platform import IntimacyLevel, Platform
|
||||
from loyal_companion.services import ConversationGateway
|
||||
|
||||
gateway = ConversationGateway()
|
||||
|
||||
# Test LOW intimacy
|
||||
low_modifier = gateway._get_intimacy_modifier(Platform.DISCORD, IntimacyLevel.LOW)
|
||||
assert "PUBLIC, SOCIAL" in low_modifier
|
||||
assert "brief and light" in low_modifier
|
||||
print(" ✓ LOW intimacy modifier correct")
|
||||
|
||||
# Test MEDIUM intimacy
|
||||
medium_modifier = gateway._get_intimacy_modifier(Platform.DISCORD, IntimacyLevel.MEDIUM)
|
||||
assert "SEMI-PRIVATE" in medium_modifier
|
||||
assert "Balanced warmth" in medium_modifier
|
||||
print(" ✓ MEDIUM intimacy modifier correct")
|
||||
|
||||
# Test HIGH intimacy
|
||||
high_modifier = gateway._get_intimacy_modifier(Platform.WEB, IntimacyLevel.HIGH)
|
||||
assert "PRIVATE, INTENTIONAL" in high_modifier
|
||||
assert "Deeper reflection" in high_modifier
|
||||
assert "CRITICAL SAFETY BOUNDARIES" in high_modifier
|
||||
print(" ✓ HIGH intimacy modifier correct")
|
||||
|
||||
return True
|
||||
|
||||
|
||||
def verify_sentiment_estimation():
|
||||
"""Verify sentiment estimation works."""
|
||||
print("\n✓ Verifying sentiment estimation...")
|
||||
|
||||
from loyal_companion.services import ConversationGateway
|
||||
|
||||
gateway = ConversationGateway()
|
||||
|
||||
# Positive sentiment
|
||||
positive = gateway._estimate_sentiment("Thanks! This is awesome and amazing!")
|
||||
assert positive > 0.3, f"Expected positive sentiment, got {positive}"
|
||||
print(f" ✓ Positive sentiment: {positive:.2f}")
|
||||
|
||||
# Negative sentiment
|
||||
negative = gateway._estimate_sentiment("This is terrible and awful")
|
||||
assert negative < 0, f"Expected negative sentiment, got {negative}"
|
||||
print(f" ✓ Negative sentiment: {negative:.2f}")
|
||||
|
||||
# Neutral sentiment
|
||||
neutral = gateway._estimate_sentiment("The weather is cloudy")
|
||||
assert -0.3 < neutral < 0.3, f"Expected neutral sentiment, got {neutral}"
|
||||
print(f" ✓ Neutral sentiment: {neutral:.2f}")
|
||||
|
||||
return True
|
||||
|
||||
|
||||
def main():
|
||||
"""Run all verification checks."""
|
||||
print("=" * 60)
|
||||
print("Conversation Gateway Verification")
|
||||
print("=" * 60)
|
||||
|
||||
checks = [
|
||||
verify_imports,
|
||||
verify_enums,
|
||||
verify_request_creation,
|
||||
verify_gateway_initialization,
|
||||
verify_intimacy_modifiers,
|
||||
verify_sentiment_estimation,
|
||||
]
|
||||
|
||||
all_passed = True
|
||||
for check in checks:
|
||||
try:
|
||||
if not check():
|
||||
all_passed = False
|
||||
except Exception as e:
|
||||
print(f"\n✗ Check failed with error: {e}")
|
||||
import traceback
|
||||
|
||||
traceback.print_exc()
|
||||
all_passed = False
|
||||
|
||||
print("\n" + "=" * 60)
|
||||
if all_passed:
|
||||
print("✓ All verification checks passed!")
|
||||
print("=" * 60)
|
||||
print("\nConversation Gateway is ready for use.")
|
||||
print("\nNext steps:")
|
||||
print(" 1. Refactor Discord cog to use gateway (Phase 2)")
|
||||
print(" 2. Add Web platform (Phase 3)")
|
||||
print(" 3. Add CLI client (Phase 4)")
|
||||
return 0
|
||||
else:
|
||||
print("✗ Some verification checks failed!")
|
||||
print("=" * 60)
|
||||
return 1
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
sys.exit(main())
|
||||
Reference in New Issue
Block a user