# Enterprise Features

Advanced features for enterprise deployments.

## Audit Logging

All AI actions are logged for compliance and debugging.

### Configuration

```yaml
enterprise:
  audit_log: true
  audit_path: "/var/log/ai-review/"
```

### Log Format

Logs are stored as JSONL (JSON Lines) with daily rotation:

```
/var/log/ai-review/audit-2024-01-15.jsonl
```

Each line is a JSON object:

```json
{
  "timestamp": "2024-01-15T10:30:45.123Z",
  "action": "review_pr",
  "agent": "PRAgent",
  "repository": "org/repo",
  "success": true,
  "details": {
    "pr_number": 123,
    "severity": "MEDIUM",
    "issues_found": 3
  }
}
```
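
Because each line is a self-contained JSON object, the daily files can be scanned with nothing but the standard library. A minimal sketch (the directory layout and field names follow the examples above; `iter_audit_events` is illustrative, not part of the shipped API):

```python
import json
from pathlib import Path

def iter_audit_events(log_dir="/var/log/ai-review", action=None):
    """Yield parsed audit events, optionally filtered by action type."""
    for path in sorted(Path(log_dir).glob("audit-*.jsonl")):
        with path.open() as fh:
            for line in fh:
                line = line.strip()
                if not line:
                    continue
                event = json.loads(line)
                if action is None or event.get("action") == action:
                    yield event

# Example: collect failed PR reviews (empty if the directory is absent)
failed = [e for e in iter_audit_events(action="review_pr")
          if not e.get("success")]
```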

### Actions Logged

| Action | Description |
|--------|-------------|
| `review_pr` | PR review completed |
| `triage_issue` | Issue triaged |
| `llm_call` | LLM API call made |
| `comment_posted` | Comment created/updated |
| `labels_applied` | Labels added |
| `security_scan` | Security scan completed |

### Querying Logs
```python
from enterprise import get_audit_logger

logger = get_audit_logger()

# Get all logs for a date range
logs = logger.get_logs(
    start_date="2024-01-01",
    end_date="2024-01-31",
    action="review_pr",
    repository="org/repo",
)

# Generate summary report
report = logger.generate_report(
    start_date="2024-01-01",
    end_date="2024-01-31",
)
print(f"Total events: {report['total_events']}")
print(f"Success rate: {report['success_rate']:.1%}")
```

---

## Metrics & Observability

Track performance and usage metrics.

### Configuration

```yaml
enterprise:
  metrics_enabled: true
```

### Available Metrics

**Counters:**

- `ai_review_requests_total` - Total requests processed
- `ai_review_requests_success` - Successful requests
- `ai_review_requests_failed` - Failed requests
- `ai_review_llm_calls_total` - Total LLM API calls
- `ai_review_llm_tokens_total` - Total tokens consumed
- `ai_review_comments_posted` - Comments posted
- `ai_review_security_findings` - Security issues found

**Gauges:**

- `ai_review_active_requests` - Requests currently being processed

**Histograms:**

- `ai_review_request_duration_seconds` - Request latency
- `ai_review_llm_duration_seconds` - LLM call latency

### Getting Metrics
```python
from enterprise import get_metrics

metrics = get_metrics()

# Get summary
summary = metrics.get_summary()
print(f"Total requests: {summary['requests']['total']}")
print(f"Success rate: {summary['requests']['success_rate']:.1%}")
print(f"Avg latency: {summary['latency']['avg_ms']:.0f}ms")
print(f"P95 latency: {summary['latency']['p95_ms']:.0f}ms")
print(f"LLM tokens used: {summary['llm']['tokens']}")

# Export Prometheus format
prometheus_output = metrics.export_prometheus()
```

### Prometheus Integration

Expose a metrics endpoint:

```python
from flask import Flask

from enterprise import get_metrics

app = Flask(__name__)

@app.route("/metrics")
def metrics():
    return get_metrics().export_prometheus()
```
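
A scrape of the endpoint returns the standard Prometheus text exposition format. Illustrative output only, using the counter and gauge names listed above (the values and `HELP`/`TYPE` lines here are made up and depend on how `export_prometheus` is implemented):

```
# HELP ai_review_requests_total Total requests processed
# TYPE ai_review_requests_total counter
ai_review_requests_total 1523
# HELP ai_review_active_requests Requests currently being processed
# TYPE ai_review_active_requests gauge
ai_review_active_requests 2
```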

---

## Rate Limiting

Prevent API overload and manage costs.

### Configuration

```yaml
enterprise:
  rate_limit:
    requests_per_minute: 30
    max_concurrent: 4
```

### Built-in Rate Limiting

The `BaseAgent` class includes automatic rate limiting:

```python
import time

class BaseAgent:
    def __init__(self):
        self._min_request_interval = 1.0  # seconds between API calls
        self._last_request_time = 0.0

    def _rate_limit(self):
        # Sleep just long enough to keep requests at least
        # _min_request_interval apart.
        elapsed = time.time() - self._last_request_time
        if elapsed < self._min_request_interval:
            time.sleep(self._min_request_interval - elapsed)
        self._last_request_time = time.time()
```
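
The config keys above map naturally onto an interval limiter plus a semaphore for concurrency. A minimal self-contained sketch, assuming those two settings; the `RateLimiter` class is illustrative, not part of the shipped API:

```python
import threading
import time

class RateLimiter:
    """Enforce requests_per_minute and max_concurrent, as in the YAML above."""

    def __init__(self, requests_per_minute=30, max_concurrent=4):
        self._interval = 60.0 / requests_per_minute
        self._lock = threading.Lock()
        self._last_request = 0.0
        self._slots = threading.Semaphore(max_concurrent)

    def __enter__(self):
        self._slots.acquire()  # cap concurrent requests
        with self._lock:
            wait = self._interval - (time.monotonic() - self._last_request)
            if wait > 0:
                time.sleep(wait)  # space requests at least _interval apart
            self._last_request = time.monotonic()
        return self

    def __exit__(self, *exc):
        self._slots.release()
        return False

limiter = RateLimiter(requests_per_minute=120, max_concurrent=2)
with limiter:
    pass  # make the rate-limited API call here
```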

---

## Queue Management

The dispatcher handles concurrent execution:

```python
dispatcher = Dispatcher(max_workers=4)
```

For high-volume environments, use async dispatch:

```python
future = dispatcher.dispatch_async(event_type, event_data, owner, repo)

# Continue with other work
result = future.result()  # Block when needed
```
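
Since `dispatch_async` returns a future-like handle, fanning out a batch of events and collecting the results follows the usual `concurrent.futures` pattern. A standard-library sketch, with `handle_event` as a hypothetical stand-in for the dispatcher's worker:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def handle_event(event_type, number):
    # Hypothetical stand-in for the real dispatch call.
    return f"{event_type} #{number} handled"

events = [("pull_request", 101), ("pull_request", 102), ("issues", 7)]

with ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(handle_event, kind, n) for kind, n in events]
    # Continue with other work here, then collect as results arrive.
    results = [f.result() for f in as_completed(futures)]
```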

---

## Security Considerations

### Token Permissions

Minimum required permissions for `AI_REVIEW_TOKEN`:

- `repo:read` - Read repository contents
- `repo:write` - Create branches (if needed)
- `issue:read` - Read issues and PRs
- `issue:write` - Create comments, labels

### Network Isolation

For air-gapped environments, use Ollama:

```yaml
provider: ollama

# Internal network address
# Set via environment: OLLAMA_HOST=http://ollama.internal:11434
```

### Data Privacy

By default:

- Code is sent to the LLM provider for analysis
- Review comments are stored in Gitea
- Audit logs are stored locally

For sensitive codebases:

1. Use self-hosted Ollama
2. Disable external LLM providers
3. Review audit log retention policies