Merge pull request 'dev' (#22) from dev into main
CI / ci (push) Successful in 27s
Docker / docker (push) Successful in 15s

Reviewed-on: #22
This commit was merged in pull request #22.
2026-03-02 18:40:18 +00:00
13 changed files with 0 additions and 1232 deletions
@@ -1,41 +0,0 @@
<!-- Title: Proposal / Design RFC -->
# Proposal: [Short title]
## Summary
Concise description of the proposal and what outcome you want.
## Problem statement
What problem are we solving and why is it important? Include links to related issues.
## Goals
- Primary goals (what success looks like)
- Non-goals (explicitly out of scope)
## Proposed design
Describe the design in detail. Include:
- Architecture diagrams or ASCII art
- API changes (requests/responses)
- Data model changes or migrations
- UX flows or wireframes
## Alternatives considered
Short list of alternatives and tradeoffs.
## Backwards compatibility & migration plan
Describe how to migrate existing data and any compatibility impacts.
## Security considerations
List potential security/privacy implications.
## Testing & rollout plan
How will this be tested? Phased rollout plan if needed.
## Implementation plan & timeline
High-level tasks and owners.
## Open questions
List any unresolved questions.
**Checklist**
- [ ] Linked related issues
- [ ] Prototype or PoC (if available)
@@ -1,61 +0,0 @@
name: AI Chat (Bartender)
# WORKFLOW ROUTING:
# This workflow handles FREE-FORM questions/chat (no specific command)
# Other workflows: ai-issue-triage.yml (@codebot triage), ai-comment-reply.yml (specific commands)
# This is the FALLBACK for any @codebot mention that isn't a known command
on:
issue_comment:
types: [created]
# CUSTOMIZE YOUR BOT NAME:
# Change '@codebot' in all conditions below to match your config.yml mention_prefix
# Examples: '@bartender', '@uni', '@joey', '@codebot'
jobs:
ai-chat:
# Only run if comment mentions the bot but NOT a specific command
# This prevents duplicate runs with ai-comment-reply.yml and ai-issue-triage.yml
# CRITICAL: Ignore bot's own comments to prevent infinite loops (bot username: Bartender)
if: |
github.event.comment.user.login != 'Bartender' &&
contains(github.event.comment.body, '@codebot') &&
!contains(github.event.comment.body, '@codebot triage') &&
!contains(github.event.comment.body, '@codebot help') &&
!contains(github.event.comment.body, '@codebot explain') &&
!contains(github.event.comment.body, '@codebot suggest') &&
!contains(github.event.comment.body, '@codebot security') &&
!contains(github.event.comment.body, '@codebot summarize') &&
!contains(github.event.comment.body, '@codebot changelog') &&
!contains(github.event.comment.body, '@codebot explain-diff') &&
!contains(github.event.comment.body, '@codebot review-again') &&
!contains(github.event.comment.body, '@codebot setup-labels')
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v4
with:
repository: Hiddenden/openrabbit
path: .ai-review
token: ${{ secrets.AI_REVIEW_TOKEN }}
- uses: actions/setup-python@v5
with:
python-version: "3.11"
- run: pip install requests pyyaml
- name: Run AI Chat
env:
AI_REVIEW_TOKEN: ${{ secrets.AI_REVIEW_TOKEN }}
AI_REVIEW_REPO: ${{ gitea.repository }}
AI_REVIEW_API_URL: https://git.hiddenden.cafe/api/v1
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
OPENROUTER_API_KEY: ${{ secrets.OPENROUTER_API_KEY }}
OLLAMA_HOST: ${{ secrets.OLLAMA_HOST }}
SEARXNG_URL: ${{ secrets.SEARXNG_URL }}
run: |
cd .ai-review/tools/ai-review
python main.py comment ${{ gitea.repository }} ${{ gitea.event.issue.number }} "${{ gitea.event.comment.body }}"
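A note on untrusted input: the final `run` step above places the raw comment body onto a shell command line. A minimal sketch of the safer pattern, assuming the text is passed through an environment variable so the shell never expands its contents (the `COMMENT_BODY` name and `print_body` helper are illustrative, not part of the workflow):

```shell
#!/bin/sh
# Illustrative sketch: read untrusted text from an environment variable
# and quote it everywhere, so substrings like $(...) or backticks are
# treated as data, never executed by the shell.
print_body() {
  printf '%s' "$COMMENT_BODY"
}
```

In a workflow this would mean exporting the body under `env:` and referencing `"$COMMENT_BODY"` inside the script, rather than interpolating `${{ ... }}` directly into the command line.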
@@ -1,58 +0,0 @@
name: AI Codebase Quality Review
on:
# Weekly scheduled run
# schedule:
# - cron: "0 0 * * 0" # Every Sunday at midnight
# Manual trigger
workflow_dispatch:
inputs:
report_type:
description: "Type of report to generate"
required: false
default: "full"
type: choice
options:
- full
- security
- quick
jobs:
ai-codebase-review:
runs-on: ubuntu-latest
steps:
# Checkout the repository
- uses: actions/checkout@v4
with:
fetch-depth: 0 # Full history for analysis
# Checkout central AI tooling
- uses: actions/checkout@v4
with:
repository: Hiddenden/openrabbit
path: .ai-review
token: ${{ secrets.AI_REVIEW_TOKEN }}
# Setup Python
- uses: actions/setup-python@v5
with:
python-version: "3.11"
# Install dependencies
- run: pip install requests pyyaml
# Run AI codebase analysis
- name: Run AI Codebase Analysis
env:
AI_REVIEW_TOKEN: ${{ secrets.AI_REVIEW_TOKEN }}
AI_REVIEW_REPO: ${{ gitea.repository }}
AI_REVIEW_API_URL: https://git.hiddenden.cafe/api/v1
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
OPENROUTER_API_KEY: ${{ secrets.OPENROUTER_API_KEY }}
OLLAMA_HOST: ${{ secrets.OLLAMA_HOST }}
run: |
cd .ai-review/tools/ai-review
python main.py codebase ${{ gitea.repository }}
@@ -1,98 +0,0 @@
name: AI Comment Reply
# WORKFLOW ROUTING:
# This workflow handles SPECIFIC commands: help, explain, suggest, security, summarize, changelog, explain-diff, review-again, setup-labels
# Other workflows: ai-issue-triage.yml (@codebot triage), ai-chat.yml (free-form questions)
on:
issue_comment:
types: [created]
# CUSTOMIZE YOUR BOT NAME:
# Change '@codebot' in the 'if' condition below to match your config.yml mention_prefix
# Examples: '@bartender', '@uni', '@joey', '@codebot'
jobs:
ai-reply:
runs-on: ubuntu-latest
# Only run for specific commands (not free-form chat or triage)
# This prevents duplicate runs with ai-chat.yml and ai-issue-triage.yml
# CRITICAL: Ignore bot's own comments to prevent infinite loops (bot username: Bartender)
if: |
github.event.comment.user.login != 'Bartender' &&
(contains(github.event.comment.body, '@codebot help') ||
contains(github.event.comment.body, '@codebot explain') ||
contains(github.event.comment.body, '@codebot suggest') ||
contains(github.event.comment.body, '@codebot security') ||
contains(github.event.comment.body, '@codebot summarize') ||
contains(github.event.comment.body, '@codebot changelog') ||
contains(github.event.comment.body, '@codebot explain-diff') ||
contains(github.event.comment.body, '@codebot review-again') ||
contains(github.event.comment.body, '@codebot setup-labels'))
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v4
with:
repository: Hiddenden/openrabbit
path: .ai-review
token: ${{ secrets.AI_REVIEW_TOKEN }}
- uses: actions/setup-python@v5
with:
python-version: "3.11"
- run: pip install requests pyyaml
- name: Run AI Comment Response
env:
AI_REVIEW_TOKEN: ${{ secrets.AI_REVIEW_TOKEN }}
AI_REVIEW_API_URL: https://git.hiddenden.cafe/api/v1
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
OPENROUTER_API_KEY: ${{ secrets.OPENROUTER_API_KEY }}
OLLAMA_HOST: ${{ secrets.OLLAMA_HOST }}
run: |
cd .ai-review/tools/ai-review
# Determine if this is a PR or issue comment
IS_PR="${{ gitea.event.issue.pull_request != null }}"
REPO="${{ gitea.repository }}"
ISSUE_NUMBER="${{ gitea.event.issue.number }}"
# Validate inputs
if [ -z "$REPO" ] || [ -z "$ISSUE_NUMBER" ]; then
echo "Error: Missing required parameters"
exit 1
fi
# Validate repository format (owner/repo)
if ! echo "$REPO" | grep -qE '^[a-zA-Z0-9_-]+/[a-zA-Z0-9_-]+$'; then
echo "Error: Invalid repository format: $REPO"
exit 1
fi
if [ "$IS_PR" = "true" ]; then
# This is a PR comment - use safe dispatch with minimal event data
# Build minimal event payload (does not include sensitive user data)
EVENT_DATA=$(cat <<EOF
{
"action": "created",
"issue": {
"number": ${{ gitea.event.issue.number }},
"pull_request": {}
},
"comment": {
"id": ${{ gitea.event.comment.id }},
"body": $(echo '${{ gitea.event.comment.body }}' | jq -Rs .)
}
}
EOF
)
# Use safe dispatch utility
python utils/safe_dispatch.py issue_comment "$REPO" "$EVENT_DATA"
else
# This is an issue comment - use the comment command
COMMENT_BODY='${{ gitea.event.comment.body }}'
python main.py comment "$REPO" "$ISSUE_NUMBER" "$COMMENT_BODY"
fi
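The payload built above leans on `jq -Rs .` to JSON-escape the comment body: `-R` reads raw text instead of JSON, `-s` slurps all input into a single string, and `.` emits it as a JSON string literal with quotes and newlines escaped. A small sketch of that idiom in isolation (the `json_escape` wrapper is illustrative):

```shell
#!/bin/sh
# Illustrative wrapper around the jq idiom used above: turn arbitrary
# text into a JSON string literal, safe to splice into a JSON payload.
json_escape() {
  printf '%s' "$1" | jq -Rs .
}
```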
@@ -1,44 +0,0 @@
name: AI Issue Triage
# WORKFLOW ROUTING:
# This workflow handles ONLY the 'triage' command
# Other workflows: ai-comment-reply.yml (specific commands), ai-chat.yml (free-form questions)
on:
issue_comment:
types: [created]
jobs:
ai-triage:
runs-on: ubuntu-latest
# Only run if comment contains @codebot triage
# CRITICAL: Ignore bot's own comments to prevent infinite loops (bot username: Bartender)
if: |
github.event.comment.user.login != 'Bartender' &&
contains(github.event.comment.body, '@codebot triage')
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v4
with:
repository: Hiddenden/openrabbit
path: .ai-review
token: ${{ secrets.AI_REVIEW_TOKEN }}
- uses: actions/setup-python@v5
with:
python-version: "3.11"
- run: pip install requests pyyaml
- name: Run AI Issue Triage
env:
AI_REVIEW_TOKEN: ${{ secrets.AI_REVIEW_TOKEN }}
AI_REVIEW_REPO: ${{ gitea.repository }}
AI_REVIEW_API_URL: https://git.hiddenden.cafe/api/v1
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
OPENROUTER_API_KEY: ${{ secrets.OPENROUTER_API_KEY }}
OLLAMA_HOST: ${{ secrets.OLLAMA_HOST }}
run: |
cd .ai-review/tools/ai-review
python main.py issue ${{ gitea.repository }} ${{ gitea.event.issue.number }}
@@ -1,449 +0,0 @@
# =============================================================================
# Deploy Workflow — Automated Deployment to VPS
# =============================================================================
#
# PURPOSE:
# Deploy your application to a VPS after a successful push to the default
# branch (main). Supports two deployment modes:
#
# (A) local-runner — The deploy job runs directly on a self-hosted act_runner
# installed ON the VPS. No SSH needed. The runner is selected by a label
# (DEPLOY_RUNNER_LABEL). This is the recommended mode.
#
# (B) ssh — The deploy job runs on any runner and SSHs into the VPS to
# execute commands remotely. Requires SSH secrets. Use as fallback when
# you can't install a runner on the target VPS.
#
# SAFE BY DEFAULT:
# ENABLE_DEPLOY=false in .ci/config.env. Deploy never runs unless you
# explicitly enable it. It also never runs on pull_request events.
#
# DEPLOY STRATEGIES:
# compose — docker compose pull && docker compose up -d
# systemd — systemctl restart <service>
# script — run a custom deploy script
#
# TRIGGERS:
# - push to DEFAULT_BRANCH (main) → deploy if enabled
# - tag v* (only if DEPLOY_ON_TAG=true) → deploy if enabled
# - pull_request → NEVER (not in trigger list)
#
# REQUIRED SECRETS (ssh mode only):
# DEPLOY_SSH_KEY — private SSH key (ed25519 or RSA)
# DEPLOY_HOST — VPS hostname or IP
# DEPLOY_USER — SSH username on VPS
# DEPLOY_KNOWN_HOSTS — (optional) known_hosts entry for the VPS
#
# For local-runner mode: NO secrets needed. The runner already has local
# access. Just ensure the runner is registered with the correct label.
#
# See docs/DEPLOY.md for full setup instructions.
# =============================================================================
name: Deploy
# ---------------------------------------------------------------------------
# TRIGGERS
# ---------------------------------------------------------------------------
# Only push events — never pull_request.
# Branch filter is further enforced in the "branch guard" step below,
# because config.env may specify a different DEFAULT_BRANCH.
on:
push:
branches:
- main
tags:
- "v*"
# =============================================================================
# JOB: deploy-local-runner
# =============================================================================
# Runs directly on the VPS via a labeled self-hosted act_runner.
# This job is skipped if DEPLOY_MODE != local-runner.
# ---------------------------------------------------------------------------
jobs:
deploy-local-runner:
# -------------------------------------------------------------------------
# DISABLED: Runner 'deploy-ovh' is not yet configured.
# To re-enable:
# 1. Remove the 'if: false' line below
# 2. Change runs-on back to your runner label (e.g. deploy-ovh)
# -------------------------------------------------------------------------
if: false
runs-on: ubuntu-latest # placeholder — real label: deploy-ovh
steps:
# -----------------------------------------------------------------------
# Step 1: Load configuration
# -----------------------------------------------------------------------
- name: Checkout (for config only)
uses: actions/checkout@v4
with:
fetch-depth: 1
- name: Load config
id: config
run: |
# Source config.env
if [ -f .ci/config.env ]; then
set -a
source .ci/config.env
set +a
echo "Config loaded."
else
echo "WARNING: .ci/config.env not found, using defaults."
fi
# Export all deploy-related vars with safe defaults
echo "ENABLE_DEPLOY=${ENABLE_DEPLOY:-false}" >> "$GITHUB_ENV"
echo "DEPLOY_MODE=${DEPLOY_MODE:-local-runner}" >> "$GITHUB_ENV"
echo "DEPLOY_WORKDIR=${DEPLOY_WORKDIR:-/opt/app}" >> "$GITHUB_ENV"
echo "DEPLOY_STRATEGY=${DEPLOY_STRATEGY:-compose}" >> "$GITHUB_ENV"
echo "DEPLOY_COMPOSE_FILE=${DEPLOY_COMPOSE_FILE:-docker-compose.yml}" >> "$GITHUB_ENV"
echo "DEPLOY_SYSTEMD_SERVICE=${DEPLOY_SYSTEMD_SERVICE:-}" >> "$GITHUB_ENV"
echo "DEPLOY_SCRIPT=${DEPLOY_SCRIPT:-scripts/deploy.sh}" >> "$GITHUB_ENV"
echo "DEPLOY_ON_TAG=${DEPLOY_ON_TAG:-false}" >> "$GITHUB_ENV"
echo "DEFAULT_BRANCH=${DEFAULT_BRANCH:-main}" >> "$GITHUB_ENV"
# -----------------------------------------------------------------------
# Step 2: Gate checks — abort early if deploy should not run
# -----------------------------------------------------------------------
- name: Check if deploy is enabled
run: |
if [ "$ENABLE_DEPLOY" != "true" ]; then
echo "========================================="
echo " Deploy is DISABLED (ENABLE_DEPLOY=$ENABLE_DEPLOY)"
echo " To enable: set ENABLE_DEPLOY=true in .ci/config.env"
echo "========================================="
exit 0
fi
# Ensure this job is for the correct mode
if [ "$DEPLOY_MODE" != "local-runner" ]; then
echo "DEPLOY_MODE=$DEPLOY_MODE (not local-runner). This job is a no-op."
echo "The ssh job will handle deployment instead."
exit 0
fi
echo "DEPLOY_ACTIVE=true" >> "$GITHUB_ENV"
# -----------------------------------------------------------------------
# Step 3: Branch guard
# Only deploy from DEFAULT_BRANCH. For tags, check DEPLOY_ON_TAG.
# This is a SAFETY net — even though 'on.push.branches' is set above,
# DEFAULT_BRANCH might differ from 'main'.
# -----------------------------------------------------------------------
- name: Branch guard
if: env.DEPLOY_ACTIVE == 'true'
run: |
REF="${GITHUB_REF:-}"
# Tag push?
if echo "$REF" | grep -q '^refs/tags/v'; then
if [ "$DEPLOY_ON_TAG" != "true" ]; then
echo "Tag push detected but DEPLOY_ON_TAG=$DEPLOY_ON_TAG. Skipping."
echo "DEPLOY_ACTIVE=false" >> "$GITHUB_ENV"
exit 0
fi
echo "Deploying on tag: $REF"
exit 0
fi
# Branch push — verify it's DEFAULT_BRANCH
BRANCH="$(echo "$REF" | sed 's|refs/heads/||')"
if [ "$BRANCH" != "$DEFAULT_BRANCH" ]; then
echo "Branch '$BRANCH' is not DEFAULT_BRANCH '$DEFAULT_BRANCH'. Skipping."
echo "DEPLOY_ACTIVE=false" >> "$GITHUB_ENV"
exit 0
fi
echo "Deploying on branch: $BRANCH"
# -----------------------------------------------------------------------
# Step 4: Execute deploy strategy (LOCAL — runs on the VPS itself)
# -----------------------------------------------------------------------
- name: "Deploy: compose"
if: env.DEPLOY_ACTIVE == 'true' && env.DEPLOY_STRATEGY == 'compose'
run: |
echo ">>> Deploy strategy: compose"
echo ">>> Working directory: $DEPLOY_WORKDIR"
echo ">>> Compose file: $DEPLOY_COMPOSE_FILE"
cd "$DEPLOY_WORKDIR" || { echo "ERROR: Cannot cd to $DEPLOY_WORKDIR"; exit 1; }
echo ">>> docker compose -f $DEPLOY_COMPOSE_FILE pull"
docker compose -f "$DEPLOY_COMPOSE_FILE" pull
echo ">>> docker compose -f $DEPLOY_COMPOSE_FILE up -d"
docker compose -f "$DEPLOY_COMPOSE_FILE" up -d
echo "Deploy (compose) complete."
- name: "Deploy: systemd"
if: env.DEPLOY_ACTIVE == 'true' && env.DEPLOY_STRATEGY == 'systemd'
run: |
echo ">>> Deploy strategy: systemd"
if [ -z "$DEPLOY_SYSTEMD_SERVICE" ]; then
echo "ERROR: DEPLOY_SYSTEMD_SERVICE is not set."
echo "Set it in .ci/config.env for strategy=systemd."
exit 1
fi
echo ">>> systemctl restart $DEPLOY_SYSTEMD_SERVICE"
sudo systemctl restart "$DEPLOY_SYSTEMD_SERVICE"
echo ">>> systemctl status $DEPLOY_SYSTEMD_SERVICE"
sudo systemctl status "$DEPLOY_SYSTEMD_SERVICE" --no-pager
echo "Deploy (systemd) complete."
- name: "Deploy: script"
if: env.DEPLOY_ACTIVE == 'true' && env.DEPLOY_STRATEGY == 'script'
run: |
echo ">>> Deploy strategy: script"
echo ">>> Script: $DEPLOY_SCRIPT"
echo ">>> Workdir arg: $DEPLOY_WORKDIR"
if [ ! -f "$DEPLOY_SCRIPT" ]; then
echo "ERROR: Deploy script not found: $DEPLOY_SCRIPT"
exit 1
fi
chmod +x "$DEPLOY_SCRIPT"
"./$DEPLOY_SCRIPT" "$DEPLOY_WORKDIR"
echo "Deploy (script) complete."
# -----------------------------------------------------------------------
# Step 5: Summary
# -----------------------------------------------------------------------
- name: Deploy summary
if: always()
run: |
echo "=============================="
echo " Deploy (local-runner)"
echo " Enabled: ${ENABLE_DEPLOY:-false}"
echo " Mode: ${DEPLOY_MODE:-local-runner}"
echo " Strategy: ${DEPLOY_STRATEGY:-compose}"
echo " Workdir: ${DEPLOY_WORKDIR:-/opt/app}"
echo " Active: ${DEPLOY_ACTIVE:-false}"
echo "=============================="
# ===========================================================================
# JOB: deploy-ssh
# ===========================================================================
# Runs on a normal runner and SSHs into the VPS to deploy.
# This job is skipped if DEPLOY_MODE != ssh.
# ---------------------------------------------------------------------------
deploy-ssh:
runs-on: ubuntu-latest
steps:
# -----------------------------------------------------------------------
# Step 1: Load configuration
# -----------------------------------------------------------------------
- name: Checkout (for config + scripts)
uses: actions/checkout@v4
with:
fetch-depth: 1
- name: Load config
run: |
if [ -f .ci/config.env ]; then
set -a
source .ci/config.env
set +a
echo "Config loaded."
else
echo "WARNING: .ci/config.env not found, using defaults."
fi
echo "ENABLE_DEPLOY=${ENABLE_DEPLOY:-false}" >> "$GITHUB_ENV"
echo "DEPLOY_MODE=${DEPLOY_MODE:-local-runner}" >> "$GITHUB_ENV"
echo "DEPLOY_WORKDIR=${DEPLOY_WORKDIR:-/opt/app}" >> "$GITHUB_ENV"
echo "DEPLOY_STRATEGY=${DEPLOY_STRATEGY:-compose}" >> "$GITHUB_ENV"
echo "DEPLOY_COMPOSE_FILE=${DEPLOY_COMPOSE_FILE:-docker-compose.yml}" >> "$GITHUB_ENV"
echo "DEPLOY_SYSTEMD_SERVICE=${DEPLOY_SYSTEMD_SERVICE:-}" >> "$GITHUB_ENV"
echo "DEPLOY_SCRIPT=${DEPLOY_SCRIPT:-scripts/deploy.sh}" >> "$GITHUB_ENV"
echo "DEPLOY_ON_TAG=${DEPLOY_ON_TAG:-false}" >> "$GITHUB_ENV"
echo "DEFAULT_BRANCH=${DEFAULT_BRANCH:-main}" >> "$GITHUB_ENV"
# -----------------------------------------------------------------------
# Step 2: Gate checks
# -----------------------------------------------------------------------
- name: Check if deploy is enabled
run: |
if [ "$ENABLE_DEPLOY" != "true" ]; then
echo "========================================="
echo " Deploy is DISABLED (ENABLE_DEPLOY=$ENABLE_DEPLOY)"
echo " To enable: set ENABLE_DEPLOY=true in .ci/config.env"
echo "========================================="
exit 0
fi
if [ "$DEPLOY_MODE" != "ssh" ]; then
echo "DEPLOY_MODE=$DEPLOY_MODE (not ssh). This job is a no-op."
echo "The local-runner job will handle deployment instead."
exit 0
fi
echo "DEPLOY_ACTIVE=true" >> "$GITHUB_ENV"
# -----------------------------------------------------------------------
# Step 3: Branch guard (same logic as local-runner)
# -----------------------------------------------------------------------
- name: Branch guard
if: env.DEPLOY_ACTIVE == 'true'
run: |
REF="${GITHUB_REF:-}"
if echo "$REF" | grep -q '^refs/tags/v'; then
if [ "$DEPLOY_ON_TAG" != "true" ]; then
echo "Tag push detected but DEPLOY_ON_TAG=$DEPLOY_ON_TAG. Skipping."
echo "DEPLOY_ACTIVE=false" >> "$GITHUB_ENV"
exit 0
fi
echo "Deploying on tag: $REF"
exit 0
fi
BRANCH="$(echo "$REF" | sed 's|refs/heads/||')"
if [ "$BRANCH" != "$DEFAULT_BRANCH" ]; then
echo "Branch '$BRANCH' is not DEFAULT_BRANCH '$DEFAULT_BRANCH'. Skipping."
echo "DEPLOY_ACTIVE=false" >> "$GITHUB_ENV"
exit 0
fi
echo "Deploying on branch: $BRANCH"
# -----------------------------------------------------------------------
# Step 4: Set up SSH
#
# Secrets required:
# DEPLOY_SSH_KEY — private key (ed25519 recommended)
# DEPLOY_HOST — VPS IP or hostname
# DEPLOY_USER — SSH username
# DEPLOY_KNOWN_HOSTS — (optional) output of ssh-keyscan for the host
#
# If DEPLOY_KNOWN_HOSTS is not set, StrictHostKeyChecking is disabled.
# This is less secure but avoids first-connect failures in CI.
# For production, always set DEPLOY_KNOWN_HOSTS.
# -----------------------------------------------------------------------
- name: Set up SSH
if: env.DEPLOY_ACTIVE == 'true'
run: |
mkdir -p ~/.ssh
chmod 700 ~/.ssh
# Write private key (never echo it)
echo "${{ secrets.DEPLOY_SSH_KEY }}" > ~/.ssh/deploy_key
chmod 600 ~/.ssh/deploy_key
# Known hosts — if provided, use it; otherwise disable strict checking
KNOWN_HOSTS="${{ secrets.DEPLOY_KNOWN_HOSTS }}"
if [ -n "$KNOWN_HOSTS" ]; then
echo "$KNOWN_HOSTS" > ~/.ssh/known_hosts
chmod 644 ~/.ssh/known_hosts
echo "known_hosts configured from secret."
else
echo "WARNING: DEPLOY_KNOWN_HOSTS not set. Disabling StrictHostKeyChecking."
echo "For production, set DEPLOY_KNOWN_HOSTS (run: ssh-keyscan your-host)"
{
echo "Host *"
echo " StrictHostKeyChecking no"
echo " UserKnownHostsFile /dev/null"
} > ~/.ssh/config
chmod 600 ~/.ssh/config
fi
# Build SSH command for reuse
SSH_CMD="ssh -i ~/.ssh/deploy_key ${{ secrets.DEPLOY_USER }}@${{ secrets.DEPLOY_HOST }}"
echo "SSH_CMD=${SSH_CMD}" >> "$GITHUB_ENV"
# Verify connectivity
echo "Testing SSH connection..."
$SSH_CMD "echo 'SSH connection successful.'" || {
echo "ERROR: SSH connection failed."
exit 1
}
# -----------------------------------------------------------------------
# Step 5: Execute deploy strategy (REMOTE — via SSH)
# -----------------------------------------------------------------------
- name: "Deploy via SSH: compose"
if: env.DEPLOY_ACTIVE == 'true' && env.DEPLOY_STRATEGY == 'compose'
run: |
echo ">>> Deploy strategy: compose (via SSH)"
$SSH_CMD << DEPLOY_EOF
set -e
echo ">>> cd $DEPLOY_WORKDIR"
cd "$DEPLOY_WORKDIR" || { echo "ERROR: Cannot cd to $DEPLOY_WORKDIR"; exit 1; }
echo ">>> docker compose -f $DEPLOY_COMPOSE_FILE pull"
docker compose -f "$DEPLOY_COMPOSE_FILE" pull
echo ">>> docker compose -f $DEPLOY_COMPOSE_FILE up -d"
docker compose -f "$DEPLOY_COMPOSE_FILE" up -d
echo "Deploy (compose) complete."
DEPLOY_EOF
- name: "Deploy via SSH: systemd"
if: env.DEPLOY_ACTIVE == 'true' && env.DEPLOY_STRATEGY == 'systemd'
run: |
echo ">>> Deploy strategy: systemd (via SSH)"
if [ -z "$DEPLOY_SYSTEMD_SERVICE" ]; then
echo "ERROR: DEPLOY_SYSTEMD_SERVICE is not set."
exit 1
fi
$SSH_CMD << DEPLOY_EOF
set -e
echo ">>> sudo systemctl restart $DEPLOY_SYSTEMD_SERVICE"
sudo systemctl restart "$DEPLOY_SYSTEMD_SERVICE"
echo ">>> systemctl status $DEPLOY_SYSTEMD_SERVICE"
sudo systemctl status "$DEPLOY_SYSTEMD_SERVICE" --no-pager
echo "Deploy (systemd) complete."
DEPLOY_EOF
- name: "Deploy via SSH: script"
if: env.DEPLOY_ACTIVE == 'true' && env.DEPLOY_STRATEGY == 'script'
run: |
echo ">>> Deploy strategy: script (via SSH)"
# Copy the deploy script to the VPS
scp -i ~/.ssh/deploy_key \
"$DEPLOY_SCRIPT" \
"${{ secrets.DEPLOY_USER }}@${{ secrets.DEPLOY_HOST }}:/tmp/_deploy_script.sh"
$SSH_CMD << DEPLOY_EOF
set -e
chmod +x /tmp/_deploy_script.sh
/tmp/_deploy_script.sh "$DEPLOY_WORKDIR"
rm -f /tmp/_deploy_script.sh
echo "Deploy (script) complete."
DEPLOY_EOF
# -----------------------------------------------------------------------
# Step 6: Clean up SSH key (always runs)
# -----------------------------------------------------------------------
- name: Clean up SSH
if: always()
run: |
rm -f ~/.ssh/deploy_key ~/.ssh/config || true
# -----------------------------------------------------------------------
# Step 7: Summary
# -----------------------------------------------------------------------
- name: Deploy summary
if: always()
run: |
echo "=============================="
echo " Deploy (ssh)"
echo " Enabled: ${ENABLE_DEPLOY:-false}"
echo " Mode: ${DEPLOY_MODE:-ssh}"
echo " Strategy: ${DEPLOY_STRATEGY:-compose}"
echo " Workdir: ${DEPLOY_WORKDIR:-/opt/app}"
echo " Active: ${DEPLOY_ACTIVE:-false}"
echo "=============================="
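The branch-guard logic shared by both jobs can be condensed into a single decision function. A sketch with illustrative names, mirroring the rules above: tag pushes deploy only when DEPLOY_ON_TAG=true, and branch pushes deploy only from DEFAULT_BRANCH.

```shell
#!/bin/sh
# Illustrative condensation of the "Branch guard" steps above.
# Prints "deploy" or "skip" given a git ref, the default branch name,
# and the DEPLOY_ON_TAG flag.
should_deploy() {
  ref="$1"; default_branch="$2"; deploy_on_tag="$3"
  case "$ref" in
    refs/tags/v*)
      # Tag push: gated by DEPLOY_ON_TAG.
      if [ "$deploy_on_tag" = "true" ]; then echo deploy; else echo skip; fi ;;
    refs/heads/*)
      # Branch push: must match the configured default branch.
      branch="${ref#refs/heads/}"
      if [ "$branch" = "$default_branch" ]; then echo deploy; else echo skip; fi ;;
    *)
      echo skip ;;
  esac
}
```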
@@ -1,53 +0,0 @@
name: Enterprise AI Code Review
on:
pull_request:
types: [opened, synchronize]
jobs:
ai-review:
runs-on: ubuntu-latest
steps:
# Checkout the PR repository
- uses: actions/checkout@v4
with:
fetch-depth: 0
# Checkout the CENTRAL AI tooling repo
- uses: actions/checkout@v4
with:
repository: Hiddenden/openrabbit
path: .ai-review
token: ${{ secrets.AI_REVIEW_TOKEN }}
# Setup Python
- uses: actions/setup-python@v5
with:
python-version: "3.11"
# Install dependencies
- run: pip install requests pyyaml
# Run the AI review
- name: Run Enterprise AI Review
env:
AI_REVIEW_TOKEN: ${{ secrets.AI_REVIEW_TOKEN }}
AI_REVIEW_REPO: ${{ gitea.repository }}
AI_REVIEW_API_URL: https://git.hiddenden.cafe/api/v1
AI_REVIEW_PR_NUMBER: ${{ gitea.event.pull_request.number }}
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
OPENROUTER_API_KEY: ${{ secrets.OPENROUTER_API_KEY }}
OLLAMA_HOST: ${{ secrets.OLLAMA_HOST }}
run: |
cd .ai-review/tools/ai-review
python main.py pr ${{ gitea.repository }} ${{ gitea.event.pull_request.number }} \
--title "${{ gitea.event.pull_request.title }}"
# Fail CI on HIGH severity (optional)
- name: Check Review Result
if: failure()
run: |
echo "AI Review found HIGH severity issues. Please address them before merging."
exit 1
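The `if: failure()` gate above assumes a simple exit-code contract: the review tool exits nonzero when it finds HIGH-severity issues, and the follow-up step turns that into a hard CI failure. A sketch of that contract, with an illustrative `severity_gate` stand-in for the real tool:

```shell
#!/bin/sh
# Illustrative exit-code contract assumed by the workflow: return 0 when
# the worst finding is below HIGH, nonzero otherwise, so the CI job only
# fails on HIGH-severity results.
severity_gate() {
  case "$1" in
    HIGH) return 1 ;;
    *)    return 0 ;;
  esac
}
```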
@@ -1,107 +0,0 @@
# =============================================================================
# Renovate Workflow — Automated Dependency Updates
# =============================================================================
#
# DISABLED BY DEFAULT (ENABLE_RENOVATE=false in .ci/config.env).
#
# When enabled, this workflow runs Renovate to:
# - Detect outdated dependencies (pip, npm, Docker FROM, etc.)
# - Open PRs with updates, respecting schedule and PR limits
#
# REQUIRED SECRET:
# RENOVATE_TOKEN — A Gitea PAT (Personal Access Token) with repo scope
# for the Renovate bot user. Set in repo/org secrets.
#
# CONFIG:
# - .ci/config.env → RENOVATE_SCHEDULE, RENOVATE_PR_LIMIT
# - renovate.json → Renovate-specific config (grouping, labels, etc.)
#
# See docs/RENOVATE.md for setup instructions.
# =============================================================================
name: Renovate
on:
# Run on a schedule (default: weekly on Mondays at 04:00 UTC)
schedule:
- cron: "0 4 * * 1"
# Allow manual trigger
workflow_dispatch:
jobs:
renovate:
runs-on: ubuntu-latest
steps:
# -----------------------------------------------------------------------
# Step 1: Checkout
# -----------------------------------------------------------------------
- name: Checkout
uses: actions/checkout@v4
# -----------------------------------------------------------------------
# Step 2: Load config
# -----------------------------------------------------------------------
- name: Load config
run: |
if [ -f .ci/config.env ]; then
set -a
source .ci/config.env
set +a
fi
echo "ENABLE_RENOVATE=${ENABLE_RENOVATE:-false}" >> "$GITHUB_ENV"
echo "RENOVATE_SCHEDULE=${RENOVATE_SCHEDULE:-weekly}" >> "$GITHUB_ENV"
echo "RENOVATE_PR_LIMIT=${RENOVATE_PR_LIMIT:-5}" >> "$GITHUB_ENV"
# -----------------------------------------------------------------------
# Step 3: Check if Renovate is enabled
# -----------------------------------------------------------------------
- name: Check if enabled
run: |
if [ "$ENABLE_RENOVATE" != "true" ]; then
echo "Renovate is disabled (ENABLE_RENOVATE=$ENABLE_RENOVATE)."
echo "To enable, set ENABLE_RENOVATE=true in .ci/config.env"
echo "SKIP_RENOVATE=true" >> "$GITHUB_ENV"
fi
# -----------------------------------------------------------------------
# Step 4: Run Renovate
#
# Uses the official Renovate CLI via npx. Configures it to point at
# the Gitea instance and the current repository.
# -----------------------------------------------------------------------
- name: Run Renovate
if: env.SKIP_RENOVATE != 'true'
env:
RENOVATE_TOKEN: ${{ secrets.RENOVATE_TOKEN }}
run: |
if [ -z "$RENOVATE_TOKEN" ]; then
echo "ERROR: RENOVATE_TOKEN secret is not set."
echo "Please create a Gitea PAT and add it as a repository secret."
exit 1
fi
# Determine repository path
FULL_REPO="${GITEA_REPOSITORY:-${{ github.repository }}}"
echo "Running Renovate for ${FULL_REPO} on ${REGISTRY_HOST:-git.hiddenden.cafe}..."
npx renovate \
--platform gitea \
--endpoint "https://${REGISTRY_HOST:-git.hiddenden.cafe}/api/v1" \
--token "$RENOVATE_TOKEN" \
--pr-hourly-limit "$RENOVATE_PR_LIMIT" \
"$FULL_REPO"
# -----------------------------------------------------------------------
# Step 5: Summary
# -----------------------------------------------------------------------
- name: Renovate Summary
if: always()
run: |
echo "=============================="
echo " Renovate Workflow Complete"
echo " Enabled: ${ENABLE_RENOVATE:-false}"
echo " Schedule: ${RENOVATE_SCHEDULE:-weekly}"
echo " PR Limit: ${RENOVATE_PR_LIMIT:-5}"
echo "=============================="
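For reference, the workflow above expects a `renovate.json` at the repository root for Renovate-specific settings (grouping, labels, etc.). A minimal illustrative starting point, assuming standard Renovate options; the field values are examples, not the project's actual config:

```json
{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "extends": ["config:recommended"],
  "labels": ["dependencies"],
  "prConcurrentLimit": 5
}
```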
@@ -1,211 +0,0 @@
# =============================================================================
# Security Workflow — Secret Scanning & Vulnerability Detection
# =============================================================================
#
# DISABLED BY DEFAULT (ENABLE_SECURITY=false in .ci/config.env).
#
# When enabled, this workflow runs:
# 1. gitleaks — scans for hardcoded secrets in the repo
# 2. osv-scanner — checks dependencies for known vulnerabilities
# 3. trivy — scans Docker images for CVEs (if a built image exists)
#
# STRICT_SECURITY=true → any finding fails the workflow
# STRICT_SECURITY=false → findings are logged as warnings (default)
#
# This is "best effort" — tools that aren't available are skipped.
# See docs/SECURITY.md for full details.
# =============================================================================
name: Security
on:
push:
branches:
- main
pull_request:
jobs:
security:
runs-on: ubuntu-latest
steps:
# -----------------------------------------------------------------------
# Step 1: Checkout
# -----------------------------------------------------------------------
- name: Checkout
uses: actions/checkout@v4
with:
fetch-depth: 0
# -----------------------------------------------------------------------
# Step 2: Load configuration
# -----------------------------------------------------------------------
- name: Load config
run: |
if [ -f .ci/config.env ]; then
set -a
source .ci/config.env
set +a
fi
echo "ENABLE_SECURITY=${ENABLE_SECURITY:-false}" >> "$GITHUB_ENV"
echo "STRICT_SECURITY=${STRICT_SECURITY:-false}" >> "$GITHUB_ENV"
# -----------------------------------------------------------------------
# Step 3: Check if security scanning is enabled
# -----------------------------------------------------------------------
- name: Check if enabled
run: |
if [ "$ENABLE_SECURITY" != "true" ]; then
echo "Security scanning is disabled (ENABLE_SECURITY=$ENABLE_SECURITY)."
echo "To enable, set ENABLE_SECURITY=true in .ci/config.env"
echo "SKIP_SECURITY=true" >> "$GITHUB_ENV"
fi
# -----------------------------------------------------------------------
# Step 4: Gitleaks — Secret scanning
#
# Scans the git history for accidentally committed secrets
# (API keys, passwords, tokens, etc.)
# -----------------------------------------------------------------------
- name: Run gitleaks
if: env.SKIP_SECURITY != 'true'
run: |
FINDINGS=0
# Install gitleaks
echo "Installing gitleaks..."
GITLEAKS_VERSION="8.18.4"
curl -sSfL "https://github.com/gitleaks/gitleaks/releases/download/v${GITLEAKS_VERSION}/gitleaks_${GITLEAKS_VERSION}_linux_x64.tar.gz" | \
tar xz -C /usr/local/bin gitleaks || {
echo "WARNING: Failed to install gitleaks, skipping secret scan."
exit 0
}
echo ">>> gitleaks detect"
if ! gitleaks detect --source . --verbose; then
FINDINGS=1
echo "gitleaks found potential secrets!"
fi
if [ "$FINDINGS" -ne 0 ]; then
if [ "$STRICT_SECURITY" = "true" ]; then
echo "ERROR: Secret scan found issues (STRICT_SECURITY=true)"
exit 1
else
echo "WARNING: Secret scan found issues (STRICT_SECURITY=false, continuing)"
fi
else
echo "gitleaks: no secrets found."
fi
# -----------------------------------------------------------------------
# Step 5: OSV-Scanner — Dependency vulnerability scanning
#
# Checks lockfiles (requirements.txt, package-lock.json, etc.) against
# the OSV database for known vulnerabilities.
# -----------------------------------------------------------------------
- name: Run osv-scanner
if: env.SKIP_SECURITY != 'true'
run: |
FINDINGS=0
          # Check for common lockfiles at the repo root (the scan itself walks subdirectories)
HAS_DEPS=false
for f in requirements.txt package-lock.json yarn.lock pnpm-lock.yaml go.sum Cargo.lock; do
if [ -f "$f" ]; then
HAS_DEPS=true
break
fi
done
if [ "$HAS_DEPS" = "false" ]; then
echo "SKIP: No dependency lockfiles found for osv-scanner."
exit 0
fi
# Install osv-scanner
echo "Installing osv-scanner..."
OSV_VERSION="1.8.3"
curl -sSfL "https://github.com/google/osv-scanner/releases/download/v${OSV_VERSION}/osv-scanner_linux_amd64" \
-o /usr/local/bin/osv-scanner && chmod +x /usr/local/bin/osv-scanner || {
echo "WARNING: Failed to install osv-scanner, skipping."
exit 0
}
echo ">>> osv-scanner --recursive ."
if ! osv-scanner --recursive .; then
FINDINGS=1
echo "osv-scanner found vulnerabilities!"
fi
if [ "$FINDINGS" -ne 0 ]; then
if [ "$STRICT_SECURITY" = "true" ]; then
echo "ERROR: Dependency scan found issues (STRICT_SECURITY=true)"
exit 1
else
echo "WARNING: Dependency scan found issues (STRICT_SECURITY=false, continuing)"
fi
else
echo "osv-scanner: no vulnerabilities found."
fi
# -----------------------------------------------------------------------
# Step 6: Trivy — Container image scanning
#
      # Scans a locally built Docker image for OS and library CVEs.
      # Only runs if a Dockerfile exists; the image is built within this step.
# -----------------------------------------------------------------------
- name: Run trivy
if: env.SKIP_SECURITY != 'true'
run: |
if [ ! -f Dockerfile ]; then
echo "SKIP: No Dockerfile found, skipping Trivy image scan."
exit 0
fi
FINDINGS=0
# Install trivy
echo "Installing trivy..."
curl -sSfL https://raw.githubusercontent.com/aquasecurity/trivy/main/contrib/install.sh | \
sh -s -- -b /usr/local/bin || {
echo "WARNING: Failed to install trivy, skipping."
exit 0
}
# Build the image first so Trivy can scan it
IMAGE_TAG="security-scan:local"
echo ">>> docker build -t ${IMAGE_TAG} ."
docker build -t "${IMAGE_TAG}" . || {
echo "WARNING: Docker build failed, skipping Trivy scan."
exit 0
}
echo ">>> trivy image ${IMAGE_TAG}"
if ! trivy image --exit-code 1 --severity HIGH,CRITICAL "${IMAGE_TAG}"; then
FINDINGS=1
echo "Trivy found vulnerabilities in the Docker image!"
fi
if [ "$FINDINGS" -ne 0 ]; then
if [ "$STRICT_SECURITY" = "true" ]; then
echo "ERROR: Image scan found issues (STRICT_SECURITY=true)"
exit 1
else
echo "WARNING: Image scan found issues (STRICT_SECURITY=false, continuing)"
fi
else
echo "trivy: no HIGH/CRITICAL vulnerabilities found."
fi
# -----------------------------------------------------------------------
# Step 7: Summary
# -----------------------------------------------------------------------
- name: Security Summary
if: always()
run: |
echo "=============================="
echo " Security Workflow Complete"
echo " Enabled: ${ENABLE_SECURITY:-false}"
echo " Strict: ${STRICT_SECURITY:-false}"
echo "=============================="
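Each scan step above repeats the same gate: findings only fail the job when `STRICT_SECURITY=true`. As a standalone sketch (the function name is illustrative, not part of the workflow):

```shell
#!/bin/sh
# Shared gating pattern from the gitleaks/osv-scanner/trivy steps:
# nonzero findings fail only in strict mode, otherwise warn and continue.
report_findings() {
  findings="$1"  # 0 = clean, nonzero = scanner reported issues
  strict="$2"    # mirrors STRICT_SECURITY
  if [ "$findings" -ne 0 ]; then
    if [ "$strict" = "true" ]; then
      echo "ERROR: scan found issues (strict mode)"
      return 1
    fi
    echo "WARNING: scan found issues (continuing)"
  fi
  return 0
}
```

Each `run:` block is an independent shell, which is why the workflow inlines this branch in every step instead of sharing a function.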
@@ -1,14 +0,0 @@
# =============================================================================
# CODEOWNERS — Optional
# =============================================================================
# Gitea supports CODEOWNERS for automatic review assignment.
# Uncomment and customize the lines below.
#
# Format: <pattern> <@user-or-team> [<@user-or-team> ...]
#
# Examples:
# * @default-reviewer
# /docs/ @docs-team
# *.py @python-team
# .gitea/ @devops-team
# .ci/ @devops-team
@@ -1,50 +0,0 @@
# Contributing to ${REPO_NAME}
Thank you for your interest in contributing! Here's how to get started.
## Getting Started
1. Fork this repository on [git.hiddenden.cafe](https://git.hiddenden.cafe).
2. Clone your fork locally.
3. Create a feature branch: `git checkout -b feature/my-change`
4. Make your changes and commit with clear messages.
5. Push to your fork and open a Pull Request.
## Development
```bash
# Install dependencies
pip install -r requirements.txt # Python
npm ci # Node (if applicable)
# Run checks locally before pushing
make fmt
make lint
make test
```
## Pull Request Guidelines
- Fill out the PR template completely.
- Keep PRs focused — one logical change per PR.
- Ensure CI passes (lint + tests).
- Update documentation if your change affects behavior.
## Code Style
- Python: Follow PEP 8. We use **ruff** for linting and **black** for formatting.
- JavaScript/TypeScript: Follow the project's ESLint config if present.
- Use `.editorconfig` settings (your editor should pick them up automatically).
## Reporting Issues
Use the issue templates provided:
- **Bug Report** — for defects
- **Feature Request** — for new ideas
- **Question / Support** — for help
For security issues, see [SECURITY.md](SECURITY.md).
## Code of Conduct
Please read and follow our [Code of Conduct](CODE_OF_CONDUCT.md).
@@ -1,19 +0,0 @@
## Description
<!-- What does this PR do? Why is it needed? -->
## Changes
- [ ] ...
## Related Issues
<!-- Link related issues: Closes #123, Fixes #456 -->
## Checklist
- [ ] I have tested my changes locally
- [ ] Linting passes (`make lint`)
- [ ] Tests pass (`make test`)
- [ ] Documentation updated (if applicable)
- [ ] No secrets or credentials are committed
@@ -1,27 +0,0 @@
{
"$schema": "https://docs.renovatebot.com/renovate-schema.json",
"extends": [
"config:recommended"
],
"description": "Renovate config — groups minor/patch, limits PRs, updates Docker base images.",
"schedule": ["before 6am on Monday"],
"prHourlyLimit": 5,
"prConcurrentLimit": 5,
"labels": ["dependencies"],
"packageRules": [
{
"description": "Group all minor and patch updates to reduce PR noise",
"matchUpdateTypes": ["minor", "patch"],
"groupName": "minor-and-patch",
"groupSlug": "minor-patch"
},
{
"description": "Update Docker base images (FROM ...)",
"matchDatasources": ["docker"],
"enabled": true
}
],
"docker": {
"enabled": true
}
}