Complete documentation for future sessions

- CLAUDE.md for AI agents to understand the codebase
- GITEA-GUIDE.md centralizes all Gitea operations (API, Registry, Auth)
- DEVELOPMENT-WORKFLOW.md explains complete dev process
- ROADMAP.md, NEXT-SESSION.md for planning
- QUICK-REFERENCE.md, TROUBLESHOOTING.md for daily use
- 40+ detailed docs in /docs folder
- Backend as submodule from Gitea

Everything documented for autonomous operation.

Co-Authored-By: Claude Sonnet 4.5 (1M context) <noreply@anthropic.com>
This commit is contained in: Hector Ros, 2026-01-20 00:36:53 +01:00
commit db71705842 (49 changed files with 19162 additions and 0 deletions)

---
# Data Flow
## Communication Architecture
```
┌──────────┐ ┌──────────┐ ┌──────────┐
│ Frontend │ ◄─────► │ Backend │ ◄─────► │ MySQL │
└──────────┘ └──────────┘ └──────────┘
│ │ │
│ ├──────────────────────┤
│ │ Redis │
│ └──────────────────────┘
│ │
│ ┌─────┴─────┐
│ │ │
│ ┌────▼────┐ ┌───▼────┐
│ │ Gitea │ │ K8s │
│ └─────────┘ └───┬────┘
│ │
│ ┌────▼────────┐
└────────────────────┤ Claude Code │
WebSocket │ Agents │
└─────────────┘
```
## 1. Full Flow: Task Creation
### 1.1 User Creates a Task
```
Frontend Backend MySQL Redis
│ │ │ │
│ POST /api/tasks │ │ │
├──────────────────────►│ │ │
│ │ INSERT task │ │
│ ├──────────────────►│ │
│ │ │ │
│ │ PUBLISH task.new │ │
│ ├───────────────────┼────────────────►│
│ │ │ │
│ { taskId, status } │ │ │
│◄──────────────────────┤ │ │
│ │ │ │
│ WS: task_created │ │ │
│◄──────────────────────┤ │ │
```
**Details**:
1. The frontend sends a POST to `/api/tasks` with JSON:
```json
{
"projectId": "uuid",
"title": "Implement login",
"description": "Create the authentication system..."
}
```
2. The backend:
- Validates the data
- Inserts a row into the MySQL `tasks` table
- Publishes a Redis event: `task:new`
- Adds a job to the BullMQ `task-queue`
- Responds with the created task
3. A WebSocket event notifies all connected clients
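The validation in step 2 is left implicit; a minimal sketch of what it might check (hypothetical `validateCreateTask` helper and `CreateTaskInput` shape; the length limit comes from `VARCHAR(255)` on `tasks.title`):

```typescript
interface CreateTaskInput {
  projectId: string
  title: string
  description?: string
}

// Returns human-readable problems; an empty array means the payload is acceptable.
function validateCreateTask(body: Partial<CreateTaskInput>): string[] {
  const errors: string[] = []
  if (!body.projectId || !/^[0-9a-fA-F-]{36}$/.test(body.projectId)) {
    errors.push('projectId must be a 36-character UUID')
  }
  if (!body.title || body.title.trim().length === 0) {
    errors.push('title is required')
  } else if (body.title.length > 255) {
    // Matches VARCHAR(255) in the tasks table
    errors.push('title must be at most 255 characters')
  }
  return errors
}
```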
### 1.2 Agent Claims a Task
```
Agent (K8s) Backend (MCP) MySQL BullMQ
│ │ │ │
│ MCP: get_next_task │ │ │
├──────────────────────►│ │ │
│ │ SELECT task │ │
│ ├──────────────────►│ │
│ │ │ │
│ │ UPDATE status │ │
│ ├──────────────────►│ │
│ │ │ │
│ { task details } │ DEQUEUE job │ │
│◄──────────────────────┤◄─────────────────┼─────────────────┤
│ │ │ │
```
**Details**:
1. The agent calls the MCP tool `get_next_task`
2. The backend:
- Query: `SELECT * FROM tasks WHERE state='backlog' ORDER BY created_at LIMIT 1`
- Update: `UPDATE tasks SET state='in_progress', assigned_agent_id=?`
- Removes the job from BullMQ
3. Responds with the full task details
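The pick order can be mirrored in memory; a sketch (hypothetical names, using the `priority DESC, created_at ASC` ordering that the fuller query in the data-model doc applies):

```typescript
type Priority = 'low' | 'medium' | 'high' | 'urgent'

interface PendingTask {
  id: string
  priority: Priority
  createdAt: number // epoch millis
}

const PRIORITY_RANK: Record<Priority, number> = { low: 0, medium: 1, high: 2, urgent: 3 }

// Mirrors ORDER BY priority DESC, created_at ASC LIMIT 1:
// highest priority first, oldest first within the same priority.
function nextTask(backlog: PendingTask[]): PendingTask | undefined {
  return [...backlog].sort(
    (a, b) =>
      PRIORITY_RANK[b.priority] - PRIORITY_RANK[a.priority] ||
      a.createdAt - b.createdAt
  )[0]
}
```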
## 2. Flow: Agent Requests Information
```
Agent Backend (MCP) MySQL Frontend (WS)
│ │ │ │
│ ask_user_question │ │ │
├─────────────────────►│ │ │
│ │ UPDATE task │ │
│ ├──────────────────►│ │
│ │ state=needs_input │ │
│ │ │ │
│ │ INSERT question │ │
│ ├──────────────────►│ │
│ │ │ │
│ { success: true } │ WS: needs_input │ │
│◄─────────────────────┤──────────────────┼───────────────────►│
│ │ │ │
│ │ │ [User sees a] │
│ │ │ [notification] │
│ │ │ │
│ │ POST /api/tasks/ │ │
│ │ :id/respond │ │
│ │◄──────────────────┼────────────────────┤
│ │ │ │
│ MCP: check_response │ UPDATE response │ │
├─────────────────────►├──────────────────►│ │
│ │ state=in_progress │ │
│ { response: "..." } │ │ │
│◄─────────────────────┤ │ │
```
**Details**:
1. The agent realizes it needs information (e.g. "Which library should I use for auth?")
2. It calls `ask_user_question(taskId, question)`
3. The backend:
- Sets `tasks.state = 'needs_input'`
- Inserts a row into the `task_questions` table
- Emits the WebSocket event `task:needs_input`
4. The frontend shows a notification and a badge on the kanban
5. The user responds via the UI
6. The backend stores the response
7. The agent polls for the answer, or is notified, via MCP
## 3. Flow: Complete Task and Preview Deploy
```
Agent Backend(MCP) Gitea API MySQL K8s API Frontend
│ │ │ │ │ │
│ create_branch │ │ │ │ │
├─────────────────►│ │ │ │ │
│ │ POST /repos/│ │ │ │
│ │ :owner/:repo│ │ │ │
│ │ /branches │ │ │ │
│ ├────────────►│ │ │ │
│ { branch } │ │ │ │ │
│◄─────────────────┤ │ │ │ │
│ │ │ │ │ │
│ [agent works] │ │ │ │ │
│ [commits code] │ │ │ │ │
│ │ │ │ │ │
│ create_pr │ │ │ │ │
├─────────────────►│ │ │ │ │
│ │ POST /pulls │ │ │ │
│ ├────────────►│ │ │ │
│ { pr_url } │ │ │ │ │
│◄─────────────────┤ │ │ │ │
│ │ │ │ │ │
│ trigger_preview │ │ │ │ │
├─────────────────►│ │ │ │ │
│ │ UPDATE task │ │ │ │
│ ├────────────┼────────────►│ │ │
│ │ │ │ │ │
│ │ CREATE │ │ CREATE │ │
│ │ namespace │ │ Deployment │
│ ├────────────┼────────────┼──────────►│ │
│ │ │ │ │ │
│ { preview_url } │ │ WS:ready_to_test │ │
│◄─────────────────┤─────────────┼───────────┼───────────┼───────────►│
```
**Details**:
1. **create_branch**: the backend uses the Gitea API to create the branch `task-{id}-feature`
2. **Agent works**: clone, changes, commits, push
3. **create_pr**: creates a PR with a generated description
4. **trigger_preview**:
- The backend creates the K8s namespace `preview-task-{id}`
- Applies a deployment with the project's image
- Configures an ingress with the URL `task-{id}.preview.aiworker.dev`
- Sets `tasks.state = 'ready_to_test'`
5. The frontend shows a "View Preview" button with the URL
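These naming conventions (branch, namespace, ingress URL) can live in one helper module so the backend and frontend never disagree; a sketch (hypothetical function names; assumes task ids are already DNS-safe lowercase alphanumerics and dashes):

```typescript
// Branch name used by create_branch for a given task.
function branchName(taskId: string): string {
  return `task-${taskId}-feature`
}

// Namespace created by trigger_preview; K8s namespace names
// are limited to 63 characters, hence the slice.
function previewNamespace(taskId: string): string {
  return `preview-task-${taskId}`.slice(0, 63)
}

// Public URL served by the preview ingress.
function previewUrl(taskId: string): string {
  return `https://task-${taskId}.preview.aiworker.dev`
}
```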
## 4. Flow: Merge to Staging
```
User (Frontend) Backend Gitea API K8s API ArgoCD
│ │ │ │ │
│ POST /merge │ │ │ │
│ taskIds[] │ │ │ │
├──────────────►│ │ │ │
│ │ Validate │ │ │
│ │ all approved │ │ │
│ │ │ │ │
│ │ POST /pulls │ │ │
│ │ (merge PRs) │ │ │
│ ├──────────────►│ │ │
│ │ │ │ │
│ │ POST /branches│ │ │
│ │ staging │ │ │
│ ├──────────────►│ │ │
│ │ │ │ │
│ │ Trigger │ Apply │ │
│ │ ArgoCD sync │ manifests │ │
│ ├───────────────┼──────────────┼────────────►│
│ │ │ │ │
│ { status } │ │ │ [Deploys] │
│◄──────────────┤ │ │ │
```
**Details**:
1. The user selects 2-3 approved tasks
2. Clicks "Merge to Staging"
3. The backend:
- Validates that all of them are in state `approved`
- Merges each PR into the `staging` branch
- Updates their state to `staging`
- Triggers an ArgoCD sync
4. ArgoCD detects the changes and deploys to the `staging` namespace
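The "validate all approved" step in (3) is a small pure guard; a sketch with hypothetical shapes, returning the offending ids so the API can report them before any PR is merged:

```typescript
interface MergeCandidate {
  id: string
  state: string
}

// Every selected task must be in state 'approved' before merging;
// returns the ids that are not, empty array when the merge may proceed.
function notApproved(tasks: MergeCandidate[]): string[] {
  return tasks.filter((t) => t.state !== 'approved').map((t) => t.id)
}
```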
## 5. Real-Time Communication (WebSocket)
### Events emitted by the Backend:
```typescript
// The client connects
socket.on('connect', () => {
socket.emit('auth', { userId, token })
})
// The backend emits events
socket.emit('task:created', { taskId, projectId })
socket.emit('task:status_changed', { taskId, oldState, newState })
socket.emit('task:needs_input', { taskId, question })
socket.emit('task:ready_to_test', { taskId, previewUrl })
socket.emit('agent:status', { agentId, status, currentTaskId })
socket.emit('deploy:started', { environment, taskIds })
socket.emit('deploy:completed', { environment, url })
```
### The client subscribes:
```typescript
socket.on('task:status_changed', (data) => {
// Refresh the kanban UI
queryClient.invalidateQueries(['tasks'])
})
socket.on('task:needs_input', (data) => {
// Show a notification
toast.info('An agent needs your help')
// Move the card to the "Needs Input" column
})
```
## 6. Caching Strategy
### Redis Cache Keys:
```
task:{id} → TTL 5min (task details)
task:list:{projectId} → TTL 2min (task list)
agent:{id}:status → TTL 30s (agent status)
project:{id} → TTL 10min (project config)
```
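Keeping the keys and TTLs above in a single module prevents writers and invalidators from drifting apart; a sketch (hypothetical `cacheKey` and `TTL` names, seconds as the unit, matching the table above):

```typescript
// TTLs in seconds, mirroring the table above.
const TTL = { task: 300, taskList: 120, agentStatus: 30, project: 600 } as const

// Single source of truth for key construction.
const cacheKey = {
  task: (id: string) => `task:${id}`,
  taskList: (projectId: string) => `task:list:${projectId}`,
  agentStatus: (id: string) => `agent:${id}:status`,
  project: (id: string) => `project:${id}`,
}
```

A write then becomes `redis.setex(cacheKey.agentStatus(id), TTL.agentStatus, status)`, and the matching invalidation reuses the same builder.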
### Invalidation:
```typescript
// When a task is updated
await redis.del(`task:${taskId}`)
await redis.del(`task:list:${projectId}`)
// When an agent's status changes
await redis.setex(`agent:${agentId}:status`, 30, status)
```
## 7. Queue System (BullMQ)
### Queues:
```
task-queue → Tasks waiting to be assigned
deploy-queue → Deploys to execute
merge-queue → Scheduled merges
cleanup-queue → Cleanup of stale preview environments
```
### Workers:
```typescript
// task-worker.ts
import { Worker } from 'bullmq'

// BullMQ runs processors through Worker instances
// (queue.process() is the older Bull API and does not exist in BullMQ)
const taskWorker = new Worker('task-queue', async (job) => {
  const { taskId } = job.data
  // Notify available agents via MCP
  await notifyAgents({ taskId })
}, { connection })

// deploy-worker.ts
const deployWorker = new Worker('deploy-queue', async (job) => {
  const { taskId, environment } = job.data
  await k8sClient.createDeployment(...)
}, { connection })
```
## Protocol Summary
| Communication | Protocol | Use |
|--------------|-----------|-----|
| Frontend ↔ Backend | HTTP REST + WebSocket | CRUD + real-time |
| Backend ↔ MySQL | TCP/MySQL protocol | Persistence |
| Backend ↔ Redis | RESP | Cache + pub/sub |
| Backend ↔ Gitea | HTTP REST | Git operations |
| Backend ↔ K8s | HTTP + Kubernetes API | Orchestration |
| Backend ↔ Agents | MCP (stdio/HTTP) | Tool calls |
| Agents ↔ Gitea | Git protocol (SSH) | Clone/push |

---
# Data Model (MySQL)
## ER Diagram
```
┌─────────────┐ ┌─────────────┐ ┌─────────────┐
│ Projects │───────│ Tasks │───────│ Agents │
└─────────────┘ 1:N └─────────────┘ N:1 └─────────────┘
│ 1:N
┌────▼────────┐
│ Questions │
└─────────────┘
┌─────────────┐ ┌─────────────┐
│ TaskGroups │───────│ Deploys │
└─────────────┘ 1:N └─────────────┘
```
## SQL Schema
### Table: projects
```sql
CREATE TABLE projects (
id VARCHAR(36) PRIMARY KEY,
name VARCHAR(255) NOT NULL,
description TEXT,
-- Gitea integration
gitea_repo_id INT,
gitea_repo_url VARCHAR(512),
gitea_owner VARCHAR(100),
gitea_repo_name VARCHAR(100),
default_branch VARCHAR(100) DEFAULT 'main',
-- Kubernetes
k8s_namespace VARCHAR(63) NOT NULL UNIQUE,
-- Infrastructure config (JSON)
docker_image VARCHAR(512),
env_vars JSON,
replicas INT DEFAULT 1,
cpu_limit VARCHAR(20) DEFAULT '500m',
memory_limit VARCHAR(20) DEFAULT '512Mi',
-- MCP config (JSON)
mcp_tools JSON,
mcp_permissions JSON,
-- Status
status ENUM('active', 'paused', 'archived') DEFAULT 'active',
-- Timestamps
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
INDEX idx_status (status),
INDEX idx_k8s_namespace (k8s_namespace)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;
```
### Table: tasks
```sql
CREATE TABLE tasks (
id VARCHAR(36) PRIMARY KEY,
project_id VARCHAR(36) NOT NULL,
-- Task info
title VARCHAR(255) NOT NULL,
description TEXT,
priority ENUM('low', 'medium', 'high', 'urgent') DEFAULT 'medium',
-- State machine
state ENUM(
'backlog',
'in_progress',
'needs_input',
'ready_to_test',
'approved',
'staging',
'production',
'cancelled'
) DEFAULT 'backlog',
-- Assignment
assigned_agent_id VARCHAR(36),
assigned_at TIMESTAMP NULL,
-- Git info
branch_name VARCHAR(255),
pr_number INT,
pr_url VARCHAR(512),
-- Preview deployment
preview_namespace VARCHAR(63),
preview_url VARCHAR(512),
preview_deployed_at TIMESTAMP NULL,
-- Metadata
estimated_complexity ENUM('trivial', 'simple', 'medium', 'complex') DEFAULT 'medium',
actual_duration_minutes INT,
-- Timestamps
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
started_at TIMESTAMP NULL,
completed_at TIMESTAMP NULL,
deployed_staging_at TIMESTAMP NULL,
deployed_production_at TIMESTAMP NULL,
FOREIGN KEY (project_id) REFERENCES projects(id) ON DELETE CASCADE,
FOREIGN KEY (assigned_agent_id) REFERENCES agents(id) ON DELETE SET NULL,
INDEX idx_project_state (project_id, state),
INDEX idx_state (state),
INDEX idx_assigned_agent (assigned_agent_id),
INDEX idx_created_at (created_at)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;
```
### Table: task_questions
```sql
CREATE TABLE task_questions (
id VARCHAR(36) PRIMARY KEY,
task_id VARCHAR(36) NOT NULL,
-- Question
question TEXT NOT NULL,
context TEXT,
asked_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
-- Response
response TEXT,
responded_at TIMESTAMP NULL,
responded_by VARCHAR(36),
-- Status
status ENUM('pending', 'answered', 'skipped') DEFAULT 'pending',
FOREIGN KEY (task_id) REFERENCES tasks(id) ON DELETE CASCADE,
INDEX idx_task_status (task_id, status),
INDEX idx_status (status)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;
```
### Table: agents
```sql
CREATE TABLE agents (
id VARCHAR(36) PRIMARY KEY,
-- K8s info
pod_name VARCHAR(253) NOT NULL UNIQUE,
k8s_namespace VARCHAR(63) DEFAULT 'agents',
node_name VARCHAR(253),
-- Status
status ENUM('idle', 'busy', 'error', 'offline', 'initializing') DEFAULT 'initializing',
current_task_id VARCHAR(36),
-- Capabilities
capabilities JSON, -- ['javascript', 'python', 'react', ...]
max_concurrent_tasks INT DEFAULT 1,
-- Health
last_heartbeat TIMESTAMP NULL,
error_message TEXT,
restarts_count INT DEFAULT 0,
-- Metrics
tasks_completed INT DEFAULT 0,
total_runtime_minutes INT DEFAULT 0,
-- Timestamps
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
FOREIGN KEY (current_task_id) REFERENCES tasks(id) ON DELETE SET NULL,
INDEX idx_status (status),
INDEX idx_pod_name (pod_name),
INDEX idx_last_heartbeat (last_heartbeat)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;
```
### Table: task_groups
```sql
CREATE TABLE task_groups (
id VARCHAR(36) PRIMARY KEY,
project_id VARCHAR(36) NOT NULL,
-- Grouping
task_ids JSON NOT NULL, -- ['task-id-1', 'task-id-2', ...]
-- Staging
staging_branch VARCHAR(255),
staging_pr_number INT,
staging_pr_url VARCHAR(512),
staging_deployed_at TIMESTAMP NULL,
-- Production
production_deployed_at TIMESTAMP NULL,
production_rollback_available BOOLEAN DEFAULT TRUE,
-- Status
status ENUM('pending', 'staging', 'production', 'rolled_back') DEFAULT 'pending',
-- Metadata
created_by VARCHAR(36),
notes TEXT,
-- Timestamps
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
FOREIGN KEY (project_id) REFERENCES projects(id) ON DELETE CASCADE,
INDEX idx_project_status (project_id, status),
INDEX idx_status (status)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;
```
### Table: deployments
```sql
CREATE TABLE deployments (
id VARCHAR(36) PRIMARY KEY,
project_id VARCHAR(36) NOT NULL,
task_group_id VARCHAR(36),
-- Deployment info
environment ENUM('preview', 'staging', 'production') NOT NULL,
deployment_type ENUM('manual', 'automatic', 'rollback') DEFAULT 'manual',
-- Git info
branch VARCHAR(255),
commit_hash VARCHAR(40),
-- K8s info
k8s_namespace VARCHAR(63),
k8s_deployment_name VARCHAR(253),
image_tag VARCHAR(255),
-- Status
status ENUM('pending', 'in_progress', 'completed', 'failed', 'rolled_back') DEFAULT 'pending',
-- Results
url VARCHAR(512),
error_message TEXT,
logs TEXT,
-- Timing
started_at TIMESTAMP NULL,
completed_at TIMESTAMP NULL,
duration_seconds INT,
-- Metadata
triggered_by VARCHAR(36),
-- Timestamps
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
FOREIGN KEY (project_id) REFERENCES projects(id) ON DELETE CASCADE,
FOREIGN KEY (task_group_id) REFERENCES task_groups(id) ON DELETE SET NULL,
INDEX idx_project_env (project_id, environment),
INDEX idx_status (status),
INDEX idx_created_at (created_at)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;
```
### Table: agent_logs
```sql
CREATE TABLE agent_logs (
id BIGINT AUTO_INCREMENT PRIMARY KEY,
agent_id VARCHAR(36) NOT NULL,
task_id VARCHAR(36),
-- Log entry
level ENUM('debug', 'info', 'warn', 'error') DEFAULT 'info',
message TEXT NOT NULL,
metadata JSON,
-- Timestamp
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
FOREIGN KEY (agent_id) REFERENCES agents(id) ON DELETE CASCADE,
FOREIGN KEY (task_id) REFERENCES tasks(id) ON DELETE SET NULL,
INDEX idx_agent_created (agent_id, created_at),
INDEX idx_task_created (task_id, created_at),
INDEX idx_level (level)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;
```
## Indexes and Optimizations
### Key Composite Indexes
```sql
-- Task lookups by project and state
CREATE INDEX idx_tasks_project_state ON tasks(project_id, state, created_at);
-- Finding available agents
-- (MySQL does not support partial indexes, so this is a plain composite
-- index; the status = 'idle' filter stays in the query itself)
CREATE INDEX idx_agents_available ON agents(status, last_heartbeat);
-- Recent logs per agent
CREATE INDEX idx_agent_logs_recent ON agent_logs(agent_id, created_at DESC)
USING BTREE;
```
### Partitioning (for logs)
```sql
-- Partition agent_logs by month.
-- Note: MySQL requires the partitioning column in every unique key, so the
-- primary key must first become (id, created_at); partitioned tables also do
-- not support foreign keys, so the FKs on agent_logs would have to be dropped.
ALTER TABLE agent_logs PARTITION BY RANGE (YEAR(created_at) * 100 + MONTH(created_at)) (
PARTITION p202601 VALUES LESS THAN (202602),
PARTITION p202602 VALUES LESS THAN (202603),
PARTITION p202603 VALUES LESS THAN (202604),
-- ... auto-create with a script
PARTITION p_future VALUES LESS THAN MAXVALUE
);
```
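The monthly partition clauses above are meant to be auto-created by a script; a sketch of such a generator (hypothetical `monthlyPartitions` helper) whose `yyyymm` boundaries match the `YEAR(created_at) * 100 + MONTH(created_at)` range expression:

```typescript
// Emits one "PARTITION pYYYYMM VALUES LESS THAN (next)" clause per month,
// rolling over December into January of the next year.
function monthlyPartitions(startYear: number, startMonth: number, count: number): string[] {
  const out: string[] = []
  let y = startYear
  let m = startMonth
  for (let i = 0; i < count; i++) {
    const cur = y * 100 + m
    m++
    if (m > 12) { m = 1; y++ }
    const next = y * 100 + m
    out.push(`PARTITION p${cur} VALUES LESS THAN (${next})`)
  }
  return out
}
```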
## Common Queries
### Get the next available task
```sql
SELECT * FROM tasks
WHERE state = 'backlog'
AND project_id = ?
ORDER BY
priority DESC,
created_at ASC
LIMIT 1
FOR UPDATE SKIP LOCKED;
```
### Available agents
```sql
SELECT * FROM agents
WHERE status = 'idle'
AND last_heartbeat > DATE_SUB(NOW(), INTERVAL 60 SECOND)
ORDER BY tasks_completed ASC
LIMIT 1;
```
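The same availability rule can be expressed as a predicate, e.g. for filtering agents already held in memory; a sketch (hypothetical `isAvailable` helper, mirroring the 60-second heartbeat window in the SQL above):

```typescript
// An agent counts as available when it is idle and its last heartbeat
// is at most 60 seconds old.
function isAvailable(status: string, lastHeartbeatMs: number, nowMs: number): boolean {
  return status === 'idle' && nowMs - lastHeartbeatMs <= 60_000
}
```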
### Dashboard: project metrics
```sql
SELECT
COUNT(*) as total_tasks,
SUM(CASE WHEN state = 'backlog' THEN 1 ELSE 0 END) as backlog,
SUM(CASE WHEN state = 'in_progress' THEN 1 ELSE 0 END) as in_progress,
SUM(CASE WHEN state = 'needs_input' THEN 1 ELSE 0 END) as needs_input,
SUM(CASE WHEN state = 'ready_to_test' THEN 1 ELSE 0 END) as ready_to_test,
SUM(CASE WHEN state = 'production' THEN 1 ELSE 0 END) as completed,
AVG(actual_duration_minutes) as avg_duration
FROM tasks
WHERE project_id = ?;
```
### Deployment history
```sql
SELECT
d.*,
tg.task_ids,
COUNT(t.id) as tasks_count
FROM deployments d
LEFT JOIN task_groups tg ON d.task_group_id = tg.id
LEFT JOIN tasks t ON JSON_CONTAINS(tg.task_ids, CONCAT('"', t.id, '"'))
WHERE d.project_id = ?
AND d.environment = 'production'
GROUP BY d.id
ORDER BY d.created_at DESC
LIMIT 20;
```
## Migrations with Drizzle
```typescript
// drizzle/schema.ts
import { mysqlTable, varchar, text, timestamp, json, int, mysqlEnum } from 'drizzle-orm/mysql-core'
export const projects = mysqlTable('projects', {
id: varchar('id', { length: 36 }).primaryKey(),
name: varchar('name', { length: 255 }).notNull(),
description: text('description'),
giteaRepoId: int('gitea_repo_id'),
giteaRepoUrl: varchar('gitea_repo_url', { length: 512 }),
// ... remaining fields
createdAt: timestamp('created_at').defaultNow(),
updatedAt: timestamp('updated_at').defaultNow().onUpdateNow(),
})
export const tasks = mysqlTable('tasks', {
id: varchar('id', { length: 36 }).primaryKey(),
projectId: varchar('project_id', { length: 36 }).notNull().references(() => projects.id),
title: varchar('title', { length: 255 }).notNull(),
state: mysqlEnum('state', [
'backlog', 'in_progress', 'needs_input',
'ready_to_test', 'approved', 'staging', 'production', 'cancelled'
]).default('backlog'),
// ... remaining fields
})
```
## Backup Strategy
```bash
# Daily backup
mysqldump -u root -p aiworker \
--single-transaction \
--quick \
--lock-tables=false \
> backup-$(date +%Y%m%d).sql
# Restore
mysql -u root -p aiworker < backup-20260119.sql
```

---
# General Overview - AiWorker
## Concept
AiWorker is an AI-agent orchestration system that automates the full software development cycle through:
1. **Web Dashboard**: central interface for managing projects and tasks
2. **Persistent Web Consoles**: web terminals attached to Claude Code pods in K8s
3. **Smart Kanban Board**: visual task management with automatic states
4. **Autonomous Agents**: Claude Code working on assigned tasks
5. **Automated Deployments**: orchestrated preview, staging, and production
## High-Level Architecture
```
┌─────────────────────────────────────────────────────────────────┐
│ Dashboard Web │
│ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ │
│ │ Kanban │ │ Consolas │ │ Project │ │
│ │ Board │ │ Web │ │ Manager │ │
│ └─────────────┘ └─────────────┘ └─────────────┘ │
└────────────────────────┬────────────────────────────────────────┘
│ HTTP/WebSocket
┌────────────────────────▼────────────────────────────────────────┐
│ Backend (Bun + Express) │
│ ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐ │
│ │ API │ │ MCP │ │ Gitea │ │ K8s │ │
│ │ Server │ │ Server │ │ Client │ │ Client │ │
│ └──────────┘ └──────────┘ └──────────┘ └──────────┘ │
└────────┬───────────────┬─────────────┬─────────────┬───────────┘
│ │ │ │
┌────▼────┐ ┌───▼────┐ ┌───▼────┐ ┌────▼─────┐
│ MySQL │ │ Redis │ │ Gitea │ │ K8s │
└─────────┘ └────────┘ └────────┘ └──────────┘
┌───────────────────────────────┘
┌──────────▼──────────────────────────────────────┐
│ Kubernetes Cluster │
│ ┌──────────────┐ ┌─────────────────────────┐ │
│ │ Agents │ │ Project Namespaces │ │
│ │ Namespace │ │ ├── dev │ │
│ │ │ │ ├── preview/<task-id> │ │
│ │ Claude Code │ │ ├── staging │ │
│ │ Pods │ │ └── production │ │
│ └──────────────┘ └─────────────────────────┘ │
└─────────────────────────────────────────────────┘
```
## Main Components
### 1. Web Dashboard (Frontend)
- **Technology**: React 19.2 + TailwindCSS + Vite
- **Responsibilities**:
- Kanban board for task management
- Interactive web consoles (xterm.js)
- Project management
- Real-time monitoring
### 2. Backend API
- **Technology**: Bun 1.3.6 + Express + TypeScript
- **Responsibilities**:
- REST API for the frontend
- MCP server for agents
- Task orchestration
- Gitea and K8s integration
### 3. Databases
- **MySQL 8.0**: persistent storage
- **Redis**: queues, cache, pub/sub
### 4. Gitea
- **Self-hosted Git server**
- **GitHub-compatible API**
- **Repo, branch, and PR management**
### 5. Kubernetes Cluster
- **Container orchestration**
- **Namespaces per project and environment**
- **Agent auto-scaling**
### 6. Claude Code Agents
- **Persistent pods in K8s**
- **Connected via the MCP server**
- **Isolated workspace per agent**
## Task States
```
Backlog → In Progress → Needs Input
User responds
┌───────────────┘
Ready to Test
(Preview deploy)
Approved
Staging (group merge)
Production
```
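The transitions in the diagram above can be enforced as a transition table; a sketch (the rejection edge back to `in_progress` and the `cancelled` edges are assumptions based on the reject endpoint and the state ENUM elsewhere in these docs):

```typescript
type TaskState =
  | 'backlog' | 'in_progress' | 'needs_input' | 'ready_to_test'
  | 'approved' | 'staging' | 'production' | 'cancelled'

// Allowed forward transitions per the state diagram.
const TRANSITIONS: Record<TaskState, TaskState[]> = {
  backlog: ['in_progress', 'cancelled'],
  in_progress: ['needs_input', 'ready_to_test', 'cancelled'],
  needs_input: ['in_progress', 'cancelled'],
  ready_to_test: ['approved', 'in_progress', 'cancelled'],
  approved: ['staging'],
  staging: ['production'],
  production: [],
  cancelled: [],
}

function canTransition(from: TaskState, to: TaskState): boolean {
  return TRANSITIONS[from].includes(to)
}
```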
## Typical Workflow
1. **User creates a project** → the system creates a Gitea repo + a K8s namespace
2. **User creates tasks** → they are added to the kanban backlog
3. **An agent becomes available** → it claims the next task via MCP
4. **Agent works** → clone, branch, code, commits
5. **Needs info?** → the task state changes to "Needs Input"
6. **Task completed** → push + PR + preview deploy
7. **User tests** → in an isolated preview environment
8. **Approves** → marks the task for staging
9. **Group merge** → groups 2-3 tasks + merges to staging
10. **Staging deploy** → automated tests
11. **Production deploy** → final approval
## System Advantages
- **Full automation**: from task to production
- **Isolation**: every task in its own preview environment
- **Traceability**: every change linked to a task and a PR
- **Scalability**: agents auto-scale in K8s
- **Flexibility**: agents can ask the user for help
- **Control**: the user approves every important phase
## Security
- Isolated K8s namespaces
- RBAC per agent
- Secrets management
- Network policies
- Action auditing
## Next Steps
See the component-specific documentation in the corresponding sections.

---
# Technology Stack
## Frontend
### Core
- **React 19.2**: primary UI framework
- **Vite**: build tool and dev server
- **TypeScript**: type safety
- **TailwindCSS 4.x**: utility-first styling
### UI Libraries
- **@dnd-kit/core**: drag and drop for the kanban
- **xterm.js**: web terminal emulator
- **lucide-react**: modern icons
- **react-hot-toast**: notifications
- **recharts**: charts and metrics
### State and Data Fetching
- **@tanstack/react-query**: server state management
- **zustand**: client state management (lightweight and simple)
- **socket.io-client**: WebSocket for real-time updates
### Routing
- **react-router-dom**: SPA navigation
## Backend
### Core
- **Bun 1.3.6**: fast JavaScript runtime
- **Express**: HTTP framework
- **TypeScript**: type safety
### Database
- **MySQL 8.0**: primary relational database
- **mysql2**: MySQL driver for Node.js
- **Drizzle ORM**: modern TypeScript-first ORM
- Type-safe
- Lightweight
- Excellent DX with Bun
### Cache and Queues
- **Redis 7.x**: cache and message broker
- **BullMQ**: robust queue system
- **ioredis**: Redis client
### Agent Communication
- **@modelcontextprotocol/sdk**: official MCP SDK
- **socket.io**: WebSocket server
### Integrations
- **@kubernetes/client-node**: official K8s client
- **octokit** (adapted): Gitea API client
- **axios**: HTTP client
### Development
- **tsx**: TypeScript execution
- **nodemon**: hot reload
- **prettier**: code formatting
- **eslint**: linting
## Infrastructure
### Containerization
- **Docker 24.x**: containerization
- **Docker Compose**: local orchestration
### Orchestration
- **Kubernetes 1.28+**: container orchestration
- **kubectl**: CLI
- **helm**: package manager
- **kustomize**: configuration management
### Git Server
- **Gitea latest**: self-hosted Git server
- Lightweight (~100 MB)
- GitHub-compatible REST API
- Native webhooks
### CI/CD and GitOps
- **ArgoCD**: GitOps continuous delivery
- **GitHub Actions** (or Gitea Actions): CI pipelines
### Monitoring and Logging
- **Prometheus**: metrics
- **Grafana**: visualization
- **Loki**: log aggregation
- **Jaeger**: distributed tracing (optional)
### Networking
- **Nginx Ingress Controller**: routing
- **cert-manager**: TLS certificates
## Agents
### Claude Code
- **Claude Code CLI**: Anthropic's official tool
- **Model**: Claude Sonnet 4.5
- **MCP Tools**: communication with the backend
## Development Tools
### Package Management
- **bun**: primary package manager
- **npm**: fallback for compatibility
### Testing
- **Vitest**: unit testing (Bun-compatible)
- **@testing-library/react**: React testing
- **Playwright**: E2E testing
### Code Quality
- **TypeScript 5.x**: type checking
- **ESLint**: linting
- **Prettier**: formatting
- **husky**: Git hooks
## Specific Versions
```json
{
"frontend": {
"react": "19.2.0",
"vite": "^6.0.0",
"typescript": "^5.6.0",
"tailwindcss": "^4.0.0"
},
"backend": {
"bun": "1.3.6",
"express": "^4.19.0",
"mysql2": "^3.11.0",
"drizzle-orm": "^0.36.0",
"bullmq": "^5.23.0",
"@modelcontextprotocol/sdk": "^1.0.0"
},
"infrastructure": {
"kubernetes": "1.28+",
"docker": "24.0+",
"gitea": "1.22+",
"redis": "7.2+",
"mysql": "8.0+"
}
}
```
## Technology Rationale
### Why Bun?
- **Speed**: 3-4x faster than Node.js
- **Native TypeScript**: no extra configuration
- **Modern APIs**: Web Standard compatibility
- **Integrated tooling**: bundler, test runner, package manager
### Why MySQL?
- **Maturity**: battle-tested in production
- **Performance**: excellent for reads/writes
- **Transactions**: ACID compliance
- **Ecosystem**: mature tooling (backup, replication)
### Why Drizzle ORM?
- **Type safety**: full type inference
- **Performance**: query builder without overhead
- **DX**: automatic migrations
- **Bun compatibility**: first-class
### Why Gitea?
- **Lightweight**: single binary, low footprint
- **Self-hosted**: full control
- **Familiar API**: GitHub-compatible
- **Simple**: installed in minutes
### Why React 19.2 without Next.js?
- **Simplicity**: a SPA without server-side complexity
- **Full control**: no extra abstractions
- **Performance**: the new React compiler
- **Features**: transitions, client-side use of Server Actions
## Alternatives Considered
| Need | Chosen | Alternatives | Reason |
|-----------|---------|--------------|-------|
| Runtime | Bun | Node, Deno | Speed + DX |
| DB | MySQL | PostgreSQL, MongoDB | Familiarity + maturity |
| ORM | Drizzle | Prisma, TypeORM | Type safety + performance |
| Git | Gitea | GitLab, Gogs | Simplicity + features |
| Frontend | React | Vue, Svelte | Ecosystem + React 19 |
| Orchestration | K8s | Docker Swarm, Nomad | Industry standard |
## Critical Dependencies
```bash
# Backend
bun add express mysql2 drizzle-orm ioredis bullmq
bun add @modelcontextprotocol/sdk socket.io
bun add @kubernetes/client-node axios
# Frontend
bun add react@19.2.0 react-dom@19.2.0
bun add @tanstack/react-query zustand
bun add socket.io-client xterm
bun add @dnd-kit/core react-router-dom
```
## Technology Roadmap
- **Phase 1 (MVP)**: current stack
- **Phase 2**: add Prometheus + Grafana
- **Phase 3**: implement tracing with Jaeger
- **Phase 4**: multi-tenancy and DB sharding

---
# API Endpoints
## Base URL
```
http://localhost:3000/api
```
## Authentication
All endpoints (except `/health`) require JWT authentication:
```
Authorization: Bearer <token>
```
---
## Projects
### GET /projects
Lists all projects.
**Response**:
```json
{
"projects": [
{
"id": "uuid",
"name": "My Project",
"description": "Project description",
"giteaRepoUrl": "http://gitea/owner/repo",
"k8sNamespace": "my-project",
"status": "active",
"createdAt": "2026-01-19T10:00:00Z"
}
]
}
```
### GET /projects/:id
Returns a project's details.
### POST /projects
Creates a new project.
**Body**:
```json
{
"name": "My New Project",
"description": "Project description",
"dockerImage": "node:20-alpine",
"envVars": {
"NODE_ENV": "production"
},
"replicas": 2,
"cpuLimit": "1000m",
"memoryLimit": "1Gi"
}
```
**Response**:
```json
{
"project": {
"id": "uuid",
"name": "My New Project",
"giteaRepoUrl": "http://gitea/owner/my-new-project",
"k8sNamespace": "my-new-project-abc123"
}
}
```
### PATCH /projects/:id
Updates a project.
### DELETE /projects/:id
Deletes a project and all of its resources.
---
## Tasks
### GET /tasks
Lists tasks with optional filters.
**Query params**:
- `projectId`: filter by project
- `state`: filter by state (`backlog`, `in_progress`, etc.)
- `assignedAgentId`: filter by agent
- `limit`: result limit (default: 50)
- `offset`: pagination offset
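A small client-side helper can assemble these filters into the request URL; a sketch (hypothetical `tasksQuery` helper, skipping unset filters):

```typescript
// Builds the GET /tasks URL from the optional filters listed above.
function tasksQuery(filters: {
  projectId?: string
  state?: string
  assignedAgentId?: string
  limit?: number
  offset?: number
}): string {
  const params = new URLSearchParams()
  for (const [key, value] of Object.entries(filters)) {
    if (value !== undefined) params.set(key, String(value))
  }
  const qs = params.toString()
  return qs ? `/api/tasks?${qs}` : '/api/tasks'
}
```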
**Response**:
```json
{
"tasks": [
{
"id": "uuid",
"projectId": "uuid",
"title": "Implement login",
"description": "Create authentication system",
"state": "in_progress",
"priority": "high",
"assignedAgentId": "agent-123",
"branchName": "task-abc-implement-login",
"prNumber": 42,
"prUrl": "http://gitea/owner/repo/pulls/42",
"previewUrl": "https://task-abc.preview.aiworker.dev",
"createdAt": "2026-01-19T10:00:00Z"
}
],
"total": 10,
"limit": 50,
"offset": 0
}
```
### GET /tasks/:id
Returns the full details of a task, including its questions.
**Response**:
```json
{
"task": {
"id": "uuid",
"title": "Implement login",
"state": "needs_input",
"questions": [
{
"id": "q-uuid",
"question": "Which auth library should I use?",
"context": "Need to choose between JWT or session-based",
"askedAt": "2026-01-19T11:00:00Z",
"status": "pending"
}
],
"project": {
"name": "My Project",
"giteaRepoUrl": "..."
}
}
}
```
### POST /tasks
Creates a new task.
**Body**:
```json
{
"projectId": "uuid",
"title": "Implement feature X",
"description": "Detailed description...",
"priority": "high"
}
```
### PATCH /tasks/:id
Updates a task.
**Body**:
```json
{
"state": "approved",
"notes": "Looks good!"
}
```
### POST /tasks/:id/respond
Answers an agent's question.
**Body**:
```json
{
"questionId": "q-uuid",
"response": "Use JWT with jsonwebtoken library"
}
```
**Response**:
```json
{
"success": true,
"question": {
"id": "q-uuid",
"status": "answered",
"respondedAt": "2026-01-19T11:05:00Z"
}
}
```
### POST /tasks/:id/approve
Approves a task in state `ready_to_test`.
### POST /tasks/:id/reject
Rejects a task and sends it back to `in_progress`.
**Body**:
```json
{
"reason": "Needs more tests"
}
```
---
## Task Groups (Merges)
### POST /task-groups
Creates a task group for merging to staging/production.
**Body**:
```json
{
"projectId": "uuid",
"taskIds": ["task-1", "task-2", "task-3"],
"targetBranch": "staging",
"notes": "Sprint 1 features"
}
```
**Response**:
```json
{
"taskGroup": {
"id": "uuid",
"taskIds": ["task-1", "task-2", "task-3"],
"status": "pending",
"stagingBranch": "release/sprint-1"
}
}
```
### GET /task-groups/:id
Returns a task group's details.
### POST /task-groups/:id/deploy-staging
Deploys the task group to staging.
### POST /task-groups/:id/deploy-production
Deploys the task group to production.
---
## Agents
### GET /agents
Lists all agents.
**Response**:
```json
{
"agents": [
{
"id": "agent-123",
"podName": "claude-agent-abc123",
"status": "busy",
"currentTaskId": "task-uuid",
"capabilities": ["javascript", "react", "node"],
"tasksCompleted": 42,
"lastHeartbeat": "2026-01-19T12:00:00Z"
}
]
}
```
### GET /agents/:id
Returns agent details, including recent logs.
### GET /agents/:id/logs
Returns the agent's logs.
**Query params**:
- `limit`: Number of log entries to return (default: 100)
- `level`: Filter by level (`debug`, `info`, `warn`, `error`)
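As a sketch, the query string can be assembled with `URLSearchParams`; the `buildLogsUrl` helper below is illustrative, not part of the backend:

```typescript
// Builds the agent-logs URL with optional query parameters.
function buildLogsUrl(agentId: string, opts: { limit?: number; level?: string } = {}): string {
  const params = new URLSearchParams()
  if (opts.limit !== undefined) params.set('limit', String(opts.limit))
  if (opts.level) params.set('level', opts.level)
  const qs = params.toString()
  return `/api/agents/${encodeURIComponent(agentId)}/logs${qs ? `?${qs}` : ''}`
}
```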
---
## Deployments
### GET /deployments
Lists deployments with optional filters.
**Query params**:
- `projectId`: Filter by project
- `environment`: Filter by environment
- `status`: Filter by status
### GET /deployments/:id
Returns details for a deployment.
### POST /deployments/:id/rollback
Rolls back a deployment.
**Response**:
```json
{
"success": true,
"rollbackDeploymentId": "new-uuid"
}
```
---
## Health & Status
### GET /health
Backend health check.
**Response**:
```json
{
"status": "ok",
"timestamp": "2026-01-19T12:00:00Z",
"services": {
"mysql": "connected",
"redis": "connected",
"gitea": "reachable",
"kubernetes": "connected"
},
"version": "1.0.0"
}
```
### GET /metrics
System metrics in Prometheus format.
---
## WebSocket Events
Connect to: `ws://localhost:3000`
### Client → Server
```json
{
"event": "auth",
"data": {
"token": "jwt-token"
}
}
```
```json
{
"event": "subscribe",
"data": {
"projectId": "uuid"
}
}
```
### Server → Client
```json
{
"event": "task:created",
"data": {
"taskId": "uuid",
"projectId": "uuid",
"title": "New task"
}
}
```
```json
{
"event": "task:status_changed",
"data": {
"taskId": "uuid",
"oldState": "in_progress",
"newState": "ready_to_test",
"previewUrl": "https://..."
}
}
```
```json
{
"event": "task:needs_input",
"data": {
"taskId": "uuid",
"questionId": "q-uuid",
"question": "Which library?"
}
}
```
```json
{
"event": "agent:status",
"data": {
"agentId": "agent-123",
"status": "idle",
"lastTaskId": "task-uuid"
}
}
```
```json
{
"event": "deploy:started",
"data": {
"deploymentId": "uuid",
"environment": "staging"
}
}
```
```json
{
"event": "deploy:completed",
"data": {
"deploymentId": "uuid",
"environment": "staging",
"url": "https://staging-project.aiworker.dev"
}
}
```
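On the client side, these events can be routed through a small dispatcher. The sketch below abstracts the transport away: whatever carries the frames, each incoming message is assumed to arrive as the `{ event, data }` shape documented above (the `createDispatcher` helper is illustrative):

```typescript
type Handler = (data: any) => void

// Registry mapping event names to handlers.
// Unknown events are counted rather than thrown, so new server
// events don't break older clients.
function createDispatcher() {
  const handlers = new Map<string, Handler>()
  let unknown = 0
  return {
    on(event: string, fn: Handler) {
      handlers.set(event, fn)
    },
    dispatch(raw: string) {
      const msg = JSON.parse(raw) as { event: string; data: unknown }
      const fn = handlers.get(msg.event)
      if (fn) fn(msg.data)
      else unknown++
    },
    unknownCount: () => unknown,
  }
}
```

A client would register one handler per documented event (`task:created`, `deploy:completed`, …) and feed every received frame to `dispatch`.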
---
## Error Responses
All endpoints may return these errors:
### 400 Bad Request
```json
{
"error": "Validation error",
"details": {
"field": "projectId",
"message": "Required"
}
}
```
### 401 Unauthorized
```json
{
"error": "Invalid or expired token"
}
```
### 404 Not Found
```json
{
"error": "Resource not found"
}
```
### 500 Internal Server Error
```json
{
"error": "Internal server error",
"requestId": "req-uuid"
}
```
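These payload shapes can be normalized into a single error type on the client; a minimal sketch, where the `ApiError` class and `toApiError` helper are illustrative:

```typescript
// Normalizes the documented error payloads into one error type.
class ApiError extends Error {
  constructor(
    public status: number,
    message: string,
    public details?: unknown,
    public requestId?: string
  ) {
    super(message)
  }
}

// Maps an HTTP status plus the documented JSON body to an ApiError.
function toApiError(
  status: number,
  body: { error: string; details?: unknown; requestId?: string }
): ApiError {
  return new ApiError(status, body.error, body.details, body.requestId)
}
```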

# Database Schema with Drizzle ORM
## Schema Definitions
```typescript
// db/schema.ts
import { relations } from 'drizzle-orm'
import {
mysqlTable,
varchar,
text,
timestamp,
json,
int,
mysqlEnum,
boolean,
bigint,
index,
unique,
} from 'drizzle-orm/mysql-core'
// ============================================
// PROJECTS TABLE
// ============================================
export const projects = mysqlTable('projects', {
id: varchar('id', { length: 36 }).primaryKey(),
name: varchar('name', { length: 255 }).notNull(),
description: text('description'),
// Gitea
giteaRepoId: int('gitea_repo_id'),
giteaRepoUrl: varchar('gitea_repo_url', { length: 512 }),
giteaOwner: varchar('gitea_owner', { length: 100 }),
giteaRepoName: varchar('gitea_repo_name', { length: 100 }),
defaultBranch: varchar('default_branch', { length: 100 }).default('main'),
// K8s
k8sNamespace: varchar('k8s_namespace', { length: 63 }).notNull().unique(),
// Infrastructure
dockerImage: varchar('docker_image', { length: 512 }),
envVars: json('env_vars').$type<Record<string, string>>(),
replicas: int('replicas').default(1),
cpuLimit: varchar('cpu_limit', { length: 20 }).default('500m'),
memoryLimit: varchar('memory_limit', { length: 20 }).default('512Mi'),
// MCP
mcpTools: json('mcp_tools').$type<string[]>(),
mcpPermissions: json('mcp_permissions').$type<Record<string, any>>(),
// Status
status: mysqlEnum('status', ['active', 'paused', 'archived']).default('active'),
// Timestamps
createdAt: timestamp('created_at').defaultNow(),
updatedAt: timestamp('updated_at').defaultNow().onUpdateNow(),
}, (table) => ({
statusIdx: index('idx_status').on(table.status),
k8sNamespaceIdx: index('idx_k8s_namespace').on(table.k8sNamespace),
}))
// ============================================
// AGENTS TABLE
// ============================================
export const agents = mysqlTable('agents', {
id: varchar('id', { length: 36 }).primaryKey(),
// K8s
podName: varchar('pod_name', { length: 253 }).notNull().unique(),
k8sNamespace: varchar('k8s_namespace', { length: 63 }).default('agents'),
nodeName: varchar('node_name', { length: 253 }),
// Status
status: mysqlEnum('status', ['idle', 'busy', 'error', 'offline', 'initializing']).default('initializing'),
currentTaskId: varchar('current_task_id', { length: 36 }),
// Capabilities
capabilities: json('capabilities').$type<string[]>(),
maxConcurrentTasks: int('max_concurrent_tasks').default(1),
// Health
lastHeartbeat: timestamp('last_heartbeat'),
errorMessage: text('error_message'),
restartsCount: int('restarts_count').default(0),
// Metrics
tasksCompleted: int('tasks_completed').default(0),
totalRuntimeMinutes: int('total_runtime_minutes').default(0),
// Timestamps
createdAt: timestamp('created_at').defaultNow(),
updatedAt: timestamp('updated_at').defaultNow().onUpdateNow(),
}, (table) => ({
statusIdx: index('idx_status').on(table.status),
podNameIdx: index('idx_pod_name').on(table.podName),
lastHeartbeatIdx: index('idx_last_heartbeat').on(table.lastHeartbeat),
}))
// ============================================
// TASKS TABLE
// ============================================
export const tasks = mysqlTable('tasks', {
id: varchar('id', { length: 36 }).primaryKey(),
projectId: varchar('project_id', { length: 36 }).notNull().references(() => projects.id, { onDelete: 'cascade' }),
// Task info
title: varchar('title', { length: 255 }).notNull(),
description: text('description'),
priority: mysqlEnum('priority', ['low', 'medium', 'high', 'urgent']).default('medium'),
// State
state: mysqlEnum('state', [
'backlog',
'in_progress',
'needs_input',
'ready_to_test',
'approved',
'staging',
'production',
'cancelled'
]).default('backlog'),
// Assignment
assignedAgentId: varchar('assigned_agent_id', { length: 36 }).references(() => agents.id, { onDelete: 'set null' }),
assignedAt: timestamp('assigned_at'),
// Git
branchName: varchar('branch_name', { length: 255 }),
prNumber: int('pr_number'),
prUrl: varchar('pr_url', { length: 512 }),
// Preview
previewNamespace: varchar('preview_namespace', { length: 63 }),
previewUrl: varchar('preview_url', { length: 512 }),
previewDeployedAt: timestamp('preview_deployed_at'),
// Metadata
estimatedComplexity: mysqlEnum('estimated_complexity', ['trivial', 'simple', 'medium', 'complex']).default('medium'),
actualDurationMinutes: int('actual_duration_minutes'),
// Timestamps
createdAt: timestamp('created_at').defaultNow(),
updatedAt: timestamp('updated_at').defaultNow().onUpdateNow(),
startedAt: timestamp('started_at'),
completedAt: timestamp('completed_at'),
deployedStagingAt: timestamp('deployed_staging_at'),
deployedProductionAt: timestamp('deployed_production_at'),
}, (table) => ({
projectStateIdx: index('idx_project_state').on(table.projectId, table.state, table.createdAt),
stateIdx: index('idx_state').on(table.state),
assignedAgentIdx: index('idx_assigned_agent').on(table.assignedAgentId),
createdAtIdx: index('idx_created_at').on(table.createdAt),
}))
// ============================================
// TASK QUESTIONS TABLE
// ============================================
export const taskQuestions = mysqlTable('task_questions', {
id: varchar('id', { length: 36 }).primaryKey(),
taskId: varchar('task_id', { length: 36 }).notNull().references(() => tasks.id, { onDelete: 'cascade' }),
// Question
question: text('question').notNull(),
context: text('context'),
askedAt: timestamp('asked_at').defaultNow(),
// Response
response: text('response'),
respondedAt: timestamp('responded_at'),
respondedBy: varchar('responded_by', { length: 36 }),
// Status
status: mysqlEnum('status', ['pending', 'answered', 'skipped']).default('pending'),
}, (table) => ({
taskStatusIdx: index('idx_task_status').on(table.taskId, table.status),
statusIdx: index('idx_status').on(table.status),
}))
// ============================================
// TASK GROUPS TABLE
// ============================================
export const taskGroups = mysqlTable('task_groups', {
id: varchar('id', { length: 36 }).primaryKey(),
projectId: varchar('project_id', { length: 36 }).notNull().references(() => projects.id, { onDelete: 'cascade' }),
// Grouping
taskIds: json('task_ids').$type<string[]>().notNull(),
// Staging
stagingBranch: varchar('staging_branch', { length: 255 }),
stagingPrNumber: int('staging_pr_number'),
stagingPrUrl: varchar('staging_pr_url', { length: 512 }),
stagingDeployedAt: timestamp('staging_deployed_at'),
// Production
productionDeployedAt: timestamp('production_deployed_at'),
productionRollbackAvailable: boolean('production_rollback_available').default(true),
// Status
status: mysqlEnum('status', ['pending', 'staging', 'production', 'rolled_back']).default('pending'),
// Metadata
createdBy: varchar('created_by', { length: 36 }),
notes: text('notes'),
// Timestamps
createdAt: timestamp('created_at').defaultNow(),
updatedAt: timestamp('updated_at').defaultNow().onUpdateNow(),
}, (table) => ({
projectStatusIdx: index('idx_project_status').on(table.projectId, table.status),
statusIdx: index('idx_status').on(table.status),
}))
// ============================================
// DEPLOYMENTS TABLE
// ============================================
export const deployments = mysqlTable('deployments', {
id: varchar('id', { length: 36 }).primaryKey(),
projectId: varchar('project_id', { length: 36 }).notNull().references(() => projects.id, { onDelete: 'cascade' }),
taskGroupId: varchar('task_group_id', { length: 36 }).references(() => taskGroups.id, { onDelete: 'set null' }),
// Deployment info
environment: mysqlEnum('environment', ['preview', 'staging', 'production']).notNull(),
deploymentType: mysqlEnum('deployment_type', ['manual', 'automatic', 'rollback']).default('manual'),
// Git
branch: varchar('branch', { length: 255 }),
commitHash: varchar('commit_hash', { length: 40 }),
// K8s
k8sNamespace: varchar('k8s_namespace', { length: 63 }),
k8sDeploymentName: varchar('k8s_deployment_name', { length: 253 }),
imageTag: varchar('image_tag', { length: 255 }),
// Status
status: mysqlEnum('status', ['pending', 'in_progress', 'completed', 'failed', 'rolled_back']).default('pending'),
// Results
url: varchar('url', { length: 512 }),
errorMessage: text('error_message'),
logs: text('logs'),
// Timing
startedAt: timestamp('started_at'),
completedAt: timestamp('completed_at'),
durationSeconds: int('duration_seconds'),
// Metadata
triggeredBy: varchar('triggered_by', { length: 36 }),
// Timestamps
createdAt: timestamp('created_at').defaultNow(),
}, (table) => ({
projectEnvIdx: index('idx_project_env').on(table.projectId, table.environment),
statusIdx: index('idx_status').on(table.status),
createdAtIdx: index('idx_created_at').on(table.createdAt),
}))
// ============================================
// AGENT LOGS TABLE
// ============================================
export const agentLogs = mysqlTable('agent_logs', {
id: bigint('id', { mode: 'number' }).autoincrement().primaryKey(),
agentId: varchar('agent_id', { length: 36 }).notNull().references(() => agents.id, { onDelete: 'cascade' }),
taskId: varchar('task_id', { length: 36 }).references(() => tasks.id, { onDelete: 'set null' }),
// Log entry
level: mysqlEnum('level', ['debug', 'info', 'warn', 'error']).default('info'),
message: text('message').notNull(),
metadata: json('metadata').$type<Record<string, any>>(),
// Timestamp
createdAt: timestamp('created_at').defaultNow(),
}, (table) => ({
agentCreatedIdx: index('idx_agent_created').on(table.agentId, table.createdAt),
taskCreatedIdx: index('idx_task_created').on(table.taskId, table.createdAt),
levelIdx: index('idx_level').on(table.level),
}))
// ============================================
// RELATIONS
// ============================================
export const projectsRelations = relations(projects, ({ many }) => ({
tasks: many(tasks),
taskGroups: many(taskGroups),
deployments: many(deployments),
}))
export const tasksRelations = relations(tasks, ({ one, many }) => ({
project: one(projects, {
fields: [tasks.projectId],
references: [projects.id],
}),
assignedAgent: one(agents, {
fields: [tasks.assignedAgentId],
references: [agents.id],
}),
questions: many(taskQuestions),
}))
export const agentsRelations = relations(agents, ({ one, many }) => ({
currentTask: one(tasks, {
fields: [agents.currentTaskId],
references: [tasks.id],
}),
logs: many(agentLogs),
}))
export const taskQuestionsRelations = relations(taskQuestions, ({ one }) => ({
task: one(tasks, {
fields: [taskQuestions.taskId],
references: [tasks.id],
}),
}))
export const taskGroupsRelations = relations(taskGroups, ({ one, many }) => ({
project: one(projects, {
fields: [taskGroups.projectId],
references: [projects.id],
}),
deployments: many(deployments),
}))
export const deploymentsRelations = relations(deployments, ({ one }) => ({
project: one(projects, {
fields: [deployments.projectId],
references: [projects.id],
}),
taskGroup: one(taskGroups, {
fields: [deployments.taskGroupId],
references: [taskGroups.id],
}),
}))
export const agentLogsRelations = relations(agentLogs, ({ one }) => ({
agent: one(agents, {
fields: [agentLogs.agentId],
references: [agents.id],
}),
task: one(tasks, {
fields: [agentLogs.taskId],
references: [tasks.id],
}),
}))
```
## Drizzle Configuration
```typescript
// drizzle.config.ts
import { defineConfig } from 'drizzle-kit'

export default defineConfig({
  schema: './src/db/schema.ts',
  out: './drizzle/migrations',
  dialect: 'mysql',
  dbCredentials: {
    host: process.env.DB_HOST || 'localhost',
    port: parseInt(process.env.DB_PORT || '3306'),
    user: process.env.DB_USER || 'root',
    password: process.env.DB_PASSWORD || '',
    database: process.env.DB_NAME || 'aiworker',
  },
})
```
## Database Client
```typescript
// db/client.ts
import { drizzle } from 'drizzle-orm/mysql2'
import mysql from 'mysql2/promise'
import * as schema from './schema'
const pool = mysql.createPool({
host: process.env.DB_HOST,
port: parseInt(process.env.DB_PORT || '3306'),
user: process.env.DB_USER,
password: process.env.DB_PASSWORD,
database: process.env.DB_NAME,
waitForConnections: true,
connectionLimit: 10,
queueLimit: 0,
})
export const db = drizzle(pool, { schema, mode: 'default' })
```
## Query Examples
```typescript
// Query operators must be imported from drizzle-orm
import { eq, and, gt, desc, asc } from 'drizzle-orm'
import { db } from './client'
import { tasks, taskQuestions, agents } from './schema'

// Get all tasks for a project
const projectTasks = await db.query.tasks.findMany({
where: eq(tasks.projectId, projectId),
with: {
assignedAgent: true,
questions: {
where: eq(taskQuestions.status, 'pending')
}
},
orderBy: [desc(tasks.createdAt)]
})
// Get next available task
const nextTask = await db.query.tasks.findFirst({
where: eq(tasks.state, 'backlog'),
orderBy: [desc(tasks.priority), asc(tasks.createdAt)]
})
// Get idle agents
const idleAgents = await db.query.agents.findMany({
where: and(
eq(agents.status, 'idle'),
gt(agents.lastHeartbeat, new Date(Date.now() - 60000))
)
})
// Insert new task
const newTask = await db.insert(tasks).values({
id: crypto.randomUUID(),
projectId: projectId,
title: 'New task',
description: 'Task description',
state: 'backlog',
priority: 'medium',
})
```
## Migrations
```bash
# Generate migration
bun run drizzle-kit generate
# Push schema changes directly (dev only)
bun run drizzle-kit push
# Run migrations
bun run scripts/migrate.ts
```
```typescript
// scripts/migrate.ts
import { migrate } from 'drizzle-orm/mysql2/migrator'
import { db } from '../src/db/client'
async function runMigrations() {
await migrate(db, { migrationsFolder: './drizzle/migrations' })
console.log('✓ Migrations completed')
process.exit(0)
}
runMigrations().catch(console.error)
```

# Backend Structure
## Directory Tree
```
backend/
├── src/
│ ├── index.ts # Entry point
│ ├── config/
│ │ ├── database.ts # MySQL connection
│ │ ├── redis.ts # Redis connection
│ │ └── env.ts # Environment variables
│ │
│ ├── api/
│ │ ├── app.ts # Express app setup
│ │ ├── routes/
│ │ │ ├── index.ts
│ │ │ ├── projects.ts # /api/projects
│ │ │ ├── tasks.ts # /api/tasks
│ │ │ ├── agents.ts # /api/agents
│ │ │ ├── deployments.ts# /api/deployments
│ │ │ └── health.ts # /api/health
│ │ │
│ │ ├── middleware/
│ │ │ ├── auth.ts # JWT validation
│ │ │ ├── error.ts # Error handler
│ │ │ ├── logger.ts # Request logging
│ │ │ └── validate.ts # Schema validation
│ │ │
│ │ └── websocket/
│ │ ├── server.ts # Socket.io setup
│ │ └── handlers.ts # WS event handlers
│ │
│ ├── db/
│ │ ├── schema.ts # Drizzle schema
│ │ ├── migrations/ # SQL migrations
│ │ └── client.ts # DB client instance
│ │
│ ├── services/
│ │ ├── mcp/
│ │ │ ├── server.ts # MCP server for agents
│ │ │ ├── tools.ts # MCP tool definitions
│ │ │ └── handlers.ts # Tool implementations
│ │ │
│ │ ├── gitea/
│ │ │ ├── client.ts # Gitea API client
│ │ │ ├── repos.ts # Repo operations
│ │ │ ├── pulls.ts # PR operations
│ │ │ └── webhooks.ts # Webhook handling
│ │ │
│ │ ├── kubernetes/
│ │ │ ├── client.ts # K8s API client
│ │ │ ├── namespaces.ts # Namespace management
│ │ │ ├── deployments.ts# Deployment management
│ │ │ ├── pods.ts # Pod operations
│ │ │ └── ingress.ts # Ingress management
│ │ │
│ │ ├── queue/
│ │ │ ├── task-queue.ts # Task queue
│ │ │ ├── deploy-queue.ts# Deploy queue
│ │ │ └── workers.ts # Queue workers
│ │ │
│ │ └── cache/
│ │ ├── redis.ts # Redis operations
│ │ └── strategies.ts # Caching strategies
│ │
│ ├── models/
│ │ ├── Project.ts # Project model
│ │ ├── Task.ts # Task model
│ │ ├── Agent.ts # Agent model
│ │ ├── TaskGroup.ts # TaskGroup model
│ │ └── Deployment.ts # Deployment model
│ │
│ ├── types/
│ │ ├── api.ts # API types
│ │ ├── mcp.ts # MCP types
│ │ ├── k8s.ts # K8s types
│ │ └── common.ts # Common types
│ │
│ └── utils/
│ ├── logger.ts # Winston logger
│ ├── errors.ts # Custom errors
│ ├── validators.ts # Validation helpers
│ └── helpers.ts # General helpers
├── drizzle/ # Drizzle config
│ ├── drizzle.config.ts
│ └── migrations/
├── tests/
│ ├── unit/
│ ├── integration/
│ └── e2e/
├── scripts/
│ ├── seed.ts # Seed database
│ ├── migrate.ts # Run migrations
│ └── generate-types.ts # Generate types
├── .env.example
├── .eslintrc.json
├── .prettierrc
├── tsconfig.json
├── package.json
└── README.md
```
## Entry Point (index.ts)
```typescript
import { startServer } from './api/app'
import { connectDatabase } from './config/database'
import { connectRedis } from './config/redis'
import { startMCPServer } from './services/mcp/server'
import { startQueueWorkers } from './services/queue/workers'
import { logger } from './utils/logger'
async function bootstrap() {
try {
// Connect to MySQL
await connectDatabase()
logger.info('✓ MySQL connected')
// Connect to Redis
await connectRedis()
logger.info('✓ Redis connected')
// Start MCP Server for agents
await startMCPServer()
logger.info('✓ MCP Server started')
// Start BullMQ workers
await startQueueWorkers()
logger.info('✓ Queue workers started')
// Start HTTP + WebSocket server
await startServer()
logger.info('✓ API Server started on port 3000')
} catch (error) {
logger.error('Failed to start server:', error)
process.exit(1)
}
}
bootstrap()
```
## Express App Setup (api/app.ts)
```typescript
import express from 'express'
import cors from 'cors'
import { createServer } from 'http'
import { Server as SocketIOServer } from 'socket.io'
import routes from './routes'
import { errorHandler } from './middleware/error'
import { requestLogger } from './middleware/logger'
import { setupWebSocket } from './websocket/server'
export async function startServer() {
const app = express()
const httpServer = createServer(app)
const io = new SocketIOServer(httpServer, {
cors: { origin: process.env.FRONTEND_URL }
})
// Middleware
app.use(cors())
app.use(express.json())
app.use(requestLogger)
// Routes
app.use('/api', routes)
// Error handling
app.use(errorHandler)
// WebSocket
setupWebSocket(io)
// Start
const port = process.env.PORT || 3000
httpServer.listen(port)
return { app, httpServer, io }
}
```
## Database Configuration
```typescript
// config/database.ts
import { drizzle } from 'drizzle-orm/mysql2'
import mysql from 'mysql2/promise'
import * as schema from '../db/schema'
let connection: mysql.Connection
let db: ReturnType<typeof drizzle>
export async function connectDatabase() {
connection = await mysql.createConnection({
host: process.env.DB_HOST,
port: parseInt(process.env.DB_PORT || '3306'),
user: process.env.DB_USER,
password: process.env.DB_PASSWORD,
database: process.env.DB_NAME,
})
db = drizzle(connection, { schema, mode: 'default' })
return db
}
export function getDatabase() {
if (!db) {
throw new Error('Database not initialized')
}
return db
}
```
## Redis Configuration
```typescript
// config/redis.ts
import Redis from 'ioredis'
let redis: Redis
export async function connectRedis() {
redis = new Redis({
host: process.env.REDIS_HOST || 'localhost',
port: parseInt(process.env.REDIS_PORT || '6379'),
password: process.env.REDIS_PASSWORD,
retryStrategy: (times) => {
const delay = Math.min(times * 50, 2000)
return delay
}
})
await redis.ping()
return redis
}
export function getRedis() {
if (!redis) {
throw new Error('Redis not initialized')
}
return redis
}
```
## Environment Variables
```bash
# .env.example
# Server
NODE_ENV=development
PORT=3000
FRONTEND_URL=http://localhost:5173
# Database
DB_HOST=localhost
DB_PORT=3306
DB_USER=root
DB_PASSWORD=password
DB_NAME=aiworker
# Redis
REDIS_HOST=localhost
REDIS_PORT=6379
REDIS_PASSWORD=
# Gitea
GITEA_URL=http://localhost:3001
GITEA_TOKEN=your-gitea-token
GITEA_OWNER=aiworker
# Kubernetes
K8S_IN_CLUSTER=false
K8S_CONFIG_PATH=~/.kube/config
K8S_DEFAULT_NAMESPACE=aiworker
# MCP Server
MCP_SERVER_PORT=3100
MCP_AUTH_TOKEN=your-mcp-token
# JWT
JWT_SECRET=your-secret-key
JWT_EXPIRES_IN=7d
# Claude API
ANTHROPIC_API_KEY=your-api-key
```
## Package.json Scripts
```json
{
"name": "aiworker-backend",
"version": "1.0.0",
"scripts": {
"dev": "bun --watch src/index.ts",
"build": "bun build src/index.ts --outdir dist --target node",
"start": "bun dist/index.js",
"db:generate": "drizzle-kit generate:mysql",
"db:push": "drizzle-kit push:mysql",
"db:migrate": "bun run scripts/migrate.ts",
"db:seed": "bun run scripts/seed.ts",
"test": "bun test",
"test:watch": "bun test --watch",
"lint": "eslint src/**/*.ts",
"format": "prettier --write src/**/*.ts"
},
"dependencies": {
"express": "^4.19.0",
"mysql2": "^3.11.0",
"drizzle-orm": "^0.36.0",
"ioredis": "^5.4.1",
"bullmq": "^5.23.0",
"socket.io": "^4.8.1",
"@modelcontextprotocol/sdk": "^1.0.0",
"@kubernetes/client-node": "^0.22.0",
"axios": "^1.7.9",
"zod": "^3.24.1",
"winston": "^3.17.0",
"jsonwebtoken": "^9.0.2",
"cors": "^2.8.5",
"dotenv": "^16.4.7"
},
"devDependencies": {
"@types/express": "^5.0.0",
"@types/node": "^22.10.2",
"drizzle-kit": "^0.31.0",
"typescript": "^5.7.2",
"prettier": "^3.4.2",
"eslint": "^9.18.0"
}
}
```
## Route Structure
```typescript
// api/routes/index.ts
import { Router } from 'express'
import projectRoutes from './projects'
import taskRoutes from './tasks'
import agentRoutes from './agents'
import deploymentRoutes from './deployments'
import healthRoutes from './health'
const router = Router()
router.use('/projects', projectRoutes)
router.use('/tasks', taskRoutes)
router.use('/agents', agentRoutes)
router.use('/deployments', deploymentRoutes)
router.use('/health', healthRoutes)
export default router
```
## Validation Middleware
```typescript
// middleware/validate.ts
import { Request, Response, NextFunction } from 'express'
import { ZodSchema } from 'zod'
export function validate(schema: ZodSchema) {
return (req: Request, res: Response, next: NextFunction) => {
try {
schema.parse({
body: req.body,
query: req.query,
params: req.params,
})
next()
} catch (error) {
res.status(400).json({
error: 'Validation error',
details: error
})
}
}
}
```
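The middleware's contract is: `schema.parse()` succeeds → the request proceeds; it throws → a 400 response. The self-contained sketch below mirrors that flow without Express. The inline `createTaskSchema` object stands in for a real zod schema (it only needs zod-style `.parse()` semantics), so the names here are illustrative:

```typescript
// A zod-like schema: parse() returns the value or throws.
type Schema<T> = { parse(input: unknown): T }

// Stand-in for z.object({ body: z.object({ projectId: z.string(), title: z.string() }) })
const createTaskSchema: Schema<{ body: { projectId: string; title: string } }> = {
  parse(input: any) {
    const body = input?.body
    if (typeof body?.projectId !== 'string' || typeof body?.title !== 'string') {
      throw new Error('Validation error')
    }
    return { body }
  },
}

// Mirrors middleware/validate.ts: success -> next() (here: 200), failure -> 400.
function runValidation(schema: Schema<unknown>, req: { body: unknown }) {
  try {
    schema.parse({ body: req.body, query: {}, params: {} })
    return { status: 200 as const }
  } catch {
    return { status: 400 as const, error: 'Validation error' }
  }
}
```

In the real app this corresponds to `router.post('/', validate(createTaskSchema), handler)`.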
## Logger Setup
```typescript
// utils/logger.ts
import winston from 'winston'
export const logger = winston.createLogger({
level: process.env.LOG_LEVEL || 'info',
format: winston.format.combine(
winston.format.timestamp(),
winston.format.errors({ stack: true }),
winston.format.json()
),
transports: [
new winston.transports.Console({
format: winston.format.combine(
winston.format.colorize(),
winston.format.simple()
)
}),
new winston.transports.File({ filename: 'error.log', level: 'error' }),
new winston.transports.File({ filename: 'combined.log' })
]
})
```
## Error Handling
```typescript
// middleware/error.ts
import { Request, Response, NextFunction } from 'express'
import { logger } from '../utils/logger'
export class AppError extends Error {
statusCode: number
isOperational: boolean
constructor(message: string, statusCode: number) {
super(message)
this.statusCode = statusCode
this.isOperational = true
Error.captureStackTrace(this, this.constructor)
}
}
export function errorHandler(
err: Error | AppError,
req: Request,
res: Response,
next: NextFunction
) {
logger.error('Error:', err)
if (err instanceof AppError) {
return res.status(err.statusCode).json({
error: err.message
})
}
res.status(500).json({
error: 'Internal server error'
})
}
```
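Route code signals expected failures by throwing `AppError` with the right status code. The self-contained sketch below re-states a minimal `AppError` and simulates what `errorHandler` does with it (the `getTask` example and in-memory store are illustrative):

```typescript
// Minimal re-statement of AppError so the sketch runs standalone.
class AppError extends Error {
  constructor(message: string, public statusCode: number) {
    super(message)
  }
}

// Simulates errorHandler: known AppErrors keep their status,
// anything else becomes a generic 500.
function handle(err: Error): { status: number; body: { error: string } } {
  if (err instanceof AppError) {
    return { status: err.statusCode, body: { error: err.message } }
  }
  return { status: 500, body: { error: 'Internal server error' } }
}

// Example route logic: missing resource -> 404 via AppError.
function getTask(id: string, store: Map<string, unknown>) {
  const task = store.get(id)
  if (!task) throw new AppError('Resource not found', 404)
  return task
}
```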
## Useful Commands
```bash
# Development
bun run dev
# Generate migrations
bun run db:generate
# Apply migrations
bun run db:migrate
# Seed the database
bun run db:seed
# Tests
bun test
# Production build
bun run build
# Run in production
bun run start
```

# Gitea Integration
## Gitea Client
```typescript
// services/gitea/client.ts
import axios, { AxiosInstance } from 'axios'
import { logger } from '../../utils/logger'
export interface GiteaConfig {
url: string
token: string
owner: string
}
export class GiteaClient {
private client: AxiosInstance
private owner: string
constructor(config?: GiteaConfig) {
const url = config?.url || process.env.GITEA_URL!
const token = config?.token || process.env.GITEA_TOKEN!
this.owner = config?.owner || process.env.GITEA_OWNER!
this.client = axios.create({
baseURL: `${url}/api/v1`,
headers: {
'Authorization': `token ${token}`,
'Content-Type': 'application/json'
},
timeout: 30000
})
// Log requests
this.client.interceptors.request.use((config) => {
logger.debug(`Gitea API: ${config.method?.toUpperCase()} ${config.url}`)
return config
})
// Handle errors
this.client.interceptors.response.use(
(response) => response,
(error) => {
logger.error('Gitea API Error:', {
url: error.config?.url,
status: error.response?.status,
data: error.response?.data
})
throw error
}
)
}
// ============================================
// REPOSITORIES
// ============================================
async createRepo(name: string, options: {
description?: string
private?: boolean
autoInit?: boolean
defaultBranch?: string
} = {}) {
const response = await this.client.post('/user/repos', {
name,
description: options.description || '',
private: options.private !== false,
auto_init: options.autoInit !== false,
default_branch: options.defaultBranch || 'main',
trust_model: 'default'
})
logger.info(`Gitea: Created repo ${name}`)
return response.data
}
async getRepo(owner: string, repo: string) {
const response = await this.client.get(`/repos/${owner}/${repo}`)
return response.data
}
async deleteRepo(owner: string, repo: string) {
await this.client.delete(`/repos/${owner}/${repo}`)
logger.info(`Gitea: Deleted repo ${owner}/${repo}`)
}
async listRepos(owner?: string) {
const targetOwner = owner || this.owner
const response = await this.client.get(`/users/${targetOwner}/repos`)
return response.data
}
// ============================================
// BRANCHES
// ============================================
async createBranch(owner: string, repo: string, branchName: string, fromBranch: string = 'main') {
// Get reference commit
const refResponse = await this.client.get(
`/repos/${owner}/${repo}/git/refs/heads/${fromBranch}`
)
const sha = refResponse.data.object.sha
// Create new branch
const response = await this.client.post(
`/repos/${owner}/${repo}/git/refs`,
{
ref: `refs/heads/${branchName}`,
sha
}
)
logger.info(`Gitea: Created branch ${branchName} from ${fromBranch}`)
return response.data
}
async getBranch(owner: string, repo: string, branch: string) {
const response = await this.client.get(
`/repos/${owner}/${repo}/branches/${branch}`
)
return response.data
}
async listBranches(owner: string, repo: string) {
const response = await this.client.get(
`/repos/${owner}/${repo}/branches`
)
return response.data
}
async deleteBranch(owner: string, repo: string, branch: string) {
await this.client.delete(
`/repos/${owner}/${repo}/branches/${branch}`
)
logger.info(`Gitea: Deleted branch ${branch}`)
}
// ============================================
// PULL REQUESTS
// ============================================
async createPullRequest(owner: string, repo: string, data: {
title: string
body: string
head: string
base: string
}) {
const response = await this.client.post(
`/repos/${owner}/${repo}/pulls`,
{
title: data.title,
body: data.body,
head: data.head,
base: data.base
}
)
logger.info(`Gitea: Created PR #${response.data.number}`)
return response.data
}
async getPullRequest(owner: string, repo: string, index: number) {
const response = await this.client.get(
`/repos/${owner}/${repo}/pulls/${index}`
)
return response.data
}
async listPullRequests(owner: string, repo: string, state: 'open' | 'closed' | 'all' = 'open') {
const response = await this.client.get(
`/repos/${owner}/${repo}/pulls`,
{ params: { state } }
)
return response.data
}
async mergePullRequest(owner: string, repo: string, index: number, method: 'merge' | 'rebase' | 'squash' = 'merge') {
const response = await this.client.post(
`/repos/${owner}/${repo}/pulls/${index}/merge`,
{
Do: method,
MergeMessageField: '',
MergeTitleField: ''
}
)
logger.info(`Gitea: Merged PR #${index}`)
return response.data
}
async closePullRequest(owner: string, repo: string, index: number) {
const response = await this.client.patch(
`/repos/${owner}/${repo}/pulls/${index}`,
{ state: 'closed' }
)
logger.info(`Gitea: Closed PR #${index}`)
return response.data
}
// ============================================
// COMMITS
// ============================================
async getCommit(owner: string, repo: string, sha: string) {
const response = await this.client.get(
`/repos/${owner}/${repo}/git/commits/${sha}`
)
return response.data
}
async listCommits(owner: string, repo: string, options: {
sha?: string
path?: string
page?: number
limit?: number
} = {}) {
const response = await this.client.get(
`/repos/${owner}/${repo}/commits`,
{ params: options }
)
return response.data
}
// ============================================
// WEBHOOKS
// ============================================
async createWebhook(owner: string, repo: string, config: {
url: string
contentType?: 'json' | 'form'
secret?: string
events?: string[]
}) {
const response = await this.client.post(
`/repos/${owner}/${repo}/hooks`,
{
type: 'gitea',
config: {
url: config.url,
content_type: config.contentType || 'json',
secret: config.secret || ''
},
events: config.events || ['push', 'pull_request'],
active: true
}
)
logger.info(`Gitea: Created webhook for ${owner}/${repo}`)
return response.data
}
async listWebhooks(owner: string, repo: string) {
const response = await this.client.get(
`/repos/${owner}/${repo}/hooks`
)
return response.data
}
async deleteWebhook(owner: string, repo: string, hookId: number) {
await this.client.delete(
`/repos/${owner}/${repo}/hooks/${hookId}`
)
logger.info(`Gitea: Deleted webhook ${hookId}`)
}
// ============================================
// FILES
// ============================================
async getFileContents(owner: string, repo: string, filepath: string, ref: string = 'main') {
const response = await this.client.get(
`/repos/${owner}/${repo}/contents/${filepath}`,
{ params: { ref } }
)
return response.data
}
async createOrUpdateFile(owner: string, repo: string, filepath: string, data: {
content: string // base64 encoded
message: string
branch?: string
sha?: string // for updates
}) {
const response = await this.client.post(
`/repos/${owner}/${repo}/contents/${filepath}`,
{
content: data.content,
message: data.message,
branch: data.branch || 'main',
sha: data.sha
}
)
logger.info(`Gitea: Updated file ${filepath}`)
return response.data
}
// ============================================
// USERS
// ============================================
async getCurrentUser() {
const response = await this.client.get('/user')
return response.data
}
async getUser(username: string) {
const response = await this.client.get(`/users/${username}`)
return response.data
}
// ============================================
// ORGANIZATIONS (if needed)
// ============================================
async createOrg(name: string, options: {
fullName?: string
description?: string
} = {}) {
const response = await this.client.post('/orgs', {
username: name,
full_name: options.fullName || name,
description: options.description || ''
})
logger.info(`Gitea: Created org ${name}`)
return response.data
}
}
// Export singleton instance
export const giteaClient = new GiteaClient()
```
## Webhook Handler
```typescript
// services/gitea/webhooks.ts
import { Request, Response } from 'express'
import crypto from 'crypto'
import { logger } from '../../utils/logger'
import { db } from '../../db/client'
import { tasks } from '../../db/schema'
import { eq } from 'drizzle-orm'
import { emitWebSocketEvent } from '../../api/websocket/server'
export async function handleGiteaWebhook(req: Request, res: Response) {
const signature = req.headers['x-gitea-signature'] as string
const event = req.headers['x-gitea-event'] as string
const payload = req.body
// Verify signature
const secret = process.env.GITEA_WEBHOOK_SECRET || ''
if (secret && signature) {
const hmac = crypto.createHmac('sha256', secret)
hmac.update(JSON.stringify(payload))
const calculatedSignature = hmac.digest('hex')
if (signature !== calculatedSignature) {
logger.warn('Gitea webhook: Invalid signature')
return res.status(401).json({ error: 'Invalid signature' })
}
}
logger.info(`Gitea webhook: ${event}`, {
repo: payload.repository?.full_name,
ref: payload.ref
})
try {
switch (event) {
case 'push':
await handlePushEvent(payload)
break
case 'pull_request':
await handlePullRequestEvent(payload)
break
default:
logger.debug(`Unhandled webhook event: ${event}`)
}
res.status(200).json({ success: true })
} catch (error) {
logger.error('Webhook handler error:', error)
res.status(500).json({ error: 'Internal error' })
}
}
async function handlePushEvent(payload: any) {
const branch = payload.ref.replace('refs/heads/', '')
const commits = payload.commits || []
logger.info(`Push to ${branch}: ${commits.length} commits`)
// Find task by branch name
const task = await db.query.tasks.findFirst({
where: eq(tasks.branchName, branch)
})
if (task) {
emitWebSocketEvent('task:push', {
taskId: task.id,
branch,
commitsCount: commits.length
})
}
}
async function handlePullRequestEvent(payload: any) {
const action = payload.action // opened, closed, reopened, edited, synchronized
const prNumber = payload.pull_request.number
const state = payload.pull_request.state
logger.info(`PR #${prNumber}: ${action}`)
// Find task by PR number
const task = await db.query.tasks.findFirst({
where: eq(tasks.prNumber, prNumber)
})
if (task) {
if (action === 'closed' && payload.pull_request.merged) {
// PR was merged
await db.update(tasks)
.set({ state: 'staging' })
.where(eq(tasks.id, task.id))
emitWebSocketEvent('task:merged', {
taskId: task.id,
prNumber
})
}
emitWebSocketEvent('task:pr_updated', {
taskId: task.id,
prNumber,
action,
state
})
}
}
```
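The handler above compares hex digests with `!==` and re-serializes `req.body`; Gitea signs the raw request body, so in production the raw bytes should be captured and compared in constant time. A standalone sketch of that verification (function names are illustrative, not part of the codebase):

```typescript
import crypto from 'crypto'

// Hex HMAC-SHA256 digest of the raw body, as Gitea sends in X-Gitea-Signature.
export function signPayload(secret: string, rawBody: string): string {
  return crypto.createHmac('sha256', secret).update(rawBody).digest('hex')
}

// Constant-time comparison of the received signature against the expected one.
export function verifySignature(secret: string, rawBody: string, signature: string): boolean {
  const expected = Buffer.from(signPayload(secret, rawBody), 'hex')
  const received = Buffer.from(signature, 'hex')
  return expected.length === received.length && crypto.timingSafeEqual(expected, received)
}
```

With Express, the raw bytes can be kept via the `verify` option of `express.json`, e.g. `express.json({ verify: (req, _res, buf) => { (req as any).rawBody = buf } })`.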
## Webhook Router
```typescript
// api/routes/webhooks.ts
import { Router } from 'express'
import { handleGiteaWebhook } from '../../services/gitea/webhooks'
const router = Router()
router.post('/gitea', handleGiteaWebhook)
export default router
```

# MCP Server for Agents
The MCP (Model Context Protocol) server is the interface that lets Claude Code agents communicate with the backend and execute operations.
## MCP Architecture
```
┌─────────────────┐ ┌─────────────────┐
│ Claude Code │ MCP Protocol │ MCP Server │
│ (Agent Pod) │◄──────────────────►│ (Backend) │
└─────────────────┘ └─────────────────┘
┌─────────────────────┼─────────────────────┐
│ │ │
┌────▼────┐ ┌────▼────┐ ┌────▼────┐
│ MySQL │ │ Gitea │ │ K8s │
└─────────┘ └─────────┘ └─────────┘
```
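Every tool handler in this document returns the same MCP content envelope, `{ content: [{ type: 'text', text }] }`. A tiny helper pair for building and reading that envelope (illustrative only, not part of the `@modelcontextprotocol/sdk`):

```typescript
// Shape returned by every tool handler in this document.
interface ToolResult {
  content: { type: 'text'; text: string }[]
  isError?: boolean
}

// Wrap any JSON-serializable value in the MCP text envelope.
export function toolResult(payload: unknown, isError = false): ToolResult {
  return {
    content: [{ type: 'text', text: JSON.stringify(payload) }],
    ...(isError ? { isError } : {}),
  }
}

// Parse the JSON payload back out of a ToolResult.
export function parseToolResult<T>(result: ToolResult): T {
  return JSON.parse(result.content[0].text) as T
}
```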
## MCP Server Setup
```typescript
// services/mcp/server.ts
import { Server } from '@modelcontextprotocol/sdk/server/index.js'
import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js'
import { CallToolRequestSchema, ListToolsRequestSchema } from '@modelcontextprotocol/sdk/types.js'
import { tools } from './tools'
import { handleToolCall } from './handlers'
import { logger } from '../../utils/logger'
export class AgentMCPServer {
private server: Server
constructor() {
this.server = new Server(
{
name: 'aiworker-orchestrator',
version: '1.0.0',
},
{
capabilities: {
tools: {},
},
}
)
this.setupHandlers()
}
private setupHandlers() {
// List available tools
this.server.setRequestHandler(ListToolsRequestSchema, async () => {
return {
tools: tools.map(tool => ({
name: tool.name,
description: tool.description,
inputSchema: tool.inputSchema,
}))
}
})
// Handle tool calls
this.server.setRequestHandler(CallToolRequestSchema, async (request) => {
const { name, arguments: args } = request.params
logger.info(`MCP: Tool called: ${name}`, { args })
try {
const result = await handleToolCall(name, args)
return result
} catch (error) {
logger.error(`MCP: Tool error: ${name}`, error)
return {
content: [{
type: 'text',
text: `Error: ${error.message}`
}],
isError: true
}
}
})
}
async start() {
const transport = new StdioServerTransport()
await this.server.connect(transport)
logger.info('MCP Server started')
}
}
// Start MCP server
let mcpServer: AgentMCPServer
export async function startMCPServer() {
mcpServer = new AgentMCPServer()
await mcpServer.start()
return mcpServer
}
export function getMCPServer() {
return mcpServer
}
```
## Tool Definitions
```typescript
// services/mcp/tools.ts
export const tools = [
{
name: 'get_next_task',
description: 'Obtiene la siguiente tarea disponible de la cola',
inputSchema: {
type: 'object',
properties: {
agentId: {
type: 'string',
description: 'ID del agente solicitante'
},
capabilities: {
type: 'array',
items: { type: 'string' },
description: 'Capacidades del agente (ej: ["javascript", "react"])'
}
},
required: ['agentId']
}
},
{
name: 'update_task_status',
description: 'Actualiza el estado de una tarea',
inputSchema: {
type: 'object',
properties: {
taskId: {
type: 'string',
description: 'ID de la tarea'
},
status: {
type: 'string',
enum: ['in_progress', 'needs_input', 'ready_to_test', 'completed'],
description: 'Nuevo estado'
},
metadata: {
type: 'object',
description: 'Metadata adicional (duración, errores, etc.)'
}
},
required: ['taskId', 'status']
}
},
{
name: 'ask_user_question',
description: 'Solicita información al usuario',
inputSchema: {
type: 'object',
properties: {
taskId: {
type: 'string',
description: 'ID de la tarea'
},
question: {
type: 'string',
description: 'Pregunta para el usuario'
},
context: {
type: 'string',
description: 'Contexto adicional'
}
},
required: ['taskId', 'question']
}
},
{
name: 'check_question_response',
description: 'Verifica si el usuario ha respondido una pregunta',
inputSchema: {
type: 'object',
properties: {
taskId: {
type: 'string',
description: 'ID de la tarea'
}
},
required: ['taskId']
}
},
{
name: 'create_branch',
description: 'Crea una nueva rama en Gitea',
inputSchema: {
type: 'object',
properties: {
taskId: {
type: 'string',
description: 'ID de la tarea'
},
branchName: {
type: 'string',
description: 'Nombre de la rama (opcional, se genera automático)'
}
},
required: ['taskId']
}
},
{
name: 'create_pull_request',
description: 'Crea un Pull Request en Gitea',
inputSchema: {
type: 'object',
properties: {
taskId: {
type: 'string',
description: 'ID de la tarea'
},
title: {
type: 'string',
description: 'Título del PR'
},
description: {
type: 'string',
description: 'Descripción del PR'
}
},
required: ['taskId', 'title', 'description']
}
},
{
name: 'trigger_preview_deploy',
description: 'Despliega un preview environment en K8s',
inputSchema: {
type: 'object',
properties: {
taskId: {
type: 'string',
description: 'ID de la tarea'
}
},
required: ['taskId']
}
},
{
name: 'get_task_details',
description: 'Obtiene detalles completos de una tarea',
inputSchema: {
type: 'object',
properties: {
taskId: {
type: 'string',
description: 'ID de la tarea'
}
},
required: ['taskId']
}
},
{
name: 'log_activity',
description: 'Registra actividad del agente',
inputSchema: {
type: 'object',
properties: {
agentId: {
type: 'string',
description: 'ID del agente'
},
level: {
type: 'string',
enum: ['debug', 'info', 'warn', 'error'],
description: 'Nivel de log'
},
message: {
type: 'string',
description: 'Mensaje'
},
metadata: {
type: 'object',
description: 'Metadata adicional'
}
},
required: ['agentId', 'message']
}
},
{
name: 'heartbeat',
description: 'Envía heartbeat para indicar que el agente está activo',
inputSchema: {
type: 'object',
properties: {
agentId: {
type: 'string',
description: 'ID del agente'
},
status: {
type: 'string',
enum: ['idle', 'busy', 'error'],
description: 'Estado actual'
}
},
required: ['agentId', 'status']
}
}
]
```
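The `inputSchema` objects above are plain JSON Schema. Before dispatching, the server could reject calls that omit required arguments; a minimal required-fields check (a sketch, not full JSON Schema validation):

```typescript
interface InputSchema {
  type: 'object'
  properties: Record<string, unknown>
  required?: string[]
}

// Returns the names of required properties missing from the call arguments.
export function missingRequired(schema: InputSchema, args: Record<string, unknown>): string[] {
  return (schema.required ?? []).filter(
    (key) => args[key] === undefined || args[key] === null
  )
}
```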
## Handler Implementation
```typescript
// services/mcp/handlers.ts
import { db } from '../../db/client'
import { tasks, agents, taskQuestions, agentLogs } from '../../db/schema'
import { eq, and, desc, asc, sql } from 'drizzle-orm'
import { GiteaClient } from '../gitea/client'
import { K8sClient } from '../kubernetes/client'
import { getRedis } from '../../config/redis'
import { emitWebSocketEvent } from '../../api/websocket/server'
import crypto from 'crypto'
const giteaClient = new GiteaClient()
const k8sClient = new K8sClient()
const redis = getRedis()
export async function handleToolCall(name: string, args: any) {
switch (name) {
case 'get_next_task':
return await getNextTask(args)
case 'update_task_status':
return await updateTaskStatus(args)
case 'ask_user_question':
return await askUserQuestion(args)
case 'check_question_response':
return await checkQuestionResponse(args)
case 'create_branch':
return await createBranch(args)
case 'create_pull_request':
return await createPullRequest(args)
case 'trigger_preview_deploy':
return await triggerPreviewDeploy(args)
case 'get_task_details':
return await getTaskDetails(args)
case 'log_activity':
return await logActivity(args)
case 'heartbeat':
return await heartbeat(args)
default:
throw new Error(`Unknown tool: ${name}`)
}
}
// ============================================
// TOOL IMPLEMENTATIONS
// ============================================
async function getNextTask(args: { agentId: string; capabilities?: string[] }) {
const { agentId } = args
// Get next task from backlog
const task = await db.query.tasks.findFirst({
where: eq(tasks.state, 'backlog'),
with: {
project: true
},
orderBy: [desc(tasks.priority), asc(tasks.createdAt)]
})
if (!task) {
return {
content: [{
type: 'text',
text: JSON.stringify({ message: 'No tasks available' })
}]
}
}
// Assign task to agent
await db.update(tasks)
.set({
state: 'in_progress',
assignedAgentId: agentId,
assignedAt: new Date(),
startedAt: new Date()
})
.where(eq(tasks.id, task.id))
await db.update(agents)
.set({
status: 'busy',
currentTaskId: task.id
})
.where(eq(agents.id, agentId))
// Emit WebSocket event
emitWebSocketEvent('task:status_changed', {
taskId: task.id,
oldState: 'backlog',
newState: 'in_progress',
agentId
})
// Cache invalidation
await redis.del(`task:${task.id}`)
await redis.del(`task:list:${task.projectId}`)
return {
content: [{
type: 'text',
text: JSON.stringify({
task: {
id: task.id,
title: task.title,
description: task.description,
priority: task.priority,
project: task.project
}
})
}]
}
}
async function updateTaskStatus(args: { taskId: string; status: string; metadata?: any }) {
const { taskId, status, metadata } = args
const updates: any = { state: status }
if (status === 'completed') {
updates.completedAt = new Date()
}
if (metadata?.durationMinutes) {
updates.actualDurationMinutes = metadata.durationMinutes
}
await db.update(tasks)
.set(updates)
.where(eq(tasks.id, taskId))
// If task completed, free up agent
if (status === 'completed' || status === 'ready_to_test') {
const task = await db.query.tasks.findFirst({
where: eq(tasks.id, taskId)
})
if (task?.assignedAgentId) {
await db.update(agents)
.set({
status: 'idle',
currentTaskId: null,
          tasksCompleted: sql`${agents.tasksCompleted} + 1` // sql imported from 'drizzle-orm'
})
.where(eq(agents.id, task.assignedAgentId))
}
}
emitWebSocketEvent('task:status_changed', {
taskId,
newState: status,
metadata
})
await redis.del(`task:${taskId}`)
return {
content: [{
type: 'text',
text: JSON.stringify({ success: true })
}]
}
}
async function askUserQuestion(args: { taskId: string; question: string; context?: string }) {
const { taskId, question, context } = args
// Update task state
await db.update(tasks)
.set({ state: 'needs_input' })
.where(eq(tasks.id, taskId))
// Insert question
const questionId = crypto.randomUUID()
await db.insert(taskQuestions).values({
id: questionId,
taskId,
question,
context,
status: 'pending'
})
// Notify frontend
emitWebSocketEvent('task:needs_input', {
taskId,
questionId,
question,
context
})
await redis.del(`task:${taskId}`)
return {
content: [{
type: 'text',
text: JSON.stringify({
success: true,
message: 'Question sent to user',
questionId
})
}]
}
}
async function checkQuestionResponse(args: { taskId: string }) {
const { taskId } = args
const question = await db.query.taskQuestions.findFirst({
where: and(
eq(taskQuestions.taskId, taskId),
eq(taskQuestions.status, 'answered')
),
orderBy: [desc(taskQuestions.respondedAt)]
})
if (!question || !question.response) {
return {
content: [{
type: 'text',
text: JSON.stringify({
hasResponse: false,
message: 'No response yet'
})
}]
}
}
// Update task back to in_progress
await db.update(tasks)
.set({ state: 'in_progress' })
.where(eq(tasks.id, taskId))
return {
content: [{
type: 'text',
text: JSON.stringify({
hasResponse: true,
response: question.response,
question: question.question
})
}]
}
}
async function createBranch(args: { taskId: string; branchName?: string }) {
const { taskId, branchName } = args
const task = await db.query.tasks.findFirst({
where: eq(tasks.id, taskId),
with: { project: true }
})
if (!task) {
throw new Error('Task not found')
}
const branch = branchName || `task-${taskId.slice(0, 8)}-${task.title.toLowerCase().replace(/\s+/g, '-').slice(0, 30)}`
// Create branch in Gitea
await giteaClient.createBranch(
task.project.giteaOwner!,
task.project.giteaRepoName!,
branch,
task.project.defaultBranch!
)
// Update task
await db.update(tasks)
.set({ branchName: branch })
.where(eq(tasks.id, taskId))
return {
content: [{
type: 'text',
text: JSON.stringify({
success: true,
branchName: branch,
repoUrl: task.project.giteaRepoUrl
})
}]
}
}
async function createPullRequest(args: { taskId: string; title: string; description: string }) {
const { taskId, title, description } = args
const task = await db.query.tasks.findFirst({
where: eq(tasks.id, taskId),
with: { project: true }
})
if (!task || !task.branchName) {
throw new Error('Task not found or branch not created')
}
const pr = await giteaClient.createPullRequest(
task.project.giteaOwner!,
task.project.giteaRepoName!,
{
title,
body: description,
head: task.branchName,
base: task.project.defaultBranch!
}
)
await db.update(tasks)
.set({
prNumber: pr.number,
prUrl: pr.html_url
})
.where(eq(tasks.id, taskId))
emitWebSocketEvent('task:pr_created', {
taskId,
prUrl: pr.html_url,
prNumber: pr.number
})
return {
content: [{
type: 'text',
text: JSON.stringify({
success: true,
prUrl: pr.html_url,
prNumber: pr.number
})
}]
}
}
async function triggerPreviewDeploy(args: { taskId: string }) {
const { taskId } = args
const task = await db.query.tasks.findFirst({
where: eq(tasks.id, taskId),
with: { project: true }
})
if (!task) {
throw new Error('Task not found')
}
const previewNamespace = `preview-task-${taskId.slice(0, 8)}`
const previewUrl = `https://${previewNamespace}.preview.aiworker.dev`
// Deploy to K8s
await k8sClient.createPreviewDeployment({
namespace: previewNamespace,
taskId,
projectId: task.projectId,
image: task.project.dockerImage!,
branch: task.branchName!,
envVars: task.project.envVars as Record<string, string>
})
await db.update(tasks)
.set({
state: 'ready_to_test',
previewNamespace,
previewUrl,
previewDeployedAt: new Date()
})
.where(eq(tasks.id, taskId))
emitWebSocketEvent('task:ready_to_test', {
taskId,
previewUrl
})
return {
content: [{
type: 'text',
text: JSON.stringify({
success: true,
previewUrl,
namespace: previewNamespace
})
}]
}
}
async function getTaskDetails(args: { taskId: string }) {
const { taskId } = args
const task = await db.query.tasks.findFirst({
where: eq(tasks.id, taskId),
with: {
project: true,
questions: true
}
})
if (!task) {
throw new Error('Task not found')
}
return {
content: [{
type: 'text',
text: JSON.stringify({ task })
}]
}
}
async function logActivity(args: { agentId: string; level?: string; message: string; metadata?: any }) {
const { agentId, level = 'info', message, metadata } = args
await db.insert(agentLogs).values({
agentId,
level: level as any,
message,
metadata
})
return {
content: [{
type: 'text',
text: JSON.stringify({ success: true })
}]
}
}
async function heartbeat(args: { agentId: string; status: string }) {
const { agentId, status } = args
await db.update(agents)
.set({
lastHeartbeat: new Date(),
status: status as any
})
.where(eq(agents.id, agentId))
return {
content: [{
type: 'text',
text: JSON.stringify({ success: true })
}]
}
}
```
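The branch naming used by `create_branch` (a short task-id prefix plus a slugified, length-capped title) can be factored into a pure helper, shown here for illustration:

```typescript
// Mirror of the naming in create_branch:
// task-<first 8 chars of task id>-<lowercased title, spaces to dashes, max 30 chars>.
export function makeBranchName(taskId: string, title: string): string {
  const slug = title.toLowerCase().replace(/\s+/g, '-').slice(0, 30)
  return `task-${taskId.slice(0, 8)}-${slug}`
}
```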
## Usage from a Claude Code Agent
From the agent pod, Claude Code would use the tools like this:
```bash
# In the agent pod, configure MCP
# claude-code config add-mcp-server aiworker stdio \
#   "bun run /app/mcp-client.js"

# Example flow in a conversation with Claude Code:
# User: "Pick up the next task and work on it"

# Claude Code internally calls:
# - get_next_task({ agentId: "agent-xyz" })
# - If it needs info: ask_user_question({ taskId: "...", question: "..." })
# - Works on the code
# - create_branch({ taskId: "..." })
# - (commits and pushes)
# - create_pull_request({ taskId: "...", title: "...", description: "..." })
# - trigger_preview_deploy({ taskId: "..." })
# - update_task_status({ taskId: "...", status: "ready_to_test" })
```

# Queue System with BullMQ
## BullMQ Setup
```typescript
// services/queue/config.ts
import { Queue } from 'bullmq'
import { getRedis } from '../../config/redis'
import { logger } from '../../utils/logger'
const connection = getRedis()
export const queues = {
tasks: new Queue('tasks', { connection }),
deploys: new Queue('deploys', { connection }),
merges: new Queue('merges', { connection }),
cleanup: new Queue('cleanup', { connection }),
}
// Queue options
export const defaultJobOptions = {
attempts: 3,
backoff: {
type: 'exponential',
delay: 2000,
},
removeOnComplete: {
age: 3600, // 1 hour
count: 1000,
},
removeOnFail: {
age: 86400, // 24 hours
},
}
```
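With `attempts: 3` and a 2000 ms base, BullMQ's built-in exponential backoff waits roughly `delay · 2^(attemptsMade − 1)` before each retry, i.e. 2 s and then 4 s here. A small sketch of that schedule:

```typescript
// Retry delays produced by exponential backoff: for a job allowed `attempts`
// total tries, there are (attempts - 1) retries, each delayed by
// 2^(attemptsMade - 1) * baseDelayMs.
export function backoffSchedule(attempts: number, baseDelayMs: number): number[] {
  const delays: number[] = []
  for (let attemptsMade = 1; attemptsMade < attempts; attemptsMade++) {
    delays.push(2 ** (attemptsMade - 1) * baseDelayMs)
  }
  return delays
}
```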
## Task Queue
```typescript
// services/queue/task-queue.ts
import { queues, defaultJobOptions } from './config'
import { logger } from '../../utils/logger'
export interface TaskJob {
taskId: string
projectId: string
priority: 'low' | 'medium' | 'high' | 'urgent'
}
export async function enqueueTask(data: TaskJob) {
const priorityMap = {
urgent: 1,
high: 2,
medium: 3,
low: 4,
}
await queues.tasks.add('process-task', data, {
...defaultJobOptions,
priority: priorityMap[data.priority],
jobId: data.taskId,
})
logger.info(`Task queued: ${data.taskId}`)
}
export async function dequeueTask(taskId: string) {
const job = await queues.tasks.getJob(taskId)
if (job) {
await job.remove()
logger.info(`Task dequeued: ${taskId}`)
}
}
export async function getQueuedTasks() {
  const jobs = await queues.tasks.getJobs(['waiting', 'active'])
  // job.getState() is async, so the map callback must be async and awaited
  return Promise.all(
    jobs.map(async (job) => ({
      id: job.id,
      data: job.data,
      state: await job.getState(),
      progress: job.progress,
      attemptsMade: job.attemptsMade,
    }))
  )
}
```
## Deploy Queue
```typescript
// services/queue/deploy-queue.ts
import { queues, defaultJobOptions } from './config'
import { logger } from '../../utils/logger'
export interface DeployJob {
deploymentId: string
projectId: string
taskId?: string
environment: 'preview' | 'staging' | 'production'
branch: string
commitHash: string
}
export async function enqueueDeploy(data: DeployJob) {
await queues.deploys.add('deploy', data, {
...defaultJobOptions,
priority: data.environment === 'production' ? 1 : 2,
jobId: data.deploymentId,
})
logger.info(`Deploy queued: ${data.environment} - ${data.deploymentId}`)
}
export async function getDeployStatus(deploymentId: string) {
const job = await queues.deploys.getJob(deploymentId)
if (!job) return null
return {
id: job.id,
state: await job.getState(),
progress: job.progress,
result: job.returnvalue,
failedReason: job.failedReason,
}
}
```
## Merge Queue
```typescript
// services/queue/merge-queue.ts
import { queues, defaultJobOptions } from './config'
import { logger } from '../../utils/logger'
export interface MergeJob {
taskGroupId: string
projectId: string
taskIds: string[]
targetBranch: 'staging' | 'main'
}
export async function enqueueMerge(data: MergeJob) {
await queues.merges.add('merge-tasks', data, {
...defaultJobOptions,
priority: data.targetBranch === 'main' ? 1 : 2,
jobId: data.taskGroupId,
})
logger.info(`Merge queued: ${data.taskGroupId}`)
}
```
## Cleanup Queue
```typescript
// services/queue/cleanup-queue.ts
import { queues, defaultJobOptions } from './config'
import { logger } from '../../utils/logger'
export interface CleanupJob {
type: 'preview-namespace' | 'old-logs' | 'completed-jobs'
namespaceOrResource: string
ageHours: number
}
export async function enqueueCleanup(data: CleanupJob) {
await queues.cleanup.add('cleanup', data, {
...defaultJobOptions,
attempts: 1,
})
logger.info(`Cleanup queued: ${data.type}`)
}
// Schedule recurring cleanup
export async function scheduleRecurringCleanup() {
// Clean preview namespaces older than 7 days
await queues.cleanup.add(
'cleanup-preview-namespaces',
{
type: 'preview-namespace',
ageHours: 168, // 7 days
},
{
repeat: {
pattern: '0 2 * * *', // Daily at 2 AM
},
}
)
// Clean old logs
await queues.cleanup.add(
'cleanup-old-logs',
{
type: 'old-logs',
ageHours: 720, // 30 days
},
{
repeat: {
pattern: '0 3 * * 0', // Weekly on Sunday at 3 AM
},
}
)
logger.info('Recurring cleanup jobs scheduled')
}
```
## Workers Implementation
```typescript
// services/queue/workers.ts
import { Worker, Job } from 'bullmq'
import { getRedis } from '../../config/redis'
import { logger } from '../../utils/logger'
import { db } from '../../db/client'
import { tasks, agents, deployments, projects, agentLogs } from '../../db/schema'
import { eq, lt } from 'drizzle-orm'
import { K8sClient } from '../kubernetes/client'
import { GiteaClient } from '../gitea/client'
import { TaskJob, DeployJob, MergeJob, CleanupJob } from './types'
import { scheduleRecurringCleanup } from './cleanup-queue'
const connection = getRedis()
const k8sClient = new K8sClient()
const giteaClient = new GiteaClient()
// ============================================
// TASK WORKER
// ============================================
const taskWorker = new Worker(
'tasks',
async (job: Job<TaskJob>) => {
logger.info(`Processing task job: ${job.id}`)
// Check if there's an available agent
const availableAgent = await db.query.agents.findFirst({
where: eq(agents.status, 'idle'),
})
if (!availableAgent) {
logger.info('No available agents, task will be retried')
throw new Error('No available agents')
}
// Task will be picked up by agent via MCP get_next_task
logger.info(`Task ${job.data.taskId} ready for agent pickup`)
return { success: true, readyForPickup: true }
},
{
connection,
concurrency: 5,
}
)
taskWorker.on('completed', (job) => {
logger.info(`Task job completed: ${job.id}`)
})
taskWorker.on('failed', (job, err) => {
logger.error(`Task job failed: ${job?.id}`, err)
})
// ============================================
// DEPLOY WORKER
// ============================================
const deployWorker = new Worker(
'deploys',
async (job: Job<DeployJob>) => {
const { deploymentId, projectId, environment, branch, commitHash } = job.data
logger.info(`Deploying: ${environment} - ${deploymentId}`)
// Update deployment status
await db.update(deployments)
.set({
status: 'in_progress',
startedAt: new Date(),
})
.where(eq(deployments.id, deploymentId))
job.updateProgress(10)
try {
      // Get project config
      const project = await db.query.projects.findFirst({
        where: eq(projects.id, projectId),
      })
if (!project) {
throw new Error('Project not found')
}
job.updateProgress(20)
// Prepare deployment
const namespace = environment === 'production'
? `${project.k8sNamespace}-prod`
: environment === 'staging'
? `${project.k8sNamespace}-staging`
: job.data.taskId
? `preview-task-${job.data.taskId.slice(0, 8)}`
: project.k8sNamespace
job.updateProgress(40)
// Deploy to K8s
await k8sClient.createOrUpdateDeployment({
namespace,
name: `${project.name}-${environment}`,
image: `${project.dockerImage}:${commitHash.slice(0, 7)}`,
envVars: project.envVars as Record<string, string>,
replicas: project.replicas || 1,
resources: {
cpu: project.cpuLimit || '500m',
memory: project.memoryLimit || '512Mi',
},
})
job.updateProgress(70)
// Create/update ingress
const url = await k8sClient.createOrUpdateIngress({
namespace,
name: `${project.name}-${environment}`,
host: environment === 'production'
? `${project.name}.aiworker.dev`
: `${environment}-${project.name}.aiworker.dev`,
serviceName: `${project.name}-${environment}`,
servicePort: 3000,
})
job.updateProgress(90)
// Update deployment record
await db.update(deployments)
.set({
status: 'completed',
completedAt: new Date(),
url,
durationSeconds: Math.floor(
(new Date().getTime() - job.processedOn!) / 1000
),
})
.where(eq(deployments.id, deploymentId))
job.updateProgress(100)
logger.info(`Deploy completed: ${environment} - ${url}`)
return { success: true, url }
} catch (error) {
// Update deployment as failed
await db.update(deployments)
.set({
status: 'failed',
errorMessage: error.message,
completedAt: new Date(),
})
.where(eq(deployments.id, deploymentId))
throw error
}
},
{
connection,
concurrency: 3,
}
)
// ============================================
// MERGE WORKER
// ============================================
const mergeWorker = new Worker(
'merges',
async (job: Job<MergeJob>) => {
const { taskGroupId, projectId, taskIds, targetBranch } = job.data
logger.info(`Merging tasks: ${taskIds.join(', ')} to ${targetBranch}`)
    // Get project and tasks
    const project = await db.query.projects.findFirst({
      where: eq(projects.id, projectId),
    })
if (!project) {
throw new Error('Project not found')
}
const tasksList = await db.query.tasks.findMany({
where: (tasks, { inArray }) => inArray(tasks.id, taskIds),
})
job.updateProgress(20)
    // Merge each PR, advancing progress from 20% to 60% as PRs land
    for (const [i, task] of tasksList.entries()) {
      if (task.prNumber) {
        await giteaClient.mergePullRequest(
          project.giteaOwner!,
          project.giteaRepoName!,
          task.prNumber,
          'squash'
        )
      }
      await job.updateProgress(20 + Math.floor((40 * (i + 1)) / tasksList.length))
    }
job.updateProgress(60)
// Create staging/production branch if needed
// Then trigger deploy
// ... implementation
job.updateProgress(100)
logger.info(`Merge completed: ${taskGroupId}`)
return { success: true }
},
{
connection,
concurrency: 2,
}
)
// ============================================
// CLEANUP WORKER
// ============================================
const cleanupWorker = new Worker(
'cleanup',
async (job: Job<CleanupJob>) => {
const { type, ageHours } = job.data
logger.info(`Cleanup: ${type}`)
switch (type) {
case 'preview-namespace':
await k8sClient.cleanupOldPreviewNamespaces(ageHours)
break
case 'old-logs':
const cutoffDate = new Date(Date.now() - ageHours * 60 * 60 * 1000)
await db.delete(agentLogs)
.where(lt(agentLogs.createdAt, cutoffDate))
break
}
logger.info(`Cleanup completed: ${type}`)
return { success: true }
},
{
connection,
concurrency: 1,
}
)
// ============================================
// START ALL WORKERS
// ============================================
export async function startQueueWorkers() {
logger.info('Starting BullMQ workers...')
// Workers are already instantiated above
// Just schedule recurring jobs
await scheduleRecurringCleanup()
logger.info('✓ All workers started')
return {
taskWorker,
deployWorker,
mergeWorker,
cleanupWorker,
}
}
// Graceful shutdown
process.on('SIGTERM', async () => {
logger.info('Shutting down workers...')
await taskWorker.close()
await deployWorker.close()
await mergeWorker.close()
await cleanupWorker.close()
logger.info('Workers shut down')
process.exit(0)
})
```
## Queue Monitoring
```typescript
// api/routes/queues.ts
import { Router } from 'express'
import { queues } from '../../services/queue/config'
const router = Router()
router.get('/status', async (req, res) => {
const status = await Promise.all(
Object.entries(queues).map(async ([name, queue]) => ({
name,
waiting: await queue.getWaitingCount(),
active: await queue.getActiveCount(),
completed: await queue.getCompletedCount(),
failed: await queue.getFailedCount(),
}))
)
res.json({ queues: status })
})
export default router
```
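A dashboard consuming `/status` can fold the per-queue counters into totals; assuming the response shape produced above:

```typescript
// One entry per queue, as returned by GET /status.
interface QueueStatus {
  name: string
  waiting: number
  active: number
  completed: number
  failed: number
}

// Sum the counters across all queues, e.g. for a dashboard header.
export function totalCounts(queueList: QueueStatus[]): Omit<QueueStatus, 'name'> {
  return queueList.reduce(
    (acc, q) => ({
      waiting: acc.waiting + q.waiting,
      active: acc.active + q.active,
      completed: acc.completed + q.completed,
      failed: acc.failed + q.failed,
    }),
    { waiting: 0, active: 0, completed: 0, failed: 0 }
  )
}
```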

# Core Components
## KanbanBoard
```typescript
// components/kanban/KanbanBoard.tsx
import { useMemo } from 'react'
import { DndContext, DragEndEvent, PointerSensor, useSensor, useSensors } from '@dnd-kit/core'
import { useTasks, useUpdateTask } from '@/hooks/useTasks'
import KanbanColumn from './KanbanColumn'
import { Task, TaskState } from '@/types/task'
const COLUMNS: { id: TaskState; title: string; color: string }[] = [
{ id: 'backlog', title: 'Backlog', color: 'gray' },
{ id: 'in_progress', title: 'En Progreso', color: 'blue' },
{ id: 'needs_input', title: 'Necesita Respuestas', color: 'yellow' },
{ id: 'ready_to_test', title: 'Listo para Probar', color: 'purple' },
{ id: 'approved', title: 'Aprobado', color: 'green' },
{ id: 'staging', title: 'Staging', color: 'indigo' },
{ id: 'production', title: 'Producción', color: 'emerald' },
]
interface KanbanBoardProps {
projectId: string
}
export function KanbanBoard({ projectId }: KanbanBoardProps) {
const { data: tasks = [], isLoading } = useTasks({ projectId })
const updateTask = useUpdateTask()
const sensors = useSensors(
useSensor(PointerSensor, {
activationConstraint: {
distance: 8,
},
})
)
const tasksByState = useMemo(() => {
return COLUMNS.reduce((acc, column) => {
acc[column.id] = tasks.filter((task) => task.state === column.id)
return acc
}, {} as Record<TaskState, Task[]>)
}, [tasks])
const handleDragEnd = (event: DragEndEvent) => {
const { active, over } = event
if (!over || active.id === over.id) return
const taskId = active.id as string
const newState = over.id as TaskState
updateTask.mutate({
taskId,
updates: { state: newState },
})
}
if (isLoading) {
return <div className="flex justify-center p-8">Loading...</div>
}
return (
<DndContext sensors={sensors} onDragEnd={handleDragEnd}>
<div className="flex gap-4 overflow-x-auto pb-4">
{COLUMNS.map((column) => (
<KanbanColumn
key={column.id}
id={column.id}
title={column.title}
color={column.color}
tasks={tasksByState[column.id]}
/>
))}
</div>
</DndContext>
)
}
```
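The `tasksByState` memo buckets tasks by column id, keeping empty columns. The same grouping as a standalone function (types narrowed for illustration):

```typescript
type TaskState =
  | 'backlog' | 'in_progress' | 'needs_input' | 'ready_to_test'
  | 'approved' | 'staging' | 'production'

interface TaskLike { id: string; state: TaskState }

// Group tasks into one array per column id, preserving order
// and producing an (empty) entry for every column.
export function groupByState(columnIds: TaskState[], items: TaskLike[]): Record<TaskState, TaskLike[]> {
  const acc = {} as Record<TaskState, TaskLike[]>
  for (const id of columnIds) acc[id] = []
  for (const task of items) acc[task.state]?.push(task)
  return acc
}
```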
## KanbanColumn
```typescript
// components/kanban/KanbanColumn.tsx
import { useDroppable } from '@dnd-kit/core'
import { SortableContext, verticalListSortingStrategy } from '@dnd-kit/sortable'
import TaskCard from './TaskCard'
import { Task, TaskState } from '@/types/task'
interface KanbanColumnProps {
id: TaskState
title: string
color: string
tasks: Task[]
}
// Tailwind no genera clases construidas dinámicamente (`bg-${color}-100`),
// por lo que las clases deben aparecer completas en el código fuente
const COLOR_CLASSES: Record<string, string> = {
  gray: 'bg-gray-100 border-gray-300',
  blue: 'bg-blue-100 border-blue-300',
  yellow: 'bg-yellow-100 border-yellow-300',
  purple: 'bg-purple-100 border-purple-300',
  green: 'bg-green-100 border-green-300',
  indigo: 'bg-indigo-100 border-indigo-300',
  emerald: 'bg-emerald-100 border-emerald-300',
}
export default function KanbanColumn({ id, title, color, tasks }: KanbanColumnProps) {
  const { setNodeRef } = useDroppable({ id })
  return (
    <div className="flex flex-col w-80 flex-shrink-0">
      <div className={`${COLOR_CLASSES[color] ?? COLOR_CLASSES.gray} border-t-4 rounded-t-lg p-3`}>
<h3 className="font-semibold text-gray-900">
{title}
<span className="ml-2 text-sm text-gray-500">({tasks.length})</span>
</h3>
</div>
<div
ref={setNodeRef}
className="flex-1 bg-gray-50 border border-t-0 border-gray-200 rounded-b-lg p-3 min-h-[200px]"
>
<SortableContext items={tasks.map((t) => t.id)} strategy={verticalListSortingStrategy}>
<div className="space-y-3">
{tasks.map((task) => (
<TaskCard key={task.id} task={task} />
))}
</div>
</SortableContext>
{tasks.length === 0 && (
<div className="text-center text-gray-400 text-sm py-8">
Sin tareas
</div>
)}
</div>
</div>
)
}
```
## TaskCard
```typescript
// components/kanban/TaskCard.tsx
import { useSortable } from '@dnd-kit/sortable'
import { CSS } from '@dnd-kit/utilities'
import { Clock, User, GitBranch, AlertCircle } from 'lucide-react'
import { Task } from '@/types/task'
import { useNavigate } from 'react-router-dom'
interface TaskCardProps {
task: Task
}
const PRIORITY_COLORS = {
low: 'bg-gray-100 text-gray-800',
medium: 'bg-blue-100 text-blue-800',
high: 'bg-orange-100 text-orange-800',
urgent: 'bg-red-100 text-red-800',
}
export default function TaskCard({ task }: TaskCardProps) {
const navigate = useNavigate()
const { attributes, listeners, setNodeRef, transform, transition, isDragging } = useSortable({
id: task.id,
})
const style = {
transform: CSS.Transform.toString(transform),
transition,
opacity: isDragging ? 0.5 : 1,
}
return (
<div
ref={setNodeRef}
style={style}
{...attributes}
{...listeners}
className="card cursor-move hover:shadow-md transition-shadow"
onClick={() => navigate(`/tasks/${task.id}`)}
>
<div className="flex items-start justify-between mb-2">
<h4 className="font-medium text-sm line-clamp-2">{task.title}</h4>
<span className={`badge ${PRIORITY_COLORS[task.priority]}`}>
{task.priority}
</span>
</div>
{task.description && (
<p className="text-xs text-gray-600 line-clamp-2 mb-3">{task.description}</p>
)}
<div className="flex items-center gap-3 text-xs text-gray-500">
{task.assignedAgent && (
<div className="flex items-center gap-1">
<User className="w-3 h-3" />
<span>Agent {task.assignedAgent.podName.slice(0, 8)}</span>
</div>
)}
{task.branchName && (
<div className="flex items-center gap-1">
<GitBranch className="w-3 h-3" />
<span className="truncate max-w-[100px]">{task.branchName}</span>
</div>
)}
{task.state === 'needs_input' && (
<div className="flex items-center gap-1 text-yellow-600">
<AlertCircle className="w-3 h-3" />
<span>Pregunta pendiente</span>
</div>
)}
</div>
{task.actualDurationMinutes && (
<div className="flex items-center gap-1 mt-2 text-xs text-gray-500">
<Clock className="w-3 h-3" />
<span>{task.actualDurationMinutes}min</span>
</div>
)}
{task.previewUrl && (
<a
href={task.previewUrl}
target="_blank"
rel="noopener noreferrer"
className="mt-2 text-xs text-primary-600 hover:underline block"
onClick={(e) => e.stopPropagation()}
>
Ver Preview
</a>
)}
</div>
)
}
```
## WebTerminal
```typescript
// components/terminal/WebTerminal.tsx
import { useEffect, useRef } from 'react'
import { Terminal } from 'xterm'
import { FitAddon } from 'xterm-addon-fit'
import { WebLinksAddon } from 'xterm-addon-web-links'
import 'xterm/css/xterm.css'
interface WebTerminalProps {
agentId: string
podName: string
}
export function WebTerminal({ agentId, podName }: WebTerminalProps) {
  const terminalRef = useRef<HTMLDivElement>(null)
  // Con los tipos de React 19, useRef requiere un valor inicial
  const xtermRef = useRef<Terminal | null>(null)
  const fitAddonRef = useRef<FitAddon | null>(null)
useEffect(() => {
if (!terminalRef.current) return
// Create terminal
const term = new Terminal({
cursorBlink: true,
fontSize: 14,
fontFamily: 'Menlo, Monaco, "Courier New", monospace',
theme: {
background: '#1e1e1e',
foreground: '#d4d4d4',
},
})
const fitAddon = new FitAddon()
const webLinksAddon = new WebLinksAddon()
term.loadAddon(fitAddon)
term.loadAddon(webLinksAddon)
term.open(terminalRef.current)
fitAddon.fit()
xtermRef.current = term
fitAddonRef.current = fitAddon
// Connect to backend WebSocket for terminal
const ws = new WebSocket(`ws://localhost:3000/terminal/${agentId}`)
ws.onopen = () => {
term.writeln(`Connected to ${podName}`)
term.writeln('')
}
ws.onmessage = (event) => {
term.write(event.data)
}
term.onData((data) => {
ws.send(data)
})
// Handle resize
const handleResize = () => {
fitAddon.fit()
}
window.addEventListener('resize', handleResize)
return () => {
term.dispose()
ws.close()
window.removeEventListener('resize', handleResize)
}
}, [agentId, podName])
return (
<div className="h-full w-full bg-[#1e1e1e] rounded-lg overflow-hidden">
<div ref={terminalRef} className="h-full w-full p-2" />
</div>
)
}
```
## TaskForm
```typescript
// components/tasks/TaskForm.tsx
import { useState } from 'react'
import { useCreateTask } from '@/hooks/useTasks'
import { Button } from '@/components/ui/Button'
import { Input } from '@/components/ui/Input'
import { Select } from '@/components/ui/Select'
import { toast } from 'react-hot-toast'
interface TaskFormProps {
projectId: string
onSuccess?: () => void
}
export function TaskForm({ projectId, onSuccess }: TaskFormProps) {
const [title, setTitle] = useState('')
const [description, setDescription] = useState('')
const [priority, setPriority] = useState<'low' | 'medium' | 'high' | 'urgent'>('medium')
const createTask = useCreateTask()
const handleSubmit = async (e: React.FormEvent) => {
e.preventDefault()
if (!title.trim()) {
toast.error('El título es requerido')
return
}
try {
await createTask.mutateAsync({
projectId,
title,
description,
priority,
})
toast.success('Tarea creada')
setTitle('')
setDescription('')
setPriority('medium')
onSuccess?.()
} catch (error) {
toast.error('Error al crear tarea')
}
}
return (
<form onSubmit={handleSubmit} className="space-y-4">
<Input
label="Título"
value={title}
onChange={(e) => setTitle(e.target.value)}
placeholder="Ej: Implementar autenticación"
required
/>
<div>
<label className="block text-sm font-medium text-gray-700 mb-1">
Descripción
</label>
<textarea
value={description}
onChange={(e) => setDescription(e.target.value)}
placeholder="Describe la tarea en detalle..."
rows={4}
className="w-full px-3 py-2 border border-gray-300 rounded-lg focus:ring-2 focus:ring-primary-500"
/>
</div>
<Select
label="Prioridad"
value={priority}
        onChange={(e) => setPriority(e.target.value as typeof priority)}
options={[
{ value: 'low', label: 'Baja' },
{ value: 'medium', label: 'Media' },
{ value: 'high', label: 'Alta' },
{ value: 'urgent', label: 'Urgente' },
]}
/>
<Button type="submit" loading={createTask.isPending} className="w-full">
Crear Tarea
</Button>
</form>
)
}
```
## AgentCard
```typescript
// components/agents/AgentCard.tsx
import { Agent } from '@/types/agent'
import { Activity, Clock, CheckCircle, AlertCircle } from 'lucide-react'
import { formatDistanceToNow } from 'date-fns'
import { es } from 'date-fns/locale'
interface AgentCardProps {
agent: Agent
onOpenTerminal?: (agentId: string) => void
}
// Clases completas: Tailwind no genera clases construidas dinámicamente
const STATUS_CONFIG = {
  idle: { badgeClass: 'bg-green-100 text-green-800', icon: CheckCircle, label: 'Inactivo' },
  busy: { badgeClass: 'bg-blue-100 text-blue-800', icon: Activity, label: 'Trabajando' },
  error: { badgeClass: 'bg-red-100 text-red-800', icon: AlertCircle, label: 'Error' },
  offline: { badgeClass: 'bg-gray-100 text-gray-800', icon: AlertCircle, label: 'Offline' },
  initializing: { badgeClass: 'bg-yellow-100 text-yellow-800', icon: Clock, label: 'Inicializando' },
}
export function AgentCard({ agent, onOpenTerminal }: AgentCardProps) {
  const config = STATUS_CONFIG[agent.status]
  const Icon = config.icon
  return (
    <div className="card">
      <div className="flex items-start justify-between mb-3">
        <div>
          <h3 className="font-semibold text-gray-900">{agent.podName}</h3>
          <p className="text-xs text-gray-500 mt-1">ID: {agent.id.slice(0, 8)}</p>
        </div>
        <span className={`badge ${config.badgeClass}`}>
          <Icon className="w-3 h-3 mr-1" />
          {config.label}
        </span>
</div>
<div className="space-y-2 text-sm text-gray-600">
<div className="flex justify-between">
<span>Tareas completadas:</span>
<span className="font-medium">{agent.tasksCompleted}</span>
</div>
<div className="flex justify-between">
<span>Tiempo total:</span>
<span className="font-medium">{agent.totalRuntimeMinutes}min</span>
</div>
{agent.lastHeartbeat && (
<div className="flex justify-between">
<span>Último heartbeat:</span>
<span className="font-medium">
{formatDistanceToNow(new Date(agent.lastHeartbeat), {
addSuffix: true,
locale: es,
})}
</span>
</div>
)}
</div>
{agent.currentTask && (
<div className="mt-3 p-2 bg-blue-50 rounded text-sm">
<p className="text-blue-900 font-medium">Tarea actual:</p>
<p className="text-blue-700 text-xs mt-1">{agent.currentTask.title}</p>
</div>
)}
{agent.capabilities && agent.capabilities.length > 0 && (
<div className="mt-3 flex flex-wrap gap-1">
{agent.capabilities.map((cap) => (
<span key={cap} className="badge bg-gray-100 text-gray-700 text-xs">
{cap}
</span>
))}
</div>
)}
{onOpenTerminal && (
<button
onClick={() => onOpenTerminal(agent.id)}
className="mt-3 w-full btn-secondary text-sm"
>
Abrir Terminal
</button>
)}
</div>
)
}
```

# Consolas Web con xterm.js
## Implementación del Terminal Web
### WebTerminal Component
```typescript
// components/terminal/WebTerminal.tsx
import { useEffect, useRef, useState } from 'react'
import { Terminal } from 'xterm'
import { FitAddon } from 'xterm-addon-fit'
import { WebLinksAddon } from 'xterm-addon-web-links'
import { SearchAddon } from 'xterm-addon-search'
import 'xterm/css/xterm.css'
interface WebTerminalProps {
agentId: string
podName: string
namespace?: string
}
export function WebTerminal({ agentId, podName, namespace = 'agents' }: WebTerminalProps) {
  const terminalRef = useRef<HTMLDivElement>(null)
  // Con los tipos de React 19, useRef requiere un valor inicial
  const xtermRef = useRef<Terminal | null>(null)
  const fitAddonRef = useRef<FitAddon | null>(null)
  const wsRef = useRef<WebSocket | null>(null)
const [isConnected, setIsConnected] = useState(false)
const [error, setError] = useState<string | null>(null)
useEffect(() => {
if (!terminalRef.current) return
// Create terminal instance
const term = new Terminal({
cursorBlink: true,
fontSize: 14,
fontFamily: 'Menlo, Monaco, "Courier New", monospace',
lineHeight: 1.2,
theme: {
background: '#1e1e1e',
foreground: '#d4d4d4',
cursor: '#ffffff',
        selectionBackground: '#264f78', // en xterm 5.x la clave `selection` pasó a `selectionBackground`
black: '#000000',
red: '#cd3131',
green: '#0dbc79',
yellow: '#e5e510',
blue: '#2472c8',
magenta: '#bc3fbc',
cyan: '#11a8cd',
white: '#e5e5e5',
brightBlack: '#666666',
brightRed: '#f14c4c',
brightGreen: '#23d18b',
brightYellow: '#f5f543',
brightBlue: '#3b8eea',
brightMagenta: '#d670d6',
brightCyan: '#29b8db',
brightWhite: '#ffffff',
},
scrollback: 10000,
tabStopWidth: 4,
})
// Addons
const fitAddon = new FitAddon()
const webLinksAddon = new WebLinksAddon()
const searchAddon = new SearchAddon()
term.loadAddon(fitAddon)
term.loadAddon(webLinksAddon)
term.loadAddon(searchAddon)
// Open terminal
term.open(terminalRef.current)
fitAddon.fit()
// Store refs
xtermRef.current = term
fitAddonRef.current = fitAddon
// Connect to backend WebSocket
const wsUrl = `${import.meta.env.VITE_WS_URL || 'ws://localhost:3000'}/terminal/${agentId}`
const ws = new WebSocket(wsUrl)
wsRef.current = ws
ws.onopen = () => {
setIsConnected(true)
setError(null)
term.writeln(`\x1b[32m✓\x1b[0m Connected to ${podName}`)
term.writeln('')
}
    ws.onerror = () => {
setError('Connection error')
term.writeln(`\x1b[31m✗\x1b[0m Connection error`)
}
ws.onclose = () => {
setIsConnected(false)
term.writeln('')
term.writeln(`\x1b[33m⚠\x1b[0m Disconnected from ${podName}`)
}
ws.onmessage = (event) => {
term.write(event.data)
}
// Send input to backend
term.onData((data) => {
if (ws.readyState === WebSocket.OPEN) {
ws.send(data)
}
})
// Handle terminal resize
const handleResize = () => {
fitAddon.fit()
// Send resize info to backend
if (ws.readyState === WebSocket.OPEN) {
ws.send(JSON.stringify({
type: 'resize',
cols: term.cols,
rows: term.rows,
}))
}
}
window.addEventListener('resize', handleResize)
// Cleanup
return () => {
term.dispose()
ws.close()
window.removeEventListener('resize', handleResize)
}
}, [agentId, podName, namespace])
return (
<div className="flex flex-col h-full">
{/* Header */}
<div className="bg-gray-800 text-white px-4 py-2 flex items-center justify-between">
<div className="flex items-center gap-3">
<div className={`w-2 h-2 rounded-full ${isConnected ? 'bg-green-400' : 'bg-red-400'}`} />
<span className="font-mono text-sm">{podName}</span>
<span className="text-gray-400 text-xs">({namespace})</span>
</div>
{error && (
<span className="text-red-400 text-xs">{error}</span>
)}
</div>
{/* Terminal */}
<div className="flex-1 bg-[#1e1e1e] overflow-hidden">
<div ref={terminalRef} className="h-full w-full p-2" />
</div>
</div>
)
}
```
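El componente anterior no reintenta la conexión si el WebSocket se cae. Un punto de partida (hipotético, no forma parte del código original) es calcular el retraso de reconexión con backoff exponencial:

```typescript
// Esbozo hipotético: retraso de reconexión con backoff exponencial y tope.
// Se invocaría desde ws.onclose antes de volver a llamar a la función de conexión.
function reconnectDelayMs(attempt: number, baseMs = 1000, maxMs = 30000): number {
  // intento 0 → 1s, 1 → 2s, 2 → 4s... acotado a maxMs
  return Math.min(baseMs * Math.pow(2, attempt), maxMs)
}

// Uso esquemático dentro del efecto:
// let attempt = 0
// ws.onclose = () => {
//   setTimeout(connect, reconnectDelayMs(attempt++))
// }
```

El contador de intentos se reiniciaría a cero en `onopen`, para que una conexión ya estable no arrastre retrasos largos.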
### Terminal Tabs Manager
```typescript
// components/terminal/TerminalTabs.tsx
import { X } from 'lucide-react'
import { useTerminalStore } from '@/store/terminalStore'
import { WebTerminal } from './WebTerminal'
export function TerminalTabs() {
  const { tabs, setActiveTab, closeTerminal } = useTerminalStore()
if (tabs.length === 0) {
return (
<div className="h-full flex items-center justify-center text-gray-500">
<p>No hay terminales abiertas</p>
</div>
)
}
return (
<div className="flex flex-col h-full">
{/* Tabs */}
<div className="flex items-center bg-gray-800 border-b border-gray-700 overflow-x-auto">
{tabs.map((tab) => (
<div
key={tab.id}
className={`
flex items-center gap-2 px-4 py-2 cursor-pointer
${tab.isActive ? 'bg-gray-700 text-white' : 'text-gray-400 hover:text-white'}
border-r border-gray-700
`}
onClick={() => setActiveTab(tab.id)}
>
<span className="font-mono text-sm truncate max-w-[150px]">
{tab.podName}
</span>
<button
onClick={(e) => {
e.stopPropagation()
closeTerminal(tab.id)
}}
className="hover:text-red-400"
>
<X className="w-4 h-4" />
</button>
</div>
))}
</div>
{/* Active terminal */}
<div className="flex-1">
{tabs.map((tab) => (
<div
key={tab.id}
className={`h-full ${tab.isActive ? 'block' : 'hidden'}`}
>
<WebTerminal
agentId={tab.agentId}
podName={tab.podName}
/>
</div>
))}
</div>
</div>
)
}
```
### Terminal Page/View
```typescript
// pages/TerminalsView.tsx
import { TerminalTabs } from '@/components/terminal/TerminalTabs'
import { useAgents } from '@/hooks/useAgents'
import { useTerminalStore } from '@/store/terminalStore'
import { Plus } from 'lucide-react'
export default function TerminalsView() {
const { data: agents = [] } = useAgents()
const { openTerminal } = useTerminalStore()
return (
<div className="flex h-screen">
{/* Sidebar with agents */}
<div className="w-64 bg-white border-r border-gray-200 overflow-y-auto">
<div className="p-4">
<h2 className="font-semibold text-gray-900 mb-4">Agentes Disponibles</h2>
<div className="space-y-2">
{agents.map((agent) => (
<button
key={agent.id}
onClick={() => openTerminal(agent.id, agent.podName)}
className="w-full text-left p-3 rounded-lg hover:bg-gray-100 transition-colors"
>
<div className="flex items-center justify-between">
<span className="font-mono text-sm truncate">{agent.podName}</span>
<div className={`w-2 h-2 rounded-full ${
agent.status === 'idle' ? 'bg-green-400' :
agent.status === 'busy' ? 'bg-blue-400' :
'bg-gray-400'
}`} />
</div>
<p className="text-xs text-gray-500 mt-1">{agent.status}</p>
</button>
))}
</div>
</div>
</div>
{/* Terminals */}
<div className="flex-1">
<TerminalTabs />
</div>
</div>
)
}
```
## Backend WebSocket Handler
Nota: este handler usa Socket.IO (namespace `/terminal` con eventos `data`/`resize`), mientras que el componente `WebTerminal` anterior abre un `WebSocket` nativo. Hay que unificar ambos extremos, por ejemplo usando `socket.io-client` en el frontend o un servidor `ws` puro en el backend.
```typescript
// backend: api/websocket/terminal.ts
import { Server as SocketIOServer, Socket } from 'socket.io'
import { eq } from 'drizzle-orm'
import { K8sClient } from '../../services/kubernetes/client'
import { db } from '../../db' // ruta supuesta del cliente Drizzle
import { agents } from '../../db/schema' // ruta supuesta del esquema
import { logger } from '../../utils/logger'
const k8sClient = new K8sClient()
export function setupTerminalWebSocket(io: SocketIOServer) {
io.of('/terminal').on('connection', async (socket: Socket) => {
const agentId = socket.handshake.query.agentId as string
if (!agentId) {
socket.disconnect()
return
}
logger.info(`Terminal connection: agent ${agentId}`)
try {
// Get agent pod info
const agent = await db.query.agents.findFirst({
where: eq(agents.id, agentId),
})
if (!agent) {
socket.emit('error', { message: 'Agent not found' })
socket.disconnect()
return
}
// Connect to K8s pod exec
const stream = await k8sClient.execInPod({
namespace: agent.k8sNamespace,
podName: agent.podName,
command: ['/bin/bash'],
})
// Forward data from K8s to client
stream.stdout.on('data', (data: Buffer) => {
socket.emit('data', data.toString())
})
stream.stderr.on('data', (data: Buffer) => {
socket.emit('data', data.toString())
})
// Forward data from client to K8s
socket.on('data', (data: string) => {
stream.stdin.write(data)
})
// Handle resize
socket.on('resize', ({ cols, rows }: { cols: number; rows: number }) => {
stream.resize({ cols, rows })
})
// Cleanup on disconnect
socket.on('disconnect', () => {
logger.info(`Terminal disconnected: agent ${agentId}`)
stream.stdin.end()
stream.destroy()
})
} catch (error) {
logger.error('Terminal connection error:', error)
socket.emit('error', { message: 'Failed to connect to pod' })
socket.disconnect()
}
})
}
```
## Features Adicionales
### Copy/Paste
```typescript
// Dentro del useEffect de WebTerminal, donde term y ws están en scope
term.attachCustomKeyEventHandler((e) => {
// Ctrl+C / Cmd+C
if ((e.ctrlKey || e.metaKey) && e.key === 'c') {
const selection = term.getSelection()
if (selection) {
navigator.clipboard.writeText(selection)
return false
}
}
// Ctrl+V / Cmd+V
if ((e.ctrlKey || e.metaKey) && e.key === 'v') {
navigator.clipboard.readText().then((text) => {
if (ws.readyState === WebSocket.OPEN) {
ws.send(text)
}
})
return false
}
return true
})
```
### Clear Terminal
```typescript
<button
onClick={() => xtermRef.current?.clear()}
className="btn-secondary"
>
Clear
</button>
```
### Download Log
```typescript
const downloadLog = () => {
if (!xtermRef.current) return
const buffer = xtermRef.current.buffer.active
let content = ''
for (let i = 0; i < buffer.length; i++) {
const line = buffer.getLine(i)
if (line) {
content += line.translateToString(true) + '\n'
}
}
const blob = new Blob([content], { type: 'text/plain' })
const url = URL.createObjectURL(blob)
const a = document.createElement('a')
a.href = url
a.download = `${podName}-${Date.now()}.log`
a.click()
URL.revokeObjectURL(url)
}
```

# Gestión de Estado
## React Query para Server State
```typescript
// hooks/useTasks.ts
import { useQuery, useMutation, useQueryClient } from '@tanstack/react-query'
import { api } from '@/api/client'
import { Task, CreateTaskInput, UpdateTaskInput } from '@/types/task'
import toast from 'react-hot-toast'
export function useTasks(filters?: { projectId?: string; state?: string }) {
return useQuery({
queryKey: ['tasks', filters],
queryFn: async () => {
const { data } = await api.get<{ tasks: Task[] }>('/tasks', { params: filters })
return data.tasks
},
})
}
export function useTask(taskId: string) {
return useQuery({
queryKey: ['tasks', taskId],
queryFn: async () => {
const { data } = await api.get<{ task: Task }>(`/tasks/${taskId}`)
return data.task
},
enabled: !!taskId,
})
}
export function useCreateTask() {
const queryClient = useQueryClient()
return useMutation({
mutationFn: async (input: CreateTaskInput) => {
const { data } = await api.post('/tasks', input)
return data
},
onSuccess: () => {
queryClient.invalidateQueries({ queryKey: ['tasks'] })
},
})
}
export function useUpdateTask() {
const queryClient = useQueryClient()
return useMutation({
mutationFn: async ({ taskId, updates }: { taskId: string; updates: UpdateTaskInput }) => {
const { data } = await api.patch(`/tasks/${taskId}`, updates)
return data
},
onSuccess: (_, variables) => {
queryClient.invalidateQueries({ queryKey: ['tasks'] })
queryClient.invalidateQueries({ queryKey: ['tasks', variables.taskId] })
},
})
}
export function useRespondToQuestion() {
const queryClient = useQueryClient()
return useMutation({
mutationFn: async ({
taskId,
questionId,
response,
}: {
taskId: string
questionId: string
response: string
}) => {
const { data } = await api.post(`/tasks/${taskId}/respond`, {
questionId,
response,
})
return data
},
onSuccess: (_, variables) => {
toast.success('Respuesta enviada')
queryClient.invalidateQueries({ queryKey: ['tasks', variables.taskId] })
},
})
}
export function useApproveTask() {
const queryClient = useQueryClient()
return useMutation({
mutationFn: async (taskId: string) => {
const { data } = await api.post(`/tasks/${taskId}/approve`)
return data
},
onSuccess: () => {
toast.success('Tarea aprobada')
queryClient.invalidateQueries({ queryKey: ['tasks'] })
},
})
}
```
```typescript
// hooks/useProjects.ts
import { useQuery, useMutation, useQueryClient } from '@tanstack/react-query'
import { api } from '@/api/client'
import { Project, CreateProjectInput } from '@/types/project'
export function useProjects() {
return useQuery({
queryKey: ['projects'],
queryFn: async () => {
const { data } = await api.get<{ projects: Project[] }>('/projects')
return data.projects
},
})
}
export function useProject(projectId: string) {
return useQuery({
queryKey: ['projects', projectId],
queryFn: async () => {
const { data } = await api.get<{ project: Project }>(`/projects/${projectId}`)
return data.project
},
enabled: !!projectId,
})
}
export function useCreateProject() {
const queryClient = useQueryClient()
return useMutation({
mutationFn: async (input: CreateProjectInput) => {
const { data } = await api.post('/projects', input)
return data
},
onSuccess: () => {
queryClient.invalidateQueries({ queryKey: ['projects'] })
},
})
}
```
```typescript
// hooks/useAgents.ts
import { useQuery } from '@tanstack/react-query'
import { api } from '@/api/client'
import { Agent } from '@/types/agent'
export function useAgents() {
return useQuery({
queryKey: ['agents'],
queryFn: async () => {
const { data } = await api.get<{ agents: Agent[] }>('/agents')
return data.agents
},
refetchInterval: 5000, // Refetch every 5s
})
}
export function useAgent(agentId: string) {
return useQuery({
queryKey: ['agents', agentId],
queryFn: async () => {
const { data } = await api.get<{ agent: Agent }>(`/agents/${agentId}`)
return data.agent
},
enabled: !!agentId,
refetchInterval: 3000,
})
}
export function useAgentLogs(agentId: string, limit = 100) {
return useQuery({
queryKey: ['agents', agentId, 'logs', limit],
queryFn: async () => {
const { data } = await api.get(`/agents/${agentId}/logs`, {
params: { limit },
})
return data.logs
},
enabled: !!agentId,
})
}
```
## Zustand para Client State
```typescript
// store/authStore.ts
import { create } from 'zustand'
import { persist } from 'zustand/middleware'
interface User {
id: string
email: string
name: string
}
interface AuthState {
user: User | null
token: string | null
isAuthenticated: boolean
login: (token: string, user: User) => void
logout: () => void
}
export const useAuthStore = create<AuthState>()(
persist(
(set) => ({
user: null,
token: null,
isAuthenticated: false,
login: (token, user) => {
set({ token, user, isAuthenticated: true })
},
logout: () => {
set({ user: null, token: null, isAuthenticated: false })
},
}),
{
name: 'auth-storage',
}
)
)
```
```typescript
// store/uiStore.ts
import { create } from 'zustand'
interface UIState {
sidebarOpen: boolean
activeModal: string | null
toggleSidebar: () => void
openModal: (modalId: string) => void
closeModal: () => void
}
export const useUIStore = create<UIState>((set) => ({
sidebarOpen: true,
activeModal: null,
toggleSidebar: () => set((state) => ({ sidebarOpen: !state.sidebarOpen })),
openModal: (modalId) => set({ activeModal: modalId }),
closeModal: () => set({ activeModal: null }),
}))
```
```typescript
// store/terminalStore.ts
import { create } from 'zustand'
interface TerminalTab {
id: string
agentId: string
podName: string
isActive: boolean
}
interface TerminalState {
tabs: TerminalTab[]
activeTabId: string | null
openTerminal: (agentId: string, podName: string) => void
closeTerminal: (tabId: string) => void
setActiveTab: (tabId: string) => void
}
export const useTerminalStore = create<TerminalState>((set) => ({
tabs: [],
activeTabId: null,
openTerminal: (agentId, podName) =>
set((state) => {
const existingTab = state.tabs.find((t) => t.agentId === agentId)
if (existingTab) {
return {
tabs: state.tabs.map((t) => ({
...t,
isActive: t.id === existingTab.id,
})),
activeTabId: existingTab.id,
}
}
const newTab: TerminalTab = {
id: `term-${Date.now()}`,
agentId,
podName,
isActive: true,
}
return {
tabs: [
...state.tabs.map((t) => ({ ...t, isActive: false })),
newTab,
],
activeTabId: newTab.id,
}
}),
closeTerminal: (tabId) =>
set((state) => {
const newTabs = state.tabs.filter((t) => t.id !== tabId)
const newActiveTab = newTabs.length > 0 ? newTabs[0].id : null
return {
tabs: newTabs.map((t, i) => ({
...t,
isActive: i === 0,
})),
activeTabId: newActiveTab,
}
}),
setActiveTab: (tabId) =>
set((state) => ({
tabs: state.tabs.map((t) => ({
...t,
isActive: t.id === tabId,
})),
activeTabId: tabId,
})),
}))
```
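La parte delicada de `openTerminal` (reutilizar la pestaña si el agente ya tiene una abierta) es un reducer puro sobre el array de pestañas; aislado de zustand puede esbozarse y probarse sin React (nombres hipotéticos, no el código real del store):

```typescript
// Esbozo: la lógica de apertura de pestañas como función pura.
interface Tab {
  id: string
  agentId: string
  podName: string
  isActive: boolean
}

function openTab(tabs: Tab[], agentId: string, podName: string, newId: string): Tab[] {
  const existing = tabs.find((t) => t.agentId === agentId)
  if (existing) {
    // El agente ya tiene pestaña: solo cambia cuál está activa
    return tabs.map((t) => ({ ...t, isActive: t.id === existing.id }))
  }
  // Desactiva las demás y añade la nueva al final
  return [
    ...tabs.map((t) => ({ ...t, isActive: false })),
    { id: newId, agentId, podName, isActive: true },
  ]
}
```

Separar el reducer del store facilita probar la deduplicación sin montar componentes.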
## WebSocket Hook
```typescript
// hooks/useWebSocket.tsx (contiene JSX en los toasts, por eso extensión .tsx)
import { useEffect } from 'react'
import { useQueryClient } from '@tanstack/react-query'
import { io, Socket } from 'socket.io-client'
import { useAuthStore } from '@/store/authStore'
import toast from 'react-hot-toast'
let socket: Socket | null = null
export function useWebSocket() {
const queryClient = useQueryClient()
const token = useAuthStore((state) => state.token)
useEffect(() => {
if (!token) return
// Initialize socket
socket = io(import.meta.env.VITE_WS_URL || 'http://localhost:3000', {
auth: { token },
})
socket.on('connect', () => {
console.log('WebSocket connected')
})
socket.on('disconnect', () => {
console.log('WebSocket disconnected')
})
// Task events
socket.on('task:created', (data) => {
queryClient.invalidateQueries({ queryKey: ['tasks'] })
toast.success(`Nueva tarea: ${data.title}`)
})
socket.on('task:status_changed', (data) => {
queryClient.invalidateQueries({ queryKey: ['tasks'] })
queryClient.invalidateQueries({ queryKey: ['tasks', data.taskId] })
if (data.newState === 'ready_to_test') {
toast.success('Tarea lista para probar!', {
duration: 5000,
})
}
})
socket.on('task:needs_input', (data) => {
queryClient.invalidateQueries({ queryKey: ['tasks', data.taskId] })
toast((t) => (
<div>
<p className="font-medium">El agente necesita información</p>
<p className="text-sm text-gray-600 mt-1">{data.question}</p>
<button
onClick={() => {
// Navigate to task
window.location.href = `/tasks/${data.taskId}`
toast.dismiss(t.id)
}}
className="mt-2 text-sm text-primary-600 hover:underline"
>
Ver tarea
</button>
</div>
), {
duration: 10000,
icon: '❓',
})
})
    socket.on('task:pr_created', (data) => {
      // react-hot-toast no tiene opción `action`; se usa JSX en el mensaje
      toast.success(
        <span>
          Pull Request creado!{' '}
          <a href={data.prUrl} target="_blank" rel="noreferrer" className="underline">
            Ver PR
          </a>
        </span>
      )
    })
    socket.on('task:ready_to_test', (data) => {
      toast.success(
        <span>
          Preview deploy completado!{' '}
          <a href={data.previewUrl} target="_blank" rel="noreferrer" className="underline">
            Ver Preview
          </a>
        </span>
      )
    })
// Agent events
    socket.on('agent:status', () => {
queryClient.invalidateQueries({ queryKey: ['agents'] })
})
// Deploy events
socket.on('deploy:started', (data) => {
toast.loading(`Desplegando a ${data.environment}...`, {
id: `deploy-${data.deploymentId}`,
})
})
    socket.on('deploy:completed', (data) => {
      toast.success(
        <span>
          Deploy completado: {data.environment}{' '}
          <a href={data.url} target="_blank" rel="noreferrer" className="underline">
            Abrir
          </a>
        </span>,
        { id: `deploy-${data.deploymentId}` }
      )
    })
socket.on('deploy:failed', (data) => {
toast.error(`Deploy falló: ${data.environment}`, {
id: `deploy-${data.deploymentId}`,
})
})
return () => {
if (socket) {
socket.disconnect()
socket = null
}
}
}, [token, queryClient])
return socket
}
// Export for manual usage
export function getSocket() {
return socket
}
```
## API Client
```typescript
// api/client.ts
import axios from 'axios'
import { useAuthStore } from '@/store/authStore'
export const api = axios.create({
baseURL: import.meta.env.VITE_API_URL || 'http://localhost:3000/api',
timeout: 30000,
})
// Request interceptor
api.interceptors.request.use((config) => {
const token = useAuthStore.getState().token
if (token) {
config.headers.Authorization = `Bearer ${token}`
}
return config
})
// Response interceptor
api.interceptors.response.use(
(response) => response,
(error) => {
if (error.response?.status === 401) {
useAuthStore.getState().logout()
window.location.href = '/login'
}
return Promise.reject(error)
}
)
```

# Estructura del Frontend
## Árbol de Directorios
```
frontend/
├── public/
│ └── favicon.ico
├── src/
│ ├── main.tsx # Entry point
│ ├── App.tsx # App root
│ │
│ ├── pages/
│ │ ├── Dashboard.tsx # Main dashboard
│ │ ├── ProjectView.tsx # Single project view
│ │ ├── TaskDetail.tsx # Task details modal
│ │ └── AgentsView.tsx # Agents monitoring
│ │
│ ├── components/
│ │ ├── kanban/
│ │ │ ├── KanbanBoard.tsx
│ │ │ ├── KanbanColumn.tsx
│ │ │ ├── TaskCard.tsx
│ │ │ └── TaskCardActions.tsx
│ │ │
│ │ ├── terminal/
│ │ │ ├── WebTerminal.tsx
│ │ │ └── TerminalTab.tsx
│ │ │
│ │ ├── projects/
│ │ │ ├── ProjectCard.tsx
│ │ │ ├── ProjectForm.tsx
│ │ │ └── ProjectSettings.tsx
│ │ │
│ │ ├── tasks/
│ │ │ ├── TaskForm.tsx
│ │ │ ├── TaskQuestion.tsx
│ │ │ └── TaskTimeline.tsx
│ │ │
│ │ ├── agents/
│ │ │ ├── AgentCard.tsx
│ │ │ ├── AgentStatus.tsx
│ │ │ └── AgentLogs.tsx
│ │ │
│ │ ├── deployments/
│ │ │ ├── DeploymentList.tsx
│ │ │ ├── DeploymentCard.tsx
│ │ │ └── DeployButton.tsx
│ │ │
│ │ ├── ui/
│ │ │ ├── Button.tsx
│ │ │ ├── Modal.tsx
│ │ │ ├── Card.tsx
│ │ │ ├── Badge.tsx
│ │ │ ├── Input.tsx
│ │ │ ├── Select.tsx
│ │ │ └── Spinner.tsx
│ │ │
│ │ └── layout/
│ │ ├── Sidebar.tsx
│ │ ├── Header.tsx
│ │ ├── Layout.tsx
│ │ └── Navigation.tsx
│ │
│ ├── hooks/
│ │ ├── useProjects.ts
│ │ ├── useTasks.ts
│ │ ├── useAgents.ts
│ │ ├── useWebSocket.ts
│ │ ├── useTaskActions.ts
│ │ └── useDeployments.ts
│ │
│ ├── store/
│ │ ├── authStore.ts
│ │ ├── uiStore.ts
│ │ └── terminalStore.ts
│ │
│ ├── api/
│ │ ├── client.ts # Axios instance
│ │ ├── projects.ts
│ │ ├── tasks.ts
│ │ ├── agents.ts
│ │ ├── deployments.ts
│ │ └── websocket.ts
│ │
│ ├── types/
│ │ ├── project.ts
│ │ ├── task.ts
│ │ ├── agent.ts
│ │ ├── deployment.ts
│ │ └── common.ts
│ │
│ ├── utils/
│ │ ├── format.ts
│ │ ├── validation.ts
│ │ └── constants.ts
│ │
│ └── styles/
│ └── index.css # Tailwind imports
├── index.html
├── vite.config.ts
├── tailwind.config.js
├── tsconfig.json
├── package.json
└── README.md
```
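Los tipos `Task` y `TaskState` se importan en todos los componentes pero no aparecen en esta sección; un esbozo de `types/task.ts` inferido de su uso (los campos exactos son una suposición, no la definición real del proyecto):

```typescript
// types/task.ts — esbozo inferido del uso en los componentes;
// los campos son una suposición, no la definición real del proyecto.
type TaskState =
  | 'backlog'
  | 'in_progress'
  | 'needs_input'
  | 'ready_to_test'
  | 'approved'
  | 'staging'
  | 'production'

type TaskPriority = 'low' | 'medium' | 'high' | 'urgent'

interface Task {
  id: string
  projectId: string
  title: string
  description?: string
  state: TaskState
  priority: TaskPriority
  branchName?: string
  previewUrl?: string
  actualDurationMinutes?: number
  assignedAgent?: { podName: string }
}

// Ejemplo mínimo de uso:
const sample: Task = {
  id: 't-1',
  projectId: 'p-1',
  title: 'Implementar login',
  state: 'backlog',
  priority: 'medium',
}
```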
## Setup Inicial
### package.json
```json
{
"name": "aiworker-frontend",
"version": "1.0.0",
"type": "module",
"scripts": {
"dev": "vite",
"build": "tsc && vite build",
"preview": "vite preview",
"lint": "eslint src --ext ts,tsx",
"format": "prettier --write src/**/*.{ts,tsx}"
},
"dependencies": {
"react": "19.2.0",
"react-dom": "19.2.0",
"react-router-dom": "^7.1.3",
"@tanstack/react-query": "^6.3.0",
"zustand": "^5.0.3",
"socket.io-client": "^4.8.1",
"axios": "^1.7.9",
"@dnd-kit/core": "^6.3.1",
"@dnd-kit/sortable": "^9.1.0",
"xterm": "^5.5.0",
"xterm-addon-fit": "^0.10.0",
"xterm-addon-web-links": "^0.11.0",
"lucide-react": "^0.469.0",
"react-hot-toast": "^2.4.1",
"recharts": "^2.15.0",
"date-fns": "^4.1.0",
"clsx": "^2.1.1"
},
"devDependencies": {
"@types/react": "^19.0.6",
"@types/react-dom": "^19.0.2",
"@vitejs/plugin-react": "^4.3.4",
"typescript": "^5.7.2",
"vite": "^6.0.7",
"tailwindcss": "^4.0.0",
"autoprefixer": "^10.4.21",
"postcss": "^8.4.49",
"eslint": "^9.18.0",
"prettier": "^3.4.2"
}
}
```
### vite.config.ts
```typescript
import { defineConfig } from 'vite'
import react from '@vitejs/plugin-react'
import path from 'path'
export default defineConfig({
plugins: [react()],
resolve: {
alias: {
'@': path.resolve(__dirname, './src'),
},
},
server: {
port: 5173,
proxy: {
'/api': {
target: 'http://localhost:3000',
changeOrigin: true,
},
'/socket.io': {
target: 'http://localhost:3000',
ws: true,
},
},
},
})
```
### tailwind.config.js
```javascript
/** @type {import('tailwindcss').Config} */
export default {
content: [
"./index.html",
"./src/**/*.{js,ts,jsx,tsx}",
],
theme: {
extend: {
colors: {
primary: {
50: '#f0f9ff',
100: '#e0f2fe',
500: '#0ea5e9',
600: '#0284c7',
700: '#0369a1',
},
success: {
50: '#f0fdf4',
500: '#22c55e',
600: '#16a34a',
},
warning: {
50: '#fefce8',
500: '#eab308',
600: '#ca8a04',
},
error: {
50: '#fef2f2',
500: '#ef4444',
600: '#dc2626',
},
},
},
},
plugins: [],
}
```
### tsconfig.json
```json
{
"compilerOptions": {
"target": "ES2020",
"useDefineForClassFields": true,
"lib": ["ES2020", "DOM", "DOM.Iterable"],
"module": "ESNext",
"skipLibCheck": true,
"moduleResolution": "bundler",
"allowImportingTsExtensions": true,
"resolveJsonModule": true,
"isolatedModules": true,
"noEmit": true,
"jsx": "react-jsx",
"strict": true,
"noUnusedLocals": true,
"noUnusedParameters": true,
"noFallthroughCasesInSwitch": true,
"baseUrl": ".",
"paths": {
"@/*": ["./src/*"]
}
},
"include": ["src"],
"references": [{ "path": "./tsconfig.node.json" }]
}
```
## Entry Points
### main.tsx
```typescript
import React from 'react'
import ReactDOM from 'react-dom/client'
import { BrowserRouter } from 'react-router-dom'
import { QueryClient, QueryClientProvider } from '@tanstack/react-query'
import { Toaster } from 'react-hot-toast'
import App from './App'
import './styles/index.css'
const queryClient = new QueryClient({
defaultOptions: {
queries: {
staleTime: 1000 * 60 * 5, // 5 minutes
refetchOnWindowFocus: false,
},
},
})
ReactDOM.createRoot(document.getElementById('root')!).render(
<React.StrictMode>
<BrowserRouter>
<QueryClientProvider client={queryClient}>
<App />
<Toaster position="top-right" />
</QueryClientProvider>
</BrowserRouter>
</React.StrictMode>
)
```
### App.tsx
```typescript
import { Routes, Route } from 'react-router-dom'
import Layout from './components/layout/Layout'
import Dashboard from './pages/Dashboard'
import ProjectView from './pages/ProjectView'
import AgentsView from './pages/AgentsView'
import { WebSocketProvider } from './api/websocket'
function App() {
return (
<WebSocketProvider>
<Layout>
<Routes>
<Route path="/" element={<Dashboard />} />
<Route path="/projects/:projectId" element={<ProjectView />} />
<Route path="/agents" element={<AgentsView />} />
</Routes>
</Layout>
</WebSocketProvider>
)
}
export default App
```
### styles/index.css
```css
@import 'tailwindcss/base';
@import 'tailwindcss/components';
@import 'tailwindcss/utilities';
@layer base {
body {
@apply bg-gray-50 text-gray-900;
}
}
@layer components {
.card {
@apply bg-white rounded-lg shadow-sm border border-gray-200 p-4;
}
.btn {
@apply px-4 py-2 rounded-lg font-medium transition-colors;
}
.btn-primary {
@apply btn bg-primary-600 text-white hover:bg-primary-700;
}
.btn-secondary {
@apply btn bg-gray-200 text-gray-700 hover:bg-gray-300;
}
.badge {
@apply inline-flex items-center px-2.5 py-0.5 rounded-full text-xs font-medium;
}
}
```
## Commands
```bash
# Development
bun run dev
# Build
bun run build
# Preview the build
bun run preview
# Lint
bun run lint
# Format
bun run format
```
## Environment Variables
```bash
# .env
VITE_API_URL=http://localhost:3000
VITE_WS_URL=ws://localhost:3000
```
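The frontend reads these variables at build time via `import.meta.env`. A minimal sketch of how the API client could derive its request URLs from them (the helper names and the localhost fallback are assumptions, not part of the repo; the real `api/client.ts` would feed the base into `axios.create`):

```typescript
// Hypothetical helper: resolve the API base URL, falling back to localhost
// when VITE_API_URL is unset (e.g. in tests).
export function resolveApiBase(env: Record<string, string | undefined>): string {
  return env.VITE_API_URL ?? 'http://localhost:3000'
}

// Join the base with an /api path, normalizing slashes on both sides.
export function apiUrl(env: Record<string, string | undefined>, path: string): string {
  const base = resolveApiBase(env).replace(/\/+$/, '')
  return `${base}/api/${path.replace(/^\/+/, '')}`
}
```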
## Component Structure
Components follow this structure:
```typescript
// Imports
import { useState } from 'react'
import { SomeIcon } from 'lucide-react'
// Types
interface ComponentProps {
prop1: string
prop2?: number
}
// Component
export function Component({ prop1, prop2 = 0 }: ComponentProps) {
// State
const [state, setState] = useState<string>('')
// Handlers
const handleAction = () => {
// ...
}
// Render
return (
<div className="component">
{/* JSX */}
</div>
)
}
```

---

*File: `docs/03-frontend/kanban.md` (444 lines)*
# Kanban Board - Detailed Implementation
## Drag & Drop with dnd-kit
### Configuring the DndContext
```typescript
// components/kanban/KanbanBoard.tsx
import {
DndContext,
DragEndEvent,
DragOverEvent,
DragStartEvent,
PointerSensor,
useSensor,
useSensors,
DragOverlay,
} from '@dnd-kit/core'
import { useState } from 'react'
export function KanbanBoard({ projectId }: KanbanBoardProps) {
const [activeId, setActiveId] = useState<string | null>(null)
const { data: tasks = [] } = useTasks({ projectId })
const updateTask = useUpdateTask()
// Configure sensors
const sensors = useSensors(
useSensor(PointerSensor, {
activationConstraint: {
distance: 8, // Require 8px movement before dragging starts
},
})
)
const handleDragStart = (event: DragStartEvent) => {
setActiveId(event.active.id as string)
}
const handleDragEnd = (event: DragEndEvent) => {
const { active, over } = event
setActiveId(null)
if (!over || active.id === over.id) return
const taskId = active.id as string
const newState = over.id as TaskState
// Optimistic update
updateTask.mutate({
taskId,
updates: { state: newState },
})
}
  // Group tasks by column; COLUMNS, Task and TaskState come from the shared kanban types
  const tasksByState = COLUMNS.reduce<Record<TaskState, Task[]>>(
    (acc, col) => ({ ...acc, [col.id]: tasks.filter((t) => t.state === col.id) }),
    {} as Record<TaskState, Task[]>
  )

  const activeTask = tasks.find((t) => t.id === activeId)
return (
<DndContext
sensors={sensors}
onDragStart={handleDragStart}
onDragEnd={handleDragEnd}
>
<div className="flex gap-4 overflow-x-auto pb-4">
{COLUMNS.map((column) => (
<KanbanColumn
key={column.id}
id={column.id}
title={column.title}
color={column.color}
tasks={tasksByState[column.id]}
/>
))}
</div>
{/* Drag overlay for better UX */}
<DragOverlay>
{activeTask ? <TaskCard task={activeTask} /> : null}
</DragOverlay>
</DndContext>
)
}
```
### Column as a Droppable
```typescript
// components/kanban/KanbanColumn.tsx
import { useDroppable } from '@dnd-kit/core'
import { SortableContext, verticalListSortingStrategy } from '@dnd-kit/sortable'
export default function KanbanColumn({ id, title, color, tasks }: KanbanColumnProps) {
const { setNodeRef, isOver } = useDroppable({ id })
return (
<div className="flex flex-col w-80 flex-shrink-0">
{/* Header */}
      {/* Dynamic classes like `bg-${color}-100` are purged by Tailwind unless safelisted in tailwind.config.js */}
      <div className={`bg-${color}-100 border-${color}-300 border-t-4 rounded-t-lg p-3`}>
<h3 className="font-semibold text-gray-900">
{title}
<span className="ml-2 text-sm text-gray-500">({tasks.length})</span>
</h3>
</div>
{/* Drop zone */}
<div
ref={setNodeRef}
className={`
flex-1 bg-gray-50 border border-t-0 border-gray-200 rounded-b-lg p-3 min-h-[200px]
${isOver ? 'bg-blue-50 border-blue-300' : ''}
transition-colors
`}
>
<SortableContext items={tasks.map((t) => t.id)} strategy={verticalListSortingStrategy}>
<div className="space-y-3">
{tasks.map((task) => (
<TaskCard key={task.id} task={task} />
))}
</div>
</SortableContext>
{tasks.length === 0 && (
<div className="text-center text-gray-400 text-sm py-8">
{isOver ? 'Suelta aquí' : 'Sin tareas'}
</div>
)}
</div>
</div>
)
}
```
### Task Card as a Draggable
```typescript
// components/kanban/TaskCard.tsx
import { useSortable } from '@dnd-kit/sortable'
import { CSS } from '@dnd-kit/utilities'
export default function TaskCard({ task }: TaskCardProps) {
const {
attributes,
listeners,
setNodeRef,
transform,
transition,
isDragging,
} = useSortable({
id: task.id,
data: {
type: 'task',
task,
},
})
const style = {
transform: CSS.Transform.toString(transform),
transition,
opacity: isDragging ? 0.5 : 1,
cursor: 'move',
}
return (
<div
ref={setNodeRef}
style={style}
{...attributes}
{...listeners}
className="card hover:shadow-md transition-shadow"
>
{/* Task content */}
</div>
)
}
```
## Quick Actions
```typescript
// components/kanban/TaskCardActions.tsx
import { MoreVertical, ExternalLink, MessageSquare, CheckCircle, XCircle } from 'lucide-react'
import { Task } from '@/types/task'
import { useApproveTask, useRejectTask } from '@/hooks/useTasks'
interface TaskCardActionsProps {
task: Task
}
export function TaskCardActions({ task }: TaskCardActionsProps) {
const approveTask = useApproveTask()
const rejectTask = useRejectTask()
const handleApprove = (e: React.MouseEvent) => {
e.stopPropagation()
if (confirm('¿Aprobar esta tarea?')) {
approveTask.mutate(task.id)
}
}
const handleReject = (e: React.MouseEvent) => {
e.stopPropagation()
const reason = prompt('Razón del rechazo:')
if (reason) {
rejectTask.mutate({ taskId: task.id, reason })
}
}
return (
<div className="flex items-center gap-1">
{/* Preview link */}
{task.previewUrl && (
<a
href={task.previewUrl}
target="_blank"
rel="noopener noreferrer"
className="p-1 hover:bg-gray-100 rounded"
onClick={(e) => e.stopPropagation()}
title="Abrir preview"
>
<ExternalLink className="w-4 h-4 text-gray-600" />
</a>
)}
{/* Questions */}
{task.state === 'needs_input' && (
<button
className="p-1 hover:bg-yellow-100 rounded"
title="Responder pregunta"
>
<MessageSquare className="w-4 h-4 text-yellow-600" />
</button>
)}
{/* Approve/Reject for ready_to_test */}
{task.state === 'ready_to_test' && (
<>
<button
onClick={handleApprove}
className="p-1 hover:bg-green-100 rounded"
title="Aprobar"
>
<CheckCircle className="w-4 h-4 text-green-600" />
</button>
<button
onClick={handleReject}
className="p-1 hover:bg-red-100 rounded"
title="Rechazar"
>
<XCircle className="w-4 h-4 text-red-600" />
</button>
</>
)}
{/* More actions */}
<button className="p-1 hover:bg-gray-100 rounded">
<MoreVertical className="w-4 h-4 text-gray-600" />
</button>
</div>
)
}
```
## Filters and Search
```typescript
// components/kanban/KanbanFilters.tsx
import { useState } from 'react'
import { Search, Filter } from 'lucide-react'
import { Input } from '@/components/ui/Input'
import { Select } from '@/components/ui/Select'
interface KanbanFiltersProps {
onFilterChange: (filters: TaskFilters) => void
}
export function KanbanFilters({ onFilterChange }: KanbanFiltersProps) {
const [search, setSearch] = useState('')
const [priority, setPriority] = useState<string>('all')
const [assignedAgent, setAssignedAgent] = useState<string>('all')
const handleSearchChange = (value: string) => {
setSearch(value)
onFilterChange({ search: value, priority, assignedAgent })
}
return (
<div className="flex items-center gap-3 p-4 bg-white rounded-lg shadow-sm mb-4">
<div className="flex-1 relative">
<Search className="absolute left-3 top-1/2 transform -translate-y-1/2 w-5 h-5 text-gray-400" />
<input
type="text"
value={search}
onChange={(e) => handleSearchChange(e.target.value)}
placeholder="Buscar tareas..."
className="w-full pl-10 pr-4 py-2 border border-gray-300 rounded-lg focus:ring-2 focus:ring-primary-500"
/>
</div>
<Select
value={priority}
onChange={(e) => {
setPriority(e.target.value)
onFilterChange({ search, priority: e.target.value, assignedAgent })
}}
options={[
{ value: 'all', label: 'Todas las prioridades' },
{ value: 'urgent', label: 'Urgente' },
{ value: 'high', label: 'Alta' },
{ value: 'medium', label: 'Media' },
{ value: 'low', label: 'Baja' },
]}
className="w-48"
/>
<button className="btn-secondary">
<Filter className="w-4 h-4 mr-2" />
Más filtros
</button>
</div>
)
}
```
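`KanbanFilters` only emits the filter values; applying them is left to the board. A sketch of how `TaskFilters` could be applied client-side (the `TaskFilters` shape and field names are assumptions based on the component above, and the task type is reduced to the fields the filters touch):

```typescript
// Minimal task shape for filtering (assumed names, see types/task.ts in the repo).
interface FilterableTask {
  title: string
  priority: string
  assignedAgentId?: string
}

interface TaskFilters {
  search: string
  priority: string // 'all' | 'urgent' | 'high' | 'medium' | 'low'
  assignedAgent: string // 'all' | agent id
}

export function applyTaskFilters<T extends FilterableTask>(tasks: T[], f: TaskFilters): T[] {
  const q = f.search.trim().toLowerCase()
  return tasks.filter(
    (t) =>
      (!q || t.title.toLowerCase().includes(q)) &&
      (f.priority === 'all' || t.priority === f.priority) &&
      (f.assignedAgent === 'all' || t.assignedAgentId === f.assignedAgent)
  )
}
```

Keeping this as a pure function makes it trivial to unit-test and to memoize with `useMemo` in the board component.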
## Bulk Actions
```typescript
// components/kanban/KanbanBulkActions.tsx
import { useState } from 'react'
import { CheckSquare, GitMerge, Trash2 } from 'lucide-react'
import { Task } from '@/types/task'
interface KanbanBulkActionsProps {
selectedTasks: Task[]
onMergeToStaging: (taskIds: string[]) => void
onClearSelection: () => void
}
export function KanbanBulkActions({
selectedTasks,
onMergeToStaging,
onClearSelection,
}: KanbanBulkActionsProps) {
if (selectedTasks.length === 0) return null
const approvedTasks = selectedTasks.filter((t) => t.state === 'approved')
return (
<div className="fixed bottom-4 left-1/2 transform -translate-x-1/2 bg-white rounded-lg shadow-xl border border-gray-200 p-4">
<div className="flex items-center gap-4">
<div className="flex items-center gap-2">
<CheckSquare className="w-5 h-5 text-primary-600" />
<span className="font-medium">
{selectedTasks.length} tarea{selectedTasks.length !== 1 ? 's' : ''} seleccionada{selectedTasks.length !== 1 ? 's' : ''}
</span>
</div>
<div className="h-6 w-px bg-gray-300" />
{approvedTasks.length >= 2 && (
<button
onClick={() => onMergeToStaging(approvedTasks.map((t) => t.id))}
className="btn-primary flex items-center gap-2"
>
<GitMerge className="w-4 h-4" />
Merge a Staging ({approvedTasks.length})
</button>
)}
<button onClick={onClearSelection} className="btn-secondary">
Limpiar selección
</button>
</div>
</div>
)
}
```
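`KanbanBulkActions` receives `selectedTasks` but the selection mechanism itself is not shown. A minimal immutable toggle helper the board could keep in component state (the helper name is an assumption):

```typescript
// Toggle a task id in a selection list without mutating the original array,
// so it can be used directly with React state setters.
export function toggleTaskSelection(selected: string[], taskId: string): string[] {
  return selected.includes(taskId)
    ? selected.filter((id) => id !== taskId)
    : [...selected, taskId]
}
```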
## Kanban Statistics
```typescript
// components/kanban/KanbanStats.tsx
import { Task } from '@/types/task'
import { Activity, CheckCircle, Clock, AlertTriangle } from 'lucide-react'
interface KanbanStatsProps {
tasks: Task[]
}
export function KanbanStats({ tasks }: KanbanStatsProps) {
  const tasksWithDuration = tasks.filter((t) => t.actualDurationMinutes)
  const stats = {
    total: tasks.length,
    inProgress: tasks.filter((t) => t.state === 'in_progress').length,
    completed: tasks.filter((t) => t.state === 'production').length,
    needsInput: tasks.filter((t) => t.state === 'needs_input').length,
    // Average only over tasks that actually report a duration
    avgDuration: tasksWithDuration.length
      ? tasksWithDuration.reduce((acc, t) => acc + (t.actualDurationMinutes || 0), 0) /
        tasksWithDuration.length
      : 0,
  }
return (
<div className="grid grid-cols-4 gap-4 mb-6">
<div className="card">
<div className="flex items-center justify-between">
<div>
<p className="text-sm text-gray-600">Total</p>
<p className="text-2xl font-bold text-gray-900">{stats.total}</p>
</div>
<Activity className="w-8 h-8 text-gray-400" />
</div>
</div>
<div className="card">
<div className="flex items-center justify-between">
<div>
<p className="text-sm text-gray-600">En Progreso</p>
<p className="text-2xl font-bold text-blue-600">{stats.inProgress}</p>
</div>
<Clock className="w-8 h-8 text-blue-400" />
</div>
</div>
<div className="card">
<div className="flex items-center justify-between">
<div>
<p className="text-sm text-gray-600">Completadas</p>
<p className="text-2xl font-bold text-green-600">{stats.completed}</p>
</div>
<CheckCircle className="w-8 h-8 text-green-400" />
</div>
</div>
<div className="card">
<div className="flex items-center justify-between">
<div>
<p className="text-sm text-gray-600">Necesitan Input</p>
<p className="text-2xl font-bold text-yellow-600">{stats.needsInput}</p>
</div>
<AlertTriangle className="w-8 h-8 text-yellow-400" />
</div>
</div>
</div>
)
}
```
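The stats derivation above can be factored into a pure helper so it can be unit-tested independently of React (the helper name is an assumption; the task type is reduced to the two fields the stats use):

```typescript
// Minimal task shape for stats computation (assumed names).
interface StatTask {
  state: string
  actualDurationMinutes?: number
}

export function computeKanbanStats(tasks: StatTask[]) {
  // Average only over tasks that actually report a duration
  const withDuration = tasks.filter((t) => t.actualDurationMinutes != null)
  return {
    total: tasks.length,
    inProgress: tasks.filter((t) => t.state === 'in_progress').length,
    completed: tasks.filter((t) => t.state === 'production').length,
    needsInput: tasks.filter((t) => t.state === 'needs_input').length,
    avgDuration: withDuration.length
      ? withDuration.reduce((acc, t) => acc + (t.actualDurationMinutes ?? 0), 0) /
        withDuration.length
      : 0,
  }
}
```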

---
# Kubernetes Cluster Setup
## Requirements
- Kubernetes 1.28+
- kubectl CLI
- helm 3.x
- 4 GB RAM minimum
- 20 GB storage
## Local Installation (Kind/Minikube)
### With Kind (recommended for development)
```bash
# Install kind
brew install kind  # macOS
# or
curl -Lo ./kind https://kind.sigs.k8s.io/dl/latest/kind-linux-amd64
chmod +x ./kind
sudo mv ./kind /usr/local/bin/kind
# Create a cluster with a custom configuration
cat <<EOF | kind create cluster --name aiworker --config=-
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
kubeadmConfigPatches:
- |
kind: InitConfiguration
nodeRegistration:
kubeletExtraArgs:
node-labels: "ingress-ready=true"
extraPortMappings:
- containerPort: 80
hostPort: 80
protocol: TCP
- containerPort: 443
hostPort: 443
protocol: TCP
- role: worker
- role: worker
EOF
# Verify
kubectl cluster-info --context kind-aiworker
kubectl get nodes
```
### With Minikube
```bash
# Install minikube
brew install minikube  # macOS
# Start the cluster
minikube start --cpus=4 --memory=8192 --disk-size=40g --driver=docker
# Enable addons
minikube addons enable ingress
minikube addons enable metrics-server
minikube addons enable storage-provisioner
# Verify
kubectl get nodes
```
## Cloud Installation
### Google Kubernetes Engine (GKE)
```bash
# Install the gcloud CLI
brew install --cask google-cloud-sdk
# Authenticate
gcloud auth login
gcloud config set project YOUR_PROJECT_ID
# Create the cluster
gcloud container clusters create aiworker \
--zone us-central1-a \
--num-nodes 3 \
--machine-type n1-standard-2 \
--disk-size 30 \
--enable-autoscaling \
--min-nodes 2 \
--max-nodes 5 \
--enable-autorepair \
--enable-autoupgrade
# Get credentials
gcloud container clusters get-credentials aiworker --zone us-central1-a
# Verify
kubectl get nodes
```
### Amazon EKS
```bash
# Install eksctl
brew install eksctl
# Create the cluster
eksctl create cluster \
--name aiworker \
--region us-west-2 \
--nodegroup-name workers \
--node-type t3.medium \
--nodes 3 \
--nodes-min 2 \
--nodes-max 5 \
--managed
# Verify
kubectl get nodes
```
### Azure AKS
```bash
# Install the Azure CLI
brew install azure-cli
# Login
az login
# Create the resource group
az group create --name aiworker-rg --location eastus
# Create the cluster
az aks create \
--resource-group aiworker-rg \
--name aiworker \
--node-count 3 \
--node-vm-size Standard_D2s_v3 \
--enable-cluster-autoscaler \
--min-count 2 \
--max-count 5 \
--generate-ssh-keys
# Get credentials
az aks get-credentials --resource-group aiworker-rg --name aiworker
# Verify
kubectl get nodes
```
## Installing Base Components
### Nginx Ingress Controller
```bash
# Install with Helm
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx \
--namespace ingress-nginx \
--create-namespace \
--set controller.replicaCount=2 \
--set controller.nodeSelector."kubernetes\.io/os"=linux \
--set controller.admissionWebhooks.patch.nodeSelector."kubernetes\.io/os"=linux
# Verify
kubectl get pods -n ingress-nginx
kubectl get svc -n ingress-nginx
```
### Cert-Manager (TLS)
```bash
# Install cert-manager
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.13.0/cert-manager.yaml
# Verify
kubectl get pods -n cert-manager
# Create a ClusterIssuer for Let's Encrypt
cat <<EOF | kubectl apply -f -
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
name: letsencrypt-prod
spec:
acme:
server: https://acme-v02.api.letsencrypt.org/directory
email: your-email@example.com
privateKeySecretRef:
name: letsencrypt-prod
solvers:
- http01:
ingress:
class: nginx
EOF
```
### Metrics Server
```bash
# Install metrics-server
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
# Verify
kubectl get deployment metrics-server -n kube-system
kubectl top nodes
```
### Prometheus & Grafana (optional)
```bash
# Add the repo
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
# Install kube-prometheus-stack
helm install prometheus prometheus-community/kube-prometheus-stack \
--namespace monitoring \
--create-namespace \
--set prometheus.prometheusSpec.retention=30d \
--set grafana.adminPassword=admin
# Verify
kubectl get pods -n monitoring
# Port-forward to access Grafana
kubectl port-forward -n monitoring svc/prometheus-grafana 3001:80
# http://localhost:3001 (admin/admin)
```
## Creating Namespaces
```bash
# Namespace creation script
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Namespace
metadata:
name: control-plane
labels:
name: control-plane
environment: production
---
apiVersion: v1
kind: Namespace
metadata:
name: agents
labels:
name: agents
environment: production
---
apiVersion: v1
kind: Namespace
metadata:
name: gitea
labels:
name: gitea
environment: production
EOF
# Verify
kubectl get namespaces
```
## RBAC Configuration
```bash
# ServiceAccount for the backend
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
name: aiworker-backend
namespace: control-plane
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: aiworker-backend
rules:
- apiGroups: [""]
resources: ["pods", "pods/log", "pods/exec"]
verbs: ["get", "list", "create", "delete"]
- apiGroups: [""]
resources: ["namespaces"]
verbs: ["get", "list", "create", "delete"]
- apiGroups: ["apps"]
resources: ["deployments", "replicasets"]
verbs: ["get", "list", "create", "update", "patch", "delete"]
- apiGroups: [""]
resources: ["services"]
verbs: ["get", "list", "create", "update", "delete"]
- apiGroups: ["networking.k8s.io"]
resources: ["ingresses"]
verbs: ["get", "list", "create", "update", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: aiworker-backend
subjects:
- kind: ServiceAccount
name: aiworker-backend
namespace: control-plane
roleRef:
kind: ClusterRole
name: aiworker-backend
apiGroup: rbac.authorization.k8s.io
EOF
```
## Secrets and ConfigMaps
```bash
# Create the credentials secret
kubectl create secret generic aiworker-secrets \
--namespace=control-plane \
--from-literal=db-password='your-db-password' \
--from-literal=gitea-token='your-gitea-token' \
--from-literal=anthropic-api-key='your-anthropic-key'
# ConfigMap for configuration
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
name: aiworker-config
namespace: control-plane
data:
GITEA_URL: "http://gitea.gitea.svc.cluster.local:3000"
K8S_DEFAULT_NAMESPACE: "aiworker"
NODE_ENV: "production"
EOF
```
## Storage Classes
```bash
# Create a StorageClass for preview environments (fast SSD)
cat <<EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: fast-ssd
provisioner: kubernetes.io/gce-pd  # Change according to your cloud provider
parameters:
type: pd-ssd
replication-type: none
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
EOF
```
## Network Policies
```bash
# Isolate preview namespaces
cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: preview-isolation
namespace: agents
spec:
podSelector:
matchLabels:
env: preview
policyTypes:
- Ingress
- Egress
ingress:
- from:
- namespaceSelector:
matchLabels:
name: control-plane
egress:
- to:
- namespaceSelector:
matchLabels:
name: gitea
- to:
- namespaceSelector: {}
podSelector:
matchLabels:
k8s-app: kube-dns
EOF
```
## Final Verification
```bash
# Verification script
cat > verify-cluster.sh <<'EOF'
#!/bin/bash
echo "🔍 Verifying cluster..."
echo "✓ Nodes:"
kubectl get nodes
echo "✓ Namespaces:"
kubectl get namespaces
echo "✓ Ingress Controller:"
kubectl get pods -n ingress-nginx
echo "✓ Cert-Manager:"
kubectl get pods -n cert-manager
echo "✓ Metrics Server:"
kubectl top nodes 2>/dev/null || echo "⚠️ Metrics not available yet"
echo "✓ Storage Classes:"
kubectl get storageclass
echo "✅ Cluster setup complete!"
EOF
chmod +x verify-cluster.sh
./verify-cluster.sh
```
## Maintenance
```bash
# Update components
helm repo update
helm upgrade ingress-nginx ingress-nginx/ingress-nginx -n ingress-nginx
# Clean up old resources
kubectl delete pods --field-selector=status.phase=Failed -A
kubectl delete pods --field-selector=status.phase=Succeeded -A
# Back up the configuration
kubectl get all --all-namespaces -o yaml > cluster-backup.yaml
```
## Troubleshooting
```bash
# View component logs
kubectl logs -n ingress-nginx deployment/ingress-nginx-controller
kubectl logs -n cert-manager deployment/cert-manager
# Describe resources with problems
kubectl describe pod <pod-name> -n <namespace>
# Cluster events
kubectl get events --all-namespaces --sort-by='.lastTimestamp'
# Resource usage
kubectl top nodes
kubectl top pods -A
```

---
# Kubernetes Deployments
## Backend API Deployment
```yaml
# k8s/control-plane/backend-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: aiworker-backend
namespace: control-plane
labels:
app: aiworker-backend
version: v1
spec:
replicas: 2
selector:
matchLabels:
app: aiworker-backend
template:
metadata:
labels:
app: aiworker-backend
version: v1
spec:
serviceAccountName: aiworker-backend
containers:
- name: backend
image: aiworker/backend:latest
imagePullPolicy: Always
ports:
- name: http
containerPort: 3000
- name: mcp
containerPort: 3100
env:
- name: NODE_ENV
value: "production"
- name: PORT
value: "3000"
- name: DB_HOST
value: "mysql.control-plane.svc.cluster.local"
- name: DB_PORT
value: "3306"
- name: DB_NAME
value: "aiworker"
- name: DB_USER
value: "root"
- name: DB_PASSWORD
valueFrom:
secretKeyRef:
name: aiworker-secrets
key: db-password
- name: REDIS_HOST
value: "redis.control-plane.svc.cluster.local"
- name: REDIS_PORT
value: "6379"
- name: GITEA_URL
value: "http://gitea.gitea.svc.cluster.local:3000"
- name: GITEA_TOKEN
valueFrom:
secretKeyRef:
name: aiworker-secrets
key: gitea-token
- name: K8S_IN_CLUSTER
value: "true"
resources:
requests:
cpu: "500m"
memory: "1Gi"
limits:
cpu: "2"
memory: "4Gi"
livenessProbe:
httpGet:
path: /api/health
port: 3000
initialDelaySeconds: 30
periodSeconds: 10
readinessProbe:
httpGet:
path: /api/health
port: 3000
initialDelaySeconds: 10
periodSeconds: 5
---
apiVersion: v1
kind: Service
metadata:
name: aiworker-backend
namespace: control-plane
spec:
selector:
app: aiworker-backend
ports:
- name: http
port: 3000
targetPort: 3000
- name: mcp
port: 3100
targetPort: 3100
type: ClusterIP
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: aiworker-backend
namespace: control-plane
annotations:
cert-manager.io/cluster-issuer: "letsencrypt-prod"
nginx.ingress.kubernetes.io/websocket-services: "aiworker-backend"
spec:
ingressClassName: nginx
tls:
- hosts:
- api.aiworker.dev
secretName: aiworker-backend-tls
rules:
- host: api.aiworker.dev
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: aiworker-backend
port:
number: 3000
```
## MySQL Deployment
```yaml
# k8s/control-plane/mysql-deployment.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: mysql-pvc
namespace: control-plane
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 20Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: mysql
namespace: control-plane
spec:
replicas: 1
selector:
matchLabels:
app: mysql
strategy:
type: Recreate
template:
metadata:
labels:
app: mysql
spec:
containers:
- name: mysql
image: mysql:8.0
ports:
- containerPort: 3306
name: mysql
env:
- name: MYSQL_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: aiworker-secrets
key: db-password
- name: MYSQL_DATABASE
value: "aiworker"
volumeMounts:
- name: mysql-storage
mountPath: /var/lib/mysql
resources:
requests:
cpu: "500m"
memory: "1Gi"
limits:
cpu: "2"
memory: "4Gi"
livenessProbe:
exec:
command:
- mysqladmin
- ping
- -h
- localhost
initialDelaySeconds: 30
periodSeconds: 10
volumes:
- name: mysql-storage
persistentVolumeClaim:
claimName: mysql-pvc
---
apiVersion: v1
kind: Service
metadata:
name: mysql
namespace: control-plane
spec:
selector:
app: mysql
ports:
- port: 3306
targetPort: 3306
type: ClusterIP
```
## Redis Deployment
```yaml
# k8s/control-plane/redis-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: redis
namespace: control-plane
spec:
replicas: 1
selector:
matchLabels:
app: redis
template:
metadata:
labels:
app: redis
spec:
containers:
- name: redis
image: redis:7-alpine
ports:
- containerPort: 6379
name: redis
args:
- --maxmemory
- 2gb
- --maxmemory-policy
- allkeys-lru
resources:
requests:
cpu: "250m"
memory: "512Mi"
limits:
cpu: "1"
memory: "2Gi"
livenessProbe:
tcpSocket:
port: 6379
initialDelaySeconds: 15
periodSeconds: 10
---
apiVersion: v1
kind: Service
metadata:
name: redis
namespace: control-plane
spec:
selector:
app: redis
ports:
- port: 6379
targetPort: 6379
type: ClusterIP
```
## Claude Code Agent Pod Template
```yaml
# k8s/agents/agent-pod-template.yaml
apiVersion: v1
kind: Pod
metadata:
name: claude-agent-{agent-id}
namespace: agents
labels:
app: claude-agent
agent-id: "{agent-id}"
managed-by: aiworker
spec:
containers:
- name: agent
image: aiworker/claude-agent:latest
env:
- name: AGENT_ID
value: "{agent-id}"
- name: MCP_SERVER_URL
value: "http://aiworker-backend.control-plane.svc.cluster.local:3100"
- name: ANTHROPIC_API_KEY
valueFrom:
secretKeyRef:
name: aiworker-secrets
key: anthropic-api-key
- name: GITEA_URL
value: "http://gitea.gitea.svc.cluster.local:3000"
- name: GIT_SSH_KEY
valueFrom:
secretKeyRef:
name: git-ssh-keys
key: private-key
resources:
requests:
cpu: "500m"
memory: "1Gi"
limits:
cpu: "2"
memory: "4Gi"
volumeMounts:
- name: workspace
mountPath: /workspace
- name: git-config
mountPath: /root/.gitconfig
subPath: .gitconfig
volumes:
- name: workspace
emptyDir: {}
- name: git-config
configMap:
name: git-config
restartPolicy: Never
```
## Preview Deployment Template
```typescript
// services/kubernetes/templates/preview-deployment.ts
export function generatePreviewDeployment(params: {
taskId: string
projectId: string
projectName: string
image: string
branch: string
envVars: Record<string, string>
}) {
const namespace = `preview-task-${params.taskId.slice(0, 8)}`
const name = `${params.projectName}-preview`
return {
apiVersion: 'apps/v1',
kind: 'Deployment',
metadata: {
name,
namespace,
labels: {
app: name,
project: params.projectId,
task: params.taskId,
environment: 'preview',
},
},
spec: {
replicas: 1,
selector: {
matchLabels: {
app: name,
},
},
template: {
metadata: {
labels: {
app: name,
project: params.projectId,
task: params.taskId,
},
},
spec: {
containers: [
{
name: 'app',
image: `${params.image}:${params.branch}`,
ports: [
{
name: 'http',
containerPort: 3000,
},
],
env: Object.entries(params.envVars).map(([key, value]) => ({
name: key,
value,
})),
resources: {
requests: {
cpu: '250m',
memory: '512Mi',
},
limits: {
cpu: '1',
memory: '2Gi',
},
},
},
],
},
},
},
}
}
export function generatePreviewService(params: {
taskId: string
projectName: string
}) {
const namespace = `preview-task-${params.taskId.slice(0, 8)}`
const name = `${params.projectName}-preview`
return {
apiVersion: 'v1',
kind: 'Service',
metadata: {
name,
namespace,
},
spec: {
selector: {
app: name,
},
ports: [
{
port: 80,
targetPort: 3000,
},
],
type: 'ClusterIP',
},
}
}
export function generatePreviewIngress(params: {
taskId: string
projectName: string
}) {
const namespace = `preview-task-${params.taskId.slice(0, 8)}`
const name = `${params.projectName}-preview`
const host = `task-${params.taskId.slice(0, 8)}.preview.aiworker.dev`
return {
apiVersion: 'networking.k8s.io/v1',
kind: 'Ingress',
metadata: {
name,
namespace,
annotations: {
'cert-manager.io/cluster-issuer': 'letsencrypt-prod',
},
},
spec: {
ingressClassName: 'nginx',
tls: [
{
hosts: [host],
secretName: `${name}-tls`,
},
],
rules: [
{
host,
http: {
paths: [
{
path: '/',
pathType: 'Prefix',
backend: {
service: {
name,
port: {
number: 80,
},
},
},
},
],
},
},
],
},
}
}
```
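All three generators above derive the same `namespace`/`name`/`host` triple from the task id and project name. A hypothetical shared helper (not in the repo) would make the convention explicit in one place and keep the generators from drifting apart:

```typescript
// Derive the preview naming convention used by the deployment, service and
// ingress generators: a short task id, a per-project name, and the public host.
export function previewNames(taskId: string, projectName: string) {
  const shortId = taskId.slice(0, 8)
  return {
    namespace: `preview-task-${shortId}`,
    name: `${projectName}-preview`,
    host: `task-${shortId}.preview.aiworker.dev`,
  }
}
```

Note that truncating the UUID to 8 characters keeps hostnames short but makes collisions possible across many tasks; the full id is still available in the labels.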
## Kubernetes Client Implementation
```typescript
// services/kubernetes/client.ts
import { KubeConfig, AppsV1Api, CoreV1Api, NetworkingV1Api, Exec } from '@kubernetes/client-node'
import {
  generatePreviewDeployment,
  generatePreviewService,
  generatePreviewIngress,
} from './templates/preview-deployment'
import { logger } from '../../utils/logger'
export class K8sClient {
private kc: KubeConfig
private appsApi: AppsV1Api
private coreApi: CoreV1Api
private networkingApi: NetworkingV1Api
constructor() {
this.kc = new KubeConfig()
if (process.env.K8S_IN_CLUSTER === 'true') {
this.kc.loadFromCluster()
} else {
this.kc.loadFromDefault()
}
this.appsApi = this.kc.makeApiClient(AppsV1Api)
this.coreApi = this.kc.makeApiClient(CoreV1Api)
this.networkingApi = this.kc.makeApiClient(NetworkingV1Api)
}
  async createPreviewDeployment(params: {
    namespace: string
    taskId: string
    projectId: string
    projectName: string // required by the preview templates
    image: string
    branch: string
    envVars: Record<string, string>
  }) {
const { namespace, taskId, projectId } = params
// Create namespace
await this.createNamespace(namespace, {
project: projectId,
environment: 'preview',
taskId,
})
// Create deployment
const deployment = generatePreviewDeployment(params)
await this.appsApi.createNamespacedDeployment(namespace, deployment)
// Create service
const service = generatePreviewService(params)
await this.coreApi.createNamespacedService(namespace, service)
// Create ingress
const ingress = generatePreviewIngress(params)
await this.networkingApi.createNamespacedIngress(namespace, ingress)
logger.info(`Created preview deployment for task ${taskId}`)
return {
namespace,
url: ingress.spec.rules[0].host,
}
}
async deletePreviewDeployment(namespace: string) {
await this.deleteNamespace(namespace)
logger.info(`Deleted preview deployment namespace: ${namespace}`)
}
async createNamespace(name: string, labels: Record<string, string> = {}) {
try {
await this.coreApi.createNamespace({
metadata: {
name,
labels: {
'managed-by': 'aiworker',
...labels,
},
},
})
logger.info(`Created namespace: ${name}`)
} catch (error: any) {
if (error.statusCode !== 409) { // Ignore if already exists
throw error
}
}
}
async deleteNamespace(name: string) {
await this.coreApi.deleteNamespace(name)
}
async createAgentPod(agentId: string) {
const podSpec = {
metadata: {
name: `claude-agent-${agentId.slice(0, 8)}`,
namespace: 'agents',
labels: {
app: 'claude-agent',
'agent-id': agentId,
},
},
spec: {
containers: [
{
name: 'agent',
image: 'aiworker/claude-agent:latest',
env: [
{ name: 'AGENT_ID', value: agentId },
{
name: 'MCP_SERVER_URL',
value: 'http://aiworker-backend.control-plane.svc.cluster.local:3100',
},
{
name: 'ANTHROPIC_API_KEY',
valueFrom: {
secretKeyRef: {
name: 'aiworker-secrets',
key: 'anthropic-api-key',
},
},
},
],
resources: {
requests: { cpu: '500m', memory: '1Gi' },
limits: { cpu: '2', memory: '4Gi' },
},
},
],
restartPolicy: 'Never',
},
}
await this.coreApi.createNamespacedPod('agents', podSpec)
logger.info(`Created agent pod: ${agentId}`)
return {
podName: podSpec.metadata.name,
namespace: 'agents',
}
}
async deletePod(namespace: string, podName: string) {
await this.coreApi.deleteNamespacedPod(podName, namespace)
}
async getPodLogs(namespace: string, podName: string, tailLines = 100) {
const response = await this.coreApi.readNamespacedPodLog(
podName,
namespace,
undefined,
undefined,
undefined,
undefined,
undefined,
undefined,
undefined,
tailLines
)
return response.body
}
async execInPod(params: {
namespace: string
podName: string
command: string[]
}) {
// Implementation using WebSocketStream
const exec = new Exec(this.kc)
const stream = await exec.exec(
params.namespace,
params.podName,
'agent',
params.command,
process.stdout,
process.stderr,
process.stdin,
true // tty
)
return stream
}
}
```
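The client above derives pod names as `claude-agent-` plus the first eight characters of the agent id. A minimal sketch of that naming rule, with a DNS-1123 label check added as an assumption (Kubernetes rejects object names that fail it):

```typescript
// Pod name rule used by createAgentPod above.
function agentPodName(agentId: string): string {
  return `claude-agent-${agentId.slice(0, 8)}`
}

// DNS-1123 label: lowercase alphanumerics and '-', max 63 chars.
// (Validation added here as a safeguard; not part of the client above.)
function isDns1123Label(name: string): boolean {
  return name.length <= 63 && /^[a-z0-9]([-a-z0-9]*[a-z0-9])?$/.test(name)
}
```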
## Deployment Script
```bash
#!/bin/bash
# deploy-all.sh
set -e
echo "🚀 Deploying AiWorker to Kubernetes..."
# Apply secrets (should be done once manually with real values)
echo "📦 Creating secrets..."
kubectl apply -f k8s/secrets/
# Deploy control-plane
echo "🎛️ Deploying control-plane..."
kubectl apply -f k8s/control-plane/
# Deploy agents namespace
echo "🤖 Setting up agents namespace..."
kubectl apply -f k8s/agents/
# Deploy Gitea
echo "📚 Deploying Gitea..."
kubectl apply -f k8s/gitea/
# Wait for pods
echo "⏳ Waiting for pods to be ready..."
kubectl wait --for=condition=ready pod -l app=aiworker-backend -n control-plane --timeout=300s
kubectl wait --for=condition=ready pod -l app=mysql -n control-plane --timeout=300s
kubectl wait --for=condition=ready pod -l app=redis -n control-plane --timeout=300s
echo "✅ Deployment complete!"
echo "📍 Backend API: https://api.aiworker.dev"
echo "📍 Gitea: https://git.aiworker.dev"
```

# Gitea Deployment on Kubernetes
## Gitea StatefulSet
```yaml
# k8s/gitea/gitea-statefulset.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: gitea-data
namespace: gitea
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 50Gi
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: gitea
namespace: gitea
spec:
serviceName: gitea
replicas: 1
selector:
matchLabels:
app: gitea
template:
metadata:
labels:
app: gitea
spec:
containers:
- name: gitea
image: gitea/gitea:1.22
ports:
- name: http
containerPort: 3000
- name: ssh
containerPort: 22
env:
- name: USER_UID
value: "1000"
- name: USER_GID
value: "1000"
- name: GITEA__database__DB_TYPE
value: "mysql"
- name: GITEA__database__HOST
value: "mysql.control-plane.svc.cluster.local:3306"
- name: GITEA__database__NAME
value: "gitea"
- name: GITEA__database__USER
value: "root"
- name: GITEA__database__PASSWD
valueFrom:
secretKeyRef:
name: aiworker-secrets
key: db-password
- name: GITEA__server__DOMAIN
value: "git.aiworker.dev"
- name: GITEA__server__SSH_DOMAIN
value: "git.aiworker.dev"
- name: GITEA__server__ROOT_URL
value: "https://git.aiworker.dev"
- name: GITEA__server__HTTP_PORT
value: "3000"
- name: GITEA__server__SSH_PORT
value: "2222"
- name: GITEA__security__INSTALL_LOCK
value: "true"
- name: GITEA__webhook__ALLOWED_HOST_LIST
value: "*.svc.cluster.local"
volumeMounts:
- name: data
mountPath: /data
resources:
requests:
cpu: "500m"
memory: "1Gi"
limits:
cpu: "2"
memory: "4Gi"
livenessProbe:
httpGet:
path: /api/healthz
port: 3000
initialDelaySeconds: 60
periodSeconds: 10
readinessProbe:
httpGet:
path: /api/healthz
port: 3000
initialDelaySeconds: 30
periodSeconds: 5
volumes:
- name: data
persistentVolumeClaim:
claimName: gitea-data
---
apiVersion: v1
kind: Service
metadata:
name: gitea
namespace: gitea
spec:
selector:
app: gitea
ports:
- name: http
port: 3000
targetPort: 3000
- name: ssh
port: 2222
targetPort: 22
type: ClusterIP
---
apiVersion: v1
kind: Service
metadata:
name: gitea-ssh
namespace: gitea
annotations:
service.beta.kubernetes.io/external-traffic: OnlyLocal
spec:
selector:
app: gitea
ports:
- name: ssh
port: 2222
targetPort: 22
protocol: TCP
type: LoadBalancer
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: gitea
namespace: gitea
annotations:
cert-manager.io/cluster-issuer: "letsencrypt-prod"
nginx.ingress.kubernetes.io/proxy-body-size: "512m"
spec:
ingressClassName: nginx
tls:
- hosts:
- git.aiworker.dev
secretName: gitea-tls
rules:
- host: git.aiworker.dev
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: gitea
port:
number: 3000
```
## Gitea Configuration
```yaml
# k8s/gitea/gitea-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: gitea-config
namespace: gitea
data:
app.ini: |
[server]
PROTOCOL = http
DOMAIN = git.aiworker.dev
ROOT_URL = https://git.aiworker.dev
HTTP_PORT = 3000
SSH_PORT = 2222
DISABLE_SSH = false
START_SSH_SERVER = true
SSH_LISTEN_HOST = 0.0.0.0
SSH_LISTEN_PORT = 22
LFS_START_SERVER = true
OFFLINE_MODE = false
[database]
DB_TYPE = mysql
HOST = mysql.control-plane.svc.cluster.local:3306
NAME = gitea
USER = root
SSL_MODE = disable
[security]
INSTALL_LOCK = true
SECRET_KEY = your-secret-key-here
INTERNAL_TOKEN = your-internal-token-here
[service]
DISABLE_REGISTRATION = false
REQUIRE_SIGNIN_VIEW = false
ENABLE_NOTIFY_MAIL = false
[webhook]
ALLOWED_HOST_LIST = *.svc.cluster.local,*.aiworker.dev
[api]
ENABLE_SWAGGER = true
[actions]
ENABLED = true
[repository]
DEFAULT_BRANCH = main
FORCE_PRIVATE = false
[ui]
DEFAULT_THEME = arc-green
```
## Gitea Initialization
```bash
#!/bin/bash
# scripts/init-gitea.sh
set -e
echo "🚀 Initializing Gitea..."
# Wait for Gitea to be ready
echo "⏳ Waiting for Gitea pod..."
kubectl wait --for=condition=ready pod -l app=gitea -n gitea --timeout=300s
# Temporary port-forward
echo "🔌 Port-forwarding Gitea..."
kubectl port-forward -n gitea svc/gitea 3001:3000 &
PF_PID=$!
sleep 5
# Create admin user
echo "👤 Creating admin user..."
kubectl exec -n gitea gitea-0 -- gitea admin user create \
--username aiworker \
--password admin123 \
--email admin@aiworker.dev \
--admin \
--must-change-password=false
# Create bot user
echo "🤖 Creating bot user..."
kubectl exec -n gitea gitea-0 -- gitea admin user create \
--username aiworker-bot \
--password bot123 \
--email bot@aiworker.dev
# Generate access token
echo "🔑 Generating access token..."
TOKEN=$(kubectl exec -n gitea gitea-0 -- gitea admin user generate-access-token \
--username aiworker-bot \
--scopes write:repository,write:issue,write:user \
--raw)
echo "✅ Gitea initialized!"
echo "📍 URL: https://git.aiworker.dev"
echo "👤 User: aiworker / admin123"
echo "🔑 Bot Token: $TOKEN"
echo ""
echo "⚠️ Save this token and update the secret:"
echo "kubectl create secret generic aiworker-secrets -n control-plane \\"
echo " --from-literal=gitea-token='$TOKEN' --dry-run=client -o yaml | kubectl apply -f -"
# Stop port-forward
kill $PF_PID
```
## Gitea Webhook Configuration
```typescript
// services/gitea/setup.ts
import { giteaClient } from './client'
import { logger } from '../../utils/logger'
export async function setupGiteaWebhooks(owner: string, repo: string) {
const backendUrl = process.env.BACKEND_URL || 'https://api.aiworker.dev'
try {
// Create webhook for push events
await giteaClient.createWebhook(owner, repo, {
url: `${backendUrl}/api/webhooks/gitea`,
contentType: 'json',
secret: process.env.GITEA_WEBHOOK_SECRET || '',
events: ['push', 'pull_request', 'pull_request_closed'],
})
logger.info(`Webhooks configured for ${owner}/${repo}`)
} catch (error) {
logger.error('Failed to setup webhooks:', error)
throw error
}
}
export async function initializeGiteaForProject(projectName: string) {
const owner = process.env.GITEA_OWNER || 'aiworker'
// Create repository
const repo = await giteaClient.createRepo(projectName, {
description: `AiWorker project: ${projectName}`,
private: true,
autoInit: true,
defaultBranch: 'main',
})
// Setup webhooks
await setupGiteaWebhooks(owner, projectName)
// Create initial branches
await giteaClient.createBranch(owner, projectName, 'develop', 'main')
await giteaClient.createBranch(owner, projectName, 'staging', 'main')
logger.info(`Gitea initialized for project: ${projectName}`)
return {
repoUrl: repo.html_url,
cloneUrl: repo.clone_url,
sshUrl: repo.ssh_url,
}
}
```
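`initializeGiteaForProject` returns the URLs Gitea itself reports. For reference, Gitea's HTTPS clone URLs follow the `<root>/<owner>/<repo>.git` convention; a sketch of that rule (an illustration, not a call into the client above):

```typescript
// Build a Gitea HTTPS clone URL; trailing slashes on root are tolerated.
function giteaCloneUrl(root: string, owner: string, repo: string): string {
  return `${root.replace(/\/+$/, '')}/${owner}/${repo}.git`
}
```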
## Gitea Backup
```yaml
# k8s/gitea/gitea-backup-cronjob.yaml
apiVersion: batch/v1
kind: CronJob
metadata:
name: gitea-backup
namespace: gitea
spec:
schedule: "0 2 * * *" # Daily at 2 AM
jobTemplate:
spec:
template:
spec:
containers:
- name: backup
image: gitea/gitea:1.22
command:
- /bin/sh
- -c
- |
echo "Starting backup..."
gitea dump -c /data/gitea/conf/app.ini -f /backups/gitea-backup-$(date +%Y%m%d).zip
echo "Backup complete!"
# Upload to S3 or other storage
volumeMounts:
- name: data
mountPath: /data
- name: backups
mountPath: /backups
volumes:
- name: data
persistentVolumeClaim:
claimName: gitea-data
- name: backups
persistentVolumeClaim:
claimName: gitea-backups
restartPolicy: OnFailure
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: gitea-backups
namespace: gitea
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 100Gi
```
## Gitea Monitoring
```yaml
# k8s/gitea/gitea-servicemonitor.yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: gitea
namespace: gitea
spec:
selector:
matchLabels:
app: gitea
endpoints:
- port: http
path: /metrics
interval: 30s
```
## Troubleshooting
```bash
# View logs
kubectl logs -n gitea gitea-0 --tail=100 -f
# Shell into the pod
kubectl exec -it -n gitea gitea-0 -- /bin/sh
# Check the config
kubectl exec -n gitea gitea-0 -- cat /data/gitea/conf/app.ini
# Reset the admin password
kubectl exec -n gitea gitea-0 -- gitea admin user change-password \
  --username aiworker --password newpassword
# Clear the queue cache
kubectl exec -n gitea gitea-0 -- rm -rf /data/gitea/queues/*
```
## SSH Keys Setup
```bash
# Generate an SSH key for agents
ssh-keygen -t ed25519 -C "aiworker-agent" -f agent-key -N ""
# Create the secret
kubectl create secret generic git-ssh-keys -n agents \
  --from-file=private-key=agent-key \
  --from-file=public-key=agent-key.pub
# Add the public key to Gitea
# (via the API or manually in the UI)
```
## Git Config for Agents
```yaml
# k8s/agents/git-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: git-config
namespace: agents
data:
.gitconfig: |
[user]
name = AiWorker Agent
email = agent@aiworker.dev
[core]
sshCommand = ssh -i /root/.ssh/id_ed25519 -o StrictHostKeyChecking=no
[credential]
helper = store
```

# Namespace Structure
## Namespace Architecture
```
aiworker-cluster/
├── control-plane/ # Backend, API, MCP Server
├── agents/ # Claude Code agent pods
├── gitea/ # Gitea server
├── projects/
│ └── <project-name>/
│       ├── dev/              # Continuous development
│       ├── preview-*/        # Per-task preview deployments
│ ├── staging/ # Staging environment
│ └── production/ # Production environment
└── monitoring/ # Prometheus, Grafana
```
## Namespace: control-plane
**Purpose**: Backend API, MCP server, core services
```yaml
apiVersion: v1
kind: Namespace
metadata:
name: control-plane
labels:
name: control-plane
environment: production
managed-by: aiworker
---
apiVersion: v1
kind: ResourceQuota
metadata:
name: control-plane-quota
namespace: control-plane
spec:
hard:
requests.cpu: "4"
requests.memory: 8Gi
limits.cpu: "8"
limits.memory: 16Gi
persistentvolumeclaims: "5"
---
apiVersion: v1
kind: LimitRange
metadata:
name: control-plane-limits
namespace: control-plane
spec:
limits:
- max:
cpu: "2"
memory: 4Gi
min:
cpu: "100m"
memory: 128Mi
default:
cpu: "500m"
memory: 512Mi
defaultRequest:
cpu: "250m"
memory: 256Mi
type: Container
```
### Services in control-plane
- **Backend API**: Express + Bun
- **MCP Server**: Agent communication
- **MySQL**: Database
- **Redis**: Cache and queues
- **BullMQ Workers**: Job processing
## Namespace: agents
**Purpose**: Claude Code agent pods
```yaml
apiVersion: v1
kind: Namespace
metadata:
name: agents
labels:
name: agents
environment: production
managed-by: aiworker
---
apiVersion: v1
kind: ResourceQuota
metadata:
name: agents-quota
namespace: agents
spec:
hard:
requests.cpu: "20"
requests.memory: 40Gi
limits.cpu: "40"
limits.memory: 80Gi
pods: "50"
---
apiVersion: v1
kind: LimitRange
metadata:
name: agents-limits
namespace: agents
spec:
limits:
- max:
cpu: "2"
memory: 4Gi
min:
cpu: "500m"
memory: 1Gi
default:
cpu: "1"
memory: 2Gi
defaultRequest:
cpu: "500m"
memory: 1Gi
type: Container
```
### Network Policy for Agents
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: agents-network-policy
namespace: agents
spec:
podSelector: {}
policyTypes:
- Ingress
- Egress
ingress:
  # Allow traffic from control-plane
- from:
- namespaceSelector:
matchLabels:
name: control-plane
egress:
  # Allow egress to control-plane (MCP Server)
- to:
- namespaceSelector:
matchLabels:
name: control-plane
  # Allow egress to gitea
- to:
- namespaceSelector:
matchLabels:
name: gitea
  # Allow DNS
- to:
- namespaceSelector: {}
podSelector:
matchLabels:
k8s-app: kube-dns
ports:
- protocol: UDP
port: 53
  # Allow external HTTPS (for the Claude API)
- to:
- namespaceSelector: {}
ports:
- protocol: TCP
port: 443
```
## Namespace: gitea
**Purpose**: Self-hosted Git server
```yaml
apiVersion: v1
kind: Namespace
metadata:
name: gitea
labels:
name: gitea
environment: production
managed-by: aiworker
---
apiVersion: v1
kind: ResourceQuota
metadata:
name: gitea-quota
namespace: gitea
spec:
hard:
requests.cpu: "2"
requests.memory: 4Gi
limits.cpu: "4"
limits.memory: 8Gi
persistentvolumeclaims: "2"
```
## Per-Project Namespaces
### Dynamic Structure
Each new project automatically gets three long-lived namespaces (dev, staging, production); preview namespaces are created per task on demand:
```typescript
// services/kubernetes/namespaces.ts
export async function createProjectNamespaces(projectName: string) {
const baseName = projectName.toLowerCase().replace(/[^a-z0-9-]/g, '-')
const namespaces = [
`${baseName}-dev`,
`${baseName}-staging`,
`${baseName}-production`,
]
for (const ns of namespaces) {
await k8sClient.createNamespace({
name: ns,
labels: {
project: baseName,
'managed-by': 'aiworker',
},
})
    // Apply resource quotas
await k8sClient.applyResourceQuota(ns, {
requests: { cpu: '2', memory: '4Gi' },
limits: { cpu: '4', memory: '8Gi' },
})
}
}
```
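The sanitization in `createProjectNamespaces` can still exceed Kubernetes' 63-character limit for namespace names once a suffix like `-production` is appended. A sketch that also clamps the base (the clamp and the edge-trim are assumptions, not in the code above):

```typescript
// Sanitize a project name into a namespace base, leaving room for
// the longest environment suffix ('-production', 11 characters).
function namespaceBase(projectName: string): string {
  const safe = projectName.toLowerCase().replace(/[^a-z0-9-]/g, '-')
  return safe.slice(0, 63 - '-production'.length).replace(/^-+|-+$/g, '')
}
```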
### Namespace: project-dev
**Purpose**: Continuous development; automatic deploys from main/develop
```yaml
apiVersion: v1
kind: Namespace
metadata:
name: my-project-dev
labels:
project: my-project
environment: dev
managed-by: aiworker
---
apiVersion: v1
kind: ResourceQuota
metadata:
name: dev-quota
namespace: my-project-dev
spec:
hard:
requests.cpu: "1"
requests.memory: 2Gi
limits.cpu: "2"
limits.memory: 4Gi
pods: "5"
```
### Namespace: preview-task-{id}
**Purpose**: Temporary preview deployment for a specific task
```yaml
apiVersion: v1
kind: Namespace
metadata:
name: preview-task-abc123
labels:
project: my-project
environment: preview
task-id: abc123
managed-by: aiworker
ttl: "168h" # 7 days
annotations:
created-at: "2026-01-19T12:00:00Z"
---
apiVersion: v1
kind: ResourceQuota
metadata:
name: preview-quota
namespace: preview-task-abc123
spec:
hard:
requests.cpu: "500m"
requests.memory: 1Gi
limits.cpu: "1"
limits.memory: 2Gi
pods: "3"
```
**Automatic cleanup**:
```typescript
// Cleanup job that runs daily
export async function cleanupOldPreviewNamespaces() {
const allNamespaces = await k8sClient.listNamespaces()
for (const ns of allNamespaces) {
if (ns.metadata?.labels?.environment === 'preview') {
const createdAt = new Date(ns.metadata.annotations?.['created-at'])
const ageHours = (Date.now() - createdAt.getTime()) / (1000 * 60 * 60)
if (ageHours > 168) { // 7 days
await k8sClient.deleteNamespace(ns.metadata.name)
logger.info(`Deleted old preview namespace: ${ns.metadata.name}`)
}
}
}
}
```
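The cleanup job hardcodes the 168-hour threshold even though each preview namespace carries a `ttl: "168h"` label. A sketch of reading that label instead (only the `h` suffix is handled, an assumption):

```typescript
// Parse a ttl label such as "168h" into hours, with a fallback.
function ttlHours(label: string | undefined, fallback = 168): number {
  const m = label?.match(/^(\d+)h$/)
  return m ? parseInt(m[1], 10) : fallback
}
```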
### Namespace: project-staging
**Purpose**: Staging environment; testing before production
```yaml
apiVersion: v1
kind: Namespace
metadata:
name: my-project-staging
labels:
project: my-project
environment: staging
managed-by: aiworker
---
apiVersion: v1
kind: ResourceQuota
metadata:
name: staging-quota
namespace: my-project-staging
spec:
hard:
requests.cpu: "2"
requests.memory: 4Gi
limits.cpu: "4"
limits.memory: 8Gi
pods: "10"
```
### Namespace: project-production
**Propósito**: Production environment
```yaml
apiVersion: v1
kind: Namespace
metadata:
name: my-project-production
labels:
project: my-project
environment: production
managed-by: aiworker
protected: "true"
---
apiVersion: v1
kind: ResourceQuota
metadata:
name: production-quota
namespace: my-project-production
spec:
hard:
requests.cpu: "4"
requests.memory: 8Gi
limits.cpu: "8"
limits.memory: 16Gi
pods: "20"
---
# Pod Disruption Budget for high availability
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
name: app-pdb
namespace: my-project-production
spec:
minAvailable: 1
selector:
matchLabels:
app: my-project
```
## Namespace: monitoring
**Purpose**: Prometheus, Grafana, logs
```yaml
apiVersion: v1
kind: Namespace
metadata:
name: monitoring
labels:
name: monitoring
environment: production
---
apiVersion: v1
kind: ResourceQuota
metadata:
name: monitoring-quota
namespace: monitoring
spec:
hard:
requests.cpu: "4"
requests.memory: 8Gi
limits.cpu: "8"
limits.memory: 16Gi
persistentvolumeclaims: "10"
```
## Namespace Management from the Backend
```typescript
// services/kubernetes/namespaces.ts
import { KubeConfig, CoreV1Api } from '@kubernetes/client-node'
export class NamespaceManager {
private k8sApi: CoreV1Api
constructor() {
const kc = new KubeConfig()
kc.loadFromDefault()
this.k8sApi = kc.makeApiClient(CoreV1Api)
}
async createNamespace(name: string, labels: Record<string, string> = {}) {
await this.k8sApi.createNamespace({
metadata: {
name,
labels: {
'managed-by': 'aiworker',
...labels,
},
},
})
}
async deleteNamespace(name: string) {
await this.k8sApi.deleteNamespace(name)
}
async listNamespaces(labelSelector?: string) {
const response = await this.k8sApi.listNamespace(undefined, undefined, undefined, undefined, labelSelector)
return response.body.items
}
async namespaceExists(name: string): Promise<boolean> {
try {
await this.k8sApi.readNamespace(name)
return true
} catch {
return false
}
}
}
```
## Namespace Dashboard
In the frontend, show all namespaces with their resources:
```typescript
// hooks/useNamespaces.ts
export function useNamespaces(projectId?: string) {
return useQuery({
queryKey: ['namespaces', projectId],
queryFn: async () => {
const { data } = await api.get('/namespaces', {
params: { projectId },
})
return data.namespaces
},
})
}
```
Dashboard view:
- **Namespace map** per project
- **Resource usage** (CPU, memory) per namespace
- **Active pod count**
- **Cleanup button** for old preview namespaces
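To chart resource usage, the dashboard must normalize CPU quantities, which appear in this document both as millicores (`500m`) and whole cores (`2`). A minimal sketch (memory suffixes such as `Gi` are deliberately not handled):

```typescript
// Normalize a Kubernetes CPU quantity ("500m" or "2") to millicores.
function cpuMillicores(quantity: string): number {
  return quantity.endsWith('m')
    ? parseInt(quantity, 10)
    : Math.round(parseFloat(quantity) * 1000)
}
```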

# Networking and Ingress
## Network Architecture
```
Internet
[LoadBalancer] (Cloud Provider)
[Nginx Ingress Controller]
├──► api.aiworker.dev ──► Backend (control-plane)
├──► git.aiworker.dev ──► Gitea (gitea)
├──► app.aiworker.dev ──► Frontend (control-plane)
├──► *.preview.aiworker.dev ──► Preview Deployments
├──► staging-*.aiworker.dev ──► Staging Envs
└──► *.aiworker.dev ──► Production Apps
```
## Ingress Configuration
### Wildcard Certificate
```yaml
# k8s/ingress/wildcard-certificate.yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
name: wildcard-aiworker
namespace: ingress-nginx
spec:
secretName: wildcard-aiworker-tls
issuerRef:
name: letsencrypt-prod
kind: ClusterIssuer
commonName: "*.aiworker.dev"
dnsNames:
- "aiworker.dev"
- "*.aiworker.dev"
- "*.preview.aiworker.dev"
```
### Backend Ingress
```yaml
# k8s/ingress/backend-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: backend-ingress
namespace: control-plane
annotations:
cert-manager.io/cluster-issuer: "letsencrypt-prod"
nginx.ingress.kubernetes.io/websocket-services: "aiworker-backend"
nginx.ingress.kubernetes.io/proxy-body-size: "50m"
nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
nginx.ingress.kubernetes.io/cors-allow-origin: "https://app.aiworker.dev"
nginx.ingress.kubernetes.io/cors-allow-methods: "GET, POST, PUT, PATCH, DELETE, OPTIONS"
nginx.ingress.kubernetes.io/cors-allow-credentials: "true"
spec:
ingressClassName: nginx
tls:
- hosts:
- api.aiworker.dev
secretName: backend-tls
rules:
- host: api.aiworker.dev
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: aiworker-backend
port:
number: 3000
```
### Frontend Ingress
```yaml
# k8s/ingress/frontend-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: frontend-ingress
namespace: control-plane
annotations:
cert-manager.io/cluster-issuer: "letsencrypt-prod"
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
nginx.ingress.kubernetes.io/configuration-snippet: |
more_set_headers "X-Frame-Options: DENY";
more_set_headers "X-Content-Type-Options: nosniff";
more_set_headers "X-XSS-Protection: 1; mode=block";
spec:
ingressClassName: nginx
tls:
- hosts:
- app.aiworker.dev
secretName: frontend-tls
rules:
- host: app.aiworker.dev
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: aiworker-frontend
port:
number: 80
```
### Preview Deployments Ingress Template
```typescript
// services/kubernetes/ingress.ts
export function generatePreviewIngress(params: {
taskId: string
projectName: string
namespace: string
}) {
const shortId = params.taskId.slice(0, 8)
const host = `task-${shortId}.preview.aiworker.dev`
return {
apiVersion: 'networking.k8s.io/v1',
kind: 'Ingress',
metadata: {
name: `${params.projectName}-preview`,
namespace: params.namespace,
annotations: {
'cert-manager.io/cluster-issuer': 'letsencrypt-prod',
'nginx.ingress.kubernetes.io/ssl-redirect': 'true',
'nginx.ingress.kubernetes.io/auth-type': 'basic',
'nginx.ingress.kubernetes.io/auth-secret': 'preview-basic-auth',
'nginx.ingress.kubernetes.io/auth-realm': 'Preview Environment',
},
labels: {
environment: 'preview',
task: params.taskId,
project: params.projectName,
},
},
spec: {
ingressClassName: 'nginx',
tls: [
{
hosts: [host],
secretName: `${params.projectName}-preview-tls`,
},
],
rules: [
{
host,
http: {
paths: [
{
path: '/',
pathType: 'Prefix',
backend: {
service: {
name: `${params.projectName}-preview`,
port: {
number: 80,
},
},
},
},
],
},
},
],
},
}
}
```
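`generatePreviewIngress` builds the host from the first eight characters of the task id. Isolated as its own rule (same logic as above, with the domain as a parameter):

```typescript
// Host rule used by generatePreviewIngress above.
function previewHost(taskId: string, domain = 'preview.aiworker.dev'): string {
  return `task-${taskId.slice(0, 8)}.${domain}`
}
```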
## Service Mesh (Opcional)
If you need finer-grained traffic control, consider Istio or Linkerd:
### Istio Gateway
```yaml
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
name: aiworker-gateway
namespace: istio-system
spec:
selector:
istio: ingressgateway
servers:
- port:
number: 443
name: https
protocol: HTTPS
tls:
mode: SIMPLE
credentialName: wildcard-aiworker-tls
hosts:
- "*.aiworker.dev"
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
name: backend-vs
namespace: control-plane
spec:
hosts:
- "api.aiworker.dev"
gateways:
- istio-system/aiworker-gateway
http:
- match:
- uri:
prefix: /api
route:
- destination:
host: aiworker-backend
port:
number: 3000
```
## DNS Configuration
### Cloudflare DNS Records
```bash
# A records
api.aiworker.dev A <loadbalancer-ip>
git.aiworker.dev A <loadbalancer-ip>
app.aiworker.dev A <loadbalancer-ip>
# Wildcard for preview and dynamic environments
*.preview.aiworker.dev A <loadbalancer-ip>
*.aiworker.dev A <loadbalancer-ip>
```
### External DNS (Automated)
```yaml
# k8s/external-dns/external-dns-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: external-dns
namespace: kube-system
spec:
replicas: 1
selector:
matchLabels:
app: external-dns
template:
metadata:
labels:
app: external-dns
spec:
serviceAccountName: external-dns
containers:
- name: external-dns
image: registry.k8s.io/external-dns/external-dns:v0.14.0
args:
- --source=ingress
- --domain-filter=aiworker.dev
- --provider=cloudflare
env:
- name: CF_API_TOKEN
valueFrom:
secretKeyRef:
name: cloudflare-api-token
key: token
```
## Network Policies
### Isolate Preview Environments
```yaml
# k8s/network-policies/preview-isolation.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: preview-isolation
namespace: agents
spec:
podSelector:
matchLabels:
environment: preview
policyTypes:
- Ingress
- Egress
ingress:
# Allow from ingress controller
- from:
- namespaceSelector:
matchLabels:
name: ingress-nginx
# Allow from control-plane
- from:
- namespaceSelector:
matchLabels:
name: control-plane
egress:
# Allow to gitea
- to:
- namespaceSelector:
matchLabels:
name: gitea
# Allow to external HTTPS (npm, apt, etc)
- to:
- namespaceSelector: {}
ports:
- protocol: TCP
port: 443
# Allow DNS
- to:
- namespaceSelector: {}
podSelector:
matchLabels:
k8s-app: kube-dns
ports:
- protocol: UDP
port: 53
```
### Allow Backend to All
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: backend-egress
namespace: control-plane
spec:
podSelector:
matchLabels:
app: aiworker-backend
policyTypes:
- Egress
egress:
- {} # Allow all egress
```
## Load Balancing
### Session Affinity for WebSocket
```yaml
apiVersion: v1
kind: Service
metadata:
name: aiworker-backend
namespace: control-plane
annotations:
service.beta.kubernetes.io/external-traffic: OnlyLocal
spec:
selector:
app: aiworker-backend
ports:
- name: http
port: 3000
targetPort: 3000
sessionAffinity: ClientIP
sessionAffinityConfig:
clientIP:
timeoutSeconds: 3600
type: ClusterIP
```
## Rate Limiting
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: backend-ingress
namespace: control-plane
annotations:
nginx.ingress.kubernetes.io/rate-limit: "100"
nginx.ingress.kubernetes.io/rate-limit-burst: "200"
nginx.ingress.kubernetes.io/rate-limit-key: "$binary_remote_addr"
spec:
# ... spec
```
## Health Checks
### Liveness and Readiness Probes
```yaml
livenessProbe:
httpGet:
path: /api/health
port: 3000
httpHeaders:
- name: X-Health-Check
value: liveness
initialDelaySeconds: 30
periodSeconds: 10
timeoutSeconds: 5
failureThreshold: 3
readinessProbe:
httpGet:
path: /api/health/ready
port: 3000
initialDelaySeconds: 10
periodSeconds: 5
timeoutSeconds: 3
failureThreshold: 3
```
### Health Endpoint Implementation
```typescript
// api/routes/health.ts
import { Router } from 'express'
import { getDatabase } from '../../config/database'
import { getRedis } from '../../config/redis'
const router = Router()
router.get('/health', async (req, res) => {
res.json({
status: 'ok',
timestamp: new Date().toISOString(),
})
})
router.get('/health/ready', async (req, res) => {
try {
// Check DB
const db = getDatabase()
await db.execute('SELECT 1')
// Check Redis
const redis = getRedis()
await redis.ping()
res.json({
status: 'ready',
services: {
database: 'connected',
redis: 'connected',
},
})
  } catch (error: any) {
res.status(503).json({
status: 'not ready',
error: error.message,
})
}
})
export default router
```
## Monitoring Traffic
```bash
# View Nginx Ingress logs
kubectl logs -n ingress-nginx deployment/ingress-nginx-controller --tail=100 -f
# View metrics
kubectl top pods -n ingress-nginx
# View the generated configuration
kubectl exec -n ingress-nginx <pod> -- cat /etc/nginx/nginx.conf
```

# Agent Lifecycle
## Agent States
```
┌──────────────┐
│ Initializing │
└──────┬───────┘
┌──────┐ ┌──────┐
│ Idle │◄───►│ Busy │
└───┬──┘ └──┬───┘
│ │
│ │
▼ ▼
┌───────┐ ┌───────┐
│ Error │ │Offline│
└───────┘ └───────┘
```
## Initialization
### 1. Pod Creation
```typescript
// The backend creates the pod
const agentManager = new AgentManager()
const agent = await agentManager.createAgent(['javascript', 'react'])
// Result
{
id: 'agent-abc123',
podName: 'claude-agent-abc123',
namespace: 'agents',
status: 'initializing'
}
```
### 2. Container Startup
```bash
# Inside the pod (entrypoint.sh)
echo "🤖 Starting agent: $AGENT_ID"
# 1. Setup SSH
echo "$GIT_SSH_KEY" > /root/.ssh/id_ed25519
chmod 600 /root/.ssh/id_ed25519
# 2. Configure Claude Code MCP
cat > /root/.claude-code/config.json <<EOF
{
"mcpServers": {
"aiworker": {
"url": "$MCP_SERVER_URL"
}
}
}
EOF
# 3. Send initial heartbeat
curl -X POST "$MCP_SERVER_URL/heartbeat" \
-H "Content-Type: application/json" \
-H "X-Agent-ID: $AGENT_ID" \
-d '{"status":"idle"}'
# 4. Start work loop
exec /usr/local/bin/agent-loop.sh
```
### 3. System Registration
```typescript
// The backend detects the heartbeat and updates the record
await db.update(agents)
.set({
status: 'idle',
lastHeartbeat: new Date(),
})
.where(eq(agents.id, agentId))
logger.info(`Agent ${agentId} is now active`)
```
## Task Assignment
### 1. Agent Polling
```bash
# agent-loop.sh
while true; do
echo "📋 Checking for tasks..."
TASK=$(curl -s -X POST "$MCP_SERVER_URL/tools/call" \
-H "Content-Type: application/json" \
-d "{
\"name\": \"get_next_task\",
\"arguments\": {\"agentId\": \"$AGENT_ID\"}
}")
TASK_ID=$(echo "$TASK" | jq -r '.content[0].text | fromjson | .task.id // empty')
if [ -n "$TASK_ID" ]; then
echo "🎯 Got task: $TASK_ID"
process_task "$TASK_ID"
else
sleep 10
fi
done
```
### 2. Backend Assigns the Task
```typescript
// services/mcp/handlers.ts - getNextTask()
async function getNextTask(args: { agentId: string }) {
  // 1. Find the next task in the backlog
const task = await db.query.tasks.findFirst({
where: eq(tasks.state, 'backlog'),
orderBy: [desc(tasks.priority), asc(tasks.createdAt)],
})
if (!task) {
return { content: [{ type: 'text', text: JSON.stringify({ message: 'No tasks' }) }] }
}
  // 2. Assign it to the agent
await db.update(tasks)
.set({
state: 'in_progress',
assignedAgentId: args.agentId,
assignedAt: new Date(),
startedAt: new Date(),
})
.where(eq(tasks.id, task.id))
  // 3. Update the agent
await db.update(agents)
.set({
status: 'busy',
currentTaskId: task.id,
})
.where(eq(agents.id, args.agentId))
  // 4. Return the task
return {
content: [{
type: 'text',
text: JSON.stringify({ task }),
}],
}
}
```
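`getNextTask` orders the backlog by priority descending, then `createdAt` ascending. The same ordering expressed as a plain comparator (a sketch, assuming numeric priorities):

```typescript
interface BacklogTask { id: string; priority: number; createdAt: Date }

// Highest priority first; ties broken by oldest createdAt.
function byPriorityThenAge(a: BacklogTask, b: BacklogTask): number {
  return b.priority - a.priority || a.createdAt.getTime() - b.createdAt.getTime()
}
```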
## Working on a Task
### Phase 1: Setup
```bash
# Clone repo
git clone "$PROJECT_REPO" "/workspace/task-$TASK_ID"
cd "/workspace/task-$TASK_ID"
# Create branch (via MCP)
curl -X POST "$MCP_SERVER_URL/tools/call" \
-d "{\"name\": \"create_branch\", \"arguments\": {\"taskId\": \"$TASK_ID\"}}"
# Checkout branch
git fetch origin
git checkout "$BRANCH_NAME"
```
### Phase 2: Implementation
```bash
# Start Claude Code session
claude-code chat --message "
I need you to work on this task:
Title: $TASK_TITLE
Description: $TASK_DESC
Instructions:
1. Analyze the codebase
2. Implement the changes
3. Write tests
4. Commit with clear messages
5. Use MCP tools when done
Start working now.
"
```
### Phase 3: Questions (optional)
```typescript
// If the agent needs more information
await mcp.callTool('ask_user_question', {
taskId,
question: 'Should I add TypeScript types?',
context: 'The codebase is in JavaScript...',
})
// Switch state to needs_input
await mcp.callTool('update_task_status', {
taskId,
status: 'needs_input',
})
// Poll every 5 s until a response arrives
let response
while (!response) {
await sleep(5000)
const check = await mcp.callTool('check_question_response', { taskId })
if (check.hasResponse) {
response = check.response
}
}
// Continue with the response
await mcp.callTool('update_task_status', {
taskId,
status: 'in_progress',
})
```
### Phase 4: Completion
```bash
# Create PR
curl -X POST "$MCP_SERVER_URL/tools/call" \
-d "{
\"name\": \"create_pull_request\",
\"arguments\": {
\"taskId\": \"$TASK_ID\",
\"title\": \"$TASK_TITLE\",
\"description\": \"Implemented feature X...\"
}
}"
# Deploy preview
curl -X POST "$MCP_SERVER_URL/tools/call" \
-d "{
\"name\": \"trigger_preview_deploy\",
\"arguments\": {\"taskId\": \"$TASK_ID\"}
}"
# Update status
curl -X POST "$MCP_SERVER_URL/tools/call" \
-d "{
\"name\": \"update_task_status\",
\"arguments\": {
\"taskId\": \"$TASK_ID\",
\"status\": \"ready_to_test\"
}
}"
```
## Releasing the Agent
```typescript
// When the task completes (ready_to_test or completed)
await db.update(agents)
.set({
status: 'idle',
currentTaskId: null,
tasksCompleted: sql`tasks_completed + 1`,
})
.where(eq(agents.id, agentId))
logger.info(`Agent ${agentId} completed task ${taskId}, now idle`)
```
## Error Handling
### Task Timeout
```bash
# agent-loop.sh with timeout
timeout 7200 claude-code chat --message "$TASK_PROMPT" || {
STATUS=$?
if [ $STATUS -eq 124 ]; then
echo "⏰ Task timeout after 2 hours"
# Notify backend
curl -X POST "$MCP_SERVER_URL/tools/call" \
-d "{
\"name\": \"update_task_status\",
\"arguments\": {
\"taskId\": \"$TASK_ID\",
\"status\": \"needs_input\",
\"metadata\": {\"reason\": \"timeout\"}
}
}"
# Log error
curl -X POST "$MCP_SERVER_URL/tools/call" \
-d "{
\"name\": \"log_activity\",
\"arguments\": {
\"agentId\": \"$AGENT_ID\",
\"level\": \"error\",
\"message\": \"Task timeout: $TASK_ID\"
}
}"
fi
}
```
### Agent Crash
```typescript
// The backend detects agents without a recent heartbeat
async function checkStaleAgents() {
const staleThreshold = new Date(Date.now() - 5 * 60 * 1000) // 5 min
const staleAgents = await db.query.agents.findMany({
where: lt(agents.lastHeartbeat, staleThreshold),
})
for (const agent of staleAgents) {
logger.warn(`Agent ${agent.id} is stale`)
// Mark current task as needs attention
if (agent.currentTaskId) {
await db.update(tasks)
.set({
state: 'backlog',
assignedAgentId: null,
})
.where(eq(tasks.id, agent.currentTaskId))
}
// Delete agent pod
await k8sClient.deletePod(agent.k8sNamespace, agent.podName)
// Remove from DB
await db.delete(agents).where(eq(agents.id, agent.id))
// Create replacement
await agentManager.createAgent()
}
}
// Run every minute
setInterval(checkStaleAgents, 60000)
```
## Graceful Shutdown
```bash
# agent-entrypoint.sh
cleanup() {
echo "🛑 Shutting down agent..."
# Send offline status
curl -X POST "$MCP_SERVER_URL/heartbeat" \
-d '{"status":"offline"}' 2>/dev/null || true
# Kill background jobs
kill $HEARTBEAT_PID 2>/dev/null || true
echo "👋 Goodbye"
exit 0
}
trap cleanup SIGTERM SIGINT
# Wait for signals
wait
```
## Auto-Scaling
```typescript
// Auto-scaler that runs every 30s
async function autoScale() {
// Get metrics
const pendingTasks = await db.query.tasks.findMany({
where: eq(tasks.state, 'backlog'),
})
const idleAgents = await db.query.agents.findMany({
where: eq(agents.status, 'idle'),
})
const busyAgents = await db.query.agents.findMany({
where: eq(agents.status, 'busy'),
})
const totalAgents = idleAgents.length + busyAgents.length
// Decision logic
let targetAgents = totalAgents
// Scale up if:
// - More than 3 pending tasks
// - No idle agents
if (pendingTasks.length > 3 && idleAgents.length === 0) {
targetAgents = Math.min(totalAgents + 2, 10) // Max 10
}
// Scale down if:
// - No pending tasks
// - More than 2 idle agents
if (pendingTasks.length === 0 && idleAgents.length > 2) {
targetAgents = Math.max(totalAgents - 1, 2) // Min 2
}
if (targetAgents !== totalAgents) {
logger.info(`Auto-scaling: ${totalAgents}${targetAgents}`)
await agentManager.scaleAgents(targetAgents)
}
}
setInterval(autoScale, 30000)
```
## Lifecycle Metrics
```typescript
// Endpoint for agent metrics
router.get('/agents/metrics', async (req, res) => {
const agents = await db.query.agents.findMany()
const metrics = {
total: agents.length,
byStatus: {
idle: agents.filter((a) => a.status === 'idle').length,
busy: agents.filter((a) => a.status === 'busy').length,
error: agents.filter((a) => a.status === 'error').length,
offline: agents.filter((a) => a.status === 'offline').length,
},
totalTasksCompleted: agents.reduce((sum, a) => sum + a.tasksCompleted, 0),
avgTasksPerAgent:
agents.reduce((sum, a) => sum + a.tasksCompleted, 0) / agents.length || 0,
totalRuntime: agents.reduce((sum, a) => sum + a.totalRuntimeMinutes, 0),
}
res.json(metrics)
})
```
## Dashboard Visualization
In the frontend, show:
- **Current status** of each agent (idle/busy/error)
- **Current task** when busy
- **History** of completed tasks
- **Metrics** (tasks/hour, uptime, etc.)
- **Buttons** to restart/delete an agent
- **Live logs** for each agent

# Claude Code Agents - Pods in Kubernetes
## Agent Dockerfile
```dockerfile
# Dockerfile
FROM node:20-alpine
# Install dependencies
RUN apk add --no-cache \
git \
openssh-client \
curl \
bash \
vim
# Install Bun
RUN curl -fsSL https://bun.sh/install | bash
ENV PATH="/root/.bun/bin:$PATH"
# Install Claude Code CLI
RUN npm install -g @anthropic-ai/claude-code
# Create workspace
WORKDIR /workspace
# Copy agent scripts
COPY scripts/agent-entrypoint.sh /usr/local/bin/
COPY scripts/agent-loop.sh /usr/local/bin/
RUN chmod +x /usr/local/bin/agent-*.sh
# Git config
RUN git config --global user.name "AiWorker Agent" && \
git config --global user.email "agent@aiworker.dev" && \
git config --global init.defaultBranch main
# Setup SSH
RUN mkdir -p /root/.ssh && \
ssh-keyscan -H git.aiworker.dev >> /root/.ssh/known_hosts
ENTRYPOINT ["/usr/local/bin/agent-entrypoint.sh"]
```
## Agent Entrypoint Script
```bash
#!/bin/bash
# scripts/agent-entrypoint.sh
set -e
echo "🤖 Starting AiWorker Agent..."
echo "Agent ID: $AGENT_ID"
# Setup SSH key
if [ -n "$GIT_SSH_KEY" ]; then
echo "$GIT_SSH_KEY" > /root/.ssh/id_ed25519
chmod 600 /root/.ssh/id_ed25519
fi
# Configure Claude Code with MCP Server
mkdir -p /root/.claude-code
cat > /root/.claude-code/config.json <<EOF
{
"mcpServers": {
"aiworker": {
"command": "curl",
"args": [
"-X", "POST",
"-H", "Content-Type: application/json",
"-H", "X-Agent-ID: $AGENT_ID",
"$MCP_SERVER_URL/rpc"
]
}
}
}
EOF
# Send heartbeat
send_heartbeat() {
curl -s -X POST "$MCP_SERVER_URL/heartbeat" \
-H "Content-Type: application/json" \
-d "{\"agentId\":\"$AGENT_ID\",\"status\":\"$1\"}" > /dev/null 2>&1 || true
}
# Start heartbeat loop in background
while true; do
send_heartbeat "idle"
sleep 30
done &
HEARTBEAT_PID=$!
# Trap signals for graceful shutdown
trap "kill $HEARTBEAT_PID; send_heartbeat 'offline'; exit 0" SIGTERM SIGINT
# Start agent work loop
exec /usr/local/bin/agent-loop.sh
```
## Agent Work Loop
```bash
#!/bin/bash
# scripts/agent-loop.sh
set -e
echo "🔄 Starting agent work loop..."
while true; do
echo "📋 Checking for tasks..."
# Get next task via MCP
TASK=$(curl -s -X POST "$MCP_SERVER_URL/tools/call" \
-H "Content-Type: application/json" \
-d "{
\"name\": \"get_next_task\",
\"arguments\": {
\"agentId\": \"$AGENT_ID\"
}
}")
TASK_ID=$(echo "$TASK" | jq -r '.content[0].text | fromjson | .task.id // empty')
if [ -z "$TASK_ID" ] || [ "$TASK_ID" = "null" ]; then
echo "💤 No tasks available, waiting..."
sleep 10
continue
fi
echo "🎯 Got task: $TASK_ID"
# Extract task details
TASK_TITLE=$(echo "$TASK" | jq -r '.content[0].text | fromjson | .task.title')
TASK_DESC=$(echo "$TASK" | jq -r '.content[0].text | fromjson | .task.description')
PROJECT_REPO=$(echo "$TASK" | jq -r '.content[0].text | fromjson | .task.project.giteaRepoUrl')
echo "📝 Task: $TASK_TITLE"
echo "📦 Repo: $PROJECT_REPO"
# Log activity
curl -s -X POST "$MCP_SERVER_URL/tools/call" \
-H "Content-Type: application/json" \
-d "{
\"name\": \"log_activity\",
\"arguments\": {
\"agentId\": \"$AGENT_ID\",
\"level\": \"info\",
\"message\": \"Starting task: $TASK_TITLE\"
}
}" > /dev/null
# Clone repository
REPO_DIR="/workspace/task-$TASK_ID"
if [ ! -d "$REPO_DIR" ]; then
echo "📥 Cloning repository..."
git clone "$PROJECT_REPO" "$REPO_DIR"
fi
cd "$REPO_DIR"
# Create branch via MCP
echo "🌿 Creating branch..."
BRANCH_RESULT=$(curl -s -X POST "$MCP_SERVER_URL/tools/call" \
-H "Content-Type: application/json" \
-d "{
\"name\": \"create_branch\",
\"arguments\": {
\"taskId\": \"$TASK_ID\"
}
}")
BRANCH_NAME=$(echo "$BRANCH_RESULT" | jq -r '.content[0].text | fromjson | .branchName')
echo "🌿 Branch: $BRANCH_NAME"
# Fetch and checkout
git fetch origin
git checkout "$BRANCH_NAME" 2>/dev/null || git checkout -b "$BRANCH_NAME"
# Start Claude Code session
echo "🧠 Starting Claude Code session..."
# Create task prompt
TASK_PROMPT="I need you to work on the following task:
Title: $TASK_TITLE
Description:
$TASK_DESC
Instructions:
1. Analyze the codebase
2. Implement the required changes
3. Write tests if needed
4. Commit your changes with clear messages
5. When done, use the MCP tools to:
- create_pull_request with a summary
- trigger_preview_deploy
- update_task_status to 'ready_to_test'
If you need clarification, use ask_user_question.
Start working on this task now."
# Run Claude Code (with timeout of 2 hours)
timeout 7200 claude-code chat --message "$TASK_PROMPT" || {
STATUS=$?
if [ $STATUS -eq 124 ]; then
echo "⏰ Task timeout"
curl -s -X POST "$MCP_SERVER_URL/tools/call" \
-H "Content-Type: application/json" \
-d "{
\"name\": \"update_task_status\",
\"arguments\": {
\"taskId\": \"$TASK_ID\",
\"status\": \"needs_input\",
\"metadata\": {\"reason\": \"timeout\"}
}
}" > /dev/null
else
echo "❌ Claude Code exited with status $STATUS"
fi
}
echo "✅ Task completed: $TASK_ID"
# Cleanup
cd /workspace
rm -rf "$REPO_DIR"
# Brief pause before next task
sleep 5
done
```
## Pod Specification
```yaml
# k8s/agents/claude-agent-pod.yaml
apiVersion: v1
kind: Pod
metadata:
name: claude-agent-{{ AGENT_ID }}
namespace: agents
labels:
app: claude-agent
agent-id: "{{ AGENT_ID }}"
managed-by: aiworker
spec:
restartPolicy: Never
serviceAccountName: claude-agent
containers:
- name: agent
image: aiworker/claude-agent:latest
imagePullPolicy: Always
env:
- name: AGENT_ID
value: "{{ AGENT_ID }}"
- name: MCP_SERVER_URL
value: "http://aiworker-backend.control-plane.svc.cluster.local:3100"
- name: ANTHROPIC_API_KEY
valueFrom:
secretKeyRef:
name: aiworker-secrets
key: anthropic-api-key
- name: GITEA_URL
value: "http://gitea.gitea.svc.cluster.local:3000"
- name: GIT_SSH_KEY
valueFrom:
secretKeyRef:
name: git-ssh-keys
key: private-key
resources:
requests:
cpu: "500m"
memory: "1Gi"
limits:
cpu: "2"
memory: "4Gi"
volumeMounts:
- name: workspace
mountPath: /workspace
volumes:
- name: workspace
emptyDir:
sizeLimit: 10Gi
```
## Agent Manager (Backend)
```typescript
// services/kubernetes/agent-manager.ts
import { K8sClient } from './client'
import { db } from '../../db/client'
import { agents } from '../../db/schema'
import { eq } from 'drizzle-orm'
import crypto from 'crypto'
import { logger } from '../../utils/logger'
export class AgentManager {
private k8sClient: K8sClient
constructor() {
this.k8sClient = new K8sClient()
}
async createAgent(capabilities: string[] = []) {
const agentId = crypto.randomUUID()
// Create agent pod in K8s
const { podName, namespace } = await this.k8sClient.createAgentPod(agentId)
// Insert in database
await db.insert(agents).values({
id: agentId,
podName,
k8sNamespace: namespace,
status: 'initializing',
capabilities,
lastHeartbeat: new Date(),
})
logger.info(`Created agent: ${agentId}`)
return {
id: agentId,
podName,
namespace,
}
}
async deleteAgent(agentId: string) {
const agent = await db.query.agents.findFirst({
where: eq(agents.id, agentId),
})
if (!agent) {
throw new Error('Agent not found')
}
// Delete pod
await this.k8sClient.deletePod(agent.k8sNamespace, agent.podName)
// Delete from database
await db.delete(agents).where(eq(agents.id, agentId))
logger.info(`Deleted agent: ${agentId}`)
}
async scaleAgents(targetCount: number) {
const currentAgents = await db.query.agents.findMany()
if (currentAgents.length < targetCount) {
// Scale up
const toCreate = targetCount - currentAgents.length
logger.info(`Scaling up: creating ${toCreate} agents`)
for (let i = 0; i < toCreate; i++) {
await this.createAgent()
await new Promise(resolve => setTimeout(resolve, 1000)) // Stagger creation
}
} else if (currentAgents.length > targetCount) {
// Scale down
const toDelete = currentAgents.length - targetCount
logger.info(`Scaling down: deleting ${toDelete} agents`)
// Delete idle agents first
const idleAgents = currentAgents.filter(a => a.status === 'idle').slice(0, toDelete)
for (const agent of idleAgents) {
await this.deleteAgent(agent.id)
}
}
}
async autoScale() {
// Get pending tasks
const pendingTasks = await db.query.tasks.findMany({
where: eq(tasks.state, 'backlog'),
})
// Get available agents
const availableAgents = await db.query.agents.findMany({
where: eq(agents.status, 'idle'),
})
const busyAgents = await db.query.agents.findMany({
where: eq(agents.status, 'busy'),
})
const totalAgents = availableAgents.length + busyAgents.length
// Simple scaling logic
const targetAgents = Math.min(
Math.max(2, pendingTasks.length, busyAgents.length + 1), // At least 2, max 1 per pending task
10 // Max 10 agents
)
if (targetAgents !== totalAgents) {
logger.info(`Auto-scaling agents: ${totalAgents}${targetAgents}`)
await this.scaleAgents(targetAgents)
}
}
async cleanupStaleAgents() {
const staleThreshold = new Date(Date.now() - 5 * 60 * 1000) // 5 minutes
const staleAgents = await db.query.agents.findMany({
where: (agents, { lt }) => lt(agents.lastHeartbeat, staleThreshold),
})
for (const agent of staleAgents) {
logger.warn(`Cleaning up stale agent: ${agent.id}`)
await this.deleteAgent(agent.id)
}
}
}
// Start autoscaler
setInterval(async () => {
const manager = new AgentManager()
await manager.autoScale()
await manager.cleanupStaleAgents()
}, 30000) // Every 30 seconds
```
## Agent Logs Streaming
```typescript
// api/routes/agents.ts
import { Router } from 'express'
import { K8sClient } from '../../services/kubernetes/client'
import { db } from '../../db/client'
import { agents } from '../../db/schema'
import { eq } from 'drizzle-orm'
const router = Router()
const k8sClient = new K8sClient()
router.get('/:agentId/logs/stream', async (req, res) => {
const { agentId } = req.params
const agent = await db.query.agents.findFirst({
where: eq(agents.id, agentId),
})
if (!agent) {
return res.status(404).json({ error: 'Agent not found' })
}
res.setHeader('Content-Type', 'text/event-stream')
res.setHeader('Cache-Control', 'no-cache')
res.setHeader('Connection', 'keep-alive')
try {
const logStream = await k8sClient.streamPodLogs(agent.k8sNamespace, agent.podName)
logStream.on('data', (chunk) => {
res.write(`data: ${chunk.toString()}\n\n`)
})
logStream.on('end', () => {
res.end()
})
req.on('close', () => {
logStream.destroy()
})
} catch (error) {
res.status(500).json({ error: 'Failed to stream logs' })
}
})
export default router
```
## Monitoring Agents
```bash
# List all agents
kubectl get pods -n agents -l app=claude-agent
# Tail logs for a specific agent
kubectl logs -n agents claude-agent-abc123 -f
# Open a shell inside an agent
kubectl exec -it -n agents claude-agent-abc123 -- /bin/bash
# Show resource consumption
kubectl top pods -n agents
```

# Agent-Backend Communication
## Communication Architecture
```
┌─────────────────────┐
│  Claude Code Agent  │
│    (Pod in K8s)     │
└──────────┬──────────┘
           │ MCP Protocol
           │ (HTTP/JSON-RPC)
           ▼
┌─────────────────────┐
│     MCP Server      │
│  (Backend Service)  │
└──────────┬──────────┘
           │
    ┌──────┴──────┐
    │             │
    ▼             ▼
┌────────┐   ┌────────┐
│ MySQL  │   │ Gitea  │
└────────┘   └────────┘
```
## MCP Protocol Implementation
### Request Format
```json
{
"jsonrpc": "2.0",
"id": 1,
"method": "tools/call",
"params": {
"name": "get_next_task",
"arguments": {
"agentId": "agent-uuid"
}
}
}
```
### Response Format
```json
{
"jsonrpc": "2.0",
"id": 1,
"result": {
"content": [
{
"type": "text",
"text": "{\"task\": {...}}"
}
]
}
}
```
## HTTP Client en Agente
```typescript
// agent/mcp-client.ts
class MCPClient {
private baseUrl: string
private agentId: string
constructor(baseUrl: string, agentId: string) {
this.baseUrl = baseUrl
this.agentId = agentId
}
async callTool(toolName: string, args: any) {
const response = await fetch(`${this.baseUrl}/rpc`, {
method: 'POST',
headers: {
'Content-Type': 'application/json',
'X-Agent-ID': this.agentId,
},
body: JSON.stringify({
jsonrpc: '2.0',
id: Date.now(),
method: 'tools/call',
params: {
name: toolName,
arguments: args,
},
}),
})
if (!response.ok) {
throw new Error(`MCP call failed: ${response.statusText}`)
}
const data = await response.json()
if (data.error) {
throw new Error(data.error.message)
}
return data.result
}
async listTools() {
const response = await fetch(`${this.baseUrl}/rpc`, {
method: 'POST',
headers: {
'Content-Type': 'application/json',
'X-Agent-ID': this.agentId,
},
body: JSON.stringify({
jsonrpc: '2.0',
id: Date.now(),
method: 'tools/list',
params: {},
}),
})
const data = await response.json()
return data.result.tools
}
}
// Usage
const mcp = new MCPClient(
process.env.MCP_SERVER_URL,
process.env.AGENT_ID
)
const task = await mcp.callTool('get_next_task', {
agentId: process.env.AGENT_ID,
})
```
## Server-Side Handler
```typescript
// backend: api/routes/mcp.ts
import { Router, Request, Response } from 'express'
import { handleToolCall } from '../../services/mcp/handlers'
import { tools } from '../../services/mcp/tools'
import { logger } from '../../utils/logger'
const router = Router()
// JSON-RPC endpoint
router.post('/rpc', async (req: Request, res: Response) => {
const { jsonrpc, id, method, params } = req.body
if (jsonrpc !== '2.0') {
return res.status(400).json({
jsonrpc: '2.0',
id,
error: {
code: -32600,
message: 'Invalid Request',
},
})
}
const agentId = req.headers['x-agent-id'] as string
if (!agentId) {
return res.status(401).json({
jsonrpc: '2.0',
id,
error: {
code: -32001,
message: 'Missing agent ID',
},
})
}
try {
switch (method) {
case 'tools/list':
return res.json({
jsonrpc: '2.0',
id,
result: {
tools: tools.map((t) => ({
name: t.name,
description: t.description,
inputSchema: t.inputSchema,
})),
},
})
case 'tools/call':
const { name, arguments: args } = params
logger.info(`MCP call from ${agentId}: ${name}`)
const result = await handleToolCall(name, {
...args,
agentId,
})
return res.json({
jsonrpc: '2.0',
id,
result,
})
default:
return res.status(404).json({
jsonrpc: '2.0',
id,
error: {
code: -32601,
message: 'Method not found',
},
})
}
} catch (error: any) {
logger.error('MCP error:', error)
return res.status(500).json({
jsonrpc: '2.0',
id,
error: {
code: -32603,
message: 'Internal error',
data: error.message,
},
})
}
})
export default router
```
## Heartbeat System
### Agent-Side Heartbeat
```bash
# In agent pod
while true; do
curl -s -X POST "$MCP_SERVER_URL/heartbeat" \
-H "Content-Type: application/json" \
-H "X-Agent-ID: $AGENT_ID" \
-d "{\"status\":\"idle\"}"
sleep 30
done &
```
### Server-Side Heartbeat Handler
```typescript
// api/routes/mcp.ts
router.post('/heartbeat', async (req: Request, res: Response) => {
const agentId = req.headers['x-agent-id'] as string
const { status } = req.body
if (!agentId) {
return res.status(401).json({ error: 'Missing agent ID' })
}
try {
await db.update(agents)
.set({
lastHeartbeat: new Date(),
status: status || 'idle',
})
.where(eq(agents.id, agentId))
res.json({ success: true })
} catch (error) {
res.status(500).json({ error: 'Failed to update heartbeat' })
}
})
```
## WebSocket for Real-Time Updates
Alternatively, for bidirectional real-time communication:
```typescript
// backend: api/websocket/agents.ts
import { Server as SocketIOServer } from 'socket.io'
export function setupAgentWebSocket(io: SocketIOServer) {
const agentNamespace = io.of('/agents')
agentNamespace.on('connection', (socket) => {
const agentId = socket.handshake.query.agentId as string
console.log(`Agent connected: ${agentId}`)
// Join agent room
socket.join(agentId)
// Heartbeat
socket.on('heartbeat', async (data) => {
await db.update(agents)
.set({
lastHeartbeat: new Date(),
status: data.status,
})
.where(eq(agents.id, agentId))
})
// Task updates
socket.on('task_update', async (data) => {
await db.update(tasks)
.set({ state: data.state })
.where(eq(tasks.id, data.taskId))
// Notify frontend
io.emit('task:status_changed', {
taskId: data.taskId,
newState: data.state,
})
})
socket.on('disconnect', () => {
console.log(`Agent disconnected: ${agentId}`)
})
})
// Send task assignment to specific agent
return {
assignTask: (agentId: string, task: any) => {
agentNamespace.to(agentId).emit('task_assigned', task)
},
}
}
```
## Authentication & Security
### JWT for Agents
```typescript
// Generate agent token
import jwt from 'jsonwebtoken'
export function generateAgentToken(agentId: string) {
return jwt.sign(
{
agentId,
type: 'agent',
},
process.env.JWT_SECRET!,
{
expiresIn: '7d',
}
)
}
// Verify middleware
export function verifyAgentToken(req: Request, res: Response, next: NextFunction) {
const token = req.headers.authorization?.replace('Bearer ', '')
if (!token) {
return res.status(401).json({ error: 'No token provided' })
}
try {
const decoded = jwt.verify(token, process.env.JWT_SECRET!)
req.agentId = decoded.agentId
next()
} catch (error) {
return res.status(401).json({ error: 'Invalid token' })
}
}
```
### mTLS (Optional)
For additional security, use mTLS between agents and the backend:
```yaml
# Agent pod with client cert
volumes:
- name: agent-certs
secret:
secretName: agent-client-certs
volumeMounts:
- name: agent-certs
mountPath: /etc/certs
readOnly: true
env:
- name: MCP_CLIENT_CERT
value: /etc/certs/client.crt
- name: MCP_CLIENT_KEY
value: /etc/certs/client.key
```
## Retry & Error Handling
```typescript
// agent/mcp-client-with-retry.ts
class MCPClientWithRetry extends MCPClient {
async callToolWithRetry(
toolName: string,
args: any,
maxRetries = 3
) {
let lastError
for (let i = 0; i < maxRetries; i++) {
try {
return await this.callTool(toolName, args)
} catch (error: any) {
lastError = error
console.error(`Attempt ${i + 1} failed:`, error.message)
if (i < maxRetries - 1) {
// Exponential backoff
await sleep(Math.pow(2, i) * 1000)
}
}
}
throw lastError
}
}
```
## Circuit Breaker
```typescript
// agent/circuit-breaker.ts
class CircuitBreaker {
private failures = 0
private lastFailureTime = 0
private state: 'closed' | 'open' | 'half-open' = 'closed'
private readonly threshold = 5
private readonly timeout = 60000 // 1 minute
async call<T>(fn: () => Promise<T>): Promise<T> {
if (this.state === 'open') {
if (Date.now() - this.lastFailureTime > this.timeout) {
this.state = 'half-open'
} else {
throw new Error('Circuit breaker is open')
}
}
try {
const result = await fn()
if (this.state === 'half-open') {
this.state = 'closed'
this.failures = 0
}
return result
} catch (error) {
this.failures++
this.lastFailureTime = Date.now()
if (this.failures >= this.threshold) {
this.state = 'open'
}
throw error
}
}
}
// Usage
const breaker = new CircuitBreaker()
const task = await breaker.call(() =>
mcp.callTool('get_next_task', { agentId })
)
```
## Monitoring Communication
```typescript
// backend: middleware/mcp-metrics.ts
import { Request, Response, NextFunction } from 'express'
import { logger } from '../utils/logger'
const metrics = {
totalCalls: 0,
successCalls: 0,
failedCalls: 0,
callDurations: [] as number[],
}
export function mcpMetricsMiddleware(
req: Request,
res: Response,
next: NextFunction
) {
const start = Date.now()
metrics.totalCalls++
res.on('finish', () => {
const duration = Date.now() - start
metrics.callDurations.push(duration)
if (res.statusCode < 400) {
metrics.successCalls++
} else {
metrics.failedCalls++
}
logger.debug('MCP call metrics', {
method: req.body?.method,
agentId: req.headers['x-agent-id'],
duration,
status: res.statusCode,
})
})
next()
}
// Endpoint to view metrics
router.get('/metrics', (req, res) => {
res.json({
total: metrics.totalCalls,
success: metrics.successCalls,
failed: metrics.failedCalls,
avgDuration:
metrics.callDurations.reduce((a, b) => a + b, 0) /
metrics.callDurations.length,
})
})
```
## Testing MCP Communication
```typescript
// test/mcp-client.test.ts
import { MCPClient } from '../agent/mcp-client'
describe('MCP Client', () => {
let client: MCPClient
beforeEach(() => {
client = new MCPClient('http://localhost:3100', 'test-agent')
})
it('should list available tools', async () => {
const tools = await client.listTools()
expect(tools).toContainEqual(
expect.objectContaining({ name: 'get_next_task' })
)
})
it('should call tool successfully', async () => {
const result = await client.callTool('heartbeat', {
status: 'idle',
})
expect(result.content[0].text).toContain('success')
})
it('should handle errors', async () => {
await expect(
client.callTool('invalid_tool', {})
).rejects.toThrow()
})
})
```

# MCP Tools - Tools Available to Agents
This page details every MCP tool that Claude Code agents can use to interact with the AiWorker system.
## get_next_task
Fetches the next available task from the queue and assigns it to the agent.
**Input**:
```json
{
"agentId": "uuid-of-agent",
"capabilities": ["javascript", "react", "python"] // optional
}
```
**Output**:
```json
{
"task": {
"id": "task-uuid",
"title": "Implement user authentication",
"description": "Create a JWT-based authentication system...",
"priority": "high",
"project": {
"id": "project-uuid",
"name": "My App",
"giteaRepoUrl": "http://gitea/owner/my-app",
"dockerImage": "myapp:latest"
}
}
}
```
**Usage example**:
```typescript
// Inside Claude Code, the agent can do:
const task = await mcp.callTool('get_next_task', {
agentId: process.env.AGENT_ID,
capabilities: ['javascript', 'typescript', 'react']
})
```
---
## update_task_status
Updates the status of a task.
**Input**:
```json
{
"taskId": "task-uuid",
"status": "in_progress" | "needs_input" | "ready_to_test" | "completed",
"metadata": {
"durationMinutes": 45,
"linesChanged": 250
}
}
```
**Output**:
```json
{
"success": true
}
```
**Valid states**:
- `in_progress`: agent actively working
- `needs_input`: agent needs information from the user
- `ready_to_test`: task complete, ready for testing
- `completed`: task fully finished
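A server-side guard over these transitions can be sketched as follows; the allowed-transition map is an assumption and not part of the documented API:

```typescript
// Hypothetical transition guard for update_task_status.
// The map below is an assumption; adjust it to the real task lifecycle.
const allowedTransitions: Record<string, string[]> = {
  backlog: ['in_progress'],
  in_progress: ['needs_input', 'ready_to_test', 'completed'],
  needs_input: ['in_progress'],
  ready_to_test: ['completed', 'in_progress'],
  completed: [],
}

function canTransition(from: string, to: string): boolean {
  return (allowedTransitions[from] ?? []).includes(to)
}
```

The tool handler could reject an update with an error result when `canTransition` returns `false`.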
---
## ask_user_question
Requests information from the user when the agent needs clarification.
**Input**:
```json
{
"taskId": "task-uuid",
"question": "Which authentication library should I use: Passport.js or NextAuth?",
"context": "The task requires implementing OAuth authentication. I found two popular options..."
}
```
**Output**:
```json
{
"success": true,
"message": "Question sent to user",
"questionId": "question-uuid"
}
```
**Behavior**:
1. Sets the task state to `needs_input`
2. Notifies the frontend via WebSocket
3. The user responds from the dashboard
4. The agent can poll with `check_question_response`
---
## check_question_response
Checks whether the user has answered a question.
**Input**:
```json
{
"taskId": "task-uuid"
}
```
**Output (no response)**:
```json
{
"hasResponse": false,
"message": "No response yet"
}
```
**Output (with response)**:
```json
{
"hasResponse": true,
"response": "Use NextAuth, it integrates better with our Next.js stack",
"question": "Which authentication library should I use..."
}
```
---
## create_branch
Creates a new branch in Gitea for the task.
**Input**:
```json
{
"taskId": "task-uuid",
"branchName": "feature/user-auth" // optional, auto-generated if omitted
}
```
**Output**:
```json
{
"success": true,
"branchName": "task-abc123-implement-user-authentication",
"repoUrl": "http://gitea/owner/my-app"
}
```
**Behavior**:
- If `branchName` is not provided, it is generated as: `task-{shortId}-{title-slugified}`
- The branch is created from the project's default branch (main/develop)
- The task's `branchName` field is updated
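The generated name can be sketched as below; the 8-character short ID and the 40-character slug cap are assumptions (the examples on this page use a shorter ID):

```typescript
// Hypothetical implementation of the documented scheme: task-{shortId}-{title-slugified}.
function branchNameFor(taskId: string, title: string): string {
  const shortId = taskId.slice(0, 8) // assumed short-ID length
  const slug = title
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, '-') // collapse non-alphanumerics into hyphens
    .replace(/^-+|-+$/g, '')     // trim leading/trailing hyphens
    .slice(0, 40)                // assumed cap to keep branch names short
  return `task-${shortId}-${slug}`
}
```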
---
## create_pull_request
Creates a Pull Request in Gitea with the task's changes.
**Input**:
```json
{
"taskId": "task-uuid",
"title": "Implement JWT-based authentication",
"description": "## Changes\n- Added JWT middleware\n- Created auth routes\n- Added tests\n\n## Test Plan\n- [ ] Test login flow\n- [ ] Test token refresh"
}
```
**Output**:
```json
{
"success": true,
"prUrl": "http://gitea/owner/my-app/pulls/42",
"prNumber": 42
}
```
**Behavior**:
- Creates a PR from the task branch into the default branch
- Updates the `prNumber` and `prUrl` fields on the task
- Emits the WebSocket event `task:pr_created`
---
## trigger_preview_deploy
Deploys the task to an isolated preview environment in Kubernetes.
**Input**:
```json
{
"taskId": "task-uuid"
}
```
**Output**:
```json
{
"success": true,
"previewUrl": "https://task-abc123.preview.aiworker.dev",
"namespace": "preview-task-abc123"
}
```
**Behavior**:
1. Creates a K8s namespace: `preview-task-{shortId}`
2. Deploys the application using the project's image
3. Creates an ingress with a unique URL
4. Moves the task to the `ready_to_test` state
5. Saves `previewUrl` and `previewNamespace` on the task
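The namespace and URL follow directly from the task's short ID; a minimal sketch (the preview domain is taken from the example output above and should be treated as configurable):

```typescript
// Hypothetical helpers deriving the documented preview conventions.
function previewFor(shortId: string): { namespace: string; url: string } {
  return {
    namespace: `preview-task-${shortId}`,
    url: `https://task-${shortId}.preview.aiworker.dev`, // assumed domain
  }
}
```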
---
## get_task_details
Returns full task details, including pending questions.
**Input**:
```json
{
"taskId": "task-uuid"
}
```
**Output**:
```json
{
"task": {
"id": "task-uuid",
"title": "Implement user authentication",
"description": "...",
"state": "needs_input",
"branchName": "task-abc123-implement-user-authentication",
"prUrl": "http://gitea/owner/my-app/pulls/42",
"previewUrl": "https://task-abc123.preview.aiworker.dev",
"questions": [
{
"id": "q-uuid",
"question": "Which auth library?",
"status": "pending",
"askedAt": "2026-01-19T10:00:00Z"
}
],
"project": {
"name": "My App",
"giteaRepoUrl": "..."
}
}
}
```
---
## log_activity
Records agent activity in the system logs.
**Input**:
```json
{
"agentId": "agent-uuid",
"level": "info" | "debug" | "warn" | "error",
"message": "Starting task implementation",
"metadata": {
"taskId": "task-uuid",
"operation": "code_generation",
"filesModified": 5
}
}
```
**Output**:
```json
{
"success": true
}
```
**Log levels**:
- `debug`: detailed debugging information
- `info`: normal events (default)
- `warn`: situations that need attention
- `error`: errors that prevented an operation from completing
---
## heartbeat
Sends a liveness signal indicating the agent is active.
**Input**:
```json
{
"agentId": "agent-uuid",
"status": "idle" | "busy" | "error"
}
```
**Output**:
```json
{
"success": true
}
```
**Behavior**:
- Updates the `lastHeartbeat` timestamp
- Updates the agent's `status`
- If no heartbeat is received for 5 minutes, the agent is marked `offline`
---
## Typical Task Flow
```mermaid
sequenceDiagram
Agent->>MCP: get_next_task()
MCP-->>Agent: task details
Agent->>MCP: create_branch()
Agent->>Agent: Work on task
Agent->>MCP: ask_user_question() (if needed)
Agent->>Agent: Wait for response
Agent->>MCP: check_question_response()
Agent->>Agent: Continue working
Agent->>Git: commit & push
Agent->>MCP: create_pull_request()
Agent->>MCP: trigger_preview_deploy()
Agent->>MCP: update_task_status("ready_to_test")
```
## Complete Usage Example
```typescript
// Inside the Claude Code agent
async function processTask() {
// 1. Get task
const taskResult = await mcp.callTool('get_next_task', {
agentId: process.env.AGENT_ID
})
const task = JSON.parse(taskResult.content[0].text).task
if (!task) {
console.log('No tasks available')
return
}
console.log(`Working on: ${task.title}`)
// 2. Create branch
const branchResult = await mcp.callTool('create_branch', {
taskId: task.id
})
const { branchName } = JSON.parse(branchResult.content[0].text)
// 3. Clone and checkout
await exec(`git clone ${task.project.giteaRepoUrl} /workspace/task-${task.id}`)
await exec(`cd /workspace/task-${task.id} && git checkout ${branchName}`)
// 4. Do the work...
// (Claude Code generates and commits code)
// 5. Need clarification?
if (needsClarification) {
await mcp.callTool('ask_user_question', {
taskId: task.id,
question: 'Should I add error handling for network failures?',
context: 'The API calls can fail...'
})
// Wait for response
let response
while (!response) {
await sleep(5000)
const checkResult = await mcp.callTool('check_question_response', {
taskId: task.id
})
const check = JSON.parse(checkResult.content[0].text)
if (check.hasResponse) {
response = check.response
}
}
}
// 6. Create PR
await mcp.callTool('create_pull_request', {
taskId: task.id,
title: task.title,
description: `## Summary\nImplemented ${task.title}\n\n## Changes\n- Feature A\n- Feature B`
})
// 7. Deploy preview
await mcp.callTool('trigger_preview_deploy', {
taskId: task.id
})
// 8. Mark as done
await mcp.callTool('update_task_status', {
taskId: task.id,
status: 'ready_to_test'
})
console.log('Task completed!')
}
```
## Error Handling
All tools can return errors:
```json
{
"content": [{
"type": "text",
"text": "Error: Task not found"
}],
"isError": true
}
```
The agent should handle these errors appropriately:
```typescript
const result = await mcp.callTool('update_task_status', { ... })
if (result.isError) {
console.error('Tool failed:', result.content[0].text)
// Handle error
}
```
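Transient failures (network blips, temporary lock contention) can be absorbed by wrapping tool calls with retries and exponential backoff. A minimal sketch (the attempt count and delays are illustrative, not part of the MCP spec):

```typescript
// Retries an async operation with exponential backoff.
// `attempts` and `baseDelayMs` are illustrative defaults.
async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 1000
): Promise<T> {
  let lastError: unknown
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn()
    } catch (error) {
      lastError = error
      // Backoff: 1s, 2s, 4s, ...
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** i))
    }
  }
  throw lastError
}
```

Usage: `const result = await withRetry(() => mcp.callTool('update_task_status', { taskId, status }))`.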
## Rate Limiting
To prevent abuse, the tools are rate limited:
- `get_next_task`: 1 per second
- `ask_user_question`: 5 per minute per task
- `create_pull_request`: 1 per minute
- `trigger_preview_deploy`: 1 per minute
- Others: 10 per second
If the rate limit is exceeded, the tool returns a 429 error.
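Server-side, limits like these can be enforced with something as simple as a fixed-window counter per key. A sketch (the real implementation presumably lives in the MCP server and might back the counters with Redis instead of memory):

```typescript
// Fixed-window rate limiter: allows `limit` calls per `windowMs` per key.
class RateLimiter {
  private counts = new Map<string, { windowStart: number; count: number }>()

  constructor(private limit: number, private windowMs: number) {}

  allow(key: string, now: number = Date.now()): boolean {
    const entry = this.counts.get(key)
    if (!entry || now - entry.windowStart >= this.windowMs) {
      // First call in a fresh window
      this.counts.set(key, { windowStart: now, count: 1 })
      return true
    }
    if (entry.count < this.limit) {
      entry.count++
      return true
    }
    return false
  }
}
```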
---

`docs/06-deployment/ci-cd.md`
# CI/CD Pipeline
## CI/CD Architecture
```
Git Push → Gitea Webhook → Backend → BullMQ → Deploy Worker → K8s
                                                   ↓
                                             Notifications
```
## Gitea Actions (GitHub Actions compatible)
### Backend Workflow
```yaml
# .gitea/workflows/backend.yml
name: Backend CI/CD
on:
push:
branches: [main, develop, staging]
paths:
- 'backend/**'
pull_request:
branches: [main, develop]
paths:
- 'backend/**'
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Setup Bun
uses: oven-sh/setup-bun@v1
with:
bun-version: 1.3.6
- name: Install dependencies
working-directory: ./backend
run: bun install
- name: Run linter
working-directory: ./backend
run: bun run lint
- name: Run tests
working-directory: ./backend
run: bun test
build:
needs: test
runs-on: ubuntu-latest
if: github.ref == 'refs/heads/main' || github.ref == 'refs/heads/staging'
steps:
- uses: actions/checkout@v4
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Login to Registry
uses: docker/login-action@v3
with:
registry: ${{ secrets.DOCKER_REGISTRY }}
username: ${{ secrets.DOCKER_USERNAME }}
password: ${{ secrets.DOCKER_PASSWORD }}
- name: Build and push
uses: docker/build-push-action@v5
with:
context: ./backend
push: true
tags: |
${{ secrets.DOCKER_REGISTRY }}/aiworker-backend:${{ github.sha }}
${{ secrets.DOCKER_REGISTRY }}/aiworker-backend:latest
cache-from: type=registry,ref=${{ secrets.DOCKER_REGISTRY }}/aiworker-backend:buildcache
cache-to: type=registry,ref=${{ secrets.DOCKER_REGISTRY }}/aiworker-backend:buildcache,mode=max
deploy:
needs: build
runs-on: ubuntu-latest
steps:
- name: Trigger deployment
run: |
curl -X POST ${{ secrets.AIWORKER_API_URL }}/api/deployments \
-H "Authorization: Bearer ${{ secrets.AIWORKER_TOKEN }}" \
-H "Content-Type: application/json" \
-d '{
"projectId": "backend",
"environment": "${{ github.ref == 'refs/heads/main' && 'production' || 'staging' }}",
"commitHash": "${{ github.sha }}",
"branch": "${{ github.ref_name }}"
}'
```
### Frontend Workflow
```yaml
# .gitea/workflows/frontend.yml
name: Frontend CI/CD
on:
push:
branches: [main, staging]
paths:
- 'frontend/**'
jobs:
build-and-deploy:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Setup Bun
uses: oven-sh/setup-bun@v1
with:
bun-version: 1.3.6
- name: Install and build
working-directory: ./frontend
run: |
bun install
bun run build
- name: Build Docker image
run: |
docker build -t aiworker-frontend:${{ github.sha }} ./frontend
docker tag aiworker-frontend:${{ github.sha }} aiworker-frontend:latest
- name: Push to registry
run: |
echo "${{ secrets.DOCKER_PASSWORD }}" | docker login -u "${{ secrets.DOCKER_USERNAME }}" --password-stdin
docker push aiworker-frontend:${{ github.sha }}
docker push aiworker-frontend:latest
- name: Deploy
run: |
kubectl set image deployment/frontend frontend=aiworker-frontend:${{ github.sha }} -n control-plane
```
## Webhook Handler
```typescript
// services/gitea/webhooks.ts
export async function handlePushEvent(payload: any) {
const { ref, commits, repository } = payload
const branch = ref.replace('refs/heads/', '')
logger.info(`Push to ${repository.full_name}:${branch}`, {
commits: commits.length,
})
// Find project by repo
const project = await db.query.projects.findFirst({
where: eq(projects.giteaRepoUrl, repository.clone_url),
})
if (!project) {
logger.warn('Project not found for repo:', repository.clone_url)
return
}
// Determine environment based on branch
let environment: 'dev' | 'staging' | 'production' | null = null
if (branch === 'main' || branch === 'master') {
environment = 'production'
} else if (branch === 'staging') {
environment = 'staging'
} else if (branch === 'develop' || branch === 'dev') {
environment = 'dev'
}
if (!environment) {
logger.debug('Ignoring push to non-deployment branch:', branch)
return
}
// Create deployment
const deploymentId = crypto.randomUUID()
const commitHash = commits[commits.length - 1].id
await db.insert(deployments).values({
id: deploymentId,
projectId: project.id,
environment,
deploymentType: 'automatic',
branch,
commitHash,
status: 'pending',
})
// Enqueue deployment job
await enqueueDeploy({
deploymentId,
projectId: project.id,
environment,
branch,
commitHash,
})
logger.info(`Deployment queued: ${environment} for ${project.name}`)
}
```
## Manual Deployment
```typescript
// api/routes/deployments.ts
router.post('/deployments', async (req, res) => {
const { projectId, environment, commitHash, branch } = req.body
// Validate
const project = await db.query.projects.findFirst({
where: eq(projects.id, projectId),
})
if (!project) {
return res.status(404).json({ error: 'Project not found' })
}
// Create deployment record
const deploymentId = crypto.randomUUID()
await db.insert(deployments).values({
id: deploymentId,
projectId,
environment,
deploymentType: 'manual',
branch,
commitHash,
status: 'pending',
triggeredBy: req.user?.id,
})
// Enqueue
await enqueueDeploy({
deploymentId,
projectId,
environment,
branch,
commitHash,
})
res.status(201).json({
deploymentId,
status: 'pending',
})
})
```
## Deployment Worker
```typescript
// services/queue/deploy-worker.ts
import { Worker } from 'bullmq'
import { K8sClient } from '../kubernetes/client'
import { db } from '../../db/client'
import { deployments } from '../../db/schema'
import { eq } from 'drizzle-orm'
const k8sClient = new K8sClient()
export const deployWorker = new Worker(
'deploys',
async (job) => {
const { deploymentId, projectId, environment, branch, commitHash } = job.data
logger.info(`Starting deployment: ${environment}`, { deploymentId })
// Update status
await db.update(deployments)
.set({
status: 'in_progress',
startedAt: new Date(),
})
.where(eq(deployments.id, deploymentId))
job.updateProgress(10)
try {
// Get project config
const project = await db.query.projects.findFirst({
where: eq(projects.id, projectId),
})
if (!project) {
throw new Error('Project not found')
}
job.updateProgress(20)
// Build image tag
const imageTag = `${project.dockerImage}:${commitHash.slice(0, 7)}`
// Determine namespace
const namespace =
environment === 'production'
? `${project.k8sNamespace}-prod`
: environment === 'staging'
? `${project.k8sNamespace}-staging`
: `${project.k8sNamespace}-dev`
job.updateProgress(30)
// Create/update deployment
await k8sClient.createOrUpdateDeployment({
namespace,
name: `${project.name}-${environment}`,
image: imageTag,
envVars: project.envVars as Record<string, string>,
replicas: environment === 'production' ? project.replicas : 1,
resources: {
cpu: project.cpuLimit || '500m',
memory: project.memoryLimit || '512Mi',
},
})
job.updateProgress(70)
// Create/update service
await k8sClient.createOrUpdateService({
namespace,
name: `${project.name}-${environment}`,
port: 3000,
})
job.updateProgress(80)
// Create/update ingress
const host =
environment === 'production'
? `${project.name}.aiworker.dev`
: `${environment}-${project.name}.aiworker.dev`
const url = await k8sClient.createOrUpdateIngress({
namespace,
name: `${project.name}-${environment}`,
host,
serviceName: `${project.name}-${environment}`,
servicePort: 3000,
})
job.updateProgress(90)
// Wait for deployment to be ready
await k8sClient.waitForDeployment(namespace, `${project.name}-${environment}`, 300)
job.updateProgress(100)
// Update deployment as completed
const completedAt = new Date()
const durationSeconds = Math.floor(
(completedAt.getTime() - job.processedOn!) / 1000
)
await db.update(deployments)
.set({
status: 'completed',
completedAt,
url,
durationSeconds,
})
.where(eq(deployments.id, deploymentId))
// Emit event
emitWebSocketEvent('deploy:completed', {
deploymentId,
environment,
url,
})
      logger.info(`Deployment completed: ${environment} → ${url}`)
return { success: true, url }
} catch (error: any) {
logger.error('Deployment failed:', error)
// Update as failed
await db.update(deployments)
.set({
status: 'failed',
errorMessage: error.message,
completedAt: new Date(),
})
.where(eq(deployments.id, deploymentId))
// Emit event
emitWebSocketEvent('deploy:failed', {
deploymentId,
environment,
error: error.message,
})
throw error
}
},
{
connection: getRedis(),
concurrency: 3,
}
)
```
## Rollback
```typescript
// api/routes/deployments.ts
router.post('/deployments/:id/rollback', async (req, res) => {
const { id } = req.params
// Get deployment
const deployment = await db.query.deployments.findFirst({
where: eq(deployments.id, id),
})
if (!deployment) {
return res.status(404).json({ error: 'Deployment not found' })
}
// Get previous successful deployment
const previousDeployment = await db.query.deployments.findFirst({
where: and(
eq(deployments.projectId, deployment.projectId),
eq(deployments.environment, deployment.environment),
eq(deployments.status, 'completed'),
lt(deployments.createdAt, deployment.createdAt)
),
orderBy: [desc(deployments.createdAt)],
})
if (!previousDeployment) {
return res.status(400).json({ error: 'No previous deployment to rollback to' })
}
// Create rollback deployment
const rollbackId = crypto.randomUUID()
await db.insert(deployments).values({
id: rollbackId,
projectId: deployment.projectId,
environment: deployment.environment,
deploymentType: 'rollback',
branch: previousDeployment.branch,
commitHash: previousDeployment.commitHash,
status: 'pending',
triggeredBy: req.user?.id,
})
// Enqueue
await enqueueDeploy({
deploymentId: rollbackId,
projectId: deployment.projectId,
environment: deployment.environment,
branch: previousDeployment.branch!,
commitHash: previousDeployment.commitHash!,
})
res.json({
deploymentId: rollbackId,
rollingBackTo: previousDeployment.commitHash,
})
})
```
## Health Checks Post-Deploy
```typescript
async function verifyDeployment(url: string): Promise<boolean> {
const maxAttempts = 10
const delayMs = 3000
for (let i = 0; i < maxAttempts; i++) {
try {
const response = await fetch(`${url}/health`, {
method: 'GET',
signal: AbortSignal.timeout(5000),
})
if (response.ok) {
logger.info(`Deployment healthy: ${url}`)
return true
}
} catch (error) {
logger.debug(`Health check attempt ${i + 1} failed`)
}
await new Promise((resolve) => setTimeout(resolve, delayMs))
}
logger.error(`Deployment failed health checks: ${url}`)
return false
}
```
# GitOps con ArgoCD
## What is GitOps?
GitOps uses Git as the single source of truth for infrastructure and applications. Changes are made via commits, and tools like ArgoCD automatically reconcile the desired state in Git with the actual state in Kubernetes.
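The reconciliation at the heart of GitOps can be illustrated with a pure diff between desired state (Git) and actual state (cluster). A simplified sketch (ArgoCD's real logic compares full manifests, not just these two fields):

```typescript
interface AppState {
  image: string
  replicas: number
}

// Compares desired (Git) vs actual (cluster) state and lists the drifted fields.
function diffState(desired: AppState, actual: AppState): string[] {
  const drift: string[] = []
  if (desired.image !== actual.image) drift.push('image')
  if (desired.replicas !== actual.replicas) drift.push('replicas')
  return drift
}
```

When the diff is non-empty, the controller applies the desired manifests; this corresponds to the `OutOfSync → Synced` transition visible in the ArgoCD UI.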
## Installing ArgoCD
```bash
# Create namespace
kubectl create namespace argocd
# Install ArgoCD
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
# Wait for pods
kubectl wait --for=condition=ready pod -l app.kubernetes.io/name=argocd-server -n argocd --timeout=300s
# Get initial admin password
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d
# Port forward to access UI
kubectl port-forward svc/argocd-server -n argocd 8080:443
# Access at: https://localhost:8080
# Username: admin
# Password: (from above command)
```
## GitOps Repository Structure
```
gitops-repo/
├── projects/
│ ├── backend/
│ │ ├── base/
│ │ │ ├── deployment.yaml
│ │ │ ├── service.yaml
│ │ │ └── kustomization.yaml
│ │ ├── dev/
│ │ │ ├── kustomization.yaml
│ │ │ └── patches.yaml
│ │ ├── staging/
│ │ │ ├── kustomization.yaml
│ │ │ └── patches.yaml
│ │ └── production/
│ │ ├── kustomization.yaml
│ │ └── patches.yaml
│ │
│ └── my-project/
│ ├── base/
│ ├── dev/
│ ├── staging/
│ └── production/
└── argocd/
├── applications/
│ ├── backend-dev.yaml
│ ├── backend-staging.yaml
│ ├── backend-production.yaml
│ └── my-project-production.yaml
└── app-of-apps.yaml
```
## Base Manifests con Kustomize
```yaml
# projects/backend/base/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: backend
spec:
replicas: 2
selector:
matchLabels:
app: backend
template:
metadata:
labels:
app: backend
spec:
containers:
- name: backend
image: aiworker/backend:latest
ports:
- containerPort: 3000
env:
- name: NODE_ENV
value: production
resources:
requests:
cpu: 250m
memory: 512Mi
limits:
cpu: 1
memory: 2Gi
---
# projects/backend/base/service.yaml
apiVersion: v1
kind: Service
metadata:
name: backend
spec:
selector:
app: backend
ports:
- port: 3000
targetPort: 3000
---
# projects/backend/base/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml
- service.yaml
commonLabels:
app: backend
managed-by: argocd
```
## Environment Overlays
```yaml
# projects/backend/production/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: control-plane
resources:
  - ../base
patchesStrategicMerge:
- patches.yaml
images:
- name: aiworker/backend
newTag: v1.2.3 # This gets updated automatically
replicas:
- name: backend
count: 3
configMapGenerator:
- name: backend-config
literals:
- NODE_ENV=production
- LOG_LEVEL=info
---
# projects/backend/production/patches.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: backend
spec:
template:
spec:
containers:
- name: backend
resources:
requests:
cpu: 500m
memory: 1Gi
limits:
cpu: 2
memory: 4Gi
```
## ArgoCD Application
```yaml
# argocd/applications/backend-production.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: backend-production
namespace: argocd
spec:
project: default
source:
repoURL: https://git.aiworker.dev/aiworker/gitops
targetRevision: HEAD
path: projects/backend/production
destination:
server: https://kubernetes.default.svc
namespace: control-plane
syncPolicy:
automated:
prune: true
selfHeal: true
allowEmpty: false
syncOptions:
- CreateNamespace=false
retry:
limit: 5
backoff:
duration: 5s
factor: 2
maxDuration: 3m
revisionHistoryLimit: 10
```
## App of Apps Pattern
```yaml
# argocd/app-of-apps.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: aiworker-apps
namespace: argocd
spec:
project: default
source:
repoURL: https://git.aiworker.dev/aiworker/gitops
targetRevision: HEAD
path: argocd/applications
destination:
server: https://kubernetes.default.svc
namespace: argocd
syncPolicy:
automated:
prune: true
selfHeal: true
```
## Updating the Image from the Backend
```typescript
// services/gitops/updater.ts
import { Octokit } from '@octokit/rest'
import yaml from 'js-yaml'
import { logger } from '../../utils/logger'
export class GitOpsUpdater {
private octokit: Octokit
private repo: string
private owner: string
constructor() {
this.octokit = new Octokit({
baseUrl: process.env.GITEA_URL,
auth: process.env.GITEA_TOKEN,
})
this.repo = 'gitops'
this.owner = 'aiworker'
}
async updateImage(params: {
project: string
environment: string
imageTag: string
}) {
const { project, environment, imageTag } = params
const path = `projects/${project}/${environment}/kustomization.yaml`
    logger.info(`Updating GitOps: ${project}/${environment} → ${imageTag}`)
try {
// 1. Get current file
const { data: fileData } = await this.octokit.repos.getContent({
owner: this.owner,
repo: this.repo,
path,
})
if (Array.isArray(fileData) || fileData.type !== 'file') {
throw new Error('Invalid file')
}
// 2. Decode content
const content = Buffer.from(fileData.content, 'base64').toString('utf-8')
const kustomization = yaml.load(content) as any
// 3. Update image tag
if (!kustomization.images) {
kustomization.images = []
}
const imageIndex = kustomization.images.findIndex(
(img: any) => img.name === `aiworker/${project}`
)
if (imageIndex >= 0) {
kustomization.images[imageIndex].newTag = imageTag
} else {
kustomization.images.push({
name: `aiworker/${project}`,
newTag: imageTag,
})
}
// 4. Encode new content
const newContent = yaml.dump(kustomization)
const newContentBase64 = Buffer.from(newContent).toString('base64')
// 5. Commit changes
await this.octokit.repos.createOrUpdateFileContents({
owner: this.owner,
repo: this.repo,
path,
message: `Update ${project} ${environment} to ${imageTag}`,
content: newContentBase64,
sha: fileData.sha,
})
logger.info(`GitOps updated: ${project}/${environment}`)
return { success: true }
} catch (error: any) {
logger.error('Failed to update GitOps:', error)
throw error
}
}
}
```
## CI/CD Integration
```typescript
// services/queue/deploy-worker.ts
import { GitOpsUpdater } from '../gitops/updater'
const gitopsUpdater = new GitOpsUpdater()
export const deployWorker = new Worker('deploys', async (job) => {
const { deploymentId, projectId, environment, commitHash } = job.data
// ... deployment logic ...
// Update GitOps repo
await gitopsUpdater.updateImage({
project: project.name,
environment,
imageTag: commitHash.slice(0, 7),
})
// ArgoCD will automatically sync within 3 minutes
// Or trigger manual sync:
await triggerArgoCDSync(project.name, environment)
logger.info('GitOps updated, ArgoCD will sync')
})
```
## Trigger ArgoCD Sync
```typescript
// services/gitops/argocd.ts
export async function triggerArgoCDSync(project: string, environment: string) {
const appName = `${project}-${environment}`
const argoCDUrl = process.env.ARGOCD_URL || 'https://argocd.aiworker.dev'
const token = process.env.ARGOCD_TOKEN
const response = await fetch(`${argoCDUrl}/api/v1/applications/${appName}/sync`, {
method: 'POST',
headers: {
'Authorization': `Bearer ${token}`,
'Content-Type': 'application/json',
},
body: JSON.stringify({
prune: false,
dryRun: false,
strategy: {
hook: {},
},
}),
})
if (!response.ok) {
throw new Error(`ArgoCD sync failed: ${response.statusText}`)
}
logger.info(`Triggered ArgoCD sync: ${appName}`)
}
```
## Health Status from ArgoCD
```typescript
// services/gitops/argocd.ts
export async function getApplicationStatus(appName: string) {
const argoCDUrl = process.env.ARGOCD_URL
const token = process.env.ARGOCD_TOKEN
const response = await fetch(`${argoCDUrl}/api/v1/applications/${appName}`, {
headers: {
'Authorization': `Bearer ${token}`,
},
})
const app = await response.json()
return {
syncStatus: app.status.sync.status, // Synced, OutOfSync
healthStatus: app.status.health.status, // Healthy, Progressing, Degraded
lastSyncedAt: app.status.operationState?.finishedAt,
}
}
```
## Monitoring Dashboard
```typescript
// api/routes/gitops.ts
router.get('/gitops/status', async (req, res) => {
const apps = ['backend-production', 'backend-staging', 'backend-dev']
const statuses = await Promise.all(
apps.map(async (app) => {
const status = await getApplicationStatus(app)
return {
name: app,
...status,
}
})
)
res.json({ applications: statuses })
})
```
## Benefits of GitOps
### 1. Declarative
The entire desired state lives in Git, versioned and auditable.
### 2. Auditability
Every change has a commit with an author, timestamp, and description.
### 3. Easy Rollback
```bash
# Rollback to previous version
git revert HEAD
git push
# ArgoCD automatically syncs back
```
### 4. Disaster Recovery
Cluster destroyed? Simply:
```bash
# Reinstall ArgoCD
kubectl apply -f argocd-install.yaml
# Deploy app-of-apps
kubectl apply -f app-of-apps.yaml
# Everything returns to the state in Git
```
### 5. Multi-Cluster
```yaml
# Deploy same app to multiple clusters
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: backend-cluster-2
spec:
destination:
server: https://cluster-2.example.com
namespace: control-plane
# ... same source
```
## Best Practices
### 1. Separate Repo
Keep the GitOps repo separate from the application code:
- **App repo**: Source code
- **GitOps repo**: K8s manifests
### 2. Environment Branches (Optional)
```
main → production
staging → staging environment
dev → dev environment
```
### 3. Secrets Management
Never commit secrets to Git. Use:
- **Sealed Secrets**
- **External Secrets Operator**
- **Vault**
### 4. Progressive Rollout
```yaml
# Use Argo Rollouts for canary/blue-green
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
name: backend
spec:
strategy:
canary:
steps:
- setWeight: 20
- pause: {duration: 5m}
- setWeight: 50
- pause: {duration: 5m}
- setWeight: 100
```
## Troubleshooting
```bash
# Check application status
argocd app get backend-production
# Show diff against Git
argocd app diff backend-production
# Manual sync
argocd app sync backend-production
# View controller logs
kubectl logs -n argocd deployment/argocd-application-controller
# Refresh (fetch latest from Git)
argocd app refresh backend-production
```
# Preview Environments
Preview environments are temporary, isolated deployments created for each task, allowing independent testing before merge.
## Arquitectura
```
Task Branch
      ↓
Build & Push Image
      ↓
Create K8s Namespace (preview-task-{id})
      ↓
Deploy App + Database (if needed)
      ↓
Create Ingress (https://task-{id}.preview.aiworker.dev)
      ↓
Ready for Testing
```
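The naming scheme in the diagram (namespace and URL derived from the task id) can be sketched as a small helper, mirroring the `preview-task-${shortId}` convention used by the backend handler:

```typescript
// Derives the preview namespace and URL from a task id,
// following the preview-task-{shortId} convention.
function previewCoordinates(taskId: string): { namespace: string; url: string } {
  const shortId = taskId.slice(0, 8)
  return {
    namespace: `preview-task-${shortId}`,
    url: `https://task-${shortId}.preview.aiworker.dev`,
  }
}
```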
## Creating a Preview Environment
### 1. Trigger from the Agent
```typescript
// Agent completes task
await mcp.callTool('trigger_preview_deploy', {
taskId: task.id,
})
```
### 2. Backend Handler
```typescript
// services/mcp/handlers.ts
async function triggerPreviewDeploy(args: { taskId: string }) {
const task = await db.query.tasks.findFirst({
where: eq(tasks.id, args.taskId),
with: { project: true },
})
if (!task || !task.branchName) {
throw new Error('Task or branch not found')
}
const shortId = task.id.slice(0, 8)
const namespace = `preview-task-${shortId}`
const url = `https://task-${shortId}.preview.aiworker.dev`
// Create deployment job
const deploymentId = crypto.randomUUID()
await db.insert(deployments).values({
id: deploymentId,
projectId: task.projectId,
environment: 'preview',
branch: task.branchName,
commitHash: await getLatestCommit(task),
k8sNamespace: namespace,
status: 'pending',
})
// Enqueue
await enqueueDeploy({
deploymentId,
projectId: task.projectId,
taskId: task.id,
environment: 'preview',
branch: task.branchName,
namespace,
})
// Update task
await db.update(tasks)
.set({
state: 'ready_to_test',
previewNamespace: namespace,
previewUrl: url,
previewDeployedAt: new Date(),
})
.where(eq(tasks.id, task.id))
return {
content: [{
type: 'text',
text: JSON.stringify({ success: true, previewUrl: url, namespace }),
}],
}
}
```
### 3. Deploy Worker
```typescript
// services/queue/preview-deploy-worker.ts
export const previewDeployWorker = new Worker('deploys', async (job) => {
const { deploymentId, taskId, projectId, branch, namespace } = job.data
const project = await db.query.projects.findFirst({
where: eq(projects.id, projectId),
})
// 1. Create namespace with TTL annotation
await k8sClient.createNamespace(namespace, {
project: projectId,
environment: 'preview',
taskId,
ttl: '168h', // 7 days
'created-at': new Date().toISOString(),
})
job.updateProgress(20)
// 2. Build image (or use existing)
const imageTag = `${project.dockerImage}:${branch}`
job.updateProgress(40)
// 3. Deploy application
await k8sClient.createDeployment({
namespace,
name: `${project.name}-preview`,
image: imageTag,
replicas: 1,
envVars: {
...project.envVars,
NODE_ENV: 'preview',
PREVIEW_MODE: 'true',
},
resources: {
requests: { cpu: '250m', memory: '512Mi' },
limits: { cpu: '1', memory: '2Gi' },
},
})
job.updateProgress(60)
// 4. Create service
await k8sClient.createService({
namespace,
name: `${project.name}-preview`,
port: 3000,
})
job.updateProgress(70)
// 5. Create ingress with basic auth
const host = `task-${taskId.slice(0, 8)}.preview.aiworker.dev`
await k8sClient.createIngress({
namespace,
name: `${project.name}-preview`,
host,
serviceName: `${project.name}-preview`,
servicePort: 3000,
annotations: {
'nginx.ingress.kubernetes.io/auth-type': 'basic',
'nginx.ingress.kubernetes.io/auth-secret': 'preview-basic-auth',
'nginx.ingress.kubernetes.io/auth-realm': 'Preview Environment',
},
})
job.updateProgress(90)
// 6. Wait for ready
await k8sClient.waitForDeployment(namespace, `${project.name}-preview`, 300)
job.updateProgress(100)
// Update deployment record
await db.update(deployments)
.set({
status: 'completed',
url: `https://${host}`,
completedAt: new Date(),
})
.where(eq(deployments.id, deploymentId))
logger.info(`Preview environment ready: ${host}`)
return { success: true, url: `https://${host}` }
})
```
## Preview with a Database
For tasks that require a DB, spin up a temporary instance:
```typescript
async function createPreviewWithDatabase(params: {
namespace: string
projectName: string
taskId: string
}) {
const { namespace, projectName } = params
// 1. Deploy MySQL/PostgreSQL ephemeral
await k8sClient.createDeployment({
namespace,
name: 'db',
image: 'mysql:8.0',
replicas: 1,
envVars: {
MYSQL_ROOT_PASSWORD: 'preview123',
MYSQL_DATABASE: projectName,
},
resources: {
requests: { cpu: '250m', memory: '512Mi' },
limits: { cpu: '500m', memory: '1Gi' },
},
})
// 2. Create service
await k8sClient.createService({
namespace,
name: 'db',
port: 3306,
})
// 3. Run migrations
await k8sClient.runJob({
namespace,
name: 'db-migrate',
image: `${projectName}:latest`,
command: ['npm', 'run', 'migrate'],
envVars: {
DB_HOST: 'db',
DB_PORT: '3306',
DB_PASSWORD: 'preview123',
},
})
// 4. Seed data (optional)
await k8sClient.runJob({
namespace,
name: 'db-seed',
image: `${projectName}:latest`,
command: ['npm', 'run', 'seed'],
envVars: {
DB_HOST: 'db',
DB_PORT: '3306',
DB_PASSWORD: 'preview123',
},
})
}
```
## Basic Auth for Previews
```bash
# Create htpasswd file (user: preview, password: preview123)
htpasswd -cb auth preview preview123
# Create secret in all preview namespaces
kubectl create secret generic preview-basic-auth \
--from-file=auth \
-n preview-task-abc123
```
```typescript
// Auto-create in new preview namespaces
async function createPreviewAuthSecret(namespace: string) {
const htpasswd = 'preview:$apr1$...' // pre-generated
await k8sClient.createSecret({
namespace,
name: 'preview-basic-auth',
data: {
auth: Buffer.from(htpasswd).toString('base64'),
},
})
}
```
## Frontend: Preview URL Display
```typescript
// components/tasks/TaskCard.tsx
{task.previewUrl && (
<a
href={task.previewUrl}
target="_blank"
rel="noopener noreferrer"
className="mt-2 flex items-center gap-2 text-sm text-primary-600 hover:underline"
onClick={(e) => e.stopPropagation()}
>
<ExternalLink className="w-4 h-4" />
Ver Preview
</a>
)}
{task.state === 'ready_to_test' && (
<div className="mt-3 p-3 bg-purple-50 border border-purple-200 rounded-lg">
<p className="text-sm font-medium text-purple-900">
Preview Environment Ready!
</p>
<p className="text-xs text-purple-700 mt-1">
Credentials: preview / preview123
</p>
<div className="flex gap-2 mt-2">
<a
href={task.previewUrl}
target="_blank"
rel="noopener noreferrer"
className="btn-primary text-xs"
>
Open Preview
</a>
<button
onClick={() => approveTask(task.id)}
className="btn-secondary text-xs"
>
Approve
</button>
</div>
</div>
)}
```
## Preview Environment Cleanup
### Automatic (TTL)
```typescript
// Cron job that runs every hour
async function cleanupExpiredPreviews() {
const namespaces = await k8sClient.listNamespaces({
labelSelector: 'environment=preview',
})
for (const ns of namespaces) {
const createdAt = new Date(ns.metadata?.annotations?.['created-at'])
const ttlHours = parseInt(ns.metadata?.labels?.ttl || '168')
const ageHours = (Date.now() - createdAt.getTime()) / (1000 * 60 * 60)
if (ageHours > ttlHours) {
logger.info(`Cleaning up expired preview: ${ns.metadata.name}`)
// Delete namespace (cascades to all resources)
await k8sClient.deleteNamespace(ns.metadata.name)
// Update task
await db.update(tasks)
.set({
previewNamespace: null,
previewUrl: null,
})
.where(eq(tasks.previewNamespace, ns.metadata.name))
}
}
}
// Schedule
setInterval(cleanupExpiredPreviews, 3600000) // Every hour
```
### Manual
```typescript
// api/routes/tasks.ts
router.delete('/tasks/:id/preview', async (req, res) => {
const { id } = req.params
const task = await db.query.tasks.findFirst({
where: eq(tasks.id, id),
})
if (!task || !task.previewNamespace) {
return res.status(404).json({ error: 'Preview not found' })
}
// Delete namespace
await k8sClient.deleteNamespace(task.previewNamespace)
// Update task
await db.update(tasks)
.set({
previewNamespace: null,
previewUrl: null,
})
.where(eq(tasks.id, id))
res.json({ success: true })
})
```
## Resource Limits
To prevent abuse, enforce strict resource limits on previews:
```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
name: preview-quota
namespace: preview-task-abc123
spec:
hard:
requests.cpu: "500m"
requests.memory: "1Gi"
limits.cpu: "1"
limits.memory: "2Gi"
pods: "5"
services: "3"
```
## Preview Logs
```typescript
// api/routes/tasks.ts
router.get('/tasks/:id/preview-logs', async (req, res) => {
const { id } = req.params
const task = await db.query.tasks.findFirst({
where: eq(tasks.id, id),
})
if (!task || !task.previewNamespace) {
return res.status(404).json({ error: 'Preview not found' })
}
const pods = await k8sClient.listPods(task.previewNamespace)
const appPod = pods.find((p) => p.metadata.labels.app)
if (!appPod) {
return res.status(404).json({ error: 'App pod not found' })
}
const logs = await k8sClient.getPodLogs(
task.previewNamespace,
appPod.metadata.name,
100 // tail lines
)
res.json({ logs })
})
```
## Monitoring
```typescript
// Get preview environments stats
router.get('/previews/stats', async (req, res) => {
const namespaces = await k8sClient.listNamespaces({
labelSelector: 'environment=preview',
})
const stats = {
total: namespaces.length,
totalCost: 0,
byAge: {
'<1h': 0,
'1-24h': 0,
'1-7d': 0,
'>7d': 0,
},
}
for (const ns of namespaces) {
const createdAt = new Date(ns.metadata?.annotations?.['created-at'])
const ageHours = (Date.now() - createdAt.getTime()) / (1000 * 60 * 60)
if (ageHours < 1) stats.byAge['<1h']++
else if (ageHours < 24) stats.byAge['1-24h']++
else if (ageHours < 168) stats.byAge['1-7d']++
else stats.byAge['>7d']++
// Estimate cost (example: $0.05/hour per namespace)
stats.totalCost += ageHours * 0.05
}
res.json(stats)
})
```
## Best Practices
1. **TTL**: Always configure a TTL for auto-cleanup
2. **Resource Limits**: Cap CPU/memory per preview
3. **Security**: Basic auth or IP allowlisting
4. **Monitoring**: Alert when too many previews are active
5. **Cost Control**: Enforce a maximum number of concurrent previews
6. **Quick Spin-up**: Optimize for <2 min deployment time
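Points 1 and 5 can be combined into a small policy check run before creating a preview and inside the cleanup job. A sketch (the concurrency cap is illustrative):

```typescript
const MAX_CONCURRENT_PREVIEWS = 20 // illustrative cap, not a platform constant

// TTL check used by the hourly cleanup job.
function isPreviewExpired(createdAt: Date, ttlHours: number, now: Date = new Date()): boolean {
  const ageHours = (now.getTime() - createdAt.getTime()) / (1000 * 60 * 60)
  return ageHours > ttlHours
}

// Cost control: refuse new previews past the concurrency cap.
function canCreatePreview(activePreviews: number): boolean {
  return activePreviews < MAX_CONCURRENT_PREVIEWS
}
```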
## Troubleshooting
```bash
# List all previews
kubectl get namespaces -l environment=preview
# View a preview's resources
kubectl get all -n preview-task-abc123
# View a preview's logs
kubectl logs -n preview-task-abc123 deployment/app-preview
# Delete a preview manually
kubectl delete namespace preview-task-abc123
```
# Staging and Production Deployments
## Promotion Flow
```
Approved Tasks
      ↓
Merge to Staging
      ↓
Deploy to Staging
      ↓
Automated Tests
      ↓
Manual Approval
      ↓
Merge to Production
      ↓
Deploy to Production
```
## Merge to Staging
### 1. Group Tasks
```typescript
// api/routes/task-groups.ts
router.post('/task-groups', async (req, res) => {
const { projectId, taskIds, notes } = req.body
  // Validate all tasks are approved (the callback `where` form avoids
  // referencing the `tasks` result array inside its own initializer)
  const selectedTasks = await db.query.tasks.findMany({
    where: (t, { inArray }) => inArray(t.id, taskIds),
  })
  const notApproved = selectedTasks.filter((t) => t.state !== 'approved')
if (notApproved.length > 0) {
return res.status(400).json({
error: 'All tasks must be approved',
notApproved: notApproved.map((t) => t.id),
})
}
// Create task group
const groupId = crypto.randomUUID()
await db.insert(taskGroups).values({
id: groupId,
projectId,
taskIds: JSON.stringify(taskIds),
status: 'pending',
notes,
createdBy: req.user?.id,
})
// Enqueue merge job
await enqueueMerge({
taskGroupId: groupId,
projectId,
taskIds,
targetBranch: 'staging',
})
res.status(201).json({
taskGroupId: groupId,
status: 'pending',
})
})
```
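The `enqueueMerge` helper used by this route is never defined in these docs. A minimal sketch, assuming the BullMQ `merges` queue consumed by the merge worker below; the payload validation and job options are kept in a pure function so the enqueue logic is testable, while the actual queue call is shown as a comment:

```typescript
// Hypothetical sketch of enqueueMerge (not shown elsewhere in this doc).
interface MergeJobData {
  taskGroupId: string
  projectId: string
  taskIds: string[]
  targetBranch: 'staging' | 'main'
}

function buildMergeJob(data: MergeJobData) {
  if (data.taskIds.length === 0) {
    throw new Error('Cannot enqueue a merge with no tasks')
  }
  return {
    name: 'merge',
    data,
    opts: {
      attempts: 3, // retry transient git/network failures
      backoff: { type: 'exponential', delay: 5000 },
      removeOnComplete: true,
    },
  }
}

// The real enqueue would be roughly:
//   const job = buildMergeJob(params)
//   await mergeQueue.add(job.name, job.data, job.opts) // mergeQueue: Queue from 'bullmq'
```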
### 2. Merge Worker
```typescript
// services/queue/merge-worker.ts
export const mergeWorker = new Worker('merges', async (job) => {
const { taskGroupId, projectId, taskIds, targetBranch } = job.data
logger.info(`Merging tasks to ${targetBranch}:`, taskIds)
const project = await db.query.projects.findFirst({
where: eq(projects.id, projectId),
})
  // Callback `where` avoids shadowing the `tasks` table with the result array
  const mergeTasks = await db.query.tasks.findMany({
    where: (t, { inArray }) => inArray(t.id, taskIds),
  })
  job.updateProgress(10)
  // 1. Clone repo
  const repoDir = `/tmp/merge-${taskGroupId}`
  await exec(`git clone ${project.giteaRepoUrl} ${repoDir}`)
  process.chdir(repoDir)
  // 2. Checkout target branch
  await exec(`git checkout ${targetBranch}`)
  job.updateProgress(20)
  // 3. Merge each task's branch
  for (const [i, task] of mergeTasks.entries()) {
    if (!task.branchName) {
      logger.warn(`Task ${task.id} has no branch, skipping`)
      continue
    }
    try {
      await exec(`git fetch origin ${task.branchName}`)
      await exec(`git merge origin/${task.branchName} --no-ff -m "Merge task: ${task.title}"`)
      logger.info(`Merged ${task.branchName}`)
      job.updateProgress(20 + Math.round((40 * (i + 1)) / mergeTasks.length))
    } catch (error) {
      logger.error(`Failed to merge ${task.branchName}:`, error)
      // Flag the task for manual conflict resolution
      await db.update(tasks)
        .set({ state: 'needs_input' })
        .where(eq(tasks.id, task.id))
      throw new Error(`Merge conflict in ${task.branchName}`)
    }
  }
  job.updateProgress(60)
  // 4. Push to target branch
  await exec(`git push origin ${targetBranch}`)
  job.updateProgress(70)
  // 5. Create staging PR (if using main as production)
  if (targetBranch === 'staging') {
    const pr = await giteaClient.createPullRequest(
      project.giteaOwner,
      project.giteaRepoName,
      {
        title: `Deploy to Production - ${new Date().toISOString().split('T')[0]}`,
        body: generateStagingPRDescription(mergeTasks),
        head: 'staging',
        base: 'main',
      }
    )
    await db.update(taskGroups)
      .set({
        stagingBranch: 'staging',
        stagingPrNumber: pr.number,
        stagingPrUrl: pr.html_url,
      })
      .where(eq(taskGroups.id, taskGroupId))
  }
  job.updateProgress(80)
  // 6. Update tasks
  for (const task of mergeTasks) {
    await db.update(tasks)
      .set({
        state: 'staging',
        deployedStagingAt: new Date(),
      })
      .where(eq(tasks.id, task.id))
  }
// 7. Update task group
await db.update(taskGroups)
.set({ status: 'staging' })
.where(eq(taskGroups.id, taskGroupId))
job.updateProgress(90)
// 8. Trigger staging deployment
await enqueueDeploy({
deploymentId: crypto.randomUUID(),
projectId,
environment: 'staging',
branch: 'staging',
commitHash: await getLatestCommit(repoDir, 'staging'),
})
job.updateProgress(100)
logger.info(`Merge completed: ${taskGroupId}`)
return { success: true }
})
function generateStagingPRDescription(tasks: Task[]) {
return `
## Tasks Included
${tasks.map((t) => `- [x] ${t.title} (#${t.id.slice(0, 8)})`).join('\n')}
## Changes
${tasks.map((t) => `### ${t.title}\n${t.description}\n`).join('\n')}
## Testing Checklist
${tasks.map((t) => `- [ ] Test: ${t.title}`).join('\n')}
---
🤖 Generated by AiWorker
`.trim()
}
```
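The `getLatestCommit` helper the worker calls in step 8 is assumed rather than defined here. A sketch using Node's built-in `child_process`, with a small SHA validator useful for sanity checks; the branch-resolution command is an assumption about how the helper works:

```typescript
import { execFile } from 'node:child_process'
import { promisify } from 'node:util'

const execFileAsync = promisify(execFile)

// Hypothetical sketch of getLatestCommit: resolve the tip commit of a branch
// inside the local clone the worker just pushed from.
async function getLatestCommit(repoDir: string, branch: string): Promise<string> {
  const { stdout } = await execFileAsync('git', ['rev-parse', `origin/${branch}`], {
    cwd: repoDir,
  })
  return stdout.trim()
}

// Validator: a full git object name is 40 lowercase hex characters (SHA-1)
function isCommitHash(sha: string): boolean {
  return /^[0-9a-f]{40}$/.test(sha)
}
```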
## Staging Deployment
```yaml
# projects/my-app/staging/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: my-app-staging
resources:
- ../base
images:
- name: aiworker/my-app
newTag: staging-abc123
replicas:
- name: my-app
count: 2
configMapGenerator:
- name: app-config
literals:
- NODE_ENV=staging
- LOG_LEVEL=debug
- SENTRY_ENVIRONMENT=staging
patches:
- path: patches.yaml
---
# projects/my-app/staging/patches.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: my-app
spec:
template:
spec:
containers:
- name: app
env:
- name: DATABASE_URL
valueFrom:
secretKeyRef:
name: staging-db-credentials
key: url
```
## Automated Tests in Staging
```typescript
// services/testing/staging-tests.ts
export async function runStagingTests(params: {
projectId: string
stagingUrl: string
}) {
const { projectId, stagingUrl } = params
logger.info(`Running staging tests for: ${stagingUrl}`)
const tests = [
testHealthEndpoint,
testAuthentication,
testCriticalFeatures,
testPerformance,
]
const results = []
for (const test of tests) {
try {
const result = await test(stagingUrl)
results.push({ test: test.name, passed: result.passed, details: result })
if (!result.passed) {
logger.error(`Test failed: ${test.name}`, result)
}
    } catch (error) {
      results.push({ test: test.name, passed: false, error: (error as Error).message })
    }
}
const allPassed = results.every((r) => r.passed)
// Store results
await db.insert(testRuns).values({
id: crypto.randomUUID(),
projectId,
environment: 'staging',
results: JSON.stringify(results),
passed: allPassed,
runAt: new Date(),
})
return { allPassed, results }
}
async function testHealthEndpoint(baseUrl: string) {
const response = await fetch(`${baseUrl}/health`)
return {
passed: response.ok,
status: response.status,
}
}
async function testAuthentication(baseUrl: string) {
// Test login
const loginResponse = await fetch(`${baseUrl}/api/auth/login`, {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({
email: 'test@example.com',
password: 'test123',
}),
})
return {
passed: loginResponse.ok,
hasToken: !!(await loginResponse.json()).token,
}
}
```
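`testCriticalFeatures` and `testPerformance` appear in the tests array above but are never defined. One plausible shape for the performance check, as a sketch: the latency aggregation is kept in a pure helper, the samples would come from timing real `fetch()` calls against the staging URL, and the 500 ms p95 threshold is an assumption:

```typescript
// Percentile helper: p95 of a list of latency samples (in ms)
function p95(samples: number[]): number {
  const sorted = [...samples].sort((a, b) => a - b)
  const idx = Math.min(sorted.length - 1, Math.ceil(sorted.length * 0.95) - 1)
  return sorted[idx]
}

// Hypothetical sketch of the testPerformance check referenced above
async function testPerformance(baseUrl: string, runs = 20) {
  const samples: number[] = []
  for (let i = 0; i < runs; i++) {
    const start = Date.now()
    await fetch(`${baseUrl}/health`)
    samples.push(Date.now() - start)
  }
  const p95Ms = p95(samples)
  return { passed: p95Ms < 500, p95Ms } // threshold is an assumption
}
```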
## Production Deployment
### 1. Manual Approval
```typescript
// api/routes/task-groups.ts
router.post('/task-groups/:id/approve-production', async (req, res) => {
const { id } = req.params
const taskGroup = await db.query.taskGroups.findFirst({
where: eq(taskGroups.id, id),
})
if (!taskGroup || taskGroup.status !== 'staging') {
return res.status(400).json({ error: 'Task group not ready for production' })
}
// Run final checks
const stagingTests = await getLatestTestResults(taskGroup.projectId, 'staging')
if (!stagingTests?.passed) {
return res.status(400).json({ error: 'Staging tests not passing' })
}
// Merge staging to main
await enqueueMerge({
taskGroupId: id,
projectId: taskGroup.projectId,
taskIds: JSON.parse(taskGroup.taskIds),
targetBranch: 'main',
})
// Update status
await db.update(taskGroups)
.set({ status: 'production' })
.where(eq(taskGroups.id, id))
res.json({ success: true, status: 'deploying' })
})
```
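`getLatestTestResults`, used in the approval check above, is assumed. The real version would be a Drizzle query over the `testRuns` table (order by `runAt` descending, limit 1); the selection logic itself (newest run for a project and environment) can be sketched as a pure function:

```typescript
// Hypothetical sketch of the row-selection behind getLatestTestResults
interface TestRunRow {
  projectId: string
  environment: string
  passed: boolean
  runAt: Date
}

function latestRun(
  rows: TestRunRow[],
  projectId: string,
  environment: string
): TestRunRow | undefined {
  return rows
    .filter((r) => r.projectId === projectId && r.environment === environment)
    .sort((a, b) => b.runAt.getTime() - a.runAt.getTime())[0] // newest first
}
```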
### 2. Production Deployment with Blue-Green
```typescript
// services/deployment/blue-green.ts
export async function blueGreenDeploy(params: {
projectId: string
namespace: string
newVersion: string
}) {
const { projectId, namespace, newVersion } = params
const project = await db.query.projects.findFirst({
where: eq(projects.id, projectId),
})
  logger.info(`Blue-green deployment: ${project.name}${newVersion}`)
// 1. Deploy "green" (new version) alongside "blue" (current)
await k8sClient.createDeployment({
namespace,
name: `${project.name}-green`,
image: `${project.dockerImage}:${newVersion}`,
replicas: project.replicas,
envVars: project.envVars,
labels: {
app: project.name,
version: 'green',
},
})
// 2. Wait for green to be ready
await k8sClient.waitForDeployment(namespace, `${project.name}-green`, 300)
// 3. Run smoke tests on green
const greenUrl = await k8sClient.getServiceUrl(namespace, `${project.name}-green`)
const smokeTests = await runSmokeTests(greenUrl)
if (!smokeTests.passed) {
logger.error('Smoke tests failed on green deployment')
throw new Error('Smoke tests failed')
}
// 4. Switch service to point to green
await k8sClient.updateServiceSelector(namespace, project.name, {
app: project.name,
version: 'green',
})
logger.info('Traffic switched to green')
// 5. Wait 5 minutes for monitoring
await sleep(300000)
// 6. Check error rates
const errorRate = await getErrorRate(project.name, 5)
if (errorRate > 0.01) {
// >1% errors
logger.error('High error rate detected, rolling back')
// Rollback: switch service back to blue
await k8sClient.updateServiceSelector(namespace, project.name, {
app: project.name,
version: 'blue',
})
throw new Error('Rollback due to high error rate')
}
  // 7. Delete blue (old version)
  await k8sClient.deleteDeployment(namespace, `${project.name}-blue`)
  // 8. Kubernetes resource names (and Deployment selectors) are immutable, so
  // "green" cannot be renamed to "blue" via a patch. Keep "green" as the live
  // deployment and have the next release target the alternate color instead.
logger.info('Blue-green deployment completed successfully')
return { success: true }
}
```
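Both `runSmokeTests` and `getErrorRate` are referenced in the blue-green flow but never defined. A hedged sketch of each: the smoke-test endpoints are assumptions, and the real `getErrorRate` would query Prometheus, so only the underlying arithmetic (5xx responses over total responses) is shown as a pure function:

```typescript
// Hypothetical sketch of runSmokeTests: fast, read-only checks against green
async function runSmokeTests(baseUrl: string) {
  const checks = ['/health', '/api/version'] // endpoints are assumptions
  const results: { path: string; ok: boolean }[] = []
  for (const path of checks) {
    try {
      const res = await fetch(`${baseUrl}${path}`)
      results.push({ path, ok: res.ok })
    } catch {
      results.push({ path, ok: false }) // network failure counts as a failed check
    }
  }
  return { passed: results.every((r) => r.ok), results }
}

// The arithmetic behind getErrorRate: errors / total for the sampled window
function errorRate(total: number, errors: number): number {
  return total === 0 ? 0 : errors / total
}
```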
### 3. Production Deployment with Canary
```yaml
# Using Argo Rollouts
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
name: my-app
namespace: my-app-production
spec:
replicas: 10
selector:
matchLabels:
app: my-app
template:
metadata:
labels:
app: my-app
spec:
containers:
- name: app
image: aiworker/my-app:v1.2.3
ports:
- containerPort: 3000
strategy:
canary:
steps:
# 10% of traffic
- setWeight: 10
- pause: {duration: 5m}
# Check metrics
- analysis:
templates:
- templateName: error-rate
args:
- name: service-name
value: my-app
# 50% of traffic
- setWeight: 50
- pause: {duration: 10m}
# Full rollout
- setWeight: 100
---
apiVersion: argoproj.io/v1alpha1
kind: AnalysisTemplate
metadata:
name: error-rate
spec:
args:
- name: service-name
metrics:
- name: error-rate
interval: 1m
successCondition: result[0] < 0.01 # <1% errors
provider:
prometheus:
address: http://prometheus:9090
query: |
rate(http_requests_total{service="{{args.service-name}}",status=~"5.."}[5m])
/
rate(http_requests_total{service="{{args.service-name}}"}[5m])
```
## Rollback
```typescript
// api/routes/deployments.ts
router.post('/deployments/:id/rollback', async (req, res) => {
const { id } = req.params
const deployment = await db.query.deployments.findFirst({
where: eq(deployments.id, id),
})
if (!deployment || deployment.environment !== 'production') {
return res.status(400).json({ error: 'Can only rollback production' })
}
// Find previous successful deployment
const previous = await db.query.deployments.findFirst({
where: and(
eq(deployments.projectId, deployment.projectId),
eq(deployments.environment, 'production'),
eq(deployments.status, 'completed'),
lt(deployments.createdAt, deployment.createdAt)
),
orderBy: [desc(deployments.createdAt)],
})
if (!previous) {
return res.status(400).json({ error: 'No previous deployment found' })
}
logger.warn(`Rolling back to ${previous.commitHash}`)
// Create rollback deployment
const rollbackId = crypto.randomUUID()
await db.insert(deployments).values({
id: rollbackId,
projectId: deployment.projectId,
environment: 'production',
deploymentType: 'rollback',
branch: previous.branch,
commitHash: previous.commitHash,
status: 'pending',
triggeredBy: req.user?.id,
})
// Enqueue immediate deployment
await enqueueDeploy({
deploymentId: rollbackId,
projectId: deployment.projectId,
environment: 'production',
branch: previous.branch,
commitHash: previous.commitHash,
}, {
priority: 1, // Highest priority
})
res.json({
rollbackId,
rollingBackTo: previous.commitHash,
})
})
```
## Monitoring Production
```typescript
// services/monitoring/production-monitor.ts
export async function monitorProduction() {
const projects = await db.query.projects.findMany()
for (const project of projects) {
const metrics = await getProductionMetrics(project.name)
// Check error rate
if (metrics.errorRate > 0.05) {
// >5%
await alertTeam({
severity: 'critical',
message: `High error rate in ${project.name}: ${metrics.errorRate * 100}%`,
})
}
// Check response time
if (metrics.p95ResponseTime > 1000) {
// >1s
await alertTeam({
severity: 'warning',
message: `Slow response time in ${project.name}: ${metrics.p95ResponseTime}ms`,
})
}
// Check pod health
const pods = await k8sClient.listPods(`${project.k8sNamespace}-prod`)
const unhealthy = pods.filter((p) => p.status.phase !== 'Running')
if (unhealthy.length > 0) {
await alertTeam({
severity: 'warning',
message: `Unhealthy pods in ${project.name}: ${unhealthy.length}`,
})
}
}
}
// Run every minute
setInterval(monitorProduction, 60000)
```
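The monitor above leans on two assumed helpers, `getProductionMetrics` and `alertTeam`. A sketch of the alerting side, assuming a Slack-style incoming webhook; the webhook URL environment variable, payload shape, and emoji mapping are all assumptions:

```typescript
// Hypothetical sketch of the alertTeam helper used by monitorProduction
interface Alert {
  severity: 'critical' | 'warning'
  message: string
}

// Pure payload builder, following Slack's incoming-webhook convention
function buildAlertPayload(alert: Alert) {
  const emoji = alert.severity === 'critical' ? '🔴' : '🟡'
  return { text: `${emoji} [${alert.severity.toUpperCase()}] ${alert.message}` }
}

async function alertTeam(alert: Alert) {
  await fetch(process.env.ALERT_WEBHOOK_URL ?? '', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(buildAlertPayload(alert)),
  })
}
```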
## Best Practices
1. **Always test in staging first**
2. **Automated tests must pass before production**
3. **Use blue-green or canary for production**
4. **Monitor error rates closely after deployment**
5. **Have rollback plan ready**
6. **Deploy during low-traffic hours**
7. **Notify team before production deployment**
8. **Keep previous version running for quick rollback**
## Deployment Checklist
- [ ] All tasks tested in preview
- [ ] All tasks approved
- [ ] Merged to staging
- [ ] Staging tests passing
- [ ] Database migrations run (if any)
- [ ] Team notified
- [ ] Monitoring dashboards ready
- [ ] Rollback plan documented
- [ ] Deploy to production
- [ ] Monitor for 30 minutes
- [ ] Confirm success or rollback

# Gitea Container Registry - Usage Guide
Gitea's Container Registry is enabled and ready to use.
---
## 🔐 Credentials
**Registry URL**: `git.fuq.tv`
**Usuario**: `admin`
**Token**: `7401126cfb56ab2aebba17755bdc968c20768c27`
---
## 🐳 Using Docker
### Login
```bash
docker login git.fuq.tv -u admin -p 7401126cfb56ab2aebba17755bdc968c20768c27
# Or, more securely
echo "7401126cfb56ab2aebba17755bdc968c20768c27" | docker login git.fuq.tv -u admin --password-stdin
```
### Image Format
```
git.fuq.tv/<owner>/<package-name>:<tag>
```
Examples:
- `git.fuq.tv/admin/aiworker-backend:v1.0.0`
- `git.fuq.tv/admin/aiworker-frontend:latest`
- `git.fuq.tv/aiworker/my-app:v2.1.0`
### Build and Push
```bash
# 1. Build the image
docker build -t git.fuq.tv/admin/aiworker-backend:v1.0.0 .

# 2. Push to the registry
docker push git.fuq.tv/admin/aiworker-backend:v1.0.0

# 3. Also tag as latest
docker tag git.fuq.tv/admin/aiworker-backend:v1.0.0 git.fuq.tv/admin/aiworker-backend:latest
docker push git.fuq.tv/admin/aiworker-backend:latest
```
### Pull
```bash
docker pull git.fuq.tv/admin/aiworker-backend:v1.0.0
```
---
## ☸️ Using in Kubernetes
### Option 1: Use ImagePullSecrets (Recommended)
The secret already exists in the `control-plane` and `agents` namespaces:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: aiworker-backend
namespace: control-plane
spec:
template:
spec:
imagePullSecrets:
- name: gitea-registry
containers:
- name: backend
image: git.fuq.tv/admin/aiworker-backend:v1.0.0
```
### Option 2: Service Account with ImagePullSecrets
```bash
# Patch the default service account
kubectl patch serviceaccount default -n control-plane \
  -p '{"imagePullSecrets": [{"name": "gitea-registry"}]}'

# All pods will now use the secret automatically
```
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: aiworker-backend
namespace: control-plane
spec:
template:
spec:
# No need to specify imagePullSecrets, uses SA default
containers:
- name: backend
image: git.fuq.tv/admin/aiworker-backend:v1.0.0
```
### Creating the Secret in Other Namespaces
```bash
kubectl create secret docker-registry gitea-registry \
--docker-server=git.fuq.tv \
--docker-username=admin \
--docker-password=7401126cfb56ab2aebba17755bdc968c20768c27 \
-n <namespace>
```
---
## 📦 Viewing Packages in the Gitea UI
1. Go to https://git.fuq.tv
2. Log in (admin / admin123)
3. Click your profile → **Packages**
4. You will see all pushed images
---
## 🚀 CI/CD with Gitea Actions
### Example .gitea/workflows/build.yml
```yaml
name: Build and Push Docker Image
on:
push:
branches: [main]
tags:
- 'v*'
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Login to Gitea Registry
uses: docker/login-action@v3
with:
registry: git.fuq.tv
username: admin
password: ${{ secrets.REGISTRY_TOKEN }}
- name: Extract metadata
id: meta
uses: docker/metadata-action@v5
with:
images: git.fuq.tv/admin/aiworker-backend
tags: |
type=ref,event=branch
type=semver,pattern={{version}}
type=semver,pattern={{major}}.{{minor}}
type=sha,prefix={{branch}}-
- name: Build and push
uses: docker/build-push-action@v5
with:
context: .
push: true
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
cache-from: type=registry,ref=git.fuq.tv/admin/aiworker-backend:buildcache
cache-to: type=registry,ref=git.fuq.tv/admin/aiworker-backend:buildcache,mode=max
```
---
## 🔨 Manual Builds (without a Docker daemon)
If you don't have Docker running locally, you can use **buildah** or **podman**:
```bash
# With buildah
buildah bud -t git.fuq.tv/admin/myapp:v1.0.0 .
buildah push git.fuq.tv/admin/myapp:v1.0.0
# With podman
podman build -t git.fuq.tv/admin/myapp:v1.0.0 .
podman push git.fuq.tv/admin/myapp:v1.0.0
```
---
## 🧪 Complete Example: AiWorker Backend
### Dockerfile
```dockerfile
FROM oven/bun:1.3.6-alpine
WORKDIR /app
# Dependencies
COPY package.json bun.lockb ./
RUN bun install --production
# Source
COPY src ./src
COPY drizzle ./drizzle
# Run
EXPOSE 3000
CMD ["bun", "src/index.ts"]
```
### Build and Push
```bash
# Build
docker build -t git.fuq.tv/admin/aiworker-backend:v1.0.0 .
# Push
docker push git.fuq.tv/admin/aiworker-backend:v1.0.0
# Tag latest
docker tag git.fuq.tv/admin/aiworker-backend:v1.0.0 git.fuq.tv/admin/aiworker-backend:latest
docker push git.fuq.tv/admin/aiworker-backend:latest
```
### Deploying on K8s
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: backend
namespace: control-plane
spec:
replicas: 2
selector:
matchLabels:
app: backend
template:
metadata:
labels:
app: backend
spec:
imagePullSecrets:
- name: gitea-registry
containers:
- name: backend
image: git.fuq.tv/admin/aiworker-backend:v1.0.0
ports:
- containerPort: 3000
env:
- name: DB_HOST
value: mariadb.control-plane.svc.cluster.local
- name: REDIS_HOST
value: redis.control-plane.svc.cluster.local
```
---
## 🔄 Updating a Deployment with a New Image
```bash
# Option 1: Set image
kubectl set image deployment/backend backend=git.fuq.tv/admin/aiworker-backend:v1.1.0 -n control-plane

# Option 2: Rollout restart (uses :latest)
kubectl rollout restart deployment/backend -n control-plane

# Watch progress
kubectl rollout status deployment/backend -n control-plane
```
---
## 🗑️ Cleaning Up Old Images
From the Gitea UI:
1. Packages → Select package
2. Versions → Delete old versions
Or via the API:
```bash
curl -X DELETE "https://git.fuq.tv/api/v1/packages/admin/container/aiworker-backend/v1.0.0" \
-H "Authorization: token 7401126cfb56ab2aebba17755bdc968c20768c27"
```
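The manual DELETE call above can be scripted to keep only the newest versions of an image. A sketch: the version selection is a pure function, while the deletion loop (commented out) would reuse the same Gitea packages endpoint shown above; the `keep` count and version list shape are assumptions:

```typescript
// Sketch: pick which image versions to delete, keeping the N newest
interface PackageVersion {
  version: string
  createdAt: Date
}

function versionsToDelete(versions: PackageVersion[], keep: number): string[] {
  return [...versions]
    .sort((a, b) => b.createdAt.getTime() - a.createdAt.getTime()) // newest first
    .slice(keep) // everything past the first `keep` entries
    .map((v) => v.version)
}

// Deletion loop would reuse the DELETE endpoint shown above, roughly:
// for (const v of versionsToDelete(allVersions, 5)) {
//   await fetch(`https://git.fuq.tv/api/v1/packages/admin/container/aiworker-backend/${v}`, {
//     method: 'DELETE',
//     headers: { Authorization: `token ${process.env.GITEA_TOKEN}` },
//   })
// }
```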
---
## 📊 Advantages of the Gitea Registry
- **Integrated**: same system as Git
- **Single sign-on**: same users
- **No extra cost**: already included
- **HA storage**: Longhorn with 3 replicas
- **Automatic TLS**: Cert-Manager
- **Private**: not public like Docker Hub
---
## 🎯 Summary
- **Registry**: `git.fuq.tv`
- **Login**: `admin / 7401126cfb56ab2aebba17755bdc968c20768c27`
- **Format**: `git.fuq.tv/<owner>/<image>:<tag>`
- **K8s Secret**: `gitea-registry` (in control-plane and agents)

**Next steps:**
1. Create a Dockerfile for the backend
2. Build the image
3. Push to `git.fuq.tv/admin/aiworker-backend:v1.0.0`
4. Deploy on K8s

# AiWorker - Documentation
An orchestration system for AI agents (Claude Code) that automates the full development cycle.
## Documentation Index
### 01. Architecture
- [General Overview](./01-arquitectura/overview.md)
- [Technology Stack](./01-arquitectura/stack-tecnologico.md)
- [Data Flow](./01-arquitectura/flujo-de-datos.md)
- [Data Model](./01-arquitectura/modelo-datos.md)
### 02. Backend
- [Project Structure](./02-backend/estructura.md)
- [Database Schema (MySQL)](./02-backend/database-schema.md)
- [MCP Server](./02-backend/mcp-server.md)
- [Gitea Integration](./02-backend/gitea-integration.md)
- [Queue System](./02-backend/queue-system.md)
- [API Endpoints](./02-backend/api-endpoints.md)
### 03. Frontend
- [Project Structure](./03-frontend/estructura.md)
- [Main Components](./03-frontend/componentes.md)
- [State Management](./03-frontend/estado.md)
- [Kanban Board](./03-frontend/kanban.md)
- [Web Consoles](./03-frontend/consolas-web.md)
### 04. Kubernetes
- [Cluster Setup](./04-kubernetes/cluster-setup.md)
- [Namespace Structure](./04-kubernetes/namespaces.md)
- [Deployments](./04-kubernetes/deployments.md)
- [Gitea on K8s](./04-kubernetes/gitea-deployment.md)
- [Networking and Ingress](./04-kubernetes/networking.md)
### 05. Claude Code Agents
- [Agent Pods](./05-agents/claude-code-pods.md)
- [MCP Tools](./05-agents/mcp-tools.md)
- [Backend Communication](./05-agents/comunicacion.md)
- [Lifecycle](./05-agents/ciclo-vida.md)
### 06. Deployment
- [CI/CD Pipeline](./06-deployment/ci-cd.md)
- [GitOps with ArgoCD](./06-deployment/gitops.md)
- [Preview Environments](./06-deployment/preview-envs.md)
- [Staging and Production](./06-deployment/staging-production.md)
## Quick Start
```bash
# Install dependencies
cd backend && bun install
cd ../frontend && bun install

# Start local services (Docker Compose)
docker-compose up -d

# Start the backend
cd backend && bun run dev

# Start the frontend
cd frontend && bun run dev
```
## Technology Stack
- **Frontend**: React 19.2 + TailwindCSS + Vite
- **Backend**: Bun 1.3.6 + Express + TypeScript
- **Database**: MySQL 8.0
- **Cache/Queue**: Redis
- **Git Server**: Gitea (self-hosted)
- **Orchestration**: Kubernetes
- **CI/CD**: ArgoCD + Gitea Actions
- **Agents**: Claude Code (Anthropic)
## Versions
- React: 19.2
- Bun: 1.3.6
- Node: 20+ (for compatibility)
- MySQL: 8.0
- Kubernetes: 1.28+
- Gitea: latest
## Contributing
This is living documentation, updated as the project evolves.