21 changed files with 1955 additions and 48 deletions
@ -0,0 +1,268 @@
# Security Implementation Plan

This document outlines the implementation of security improvements in two tiers:

1. **Lightweight** - Single-container, application-level improvements
2. **Enterprise** - Multi-container/Kubernetes with process isolation

## Architecture Overview

### Lightweight (Single Container)
- Application-level security controls
- Resource limits enforced in code
- Rate limiting in application
- Audit logging
- Works with the current Docker setup

### Enterprise (Kubernetes)
- Process isolation per tenant
- Network policies
- Resource quotas per namespace
- Separate volumes per tenant
- Scales horizontally

## Implementation Plan

### Phase 1: Lightweight Improvements (Single Container)

These improvements work in the current single-container setup and provide immediate security benefits.

#### 1.1 Resource Limits Per User

**Implementation**: Application-level tracking and enforcement

**Files to create/modify**:
- `src/lib/services/security/resource-limits.ts` - Track and enforce limits
- `src/routes/api/repos/[npub]/[repo]/+server.ts` - Check limits before operations

**Features**:
- Per-user repository count limit (configurable, default: 100)
- Per-user disk quota (configurable, default: 10GB)
- Per-repository size limit (already exists: 2GB)
- Per-file size limit (already exists: 500MB)

**Configuration**:
```bash
# Environment variables
MAX_REPOS_PER_USER=100
MAX_DISK_QUOTA_PER_USER=10737418240  # 10GB in bytes
```
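The checks above could be enforced with small pure helpers that the API routes call before each operation. A minimal sketch, assuming illustrative function names and a hypothetical `LimitCheck` shape (not the actual `resource-limits.ts` API):

```typescript
// Sketch of per-user limit checks; names and shapes are illustrative.
interface LimitCheck {
  allowed: boolean;
  reason?: string;
}

// Defaults mirror the environment variables documented above.
const MAX_REPOS_PER_USER = Number(process.env.MAX_REPOS_PER_USER ?? 100);
const MAX_DISK_QUOTA_PER_USER = Number(process.env.MAX_DISK_QUOTA_PER_USER ?? 10737418240);

// Call before creating a new repository for a user.
function checkRepoLimit(currentRepoCount: number, limit = MAX_REPOS_PER_USER): LimitCheck {
  if (currentRepoCount >= limit) {
    return { allowed: false, reason: `repository limit of ${limit} reached` };
  }
  return { allowed: true };
}

// Call before a write that would grow the user's disk usage.
function checkDiskQuota(
  currentUsageBytes: number,
  incomingBytes: number,
  quota = MAX_DISK_QUOTA_PER_USER
): LimitCheck {
  if (currentUsageBytes + incomingBytes > quota) {
    return { allowed: false, reason: `disk quota of ${quota} bytes exceeded` };
  }
  return { allowed: true };
}
```

A route handler would run these checks and return HTTP 403/413 with the `reason` when `allowed` is false.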
#### 1.2 Rate Limiting

**Implementation**: In-memory or Redis-based rate limiting

**Files to create/modify**:
- `src/lib/services/security/rate-limiter.ts` - Rate limiting logic
- `src/hooks.server.ts` - Apply rate limits to requests

**Features**:
- Per-IP rate limiting (requests per minute)
- Per-user rate limiting (operations per minute)
- Different limits for different operations:
  - Git operations (clone/push): 60/min
  - File operations: 30/min
  - API requests: 120/min

**Configuration**:
```bash
# Environment variables
RATE_LIMIT_ENABLED=true
RATE_LIMIT_WINDOW_MS=60000  # 1 minute
RATE_LIMIT_MAX_REQUESTS=120
```
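For the in-memory variant, a sliding-window limiter keyed by IP or pubkey is enough. A minimal sketch, assuming an illustrative class name (the actual `rate-limiter.ts` API may differ); the constructor defaults mirror `RATE_LIMIT_WINDOW_MS` and `RATE_LIMIT_MAX_REQUESTS`:

```typescript
// Minimal in-memory sliding-window rate limiter sketch.
class SlidingWindowLimiter {
  private hits = new Map<string, number[]>();

  constructor(
    private windowMs = 60_000,  // RATE_LIMIT_WINDOW_MS
    private maxRequests = 120   // RATE_LIMIT_MAX_REQUESTS
  ) {}

  // Returns true if the request identified by `key` (IP or pubkey) is allowed.
  allow(key: string, now = Date.now()): boolean {
    const cutoff = now - this.windowMs;
    // Drop timestamps that fell out of the window.
    const recent = (this.hits.get(key) ?? []).filter(t => t > cutoff);
    if (recent.length >= this.maxRequests) {
      this.hits.set(key, recent);
      return false;
    }
    recent.push(now);
    this.hits.set(key, recent);
    return true;
  }
}
```

In `hooks.server.ts`, one limiter instance per operation class (git, file, API) would be consulted before the handler runs, returning HTTP 429 on rejection.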
#### 1.3 Audit Logging

**Implementation**: Structured logging to files/console

**Files to create/modify**:
- `src/lib/services/security/audit-logger.ts` - Audit logging service
- All API endpoints - Add audit log entries

**Features**:
- Log all repository access attempts
- Log all file operations (read/write/delete)
- Log authentication attempts (success/failure)
- Log ownership transfers
- Include: timestamp, user pubkey, IP, action, result

**Log Format**:
```json
{
  "timestamp": "2024-01-01T12:00:00Z",
  "user": "abc123...",
  "ip": "192.168.1.1",
  "action": "repo.clone",
  "repo": "npub1.../myrepo",
  "result": "success",
  "metadata": {}
}
```

**Storage**:
- **Console**: Always logs to stdout (JSON format, prefixed with `[AUDIT]`)
- **File**: Optional file logging (if `AUDIT_LOG_FILE` is set)
  - Daily rotation: creates a new file each day (e.g., `audit-2024-01-01.log`)
  - Location: configurable via the `AUDIT_LOG_FILE` environment variable
- **Default**: Console only (no file logging by default)

**Retention**:
- **Default**: 90 days (configurable via `AUDIT_LOG_RETENTION_DAYS`)
- **Automatic cleanup**: Old log files are automatically deleted
- **Rotation**: Logs rotate daily at midnight (based on date change)
- **Set to 0**: Disables automatic cleanup (manual cleanup required)

**Example Configuration**:
```bash
# Log to /var/log/gitrepublic/audit.log (with daily rotation)
AUDIT_LOG_FILE=/var/log/gitrepublic/audit.log
AUDIT_LOG_RETENTION_DAYS=90

# Or use a Docker volume
AUDIT_LOG_FILE=/app/logs/audit.log
AUDIT_LOG_RETENTION_DAYS=30
```
#### 1.4 Enhanced git-http-backend Hardening

**Implementation**: Additional security measures for git-http-backend

**Files to modify**:
- `src/routes/api/git/[...path]/+server.ts` - Add security measures

**Features**:
- Validate PATH_INFO to prevent manipulation
- Set restrictive environment variables
- Timeouts for git operations
- Resource limits for spawned processes
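PATH_INFO validation can be a strict allowlist of the paths git's smart HTTP protocol actually uses. A hedged sketch; the regex, segment charset, and function name are illustrative assumptions, not the actual handler code:

```typescript
// Allow only the endpoints the git smart HTTP protocol needs:
// /<owner>/<repo>.git/info/refs, git-upload-pack, git-receive-pack.
const GIT_PATH =
  /^\/[A-Za-z0-9._-]+\/[A-Za-z0-9._-]+\.git\/(info\/refs|git-upload-pack|git-receive-pack)$/;

function validatePathInfo(pathInfo: string): boolean {
  // Reject traversal sequences and null bytes outright before pattern matching.
  if (pathInfo.includes("..") || pathInfo.includes("//") || pathInfo.includes("\0")) {
    return false;
  }
  return GIT_PATH.test(pathInfo);
}
```

Anything that fails the check would be rejected with HTTP 400 before git-http-backend is spawned.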
### Phase 2: Enterprise Improvements (Kubernetes)

These require a multi-container architecture and Kubernetes.

#### 2.1 Container-per-Tenant Architecture

**Architecture**:
- Each user (npub) gets their own namespace
- Each namespace has:
  - Application pod (gitrepublic instance)
  - Persistent volume for repositories
  - Service for networking
  - Resource quotas

**Kubernetes Resources**:
- `k8s/namespace-template.yaml` - Namespace per tenant
- `k8s/deployment-template.yaml` - Application deployment
- `k8s/service-template.yaml` - Service definition
- `k8s/pvc-template.yaml` - Persistent volume claim
- `k8s/resource-quota.yaml` - Resource limits

#### 2.2 Network Isolation

**Implementation**: Kubernetes Network Policies

**Files to create**:
- `k8s/network-policy.yaml` - Network isolation rules

**Features**:
- Namespace-level network isolation
- Only allow traffic from the ingress controller
- Block inter-namespace communication
- Allow egress to Nostr relays only

#### 2.3 Resource Quotas

**Implementation**: Kubernetes ResourceQuota

**Features**:
- CPU limits per tenant
- Memory limits per tenant
- Storage limits per tenant
- Pod count limits

#### 2.4 Separate Volumes Per Tenant

**Implementation**: Kubernetes PersistentVolumeClaims

**Features**:
- Each tenant gets their own volume
- Volume size limits
- Backup/restore per tenant
- Snapshot support

## Hybrid Approach (Recommended)

The hybrid approach implements the lightweight improvements first, then provides a migration path to the enterprise architecture.

### Benefits:
1. **Immediate security improvements** - Lightweight features work now
2. **Scalable architecture** - Can migrate to Kubernetes when needed
3. **Cost-effective** - Start simple, scale as needed
4. **Flexible deployment** - Works in both scenarios

### Implementation Strategy:

1. **Start with lightweight** - Implement Phase 1 features
2. **Design for scale** - Code structure supports multi-container
3. **Add Kubernetes support** - Phase 2 when needed
4. **Gradual migration** - Move tenants to K8s as needed

## File Structure

```
src/lib/services/security/
├── resource-limits.ts   # Resource limit tracking
├── rate-limiter.ts      # Rate limiting
├── audit-logger.ts      # Audit logging
└── quota-manager.ts     # Disk quota management

k8s/
├── base/
│   ├── namespace.yaml
│   ├── deployment.yaml
│   ├── service.yaml
│   └── pvc.yaml
├── overlays/
│   ├── single-container/  # Single container setup
│   └── multi-tenant/      # Kubernetes setup
└── helm-chart/            # Optional Helm chart
```

## Configuration

### Lightweight Mode (Single Container)
```bash
# docker-compose.yml or .env
SECURITY_MODE=lightweight
MAX_REPOS_PER_USER=100
MAX_DISK_QUOTA_PER_USER=10737418240
RATE_LIMIT_ENABLED=true
AUDIT_LOGGING_ENABLED=true
```

### Enterprise Mode (Kubernetes)
```yaml
# Kubernetes ConfigMap
security:
  mode: enterprise
  isolation: container-per-tenant
  networkPolicy: enabled
  resourceQuotas: enabled
```

## Migration Path

### From Lightweight to Enterprise:

1. **Phase 1**: Deploy lightweight improvements (no architecture change)
2. **Phase 2**: Add Kubernetes support alongside the single container
3. **Phase 3**: Migrate high-value tenants to Kubernetes
4. **Phase 4**: Full Kubernetes deployment (optional)

## Priority Implementation Order

1. ✅ **Audit Logging** - Easy, high value, works everywhere
2. ✅ **Rate Limiting** - Prevents abuse, works in a single container
3. ✅ **Resource Limits** - Prevents resource exhaustion
4. ⏳ **Enhanced git-http-backend** - Additional hardening
5. ⏳ **Kubernetes Support** - When scaling is needed
@ -0,0 +1,218 @@
# Kubernetes Deployment Guide

This directory contains Kubernetes manifests for enterprise-grade multi-tenant deployment of gitrepublic-web.

## Architecture

### Enterprise Mode (Kubernetes)
- **Container-per-tenant**: Each user (npub) gets their own namespace
- **Process isolation**: Complete isolation between tenants
- **Network isolation**: Network policies prevent inter-tenant communication
- **Resource quotas**: Per-tenant CPU, memory, and storage limits
- **Separate volumes**: Each tenant has their own PersistentVolume

### Lightweight Mode (Single Container)
- Application-level security controls
- Works with the current Docker setup
- See `SECURITY_IMPLEMENTATION.md` for details

## Directory Structure

```
k8s/
├── base/                    # Base Kubernetes manifests (templates)
│   ├── namespace.yaml       # Namespace per tenant
│   ├── resource-quota.yaml  # Resource limits per tenant
│   ├── limit-range.yaml     # Default container limits
│   ├── deployment.yaml      # Application deployment
│   ├── service.yaml         # Service definition
│   ├── pvc.yaml             # Persistent volume claim
│   └── network-policy.yaml  # Network isolation
├── overlays/
│   ├── single-container/    # Single container setup (lightweight)
│   └── multi-tenant/        # Kubernetes setup (enterprise)
└── README.md                # This file
```

## Usage

### Single Container (Lightweight)

Use the existing `docker-compose.yml` or `Dockerfile`. Security improvements are application-level and work automatically.

### Kubernetes (Enterprise)

#### Option 1: Manual Deployment

1. **Create a namespace for the tenant**:
```bash
export TENANT_ID="npub1abc123..."
export GIT_DOMAIN="git.example.com"
export NOSTR_RELAYS="wss://relay1.com,wss://relay2.com"
export STORAGE_CLASS="fast-ssd"

# Substitute variables into the templates
envsubst < k8s/base/namespace.yaml | kubectl apply -f -
envsubst < k8s/base/resource-quota.yaml | kubectl apply -f -
envsubst < k8s/base/limit-range.yaml | kubectl apply -f -
envsubst < k8s/base/pvc.yaml | kubectl apply -f -
envsubst < k8s/base/deployment.yaml | kubectl apply -f -
envsubst < k8s/base/service.yaml | kubectl apply -f -
envsubst < k8s/base/network-policy.yaml | kubectl apply -f -
```

#### Option 2: Operator Pattern (Recommended)

Create a Kubernetes operator that:
- Watches for new repository announcements
- Automatically creates namespaces for new tenants
- Manages the tenant lifecycle
- Handles scaling and resource allocation

#### Option 3: Helm Chart

Package as a Helm chart for easier deployment:
```bash
helm install gitrepublic ./helm-chart \
  --set tenant.id=npub1abc123... \
  --set git.domain=git.example.com
```

## Configuration

### Environment Variables

| Variable | Description | Default |
|----------|-------------|---------|
| `SECURITY_MODE` | `lightweight` or `enterprise` | `lightweight` |
| `MAX_REPOS_PER_USER` | Max repos per user | `100` |
| `MAX_DISK_QUOTA_PER_USER` | Max disk per user (bytes) | `10737418240` (10GB) |
| `RATE_LIMIT_ENABLED` | Enable rate limiting | `true` |
| `AUDIT_LOGGING_ENABLED` | Enable audit logging | `true` |

### Resource Quotas

Adjust in `resource-quota.yaml`:
- CPU: requests/limits per tenant
- Memory: requests/limits per tenant
- Storage: per-tenant volume size
- Pods: max pods per tenant

## Ingress Configuration

Use an Ingress controller (e.g., nginx-ingress) to route traffic:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: gitrepublic-ingress
  namespace: gitrepublic-tenant-${TENANT_ID}
spec:
  ingressClassName: nginx
  rules:
    - host: ${TENANT_SUBDOMAIN}.git.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: gitrepublic
                port:
                  number: 80
```

## Monitoring

### Recommended Tools
- **Prometheus**: Metrics collection
- **Grafana**: Dashboards
- **Loki**: Log aggregation
- **Jaeger**: Distributed tracing

### Metrics to Monitor
- Request rate per tenant
- Resource usage per tenant
- Error rates
- Git operation durations
- Disk usage per tenant

## Backup Strategy

### Per-Tenant Backups
1. **Volume Snapshots**: Use Kubernetes VolumeSnapshots
2. **Git Repo Backups**: Regular `git bundle` exports
3. **Metadata Backups**: Export Nostr events

### Example Backup Job
```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: gitrepublic-backup
  namespace: gitrepublic-tenant-${TENANT_ID}
spec:
  schedule: "0 2 * * *"  # Daily at 2 AM
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure  # Job pod templates require Never or OnFailure
          containers:
            - name: backup
              image: gitrepublic-backup:latest
              command: ["/backup.sh"]
              volumeMounts:
                - name: repos
                  mountPath: /repos
          volumes:
            - name: repos
              persistentVolumeClaim:
                claimName: gitrepublic-repos
```

## Migration from Lightweight to Enterprise

1. **Export tenant data**: Back up repositories
2. **Create namespace**: Set up K8s resources
3. **Import data**: Restore to the new volume
4. **Update DNS**: Point to the new service
5. **Verify**: Test all operations
6. **Decommission**: Remove the old container

## Security Considerations

### Network Policies
- Prevent inter-tenant communication
- Restrict egress to necessary services only
- Allow ingress from the ingress controller only

### Resource Quotas
- Prevent resource exhaustion
- Ensure fair resource allocation
- Limit the blast radius of issues

### Process Isolation
- Complete isolation between tenants
- No shared memory or filesystem
- Separate security contexts

## Cost Considerations

### Lightweight Mode
- **Lower cost**: Single container, shared resources
- **Lower isolation**: Application-level only
- **Good for**: Small to medium deployments

### Enterprise Mode
- **Higher cost**: Multiple containers, separate volumes
- **Higher isolation**: Process and network isolation
- **Good for**: Large deployments, enterprise customers

## Hybrid Approach

Run both modes:
- **Lightweight**: For most users (cost-effective)
- **Enterprise**: For high-value tenants (isolation)

Use a tenant classification system to route tenants to the appropriate mode.
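Such a classification can start as a simple pure function. A sketch under stated assumptions: the tier criteria (paid plan, repository count threshold) and all names here are illustrative, not part of the actual codebase:

```typescript
// Illustrative tenant-to-mode routing; thresholds are assumptions.
type SecurityMode = "lightweight" | "enterprise";

interface TenantProfile {
  npub: string;
  repoCount: number;
  paidPlan: boolean;
}

// Route paid or heavy tenants to isolated Kubernetes namespaces;
// everyone else stays on the shared single-container deployment.
function classifyTenant(t: TenantProfile): SecurityMode {
  if (t.paidPlan || t.repoCount > 50) {
    return "enterprise";
  }
  return "lightweight";
}
```

The deployment layer (or a future operator) would consume this decision when a tenant first appears.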
@ -0,0 +1,78 @@
# Deployment template for gitrepublic per tenant
# Each tenant gets their own deployment in their own namespace

apiVersion: apps/v1
kind: Deployment
metadata:
  name: gitrepublic
  namespace: gitrepublic-tenant-${TENANT_ID}
  labels:
    app: gitrepublic
    tenant: ${TENANT_ID}
spec:
  replicas: 1  # Scale as needed
  selector:
    matchLabels:
      app: gitrepublic
      tenant: ${TENANT_ID}
  template:
    metadata:
      labels:
        app: gitrepublic
        tenant: ${TENANT_ID}
    spec:
      securityContext:
        runAsNonRoot: true
        runAsUser: 1001
        fsGroup: 1001
      containers:
        - name: gitrepublic
          image: gitrepublic-web:latest
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 6543
              name: http
              protocol: TCP
          env:
            - name: NODE_ENV
              value: "production"
            - name: GIT_REPO_ROOT
              value: "/repos"
            - name: GIT_DOMAIN
              value: "${GIT_DOMAIN}"  # Tenant-specific domain or shared
            - name: NOSTR_RELAYS
              value: "${NOSTR_RELAYS}"
            - name: PORT
              value: "6543"
            - name: SECURITY_MODE
              value: "enterprise"  # Use enterprise mode in K8s
          volumeMounts:
            - name: repos
              mountPath: /repos
          resources:
            requests:
              cpu: "500m"
              memory: 512Mi
            limits:
              cpu: "2"
              memory: 2Gi
          livenessProbe:
            httpGet:
              path: /
              port: 6543
            initialDelaySeconds: 30
            periodSeconds: 10
            timeoutSeconds: 5
            failureThreshold: 3
          readinessProbe:
            httpGet:
              path: /
              port: 6543
            initialDelaySeconds: 10
            periodSeconds: 5
            timeoutSeconds: 3
            failureThreshold: 3
      volumes:
        - name: repos
          persistentVolumeClaim:
            claimName: gitrepublic-repos
@ -0,0 +1,17 @@
# LimitRange for default resource limits per container
# Ensures containers have resource requests/limits even if not specified

apiVersion: v1
kind: LimitRange
metadata:
  name: gitrepublic-limits
  namespace: gitrepublic-tenant-${TENANT_ID}
spec:
  limits:
    - default:
        cpu: "1"
        memory: 1Gi
      defaultRequest:
        cpu: "500m"
        memory: 512Mi
      type: Container
@ -0,0 +1,11 @@
# Kubernetes namespace template for per-tenant isolation
# This is a template - in production, create one namespace per tenant (npub)

apiVersion: v1
kind: Namespace
metadata:
  name: gitrepublic-tenant-${TENANT_ID}
  labels:
    app: gitrepublic
    tenant: ${TENANT_ID}
    managed-by: gitrepublic-operator  # If using the operator pattern
@ -0,0 +1,45 @@
# NetworkPolicy for tenant isolation
# Prevents inter-tenant communication and restricts egress

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: gitrepublic-isolation
  namespace: gitrepublic-tenant-${TENANT_ID}
spec:
  podSelector:
    matchLabels:
      app: gitrepublic
      tenant: ${TENANT_ID}
  policyTypes:
    - Ingress
    - Egress
  ingress:
    # Allow traffic from ingress controller pods only
    # (namespaceSelector + podSelector in one entry = both must match)
    - from:
        - namespaceSelector:
            matchLabels:
              name: ingress-nginx  # Adjust to your ingress controller namespace
          podSelector:
            matchLabels:
              app: ingress-nginx
      ports:
        - protocol: TCP
          port: 6543
    # All other ingress (including from other tenants) is denied
  egress:
    # Allow DNS
    - to:
        - namespaceSelector:
            matchLabels:
              name: kube-system
      ports:
        - protocol: UDP
          port: 53
    # Allow egress to Nostr relays (WSS)
    - to:
        - namespaceSelector: {}  # Any namespace (for external services)
      ports:
        - protocol: TCP
          port: 443
    # All other egress is denied
@ -0,0 +1,15 @@
# PersistentVolumeClaim for tenant repositories
# Each tenant gets their own volume for complete isolation

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gitrepublic-repos
  namespace: gitrepublic-tenant-${TENANT_ID}
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi  # Adjust per tenant needs
  storageClassName: ${STORAGE_CLASS}  # e.g., "fast-ssd" or "standard"
@ -0,0 +1,30 @@
# Resource quotas per tenant namespace
# Limits CPU, memory, storage, and pod count per tenant

apiVersion: v1
kind: ResourceQuota
metadata:
  name: gitrepublic-quota
  namespace: gitrepublic-tenant-${TENANT_ID}
spec:
  hard:
    # CPU limits
    requests.cpu: "2"
    limits.cpu: "4"

    # Memory limits
    requests.memory: 2Gi
    limits.memory: 4Gi

    # Storage limits (ResourceQuota supports requests.storage, not limits.storage)
    persistentvolumeclaims: "1"
    requests.storage: 100Gi

    # Pod limits
    pods: "2"  # Application pod + optional sidecar

    # Optional: limit other resources
    services: "1"
    secrets: "5"
    configmaps: "3"
@ -0,0 +1,21 @@
# Service for a gitrepublic tenant
# Exposes the application within the cluster

apiVersion: v1
kind: Service
metadata:
  name: gitrepublic
  namespace: gitrepublic-tenant-${TENANT_ID}
  labels:
    app: gitrepublic
    tenant: ${TENANT_ID}
spec:
  type: ClusterIP  # Use an Ingress for external access
  ports:
    - port: 80
      targetPort: 6543
      protocol: TCP
      name: http
  selector:
    app: gitrepublic
    tenant: ${TENANT_ID}
@ -0,0 +1,336 @@ |
|||||||
|
/** |
||||||
|
* Audit logging service |
||||||
|
* Logs all security-relevant events for monitoring and compliance |
||||||
|
*
|
||||||
|
* Storage: |
||||||
|
* - Console: Always logs to console (stdout) in JSON format |
||||||
|
* - File: Optional file logging with rotation (if AUDIT_LOG_FILE is set) |
||||||
|
*
|
||||||
|
* Retention: |
||||||
|
* - Configurable via AUDIT_LOG_RETENTION_DAYS (default: 90 days) |
||||||
|
* - Old log files are automatically cleaned up |
||||||
|
*/ |
||||||
|
|
||||||
|
import { appendFile, mkdir, readdir, unlink, stat } from 'fs/promises'; |
||||||
|
import { join, dirname } from 'path'; |
||||||
|
import { existsSync } from 'fs'; |
||||||
|
|
||||||
|
export interface AuditLogEntry { |
||||||
|
timestamp: string; |
||||||
|
user?: string; // pubkey (hex or npub)
|
||||||
|
ip?: string; |
||||||
|
action: string; |
||||||
|
resource?: string; // repo path, file path, etc.
|
||||||
|
result: 'success' | 'failure' | 'denied'; |
||||||
|
error?: string; |
||||||
|
metadata?: Record<string, any>; |
||||||
|
} |
||||||
|
|
||||||
|
export class AuditLogger { |
||||||
|
private enabled: boolean; |
||||||
|
private logFile?: string; |
||||||
|
private logDir?: string; |
||||||
|
private retentionDays: number; |
||||||
|
private currentLogFile?: string; |
||||||
|
private logRotationInterval?: NodeJS.Timeout; |
||||||
|
private cleanupInterval?: NodeJS.Timeout; |
||||||
|
private writeQueue: string[] = []; |
||||||
|
private writing = false; |
||||||
|
|
||||||
|
constructor() { |
||||||
|
this.enabled = process.env.AUDIT_LOGGING_ENABLED !== 'false'; |
||||||
|
this.logFile = process.env.AUDIT_LOG_FILE; |
||||||
|
this.retentionDays = parseInt(process.env.AUDIT_LOG_RETENTION_DAYS || '90', 10); |
||||||
|
|
||||||
|
if (this.logFile) { |
||||||
|
this.logDir = dirname(this.logFile); |
||||||
|
this.currentLogFile = this.getCurrentLogFile(); |
||||||
|
this.ensureLogDirectory(); |
||||||
|
this.startLogRotation(); |
||||||
|
this.startCleanup(); |
||||||
|
} |
||||||
|
} |
||||||
|
|
||||||
|
/** |
||||||
|
* Get current log file name with date suffix |
||||||
|
*/ |
||||||
|
private getCurrentLogFile(): string { |
||||||
|
if (!this.logFile) return ''; |
||||||
|
const date = new Date().toISOString().split('T')[0]; // YYYY-MM-DD
|
||||||
|
const baseName = this.logFile.replace(/\.log$/, '') || 'audit'; |
||||||
|
return `${baseName}-${date}.log`; |
||||||
|
} |
||||||
|
|
||||||
|
/** |
||||||
|
* Ensure log directory exists |
||||||
|
*/ |
||||||
|
private async ensureLogDirectory(): Promise<void> { |
||||||
|
if (!this.logDir) return; |
||||||
|
try { |
||||||
|
if (!existsSync(this.logDir)) { |
||||||
|
await mkdir(this.logDir, { recursive: true }); |
||||||
|
} |
||||||
|
} catch (error) { |
||||||
|
console.error('[AUDIT] Failed to create log directory:', error); |
||||||
|
} |
||||||
|
} |
||||||
|
|
||||||
|
/** |
||||||
|
* Start log rotation (check daily for new log file) |
||||||
|
*/ |
||||||
|
private startLogRotation(): void { |
||||||
|
// Check every hour if we need to rotate
|
||||||
|
this.logRotationInterval = setInterval(() => { |
||||||
|
const newLogFile = this.getCurrentLogFile(); |
||||||
|
if (newLogFile !== this.currentLogFile) { |
||||||
|
this.currentLogFile = newLogFile; |
||||||
|
// Flush any pending writes before rotating
|
||||||
|
this.flushQueue(); |
||||||
|
} |
||||||
|
}, 60 * 60 * 1000); // 1 hour
|
||||||
|
} |
||||||
|
|
||||||
|
/** |
||||||
|
* Start cleanup of old log files |
||||||
|
*/ |
||||||
|
private startCleanup(): void { |
||||||
|
// Run cleanup daily
|
||||||
|
this.cleanupInterval = setInterval(() => { |
||||||
|
this.cleanupOldLogs().catch(err => { |
||||||
|
console.error('[AUDIT] Failed to cleanup old logs:', err); |
||||||
|
}); |
||||||
|
}, 24 * 60 * 60 * 1000); // 24 hours
|
||||||
|
|
||||||
|
// Run initial cleanup
|
||||||
|
this.cleanupOldLogs().catch(err => { |
||||||
|
console.error('[AUDIT] Failed to cleanup old logs:', err); |
||||||
|
}); |
||||||
|
} |
||||||
|
|
||||||
|
/** |
||||||
|
* Clean up log files older than retention period |
||||||
|
*/ |
||||||
|
private async cleanupOldLogs(): Promise<void> { |
||||||
|
if (!this.logDir || !existsSync(this.logDir)) return; |
||||||
|
|
||||||
|
try { |
||||||
|
const files = await readdir(this.logDir); |
||||||
|
const cutoffDate = new Date(); |
||||||
|
cutoffDate.setDate(cutoffDate.getDate() - this.retentionDays); |
||||||
|
const cutoffTime = cutoffDate.getTime(); |
||||||
|
|
||||||
|
for (const file of files) { |
||||||
|
if (!file.endsWith('.log')) continue; |
||||||
|
|
||||||
|
const filePath = join(this.logDir, file); |
||||||
|
try { |
||||||
|
const stats = await stat(filePath); |
||||||
|
if (stats.mtime.getTime() < cutoffTime) { |
||||||
|
await unlink(filePath); |
||||||
|
console.log(`[AUDIT] Deleted old log file: ${file}`); |
||||||
|
} |
||||||
|
} catch (err) { |
||||||
|
// Ignore errors for individual files
|
||||||
|
} |
||||||
|
} |
||||||
|
} catch (error) { |
||||||
|
console.error('[AUDIT] Error during log cleanup:', error); |
||||||
|
} |
||||||
|
} |
||||||
|
|
||||||
|
/** |
||||||
|
* Write log entry to file (async, non-blocking) |
||||||
|
*/ |
||||||
|
private async writeToFile(logLine: string): Promise<void> { |
||||||
|
if (!this.currentLogFile || !this.logDir) return; |
||||||
|
|
||||||
|
this.writeQueue.push(logLine); |
||||||
|
|
||||||
|
if (this.writing) return; // Already writing, queue will be processed
|
||||||
|
|
||||||
|
this.writing = true; |
||||||
|
|
||||||
|
try { |
||||||
|
while (this.writeQueue.length > 0) { |
||||||
|
const batch = this.writeQueue.splice(0, 100); // Process in batches
|
||||||
|
const content = batch.join('\n') + '\n'; |
||||||
|
await appendFile(join(this.logDir, this.currentLogFile), content, 'utf8'); |
||||||
|
} |
||||||
|
} catch (error) { |
||||||
|
console.error('[AUDIT] Failed to write to log file:', error); |
||||||
|
// Put items back in queue (but limit queue size to prevent memory issues)
|
||||||
|
this.writeQueue = [...this.writeQueue, ...this.writeQueue].slice(0, 1000); |
||||||
|
} finally { |
||||||
|
this.writing = false; |
||||||
|
} |
||||||
|
} |
||||||
|
|
||||||
|
  /**
   * Flush pending writes
   */
  private async flushQueue(): Promise<void> {
    if (this.writeQueue.length > 0 && !this.writing) {
      await this.writeToFile('');
    }
  }

  /**
   * Log an audit event
   */
  log(entry: Omit<AuditLogEntry, 'timestamp'>): void {
    if (!this.enabled) return;

    const fullEntry: AuditLogEntry = {
      ...entry,
      timestamp: new Date().toISOString()
    };

    // Log to console (structured JSON)
    const logLine = JSON.stringify(fullEntry);
    console.log(`[AUDIT] ${logLine}`);

    // Write to file if configured (async, non-blocking)
    if (this.logFile) {
      this.writeToFile(logLine).catch(err => {
        console.error('[AUDIT] Failed to write log entry:', err);
      });
    }
  }

  /**
   * Cleanup on shutdown
   */
  destroy(): void {
    if (this.logRotationInterval) {
      clearInterval(this.logRotationInterval);
    }
    if (this.cleanupInterval) {
      clearInterval(this.cleanupInterval);
    }
    // Best-effort flush; destroy() is synchronous, so don't await
    this.flushQueue().catch(() => {});
  }

  /**
   * Log repository access
   */
  logRepoAccess(
    user: string | null,
    ip: string | null,
    action: 'clone' | 'fetch' | 'push' | 'view' | 'list',
    repo: string,
    result: 'success' | 'failure' | 'denied',
    error?: string
  ): void {
    this.log({
      user: user || undefined,
      ip: ip || undefined,
      action: `repo.${action}`,
      resource: repo,
      result,
      error
    });
  }

  /**
   * Log file operation
   */
  logFileOperation(
    user: string | null,
    ip: string | null,
    action: 'read' | 'write' | 'delete' | 'create',
    repo: string,
    filePath: string,
    result: 'success' | 'failure' | 'denied',
    error?: string
  ): void {
    this.log({
      user: user || undefined,
      ip: ip || undefined,
      action: `file.${action}`,
      resource: `${repo}:${filePath}`,
      result,
      error,
      metadata: { filePath }
    });
  }

  /**
   * Log authentication attempt
   */
  logAuth(
    user: string | null,
    ip: string | null,
    method: 'NIP-07' | 'NIP-98' | 'none',
    result: 'success' | 'failure',
    error?: string
  ): void {
    this.log({
      user: user || undefined,
      ip: ip || undefined,
      action: `auth.${method.toLowerCase()}`,
      result,
      error
    });
  }

  /**
   * Log ownership transfer
   */
  logOwnershipTransfer(
    fromUser: string,
    toUser: string,
    repo: string,
    result: 'success' | 'failure',
    error?: string
  ): void {
    this.log({
      user: fromUser,
      action: 'ownership.transfer',
      resource: repo,
      result,
      error,
      metadata: { toUser }
    });
  }

  /**
   * Log repository creation
   */
  logRepoCreate(
    user: string,
    repo: string,
    result: 'success' | 'failure',
    error?: string
  ): void {
    this.log({
      user,
      action: 'repo.create',
      resource: repo,
      result,
      error
    });
  }

  /**
   * Log repository fork
   */
  logRepoFork(
    user: string,
    originalRepo: string,
    forkRepo: string,
    result: 'success' | 'failure',
    error?: string
  ): void {
    this.log({
      user,
      action: 'repo.fork',
      resource: forkRepo,
      result,
      error,
      metadata: { originalRepo }
    });
  }
}

// Singleton instance
export const auditLogger = new AuditLogger();
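The batched write path in `writeToFile` can be exercised in isolation. The following standalone sketch reproduces the same drain pattern against an in-memory sink (the `BatchedSink`/`enqueue`/`drain` names are illustrative, not the service's API):

```typescript
// Minimal stand-in for the audit logger's queue drain: entries accumulate in a
// queue, then a single active writer flushes them in batches of up to 100.
class BatchedSink {
  private queue: string[] = [];
  private writing = false;
  public flushes: string[][] = []; // each element stands in for one appendFile() call

  enqueue(line: string): void {
    this.queue.push(line);
  }

  async drain(): Promise<void> {
    if (this.writing) return; // an active drain will pick up new entries
    this.writing = true;
    try {
      while (this.queue.length > 0) {
        const batch = this.queue.splice(0, 100); // up to 100 entries per append
        this.flushes.push(batch);
      }
    } finally {
      this.writing = false;
    }
  }
}

const sink = new BatchedSink();
for (let i = 0; i < 250; i++) sink.enqueue(`entry-${i}`);
void sink.drain(); // the body has no awaits, so it completes synchronously here
console.log(sink.flushes.map(b => b.length)); // → [ 100, 100, 50 ]
```

Batching keeps the number of filesystem appends proportional to bursts rather than to individual log entries, which is why the real service funnels every entry through one queue and one writer flag.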
@ -0,0 +1,125 @@ |
/**
 * Rate limiting service
 * Prevents abuse by limiting requests per user/IP
 */

interface RateLimitEntry {
  count: number;
  resetAt: number;
}

export class RateLimiter {
  private enabled: boolean;
  private windowMs: number;
  private limits: Map<string, Map<string, RateLimitEntry>>; // type -> identifier -> entry
  private cleanupInterval: NodeJS.Timeout | null = null;

  constructor() {
    this.enabled = process.env.RATE_LIMIT_ENABLED !== 'false';
    this.windowMs = parseInt(process.env.RATE_LIMIT_WINDOW_MS || '60000', 10); // 1 minute default
    this.limits = new Map();

    // Clean up expired entries every 5 minutes
    if (this.enabled) {
      this.cleanupInterval = setInterval(() => this.cleanup(), 5 * 60 * 1000);
    }
  }

  /**
   * Check if a request should be rate limited
   * @param type - Type of operation (e.g., 'git', 'api', 'file')
   * @param identifier - User pubkey or IP address
   * @param maxRequests - Maximum requests allowed in the window
   * @returns the decision, the remaining allowance, and the window reset time
   */
  checkLimit(type: string, identifier: string, maxRequests: number): { allowed: boolean; remaining: number; resetAt: number } {
    if (!this.enabled) {
      return { allowed: true, remaining: Infinity, resetAt: Date.now() + this.windowMs };
    }

    const now = Date.now();

    if (!this.limits.has(type)) {
      this.limits.set(type, new Map());
    }

    const typeLimits = this.limits.get(type)!;
    const entry = typeLimits.get(identifier);

    if (!entry || entry.resetAt < now) {
      // Create a new entry, or reset an expired one
      typeLimits.set(identifier, {
        count: 1,
        resetAt: now + this.windowMs
      });
      return { allowed: true, remaining: maxRequests - 1, resetAt: now + this.windowMs };
    }

    if (entry.count >= maxRequests) {
      return { allowed: false, remaining: 0, resetAt: entry.resetAt };
    }

    entry.count++;
    return { allowed: true, remaining: maxRequests - entry.count, resetAt: entry.resetAt };
  }

  /**
   * Get rate limit configuration for an operation type
   */
  private getLimitForType(type: string): number {
    const envKey = `RATE_LIMIT_${type.toUpperCase()}_MAX`;
    const defaultLimits: Record<string, number> = {
      git: 60,    // Git operations: 60/min
      api: 120,   // API requests: 120/min
      file: 30,   // File operations: 30/min
      search: 20  // Search requests: 20/min
    };

    const envValue = process.env[envKey];
    if (envValue) {
      const parsed = parseInt(envValue, 10);
      if (!Number.isNaN(parsed)) return parsed;
    }

    return defaultLimits[type] || 60;
  }

  /**
   * Check rate limit for a specific operation type
   */
  check(type: string, identifier: string): { allowed: boolean; remaining: number; resetAt: number } {
    const maxRequests = this.getLimitForType(type);
    return this.checkLimit(type, identifier, maxRequests);
  }

  /**
   * Clean up expired entries
   */
  private cleanup(): void {
    const now = Date.now();
    for (const [type, typeLimits] of this.limits.entries()) {
      for (const [identifier, entry] of typeLimits.entries()) {
        if (entry.resetAt < now) {
          typeLimits.delete(identifier);
        }
      }
      if (typeLimits.size === 0) {
        this.limits.delete(type);
      }
    }
  }

  /**
   * Cleanup on shutdown
   */
  destroy(): void {
    if (this.cleanupInterval) {
      clearInterval(this.cleanupInterval);
      this.cleanupInterval = null;
    }
    this.limits.clear();
  }
}

// Singleton instance
export const rateLimiter = new RateLimiter();
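The fixed-window counting in `checkLimit` is easy to verify in isolation. A minimal standalone sketch of the same logic, with the clock injected for determinism (`makeLimiter` and its signature are illustrative, not the module's API):

```typescript
interface Entry { count: number; resetAt: number; }

// Fixed-window limiter: the first hit in a window creates the entry;
// subsequent hits consume the remaining allowance until the window resets.
function makeLimiter(windowMs: number, maxRequests: number) {
  const entries = new Map<string, Entry>();
  return (id: string, now: number): { allowed: boolean; remaining: number } => {
    const entry = entries.get(id);
    if (!entry || entry.resetAt < now) {
      entries.set(id, { count: 1, resetAt: now + windowMs });
      return { allowed: true, remaining: maxRequests - 1 };
    }
    if (entry.count >= maxRequests) {
      return { allowed: false, remaining: 0 };
    }
    entry.count++;
    return { allowed: true, remaining: maxRequests - entry.count };
  };
}

const check = makeLimiter(60_000, 3);
console.log(check('alice', 0).allowed);      // → true  (1/3)
console.log(check('alice', 10).allowed);     // → true  (2/3)
console.log(check('alice', 20).allowed);     // → true  (3/3)
console.log(check('alice', 30).allowed);     // → false (window exhausted)
console.log(check('alice', 60_001).allowed); // → true  (window has reset)
```

Note the trade-off of the fixed-window approach: a client can burst up to 2× the limit across a window boundary; a sliding window would smooth this at the cost of more bookkeeping.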
@ -0,0 +1,177 @@ |
/**
 * Resource limits service
 * Tracks and enforces per-user resource limits
 */

import { statSync } from 'fs';
import { join } from 'path';
import { readdir } from 'fs/promises';

export interface ResourceUsage {
  repoCount: number;
  diskUsage: number; // bytes
  maxRepos: number;
  maxDiskQuota: number; // bytes
}

export class ResourceLimits {
  private repoRoot: string;
  private maxReposPerUser: number;
  private maxDiskQuotaPerUser: number;
  private cache: Map<string, { usage: ResourceUsage; timestamp: number }> = new Map();
  private cacheTTL = 5 * 60 * 1000; // 5 minutes

  constructor(repoRoot: string = '/repos') {
    this.repoRoot = repoRoot;
    this.maxReposPerUser = parseInt(process.env.MAX_REPOS_PER_USER || '100', 10);
    this.maxDiskQuotaPerUser = parseInt(process.env.MAX_DISK_QUOTA_PER_USER || '10737418240', 10); // 10GB default
  }

  /**
   * Get resource usage for a user (npub)
   */
  async getUsage(npub: string): Promise<ResourceUsage> {
    const cached = this.cache.get(npub);
    const now = Date.now();

    if (cached && (now - cached.timestamp) < this.cacheTTL) {
      return cached.usage;
    }

    const userRepoDir = join(this.repoRoot, npub);
    let repoCount = 0;
    let diskUsage = 0;

    try {
      // Count repositories
      if (await this.dirExists(userRepoDir)) {
        const entries = await readdir(userRepoDir, { withFileTypes: true });
        for (const entry of entries) {
          if (entry.isDirectory() && entry.name.endsWith('.git')) {
            repoCount++;
            // Calculate disk usage for this repo
            try {
              const repoPath = join(userRepoDir, entry.name);
              diskUsage += this.calculateDirSize(repoPath);
            } catch {
              // Ignore errors calculating size
            }
          }
        }
      }
    } catch {
      // User directory doesn't exist yet; usage is 0
    }

    const usage: ResourceUsage = {
      repoCount,
      diskUsage,
      maxRepos: this.maxReposPerUser,
      maxDiskQuota: this.maxDiskQuotaPerUser
    };

    this.cache.set(npub, { usage, timestamp: now });
    return usage;
  }

  /**
   * Check if user can create a new repository
   */
  async canCreateRepo(npub: string): Promise<{ allowed: boolean; reason?: string; usage: ResourceUsage }> {
    const usage = await this.getUsage(npub);

    if (usage.repoCount >= usage.maxRepos) {
      return {
        allowed: false,
        reason: `Repository limit reached (${usage.repoCount}/${usage.maxRepos})`,
        usage
      };
    }

    return { allowed: true, usage };
  }

  /**
   * Check if user has enough disk quota
   */
  async hasDiskQuota(npub: string, additionalBytes: number = 0): Promise<{ allowed: boolean; reason?: string; usage: ResourceUsage }> {
    const usage = await this.getUsage(npub);

    if (usage.diskUsage + additionalBytes > usage.maxDiskQuota) {
      return {
        allowed: false,
        reason: `Disk quota exceeded (${this.formatBytes(usage.diskUsage)}/${this.formatBytes(usage.maxDiskQuota)})`,
        usage
      };
    }

    return { allowed: true, usage };
  }

  /**
   * Invalidate cache for a user (call after repo operations)
   */
  invalidateCache(npub: string): void {
    this.cache.delete(npub);
  }

  /**
   * Calculate directory size recursively
   */
  private calculateDirSize(dirPath: string): number {
    try {
      let size = 0;
      const stats = statSync(dirPath);

      if (stats.isFile()) {
        return stats.size;
      }

      if (stats.isDirectory()) {
        // For performance, this is a simplified synchronous walk.
        // In production you might want a more efficient method,
        // or to cache this calculation.
        try {
          const entries = require('fs').readdirSync(dirPath);
          for (const entry of entries) {
            try {
              size += this.calculateDirSize(join(dirPath, entry));
            } catch {
              // Ignore errors (permissions, symlinks, etc.)
            }
          }
        } catch {
          // Can't read directory
        }
      }

      return size;
    } catch {
      return 0;
    }
  }

  /**
   * Check if directory exists
   */
  private async dirExists(path: string): Promise<boolean> {
    try {
      const stats = await import('fs/promises').then(m => m.stat(path));
      return stats.isDirectory();
    } catch {
      return false;
    }
  }

  /**
   * Format bytes to a human-readable string
   */
  private formatBytes(bytes: number): string {
    if (bytes === 0) return '0 B';
    const k = 1024;
    const sizes = ['B', 'KB', 'MB', 'GB', 'TB'];
    const i = Math.floor(Math.log(bytes) / Math.log(k));
    return Math.round(bytes / Math.pow(k, i) * 100) / 100 + ' ' + sizes[i];
  }
}
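The `formatBytes` rounding used in quota error messages is deterministic and easy to check standalone. The snippet below repeats the same formula as the method above (copied out of the class so it can run on its own):

```typescript
// Same scheme as ResourceLimits.formatBytes: pick the largest 1024-based unit,
// then round the value to at most two decimal places.
function formatBytes(bytes: number): string {
  if (bytes === 0) return '0 B';
  const k = 1024;
  const sizes = ['B', 'KB', 'MB', 'GB', 'TB'];
  const i = Math.floor(Math.log(bytes) / Math.log(k));
  return Math.round(bytes / Math.pow(k, i) * 100) / 100 + ' ' + sizes[i];
}

console.log(formatBytes(10737418240)); // → "10 GB" (the default per-user quota)
console.log(formatBytes(1536));        // → "1.5 KB"
console.log(formatBytes(0));           // → "0 B"
```

So a user at the default quota would see a message like `Disk quota exceeded (10 GB/10 GB)` from `hasDiskQuota`.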