
security improvements for multitenant (with Kubernetes manifests for later enterprise mode)

Branch: main
Author: Silberengel, 4 weeks ago
Commit: 95f5a13277
Changed files (21; lines changed in parentheses):

 1. Dockerfile (14)
 2. README.md (60)
 3. SECURITY.md (191)
 4. SECURITY_IMPLEMENTATION.md (268)
 5. docker-compose.yml (2)
 6. k8s/README.md (218)
 7. k8s/base/deployment.yaml (78)
 8. k8s/base/limit-range.yaml (17)
 9. k8s/base/namespace.yaml (11)
10. k8s/base/network-policy.yaml (45)
11. k8s/base/pvc.yaml (15)
12. k8s/base/resource-quota.yaml (30)
13. k8s/base/service.yaml (21)
14. src/hooks.server.ts (63)
15. src/lib/services/git/file-manager.ts (9)
16. src/lib/services/security/audit-logger.ts (336)
17. src/lib/services/security/rate-limiter.ts (125)
18. src/lib/services/security/resource-limits.ts (177)
19. src/routes/api/git/[...path]/+server.ts (168)
20. src/routes/api/repos/[npub]/[repo]/file/+server.ts (127)
21. src/routes/api/repos/[npub]/[repo]/fork/+server.ts (28)

Dockerfile (14 lines changed)

@@ -49,13 +49,17 @@ COPY --from=builder /app/package.json ./

 # Create directory for git repositories
 RUN mkdir -p /repos && chmod 755 /repos

+# Create directory for audit logs (optional, if AUDIT_LOG_FILE is set)
+RUN mkdir -p /app/logs && chmod 755 /app/logs
+
-# Create non-root user for security
-RUN addgroup -g 1001 -S nodejs && \
-    adduser -S nodejs -u 1001 && \
-    chown -R nodejs:nodejs /app /repos
+# Create dedicated non-root user for gitrepublic
+# Using a dedicated user (not generic 'nodejs') is better security practice
+RUN addgroup -g 1001 -S gitrepublic && \
+    adduser -S gitrepublic -u 1001 -G gitrepublic && \
+    chown -R gitrepublic:gitrepublic /app /repos /app/logs

 # Switch to non-root user
-USER nodejs
+USER gitrepublic

 # Expose port
 EXPOSE 6543

README.md (60 lines changed)

@@ -318,13 +318,49 @@ npm install
 npm run dev
 ```

-### Environment Variables
+### Security Features
+
+### Lightweight Mode (Single Container)
+
+- **Resource Limits**: Per-user repository count and disk quota limits
+- **Rate Limiting**: Per-IP and per-user rate limiting for all operations
+- **Audit Logging**: Comprehensive logging of all security-relevant events
+- **Path Validation**: Strict path validation to prevent traversal attacks
+- **git-http-backend Hardening**: Timeouts, process isolation, scoped access
+
+### Enterprise Mode (Kubernetes)
+
+- **Process Isolation**: Container-per-tenant architecture
+- **Network Isolation**: Kubernetes Network Policies
+- **Resource Quotas**: Per-tenant CPU, memory, and storage limits
+- **Separate Volumes**: Each tenant has their own PersistentVolume
+
+See `SECURITY.md` and `SECURITY_IMPLEMENTATION.md` for detailed information.
+
+## Environment Variables

 - `NOSTRGIT_SECRET_KEY`: Server's nsec (bech32 or hex) for signing repo announcements and initial commits (optional)
 - `GIT_REPO_ROOT`: Path to store git repositories (default: `/repos`)
 - `GIT_DOMAIN`: Domain for git repositories (default: `localhost:6543`)
 - `NOSTR_RELAYS`: Comma-separated list of Nostr relays (default: `wss://theforest.nostr1.com,wss://nostr.land,wss://relay.damus.io`)

+### Security Configuration
+
+- `SECURITY_MODE`: `lightweight` (single container) or `enterprise` (Kubernetes) (default: `lightweight`)
+- `MAX_REPOS_PER_USER`: Maximum repositories per user (default: `100`)
+- `MAX_DISK_QUOTA_PER_USER`: Maximum disk quota per user in bytes (default: `10737418240` = 10 GB)
+- `RATE_LIMIT_ENABLED`: Enable rate limiting (default: `true`)
+- `RATE_LIMIT_WINDOW_MS`: Rate limit window in milliseconds (default: `60000` = 1 minute)
+- `RATE_LIMIT_GIT_MAX`: Max git operations per window (default: `60`)
+- `RATE_LIMIT_API_MAX`: Max API requests per window (default: `120`)
+- `RATE_LIMIT_FILE_MAX`: Max file operations per window (default: `30`)
+- `RATE_LIMIT_SEARCH_MAX`: Max search requests per window (default: `20`)
+- `AUDIT_LOGGING_ENABLED`: Enable audit logging (default: `true`)
+- `AUDIT_LOG_FILE`: Optional file path for audit logs (default: console only)
+  - If set, logs are written to files with daily rotation (e.g., `audit-2024-01-01.log`)
+  - Example: `/var/log/gitrepublic/audit.log` → creates `audit-2024-01-01.log`, `audit-2024-01-02.log`, etc.
+- `AUDIT_LOG_RETENTION_DAYS`: Number of days to keep audit log files (default: `90`)
+  - Old log files are automatically deleted after this period
+  - Set to `0` to disable automatic cleanup
+
 ### Git HTTP Backend Setup

 The server uses `git-http-backend` for git operations. Ensure it's installed:

@@ -382,14 +418,34 @@ Requires NIP-98 authentication. Your git client needs to support NIP-98 or you c
 - **Forking**: Click "Fork" button on repository page
 - **Transfer Ownership**: Use the transfer API endpoint or create a kind 1641 event manually

+## Security Features
+
+### Lightweight Mode (Single Container)
+
+- **Resource Limits**: Per-user repository count and disk quota limits
+- **Rate Limiting**: Per-IP and per-user rate limiting for all operations
+- **Audit Logging**: Comprehensive logging of all security-relevant events
+- **Path Validation**: Strict path validation to prevent traversal attacks
+- **git-http-backend Hardening**: Timeouts, process isolation, scoped access
+
+### Enterprise Mode (Kubernetes)
+
+- **Process Isolation**: Container-per-tenant architecture
+- **Network Isolation**: Kubernetes Network Policies
+- **Resource Quotas**: Per-tenant CPU, memory, and storage limits
+- **Separate Volumes**: Each tenant has their own PersistentVolume
+
+See `SECURITY.md` and `SECURITY_IMPLEMENTATION.md` for detailed information.
+
 ## Security Considerations

 - **Path Traversal**: All file paths are validated and sanitized
 - **Input Validation**: Commit messages, author info, and file paths are validated
-- **Size Limits**: 2 GB per repository, 100 MB per file
+- **Size Limits**: 2 GB per repository, 500 MB per file
 - **Authentication**: All write operations require NIP-98 authentication
 - **Authorization**: Ownership and maintainer checks for all operations
 - **Private Repositories**: Access restricted to owners and maintainers
+- **Resource Limits**: Per-user repository count and disk quota limits (configurable)
+- **Rate Limiting**: Per-IP and per-user rate limiting (configurable)
+- **Audit Logging**: All security-relevant events are logged

 ## License

SECURITY.md (new file, 191 lines)

@@ -0,0 +1,191 @@
# Security Analysis
## Current Security Model
This is a **multi-tenant system** where multiple users (identified by Nostr pubkeys/npubs) share the same server instance with **application-level isolation** but **no process or filesystem isolation**.
### Security Measures in Place
1. **Path Validation**
- ✅ File paths are validated and sanitized
- ✅ Path traversal attempts (`..`) are blocked
- ✅ Absolute paths are rejected
- ✅ Null bytes and control characters are blocked
- ✅ Path length limits enforced (4096 chars)
2. **Input Validation**
- ✅ npub format validation (must be valid bech32)
- ✅ Repository name validation (alphanumeric, hyphens, underscores, dots only)
- ✅ No path separators allowed in repo names
3. **Access Control**
- ✅ Repository ownership verified via Nostr events
- ✅ Private repos require NIP-98 authentication
- ✅ Maintainer checks before allowing write operations
- ✅ Ownership transfer chain validation
4. **Path Construction**
- ✅ Uses `path.join()` which prevents path traversal
- ✅ Repository path: ``join(repoRoot, npub, `${repoName}.git`)``
- ✅ File paths within repos are validated separately
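Taken together, the file-path rules above can be sketched as a single predicate. This is a hypothetical `validateFilePath` helper illustrating exactly the listed checks, not the project's actual function:

```typescript
const MAX_PATH_LENGTH = 4096;

// Accepts only non-empty, relative, traversal-free paths without
// null bytes or control characters, up to 4096 chars.
function validateFilePath(p: string): boolean {
  if (p.length === 0 || p.length > MAX_PATH_LENGTH) return false;
  if (p.startsWith("/") || /^[a-zA-Z]:/.test(p)) return false; // absolute paths rejected
  if (p.split(/[\\/]/).includes("..")) return false;           // traversal segments blocked
  if (/[\u0000-\u001f]/.test(p)) return false;                 // null bytes / control chars blocked
  return true;
}
```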
## Security Concerns
### ⚠ **Critical: No Process Isolation**
**Issue**: All repositories run in the same Node.js process. If an attacker compromises one repository or finds a code execution vulnerability, they could potentially:
- Access other users' repositories
- Read/write files outside the repo directory
- Access server configuration or secrets
**Mitigation**:
- Path validation prevents most traversal attacks
- Access control checks prevent unauthorized access
- But a process-level compromise would bypass these
### ✅ **High: git-http-backend Security** - IMPROVED
**Previous Issue**: `git-http-backend` was spawned with `GIT_PROJECT_ROOT` set to the entire `repoRoot`, allowing potential access to all repositories.
**Current Protection** (✅ IMPLEMENTED):
- ✅ `GIT_PROJECT_ROOT` now set to **specific repository path** (not entire repoRoot)
- ✅ `PATH_INFO` adjusted to be relative to the repository
- ✅ Path validation ensures repository path is within `repoRoot`
- ✅ Limits git-http-backend's view to only the intended repository
**Remaining Concerns**:
- No chroot/jail isolation (git-http-backend still runs in same process context)
- If git-http-backend has vulnerabilities, it could still access files within the repo
- ✅ Runs as dedicated `gitrepublic` user (non-root) - IMPLEMENTED
### ⚠ **Medium: No Resource Limits Per Tenant**
**Issue**: No per-user resource limits:
- One user could exhaust disk space (2GB per repo limit, but unlimited repos)
- One user could exhaust memory/CPU
- No rate limiting per user
**Current Protection**:
- 2GB repository size limit
- 500MB per-file limit
- But no per-user quotas
### ✅ **Medium: Filesystem Access** - IMPROVED
**Previous Issue**: Repository paths were not validated to ensure they stayed within `repoRoot`.
**Current Protection** (✅ IMPLEMENTED):
- ✅ Repository path validation using `resolve()` to check absolute paths
- ✅ Ensures resolved repository path starts with resolved `repoRoot`
- ✅ Prevents path traversal attacks at the repository level
- ✅ File path validation within repositories (already existed)
- ✅ Access control checks for private repos
**Remaining Concerns**:
- No chroot/jail isolation
- All repos readable by the same process user
- Relies on application logic, not OS-level isolation
### ⚠ **Low: Network Isolation**
**Issue**: All repos accessible from same endpoints:
- No network-level isolation between tenants
- All repos share same IP/domain
**Impact**: Low - this is expected for a multi-tenant service
## Security Improvements Made
### ✅ Implemented (2024)
1. **✅ Repository Path Validation**
- Added `resolve()` checks to ensure repository paths stay within `repoRoot`
- Prevents path traversal attacks at the repository level
- Applied to all git operations (GET and POST handlers)
2. **✅ git-http-backend Isolation**
- Changed `GIT_PROJECT_ROOT` from entire `repoRoot` to specific repository path
- Adjusted `PATH_INFO` to be relative to the repository
- Limits git-http-backend's view to only the intended repository
3. **✅ File Path Validation** (Already existed)
- Validates file paths within repositories
- Prevents path traversal within repos
- Blocks absolute paths, null bytes, control characters
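The `resolve()`-based containment check described in items 1 and 2 can be sketched as follows (hypothetical helper name, assuming POSIX-style paths):

```typescript
import { resolve, sep } from "node:path";

// Reject any repository path that escapes repoRoot once resolved.
// The trailing separator guards against prefix collisions like /repos-evil.
function isInsideRepoRoot(repoRoot: string, repoPath: string): boolean {
  const root = resolve(repoRoot);
  const target = resolve(repoPath);
  return target === root || target.startsWith(root + sep);
}
```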
## Recommendations
### ✅ Implemented (2024)
1. **✅ Resource Limits** - IMPLEMENTED
- ✅ Per-user repository count limits (configurable via `MAX_REPOS_PER_USER`)
- ✅ Per-user disk quota (configurable via `MAX_DISK_QUOTA_PER_USER`)
- ✅ Rate limiting per user/IP (configurable via `RATE_LIMIT_*` env vars)
- ✅ Applied to fork operations and repository creation
2. **✅ Audit Logging** - IMPLEMENTED
- ✅ Logs all repository access attempts
- ✅ Logs all file operations (read/write/delete)
- ✅ Logs authentication attempts
- ✅ Logs ownership transfers
- ✅ Structured JSON logging format
3. **✅ Enhanced git-http-backend Security** - IMPLEMENTED
- ✅ Operation timeouts (5 minutes max)
- ✅ Process isolation (no shell, minimal environment)
- ✅ Audit logging for all git operations
- ✅ Path validation and scoping
- ⚠ Chroot/jail still not implemented (complex, requires root or capabilities)
### Remaining (Medium Priority)
4. **Process Isolation** (Complex)
- Run each tenant in separate container/process
- Use Docker with per-tenant containers
- Significant architectural change
5. **Filesystem Isolation**
- Use bind mounts with restricted permissions
- Implement per-tenant filesystem quotas
- Use separate volumes per tenant
6. **✅ Audit Logging** - IMPLEMENTED
- ✅ Log all repository access attempts
- ✅ Log all file operations
- ⏳ Monitor for suspicious patterns (requires log analysis tools)
### Long-term
7. **Container-per-Tenant Architecture**
- Each user gets their own container
- Complete isolation
- Higher resource overhead
8. **Kubernetes Namespaces**
- Use K8s namespaces for tenant isolation
- Network policies for isolation
- Resource quotas per namespace
## Current Security Posture
**For a decentralized, open-source git hosting service**, the current security model is **reasonable but not enterprise-grade**:
**Adequate for**:
- Public repositories
- Open-source projects
- Personal/community hosting
- Low-to-medium security requirements
**Not adequate for**:
- Enterprise multi-tenant SaaS
- Highly sensitive/regulated data
- Environments requiring strict compliance (HIPAA, PCI-DSS, etc.)
- High-security government/military use
## Conclusion
The system uses **application-level security** with good input validation and access control, but lacks **OS-level isolation**. This is a common trade-off for multi-tenant services - it's simpler and more resource-efficient, but less secure than process/container isolation.
**Recommendation**: For most use cases (public repos, open-source hosting), the current model is acceptable. For enterprise or high-security use cases, consider implementing process/container isolation.

SECURITY_IMPLEMENTATION.md (new file, 268 lines)

@@ -0,0 +1,268 @@
# Security Implementation Plan
This document outlines the implementation of security improvements in two tiers:
1. **Lightweight** - Single container, application-level improvements
2. **Enterprise** - Multi-container/Kubernetes with process isolation
## Architecture Overview
### Lightweight (Single Container)
- Application-level security controls
- Resource limits enforced in code
- Rate limiting in application
- Audit logging
- Works with current Docker setup
### Enterprise (Kubernetes)
- Process isolation per tenant
- Network policies
- Resource quotas per namespace
- Separate volumes per tenant
- Scales horizontally
## Implementation Plan
### Phase 1: Lightweight Improvements (Single Container)
These improvements work in the current single-container setup and provide immediate security benefits.
#### 1.1 Resource Limits Per User
**Implementation**: Application-level tracking and enforcement
**Files to create/modify**:
- `src/lib/services/security/resource-limits.ts` - Track and enforce limits
- `src/routes/api/repos/[npub]/[repo]/+server.ts` - Check limits before operations
**Features**:
- Per-user repository count limit (configurable, default: 100)
- Per-user disk quota (configurable, default: 10GB)
- Per-repository size limit (already exists: 2GB)
- Per-file size limit (already exists: 500MB)
**Configuration**:
```bash
# Environment variables
MAX_REPOS_PER_USER=100
MAX_DISK_QUOTA_PER_USER=10737418240  # 10 GB in bytes
```
#### 1.2 Rate Limiting
**Implementation**: In-memory or Redis-based rate limiting
**Files to create/modify**:
- `src/lib/services/security/rate-limiter.ts` - Rate limiting logic
- `src/hooks.server.ts` - Apply rate limits to requests
**Features**:
- Per-IP rate limiting (requests per minute)
- Per-user rate limiting (operations per minute)
- Different limits for different operations:
- Git operations (clone/push): 60/min
- File operations: 30/min
- API requests: 120/min
**Configuration**:
```bash
# Environment variables
RATE_LIMIT_ENABLED=true
RATE_LIMIT_WINDOW_MS=60000  # 1 minute
RATE_LIMIT_MAX_REQUESTS=120
```
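As an illustration, a fixed-window limiter of the kind described could look like this. It is a sketch, not the actual `rate-limiter.ts` implementation:

```typescript
interface Window {
  start: number; // window start, epoch milliseconds
  count: number; // requests seen in this window
}

// Minimal in-memory fixed-window limiter keyed by IP or user pubkey.
class RateLimiter {
  private windows = new Map<string, Window>();
  constructor(private windowMs: number, private max: number) {}

  // Returns true if the request is allowed, false if the key is over its limit.
  allow(key: string, now = Date.now()): boolean {
    const w = this.windows.get(key);
    if (!w || now - w.start >= this.windowMs) {
      this.windows.set(key, { start: now, count: 1 }); // new or expired window
      return true;
    }
    if (w.count >= this.max) return false;
    w.count++;
    return true;
  }
}
```

A production limiter would also evict stale keys and could be backed by Redis when running more than one instance.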
#### 1.3 Audit Logging
**Implementation**: Structured logging to files/console
**Files to create/modify**:
- `src/lib/services/security/audit-logger.ts` - Audit logging service
- All API endpoints - Add audit log entries
**Features**:
- Log all repository access attempts
- Log all file operations (read/write/delete)
- Log authentication attempts (success/failure)
- Log ownership transfers
- Include: timestamp, user pubkey, IP, action, result
**Log Format**:
```json
{
  "timestamp": "2024-01-01T12:00:00Z",
  "user": "abc123...",
  "ip": "192.168.1.1",
  "action": "repo.clone",
  "repo": "npub1.../myrepo",
  "result": "success",
  "metadata": {}
}
```
**Storage**:
- **Console**: Always logs to stdout (JSON format, prefixed with `[AUDIT]`)
- **File**: Optional file logging (if `AUDIT_LOG_FILE` is set)
- Daily rotation: Creates new file each day (e.g., `audit-2024-01-01.log`)
- Location: Configurable via `AUDIT_LOG_FILE` environment variable
- Default location: Console only (no file logging by default)
**Retention**:
- **Default**: 90 days (configurable via `AUDIT_LOG_RETENTION_DAYS`)
- **Automatic cleanup**: Old log files are automatically deleted
- **Rotation**: Logs rotate daily at midnight (based on date change)
- **Set to 0**: Disables automatic cleanup (manual cleanup required)
**Example Configuration**:
```bash
# Log to /var/log/gitrepublic/audit.log (with daily rotation)
AUDIT_LOG_FILE=/var/log/gitrepublic/audit.log
AUDIT_LOG_RETENTION_DAYS=90
# Or use Docker volume
AUDIT_LOG_FILE=/app/logs/audit.log
AUDIT_LOG_RETENTION_DAYS=30
```
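A sketch of the logging and rotation scheme described above. Both helpers are hypothetical illustrations; the real service is `audit-logger.ts`:

```typescript
import { basename, dirname, extname, join } from "node:path";

interface AuditEvent {
  user: string;
  ip: string;
  action: string;
  repo?: string;
  result: "success" | "failure";
  metadata?: Record<string, unknown>;
}

// Emit one JSON line per event, prefixed with [AUDIT] for console capture.
function logAudit(event: AuditEvent, now = new Date()): string {
  const line = JSON.stringify({ timestamp: now.toISOString(), ...event });
  console.log(`[AUDIT] ${line}`);
  return line;
}

// Derive the dated file name used by daily rotation, e.g.
// /var/log/gitrepublic/audit.log -> /var/log/gitrepublic/audit-2024-01-01.log
function rotatedLogPath(base: string, date: Date): string {
  const ext = extname(base);                   // ".log"
  const day = date.toISOString().slice(0, 10); // "2024-01-01"
  return join(dirname(base), `${basename(base, ext)}-${day}${ext}`);
}
```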
#### 1.4 Enhanced git-http-backend Hardening
**Implementation**: Additional security measures for git-http-backend
**Files to modify**:
- `src/routes/api/git/[...path]/+server.ts` - Add security measures
**Features**:
- Validate PATH_INFO to prevent manipulation
- Set restrictive environment variables
- Timeout for git operations
- Resource limits for spawned processes
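A hedged sketch of spawning `git http-backend` with these measures. The `spawnGitHttpBackend` helper is hypothetical and hard-codes `REQUEST_METHOD` for brevity; the real handler forwards the request's method, query string, and remaining CGI variables:

```typescript
import { spawn } from "node:child_process";

const GIT_TIMEOUT_MS = 5 * 60 * 1000; // 5-minute cap on git operations

// Spawn git-http-backend scoped to a single repository with a minimal environment.
function spawnGitHttpBackend(repoPath: string, pathInfo: string) {
  return spawn("git", ["http-backend"], {
    env: {
      PATH: process.env.PATH ?? "/usr/bin:/bin", // only needed to locate git
      GIT_PROJECT_ROOT: repoPath,  // scoped to this repository, not the whole repoRoot
      GIT_HTTP_EXPORT_ALL: "1",
      PATH_INFO: pathInfo,         // relative to the repository
      REQUEST_METHOD: "GET",
    },
    shell: false,                  // no shell interpretation of arguments
    timeout: GIT_TIMEOUT_MS,       // kill long-running operations
  });
}
```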
### Phase 2: Enterprise Improvements (Kubernetes)
These require multi-container architecture and Kubernetes.
#### 2.1 Container-per-Tenant Architecture
**Architecture**:
- Each user (npub) gets their own namespace
- Each namespace has:
- Application pod (gitrepublic instance)
- Persistent volume for repositories
- Service for networking
- Resource quotas
**Kubernetes Resources**:
- `k8s/namespace-template.yaml` - Namespace per tenant
- `k8s/deployment-template.yaml` - Application deployment
- `k8s/service-template.yaml` - Service definition
- `k8s/pvc-template.yaml` - Persistent volume claim
- `k8s/resource-quota.yaml` - Resource limits
#### 2.2 Network Isolation
**Implementation**: Kubernetes Network Policies
**Files to create**:
- `k8s/network-policy.yaml` - Network isolation rules
**Features**:
- Namespace-level network isolation
- Only allow traffic from ingress controller
- Block inter-namespace communication
- Allow egress to Nostr relays only
#### 2.3 Resource Quotas
**Implementation**: Kubernetes ResourceQuota
**Features**:
- CPU limits per tenant
- Memory limits per tenant
- Storage limits per tenant
- Pod count limits
#### 2.4 Separate Volumes Per Tenant
**Implementation**: Kubernetes PersistentVolumeClaims
**Features**:
- Each tenant gets their own volume
- Volume size limits
- Backup/restore per tenant
- Snapshot support
## Hybrid Approach (Recommended)
The hybrid approach implements lightweight improvements first, then provides a migration path to enterprise architecture.
### Benefits:
1. **Immediate security improvements** - Lightweight features work now
2. **Scalable architecture** - Can migrate to Kubernetes when needed
3. **Cost-effective** - Start simple, scale as needed
4. **Flexible deployment** - Works in both scenarios
### Implementation Strategy:
1. **Start with lightweight** - Implement Phase 1 features
2. **Design for scale** - Code structure supports multi-container
3. **Add Kubernetes support** - Phase 2 when needed
4. **Gradual migration** - Move tenants to K8s as needed
## File Structure
```
src/lib/services/security/
├── resource-limits.ts # Resource limit tracking
├── rate-limiter.ts # Rate limiting
├── audit-logger.ts # Audit logging
└── quota-manager.ts # Disk quota management
k8s/
├── base/
│ ├── namespace.yaml
│ ├── deployment.yaml
│ ├── service.yaml
│ └── pvc.yaml
├── overlays/
│ ├── single-container/ # Single container setup
│ └── multi-tenant/ # Kubernetes setup
└── helm-chart/ # Optional Helm chart
```
## Configuration
### Lightweight Mode (Single Container)
```bash
# docker-compose.yml or .env
SECURITY_MODE=lightweight
MAX_REPOS_PER_USER=100
MAX_DISK_QUOTA_PER_USER=10737418240
RATE_LIMIT_ENABLED=true
AUDIT_LOGGING_ENABLED=true
```
### Enterprise Mode (Kubernetes)
```yaml
# Kubernetes ConfigMap
security:
  mode: enterprise
  isolation: container-per-tenant
  networkPolicy: enabled
  resourceQuotas: enabled
```
## Migration Path
### From Lightweight to Enterprise:
1. **Phase 1**: Deploy lightweight improvements (no architecture change)
2. **Phase 2**: Add Kubernetes support alongside single container
3. **Phase 3**: Migrate high-value tenants to Kubernetes
4. **Phase 4**: Full Kubernetes deployment (optional)
## Priority Implementation Order
1. ✅ **Audit Logging** - Easy, high value, works everywhere
2. ✅ **Rate Limiting** - Prevents abuse, works in single container
3. ✅ **Resource Limits** - Prevents resource exhaustion
4. ⏳ **Enhanced git-http-backend** - Additional hardening
5. ⏳ **Kubernetes Support** - When scaling needed

docker-compose.yml (2 lines changed)

@@ -20,6 +20,8 @@ services:
     volumes:
       # Persist git repositories
       - ./repos:/repos
+      # Optional: persist audit logs
+      # - ./logs:/app/logs
       # Optional: mount config file if needed
       # - ./config:/app/config:ro
     restart: unless-stopped

k8s/README.md (new file, 218 lines)

@@ -0,0 +1,218 @@
# Kubernetes Deployment Guide
This directory contains Kubernetes manifests for enterprise-grade multi-tenant deployment of gitrepublic-web.
## Architecture
### Enterprise Mode (Kubernetes)
- **Container-per-tenant**: Each user (npub) gets their own namespace
- **Process isolation**: Complete isolation between tenants
- **Network isolation**: Network policies prevent inter-tenant communication
- **Resource quotas**: Per-tenant CPU, memory, and storage limits
- **Separate volumes**: Each tenant has their own PersistentVolume
### Lightweight Mode (Single Container)
- Application-level security controls
- Works with current Docker setup
- See `SECURITY_IMPLEMENTATION.md` for details
## Directory Structure
```
k8s/
├── base/ # Base Kubernetes manifests (templates)
│ ├── namespace.yaml # Namespace per tenant
│ ├── resource-quota.yaml # Resource limits per tenant
│ ├── limit-range.yaml # Default container limits
│ ├── deployment.yaml # Application deployment
│ ├── service.yaml # Service definition
│ ├── pvc.yaml # Persistent volume claim
│ └── network-policy.yaml # Network isolation
├── overlays/
│ ├── single-container/ # Single container setup (lightweight)
│ └── multi-tenant/ # Kubernetes setup (enterprise)
└── README.md # This file
```
## Usage
### Single Container (Lightweight)
Use the existing `docker-compose.yml` or `Dockerfile`. Security improvements are application-level and work automatically.
### Kubernetes (Enterprise)
#### Option 1: Manual Deployment
1. **Create namespace for tenant**:
```bash
export TENANT_ID="npub1abc123..."
export GIT_DOMAIN="git.example.com"
export NOSTR_RELAYS="wss://relay1.com,wss://relay2.com"
export STORAGE_CLASS="fast-ssd"
# Replace variables in templates
envsubst < k8s/base/namespace.yaml | kubectl apply -f -
envsubst < k8s/base/resource-quota.yaml | kubectl apply -f -
envsubst < k8s/base/limit-range.yaml | kubectl apply -f -
envsubst < k8s/base/pvc.yaml | kubectl apply -f -
envsubst < k8s/base/deployment.yaml | kubectl apply -f -
envsubst < k8s/base/service.yaml | kubectl apply -f -
envsubst < k8s/base/network-policy.yaml | kubectl apply -f -
```
#### Option 2: Operator Pattern (Recommended)
Create a Kubernetes operator that:
- Watches for new repository announcements
- Automatically creates namespaces for new tenants
- Manages tenant lifecycle
- Handles scaling and resource allocation
#### Option 3: Helm Chart
Package as Helm chart for easier deployment:
```bash
helm install gitrepublic ./helm-chart \
--set tenant.id=npub1abc123... \
--set git.domain=git.example.com
```
## Configuration
### Environment Variables
| Variable | Description | Default |
|----------|-------------|---------|
| `SECURITY_MODE` | `lightweight` or `enterprise` | `lightweight` |
| `MAX_REPOS_PER_USER` | Max repos per user | `100` |
| `MAX_DISK_QUOTA_PER_USER` | Max disk per user (bytes) | `10737418240` (10GB) |
| `RATE_LIMIT_ENABLED` | Enable rate limiting | `true` |
| `AUDIT_LOGGING_ENABLED` | Enable audit logging | `true` |
### Resource Quotas
Adjust in `resource-quota.yaml`:
- CPU: requests/limits per tenant
- Memory: requests/limits per tenant
- Storage: per-tenant volume size
- Pods: max pods per tenant
## Ingress Configuration
Use an Ingress controller (e.g., nginx-ingress) to route traffic:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: gitrepublic-ingress
  namespace: gitrepublic-tenant-${TENANT_ID}
spec:
  ingressClassName: nginx
  rules:
    - host: ${TENANT_SUBDOMAIN}.git.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: gitrepublic
                port:
                  number: 80
```
## Monitoring
### Recommended Tools
- **Prometheus**: Metrics collection
- **Grafana**: Dashboards
- **Loki**: Log aggregation
- **Jaeger**: Distributed tracing
### Metrics to Monitor
- Request rate per tenant
- Resource usage per tenant
- Error rates
- Git operation durations
- Disk usage per tenant
## Backup Strategy
### Per-Tenant Backups
1. **Volume Snapshots**: Use Kubernetes VolumeSnapshots
2. **Git Repo Backups**: Regular `git bundle` exports
3. **Metadata Backups**: Export Nostr events
### Example Backup Job
```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: gitrepublic-backup
  namespace: gitrepublic-tenant-${TENANT_ID}
spec:
  schedule: "0 2 * * *"  # Daily at 2 AM
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure  # required for Job pods
          containers:
            - name: backup
              image: gitrepublic-backup:latest
              command: ["/backup.sh"]
              volumeMounts:
                - name: repos
                  mountPath: /repos
          volumes:
            - name: repos
              persistentVolumeClaim:
                claimName: gitrepublic-repos
```
## Migration from Lightweight to Enterprise
1. **Export tenant data**: Backup repositories
2. **Create namespace**: Set up K8s resources
3. **Import data**: Restore to new volume
4. **Update DNS**: Point to new service
5. **Verify**: Test all operations
6. **Decommission**: Remove old container
## Security Considerations
### Network Policies
- Prevents inter-tenant communication
- Restricts egress to necessary services only
- Allows ingress from ingress controller only
### Resource Quotas
- Prevents resource exhaustion
- Ensures fair resource allocation
- Limits blast radius of issues
### Process Isolation
- Complete isolation between tenants
- No shared memory or filesystem
- Separate security contexts
## Cost Considerations
### Lightweight Mode
- **Lower cost**: Single container, shared resources
- **Lower isolation**: Application-level only
- **Good for**: Small to medium deployments
### Enterprise Mode
- **Higher cost**: Multiple containers, separate volumes
- **Higher isolation**: Process and network isolation
- **Good for**: Large deployments, enterprise customers
## Hybrid Approach
Run both modes:
- **Lightweight**: For most users (cost-effective)
- **Enterprise**: For high-value tenants (isolation)
Use a tenant classification system to route tenants to appropriate mode.
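Such routing can start as a trivial mapping. The tier names below are hypothetical, purely to illustrate the classification idea:

```typescript
type Tier = "standard" | "enterprise";
type Mode = "lightweight" | "kubernetes";

// Route high-value tenants to isolated Kubernetes namespaces,
// everyone else to the shared lightweight deployment.
function deploymentModeFor(tier: Tier): Mode {
  return tier === "enterprise" ? "kubernetes" : "lightweight";
}
```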

k8s/base/deployment.yaml (new file, 78 lines)

@@ -0,0 +1,78 @@
# Deployment template for gitrepublic per tenant
# Each tenant gets their own deployment in their own namespace
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gitrepublic
  namespace: gitrepublic-tenant-${TENANT_ID}
  labels:
    app: gitrepublic
    tenant: ${TENANT_ID}
spec:
  replicas: 1  # Scale as needed
  selector:
    matchLabels:
      app: gitrepublic
      tenant: ${TENANT_ID}
  template:
    metadata:
      labels:
        app: gitrepublic
        tenant: ${TENANT_ID}
    spec:
      securityContext:
        runAsNonRoot: true
        runAsUser: 1001
        fsGroup: 1001
      containers:
        - name: gitrepublic
          image: gitrepublic-web:latest
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 6543
              name: http
              protocol: TCP
          env:
            - name: NODE_ENV
              value: "production"
            - name: GIT_REPO_ROOT
              value: "/repos"
            - name: GIT_DOMAIN
              value: "${GIT_DOMAIN}"  # Tenant-specific domain or shared
            - name: NOSTR_RELAYS
              value: "${NOSTR_RELAYS}"
            - name: PORT
              value: "6543"
            - name: SECURITY_MODE
              value: "enterprise"  # Use enterprise mode in K8s
          volumeMounts:
            - name: repos
              mountPath: /repos
          resources:
            requests:
              cpu: "500m"
              memory: 512Mi
            limits:
              cpu: "2"
              memory: 2Gi
          livenessProbe:
            httpGet:
              path: /
              port: 6543
            initialDelaySeconds: 30
            periodSeconds: 10
            timeoutSeconds: 5
            failureThreshold: 3
          readinessProbe:
            httpGet:
              path: /
              port: 6543
            initialDelaySeconds: 10
            periodSeconds: 5
            timeoutSeconds: 3
            failureThreshold: 3
      volumes:
        - name: repos
          persistentVolumeClaim:
            claimName: gitrepublic-repos

k8s/base/limit-range.yaml (new file, 17 lines)

@@ -0,0 +1,17 @@
# LimitRange for default resource limits per container
# Ensures containers have resource requests/limits even if not specified
apiVersion: v1
kind: LimitRange
metadata:
  name: gitrepublic-limits
  namespace: gitrepublic-tenant-${TENANT_ID}
spec:
  limits:
    - default:
        cpu: "1"
        memory: 1Gi
      defaultRequest:
        cpu: "500m"
        memory: 512Mi
      type: Container

k8s/base/namespace.yaml (new file, 11 lines)

@@ -0,0 +1,11 @@
# Kubernetes namespace template for per-tenant isolation
# This is a template - in production, create one namespace per tenant (npub)
apiVersion: v1
kind: Namespace
metadata:
  name: gitrepublic-tenant-${TENANT_ID}
  labels:
    app: gitrepublic
    tenant: ${TENANT_ID}
    managed-by: gitrepublic-operator  # If using operator pattern

k8s/base/network-policy.yaml (new file, 45 lines)

@@ -0,0 +1,45 @@
# NetworkPolicy for tenant isolation
# Prevents inter-tenant communication and restricts egress
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: gitrepublic-isolation
  namespace: gitrepublic-tenant-${TENANT_ID}
spec:
  podSelector:
    matchLabels:
      app: gitrepublic
      tenant: ${TENANT_ID}
  policyTypes:
    - Ingress
    - Egress
  ingress:
    # Allow traffic from ingress controller only
    - from:
        - namespaceSelector:
            matchLabels:
              name: ingress-nginx  # Adjust to your ingress controller namespace
        - podSelector:
            matchLabels:
              app: ingress-nginx
      ports:
        - protocol: TCP
          port: 6543
    # Deny all other ingress (including from other tenants)
  egress:
    # Allow DNS
    - to:
        - namespaceSelector:
            matchLabels:
              name: kube-system
      ports:
        - protocol: UDP
          port: 53
    # Allow egress to Nostr relays (WSS)
    - to:
        - namespaceSelector: {}  # Any namespace (for external services)
      ports:
        - protocol: TCP
          port: 443
    # Deny all other egress

k8s/base/pvc.yaml (new file, 15 lines)

@@ -0,0 +1,15 @@
# PersistentVolumeClaim for tenant repositories
# Each tenant gets their own volume for complete isolation
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gitrepublic-repos
  namespace: gitrepublic-tenant-${TENANT_ID}
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi  # Adjust per tenant needs
  storageClassName: ${STORAGE_CLASS}  # e.g., "fast-ssd" or "standard"

k8s/base/resource-quota.yaml (new file, 30 lines)

@@ -0,0 +1,30 @@
# Resource quotas per tenant namespace
# Limits CPU, memory, storage, and pod count per tenant
apiVersion: v1
kind: ResourceQuota
metadata:
  name: gitrepublic-quota
  namespace: gitrepublic-tenant-${TENANT_ID}
spec:
  hard:
    # CPU limits
    requests.cpu: "2"
    limits.cpu: "4"
    # Memory limits
    requests.memory: 2Gi
    limits.memory: 4Gi
    # Storage limits
    persistentvolumeclaims: "1"
    requests.storage: 100Gi
    limits.storage: 200Gi
    # Pod limits
    pods: "2"  # Application pod + optional sidecar
    # Optional: limit other resources
    services: "1"
    secrets: "5"
    configmaps: "3"

21
k8s/base/service.yaml

@ -0,0 +1,21 @@
# Service for gitrepublic tenant
# Exposes the application within the cluster
apiVersion: v1
kind: Service
metadata:
name: gitrepublic
namespace: gitrepublic-tenant-${TENANT_ID}
labels:
app: gitrepublic
tenant: ${TENANT_ID}
spec:
type: ClusterIP # Use Ingress for external access
ports:
- port: 80
targetPort: 6543
protocol: TCP
name: http
selector:
app: gitrepublic
tenant: ${TENANT_ID}

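The manifests above are templates, not directly applyable YAML: placeholders like `${TENANT_ID}` and `${STORAGE_CLASS}` must be substituted per tenant before `kubectl apply`. A minimal sketch of that substitution step (the helper name `renderManifest` is hypothetical, not part of this commit):

```typescript
// Hypothetical provisioning helper: substitutes ${VAR} placeholders in a
// manifest template, failing loudly if a variable is missing.
function renderManifest(template: string, vars: Record<string, string>): string {
  return template.replace(/\$\{(\w+)\}/g, (_match: string, name: string) => {
    const value = vars[name];
    if (value === undefined) {
      throw new Error(`Missing template variable: ${name}`);
    }
    return value;
  });
}

// Example: render the namespace name for one tenant
const rendered = renderManifest(
  'name: gitrepublic-tenant-${TENANT_ID}',
  { TENANT_ID: 'npub1abc' }
);
// rendered === 'name: gitrepublic-tenant-npub1abc'
```

Failing on missing variables (rather than leaving the placeholder in place) prevents silently creating a namespace literally named `gitrepublic-tenant-${TENANT_ID}`.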
63
src/hooks.server.ts

@ -1,11 +1,14 @@
/** /**
* Server-side hooks for gitrepublic-web * Server-side hooks for gitrepublic-web
* Initializes repo polling service * Initializes repo polling service and security middleware
*/ */
import type { Handle } from '@sveltejs/kit'; import type { Handle } from '@sveltejs/kit';
import { error } from '@sveltejs/kit';
import { RepoPollingService } from './lib/services/nostr/repo-polling.js'; import { RepoPollingService } from './lib/services/nostr/repo-polling.js';
import { GIT_DOMAIN, DEFAULT_NOSTR_RELAYS } from './lib/config.js'; import { GIT_DOMAIN, DEFAULT_NOSTR_RELAYS } from './lib/config.js';
import { rateLimiter } from './lib/services/security/rate-limiter.js';
import { auditLogger } from './lib/services/security/audit-logger.js';
// Initialize polling service // Initialize polling service
const repoRoot = process.env.GIT_REPO_ROOT || '/repos'; const repoRoot = process.env.GIT_REPO_ROOT || '/repos';
@ -20,5 +23,61 @@ if (typeof process !== 'undefined') {
} }
export const handle: Handle = async ({ event, resolve }) => { export const handle: Handle = async ({ event, resolve }) => {
return resolve(event); // Rate limiting
const clientIp = event.getClientAddress();
const url = event.url;
// Determine rate limit type based on path
let rateLimitType = 'api';
if (url.pathname.startsWith('/api/git/')) {
rateLimitType = 'git';
} else if (url.pathname.startsWith('/api/repos/') && url.pathname.includes('/file')) {
rateLimitType = 'file';
} else if (url.pathname.startsWith('/api/search')) {
rateLimitType = 'search';
}
// Check rate limit
const rateLimitResult = rateLimiter.check(rateLimitType, clientIp);
if (!rateLimitResult.allowed) {
auditLogger.log({
ip: clientIp,
action: `rate_limit.${rateLimitType}`,
result: 'denied',
metadata: { path: url.pathname }
});
return error(429, `Rate limit exceeded. Try again after ${new Date(rateLimitResult.resetAt).toISOString()}`);
}
// Audit log the request (basic info)
// Detailed audit logging happens in individual endpoints
const startTime = Date.now();
try {
const response = await resolve(event);
// Log successful request if it's a security-sensitive operation
if (url.pathname.startsWith('/api/')) {
const duration = Date.now() - startTime;
auditLogger.log({
ip: clientIp,
action: `request.${event.request.method.toLowerCase()}`,
resource: url.pathname,
result: 'success',
metadata: { status: response.status, duration }
});
}
return response;
} catch (err) {
// Log failed request
auditLogger.log({
ip: clientIp,
action: `request.${event.request.method.toLowerCase()}`,
resource: url.pathname,
result: 'failure',
error: err instanceof Error ? err.message : String(err)
});
throw err;
}
}; };

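The prefix checks in the hook above can be factored into a small pure function; this standalone sketch (the name `rateLimitTypeFor` is hypothetical) mirrors the bucket selection, with the generic `api` bucket as the fallback:

```typescript
// Standalone sketch of the hook's rate-limit bucket selection.
// More specific prefixes are checked before the generic 'api' default.
function rateLimitTypeFor(pathname: string): 'git' | 'file' | 'search' | 'api' {
  if (pathname.startsWith('/api/git/')) return 'git';
  if (pathname.startsWith('/api/repos/') && pathname.includes('/file')) return 'file';
  if (pathname.startsWith('/api/search')) return 'search';
  return 'api'; // generic default for all other routes
}
```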
9
src/lib/services/git/file-manager.ts

@ -58,7 +58,14 @@ export class FileManager {
* Get the full path to a repository * Get the full path to a repository
*/ */
private getRepoPath(npub: string, repoName: string): string { private getRepoPath(npub: string, repoName: string): string {
return join(this.repoRoot, npub, `${repoName}.git`); const repoPath = join(this.repoRoot, npub, `${repoName}.git`);
// Security: Ensure the resolved path is within repoRoot to prevent path traversal
const resolvedPath = resolve(repoPath);
const resolvedRoot = resolve(this.repoRoot);
if (!resolvedPath.startsWith(resolvedRoot + '/') && resolvedPath !== resolvedRoot) {
throw new Error('Path traversal detected: repository path outside allowed root');
}
return repoPath;
} }
/** /**

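The traversal guard in `getRepoPath` boils down to one containment rule: after resolution, the candidate path must equal the root or start with `root + '/'` (the trailing slash matters, or `/repos-evil` would pass a check against `/repos`). A standalone sketch of that check (the helper name `isInsideRoot` is hypothetical; assumes POSIX paths):

```typescript
import { join, resolve } from 'node:path';

// A child path is inside the root only if its resolved form equals the
// root exactly or starts with root + '/'.
function isInsideRoot(root: string, child: string): boolean {
  const resolvedRoot = resolve(root);
  const resolvedChild = resolve(child);
  return (
    resolvedChild === resolvedRoot ||
    resolvedChild.startsWith(resolvedRoot + '/')
  );
}

// A traversal attempt resolves outside the root and is rejected:
isInsideRoot('/repos', join('/repos', 'npub1abc', '../../etc')); // false
```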
336
src/lib/services/security/audit-logger.ts

@ -0,0 +1,336 @@
/**
* Audit logging service
* Logs all security-relevant events for monitoring and compliance
*
* Storage:
* - Console: Always logs to console (stdout) in JSON format
* - File: Optional file logging with rotation (if AUDIT_LOG_FILE is set)
*
* Retention:
* - Configurable via AUDIT_LOG_RETENTION_DAYS (default: 90 days)
* - Old log files are automatically cleaned up
*/
import { appendFile, mkdir, readdir, unlink, stat } from 'fs/promises';
import { join, dirname } from 'path';
import { existsSync } from 'fs';
export interface AuditLogEntry {
timestamp: string;
user?: string; // pubkey (hex or npub)
ip?: string;
action: string;
resource?: string; // repo path, file path, etc.
result: 'success' | 'failure' | 'denied';
error?: string;
metadata?: Record<string, any>;
}
export class AuditLogger {
private enabled: boolean;
private logFile?: string;
private logDir?: string;
private retentionDays: number;
private currentLogFile?: string;
private logRotationInterval?: NodeJS.Timeout;
private cleanupInterval?: NodeJS.Timeout;
private writeQueue: string[] = [];
private writing = false;
constructor() {
this.enabled = process.env.AUDIT_LOGGING_ENABLED !== 'false';
this.logFile = process.env.AUDIT_LOG_FILE;
this.retentionDays = parseInt(process.env.AUDIT_LOG_RETENTION_DAYS || '90', 10);
if (this.logFile) {
this.logDir = dirname(this.logFile);
this.currentLogFile = this.getCurrentLogFile();
this.ensureLogDirectory();
this.startLogRotation();
this.startCleanup();
}
}
/**
* Get current log file name with date suffix
*/
  private getCurrentLogFile(): string {
    if (!this.logFile) return '';
    const date = new Date().toISOString().split('T')[0]; // YYYY-MM-DD
    // Keep only the file name: this.logFile may be a full path, and the result
    // is later joined with this.logDir, so keeping the directory here would
    // duplicate it in the final path
    const base = this.logFile.split('/').pop()?.replace(/\.log$/, '') || 'audit';
    return `${base}-${date}.log`;
  }
/**
* Ensure log directory exists
*/
private async ensureLogDirectory(): Promise<void> {
if (!this.logDir) return;
try {
if (!existsSync(this.logDir)) {
await mkdir(this.logDir, { recursive: true });
}
} catch (error) {
console.error('[AUDIT] Failed to create log directory:', error);
}
}
/**
* Start log rotation (checks hourly whether the dated log file should roll over)
*/
private startLogRotation(): void {
// Check every hour if we need to rotate
this.logRotationInterval = setInterval(() => {
const newLogFile = this.getCurrentLogFile();
if (newLogFile !== this.currentLogFile) {
this.currentLogFile = newLogFile;
// Flush any pending writes before rotating
this.flushQueue();
}
}, 60 * 60 * 1000); // 1 hour
}
/**
* Start cleanup of old log files
*/
private startCleanup(): void {
// Run cleanup daily
this.cleanupInterval = setInterval(() => {
this.cleanupOldLogs().catch(err => {
console.error('[AUDIT] Failed to cleanup old logs:', err);
});
}, 24 * 60 * 60 * 1000); // 24 hours
// Run initial cleanup
this.cleanupOldLogs().catch(err => {
console.error('[AUDIT] Failed to cleanup old logs:', err);
});
}
/**
* Clean up log files older than retention period
*/
private async cleanupOldLogs(): Promise<void> {
if (!this.logDir || !existsSync(this.logDir)) return;
try {
const files = await readdir(this.logDir);
const cutoffDate = new Date();
cutoffDate.setDate(cutoffDate.getDate() - this.retentionDays);
const cutoffTime = cutoffDate.getTime();
for (const file of files) {
if (!file.endsWith('.log')) continue;
const filePath = join(this.logDir, file);
try {
const stats = await stat(filePath);
if (stats.mtime.getTime() < cutoffTime) {
await unlink(filePath);
console.log(`[AUDIT] Deleted old log file: ${file}`);
}
} catch (err) {
// Ignore errors for individual files
}
}
} catch (error) {
console.error('[AUDIT] Error during log cleanup:', error);
}
}
/**
* Write log entry to file (async, non-blocking)
*/
  private async writeToFile(logLine: string): Promise<void> {
    if (!this.currentLogFile || !this.logDir) return;
    if (logLine) this.writeQueue.push(logLine);
    if (this.writing) return; // Already writing; the queue will be drained
    this.writing = true;
    try {
      while (this.writeQueue.length > 0) {
        const batch = this.writeQueue.splice(0, 100); // Process in batches
        const content = batch.join('\n') + '\n';
        try {
          await appendFile(join(this.logDir, this.currentLogFile), content, 'utf8');
        } catch (error) {
          console.error('[AUDIT] Failed to write to log file:', error);
          // Put the failed batch back, capping queue size to bound memory
          this.writeQueue = [...batch, ...this.writeQueue].slice(0, 1000);
          break;
        }
      }
    } finally {
      this.writing = false;
    }
  }
/**
* Flush pending writes
*/
private async flushQueue(): Promise<void> {
if (this.writeQueue.length > 0 && !this.writing) {
await this.writeToFile('');
}
}
/**
* Log an audit event
*/
log(entry: Omit<AuditLogEntry, 'timestamp'>): void {
if (!this.enabled) return;
const fullEntry: AuditLogEntry = {
...entry,
timestamp: new Date().toISOString()
};
// Log to console (structured JSON)
const logLine = JSON.stringify(fullEntry);
console.log(`[AUDIT] ${logLine}`);
// Write to file if configured (async, non-blocking)
if (this.logFile) {
this.writeToFile(logLine).catch(err => {
console.error('[AUDIT] Failed to write log entry:', err);
});
}
}
/**
* Cleanup on shutdown
*/
destroy(): void {
if (this.logRotationInterval) {
clearInterval(this.logRotationInterval);
}
if (this.cleanupInterval) {
clearInterval(this.cleanupInterval);
}
this.flushQueue();
}
/**
* Log repository access
*/
logRepoAccess(
user: string | null,
ip: string | null,
action: 'clone' | 'fetch' | 'push' | 'view' | 'list',
repo: string,
result: 'success' | 'failure' | 'denied',
error?: string
): void {
this.log({
user: user || undefined,
ip: ip || undefined,
action: `repo.${action}`,
resource: repo,
result,
error
});
}
/**
* Log file operation
*/
logFileOperation(
user: string | null,
ip: string | null,
action: 'read' | 'write' | 'delete' | 'create',
repo: string,
filePath: string,
result: 'success' | 'failure' | 'denied',
error?: string
): void {
this.log({
user: user || undefined,
ip: ip || undefined,
action: `file.${action}`,
resource: `${repo}:${filePath}`,
result,
error,
metadata: { filePath }
});
}
/**
* Log authentication attempt
*/
logAuth(
user: string | null,
ip: string | null,
method: 'NIP-07' | 'NIP-98' | 'none',
result: 'success' | 'failure',
error?: string
): void {
this.log({
user: user || undefined,
ip: ip || undefined,
action: `auth.${method.toLowerCase()}`,
result,
error
});
}
/**
* Log ownership transfer
*/
logOwnershipTransfer(
fromUser: string,
toUser: string,
repo: string,
result: 'success' | 'failure',
error?: string
): void {
this.log({
user: fromUser,
action: 'ownership.transfer',
resource: repo,
result,
error,
metadata: { toUser }
});
}
/**
* Log repository creation
*/
logRepoCreate(
user: string,
repo: string,
result: 'success' | 'failure',
error?: string
): void {
this.log({
user,
action: 'repo.create',
resource: repo,
result,
error
});
}
/**
* Log repository fork
*/
logRepoFork(
user: string,
originalRepo: string,
forkRepo: string,
result: 'success' | 'failure',
error?: string
): void {
this.log({
user,
action: 'repo.fork',
resource: forkRepo,
result,
error,
metadata: { originalRepo }
});
}
}
// Singleton instance
export const auditLogger = new AuditLogger();

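The rotation logic derives a dated file name (`audit-YYYY-MM-DD.log`) from the configured log path. A standalone sketch of that derivation (the helper name `datedLogFileName` is hypothetical), keeping only the base name so the result can later be joined with the log directory:

```typescript
import { basename } from 'node:path';

// Hypothetical standalone version of the dated-log-file derivation:
// '/app/logs/audit.log' on 2024-01-15 -> 'audit-2024-01-15.log'
function datedLogFileName(configuredPath: string, date: Date): string {
  const day = date.toISOString().split('T')[0]; // YYYY-MM-DD
  // basename() drops the directory; '.log' alone falls back to 'audit'
  const base = basename(configuredPath).replace(/\.log$/, '') || 'audit';
  return `${base}-${day}.log`;
}

datedLogFileName('/app/logs/audit.log', new Date('2024-01-15T12:00:00Z'));
// -> 'audit-2024-01-15.log'
```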
125
src/lib/services/security/rate-limiter.ts

@ -0,0 +1,125 @@
/**
* Rate limiting service
* Prevents abuse by limiting requests per user/IP
*/
interface RateLimitEntry {
count: number;
resetAt: number;
}
export class RateLimiter {
private enabled: boolean;
private windowMs: number;
private limits: Map<string, Map<string, RateLimitEntry>>; // type -> identifier -> entry
private cleanupInterval: NodeJS.Timeout | null = null;
constructor() {
this.enabled = process.env.RATE_LIMIT_ENABLED !== 'false';
this.windowMs = parseInt(process.env.RATE_LIMIT_WINDOW_MS || '60000', 10); // 1 minute default
this.limits = new Map();
// Cleanup old entries every 5 minutes
if (this.enabled) {
this.cleanupInterval = setInterval(() => this.cleanup(), 5 * 60 * 1000);
}
}
/**
* Check if a request should be rate limited
* @param type - Type of operation (e.g., 'git', 'api', 'file')
* @param identifier - User pubkey or IP address
* @param maxRequests - Maximum requests allowed in the window
* @returns true if allowed, false if rate limited
*/
checkLimit(type: string, identifier: string, maxRequests: number): { allowed: boolean; remaining: number; resetAt: number } {
if (!this.enabled) {
return { allowed: true, remaining: Infinity, resetAt: Date.now() + this.windowMs };
}
const now = Date.now();
if (!this.limits.has(type)) {
this.limits.set(type, new Map());
}
const typeLimits = this.limits.get(type)!;
const entry = typeLimits.get(identifier);
if (!entry || entry.resetAt < now) {
// Create new entry or reset expired entry
typeLimits.set(identifier, {
count: 1,
resetAt: now + this.windowMs
});
return { allowed: true, remaining: maxRequests - 1, resetAt: now + this.windowMs };
}
if (entry.count >= maxRequests) {
return { allowed: false, remaining: 0, resetAt: entry.resetAt };
}
entry.count++;
return { allowed: true, remaining: maxRequests - entry.count, resetAt: entry.resetAt };
}
/**
* Get rate limit configuration for operation type
*/
private getLimitForType(type: string): number {
const envKey = `RATE_LIMIT_${type.toUpperCase()}_MAX`;
const defaultLimits: Record<string, number> = {
git: 60, // Git operations: 60/min
api: 120, // API requests: 120/min
file: 30, // File operations: 30/min
search: 20 // Search requests: 20/min
};
const envValue = process.env[envKey];
if (envValue) {
return parseInt(envValue, 10);
}
return defaultLimits[type] || 60;
}
/**
* Check rate limit for a specific operation type
*/
check(type: string, identifier: string): { allowed: boolean; remaining: number; resetAt: number } {
const maxRequests = this.getLimitForType(type);
return this.checkLimit(type, identifier, maxRequests);
}
/**
* Clean up expired entries
*/
private cleanup(): void {
const now = Date.now();
for (const [type, typeLimits] of this.limits.entries()) {
for (const [identifier, entry] of typeLimits.entries()) {
if (entry.resetAt < now) {
typeLimits.delete(identifier);
}
}
if (typeLimits.size === 0) {
this.limits.delete(type);
}
}
}
/**
* Cleanup on shutdown
*/
destroy(): void {
if (this.cleanupInterval) {
clearInterval(this.cleanupInterval);
this.cleanupInterval = null;
}
this.limits.clear();
}
}
// Singleton instance
export const rateLimiter = new RateLimiter();

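The limiter above implements a fixed-window counter: the first request in a window creates an entry with `resetAt = now + windowMs`, later requests increment the count up to the cap, and an expired entry is replaced. A minimal standalone model of that logic (no env configuration; time is passed in explicitly so the behavior is deterministic):

```typescript
// Minimal fixed-window rate limit model, mirroring the checkLimit() semantics.
interface Entry {
  count: number;
  resetAt: number;
}

function makeWindowCheck(maxRequests: number, windowMs: number) {
  const entries = new Map<string, Entry>();
  return (identifier: string, now: number): boolean => {
    const entry = entries.get(identifier);
    if (!entry || entry.resetAt <= now) {
      // New identifier or expired window: start a fresh window
      entries.set(identifier, { count: 1, resetAt: now + windowMs });
      return true;
    }
    if (entry.count >= maxRequests) return false; // window exhausted
    entry.count++;
    return true;
  };
}

const check = makeWindowCheck(3, 60_000);
check('1.2.3.4', 0); // true  (1st request)
check('1.2.3.4', 1); // true  (2nd)
check('1.2.3.4', 2); // true  (3rd)
check('1.2.3.4', 3); // false (window exhausted)
check('1.2.3.4', 60_001); // true (window reset)
```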
177
src/lib/services/security/resource-limits.ts

@ -0,0 +1,177 @@
/**
* Resource limits service
* Tracks and enforces per-user resource limits
*/
import { statSync, readdirSync } from 'fs';
import { join } from 'path';
import { readdir } from 'fs/promises';
export interface ResourceUsage {
repoCount: number;
diskUsage: number; // bytes
maxRepos: number;
maxDiskQuota: number; // bytes
}
export class ResourceLimits {
private repoRoot: string;
private maxReposPerUser: number;
private maxDiskQuotaPerUser: number;
private cache: Map<string, { usage: ResourceUsage; timestamp: number }> = new Map();
private cacheTTL = 5 * 60 * 1000; // 5 minutes
constructor(repoRoot: string = '/repos') {
this.repoRoot = repoRoot;
this.maxReposPerUser = parseInt(process.env.MAX_REPOS_PER_USER || '100', 10);
this.maxDiskQuotaPerUser = parseInt(process.env.MAX_DISK_QUOTA_PER_USER || '10737418240', 10); // 10GB default
}
/**
* Get resource usage for a user (npub)
*/
async getUsage(npub: string): Promise<ResourceUsage> {
const cacheKey = npub;
const cached = this.cache.get(cacheKey);
const now = Date.now();
if (cached && (now - cached.timestamp) < this.cacheTTL) {
return cached.usage;
}
const userRepoDir = join(this.repoRoot, npub);
let repoCount = 0;
let diskUsage = 0;
try {
// Count repositories
if (await this.dirExists(userRepoDir)) {
const entries = await readdir(userRepoDir, { withFileTypes: true });
for (const entry of entries) {
if (entry.isDirectory() && entry.name.endsWith('.git')) {
repoCount++;
// Calculate disk usage for this repo
try {
const repoPath = join(userRepoDir, entry.name);
diskUsage += this.calculateDirSize(repoPath);
} catch {
// Ignore errors calculating size
}
}
}
}
} catch {
// User directory doesn't exist yet, usage is 0
}
const usage: ResourceUsage = {
repoCount,
diskUsage,
maxRepos: this.maxReposPerUser,
maxDiskQuota: this.maxDiskQuotaPerUser
};
this.cache.set(cacheKey, { usage, timestamp: now });
return usage;
}
/**
* Check if user can create a new repository
*/
async canCreateRepo(npub: string): Promise<{ allowed: boolean; reason?: string; usage: ResourceUsage }> {
const usage = await this.getUsage(npub);
if (usage.repoCount >= usage.maxRepos) {
return {
allowed: false,
reason: `Repository limit reached (${usage.repoCount}/${usage.maxRepos})`,
usage
};
}
return { allowed: true, usage };
}
/**
* Check if user has enough disk quota
*/
async hasDiskQuota(npub: string, additionalBytes: number = 0): Promise<{ allowed: boolean; reason?: string; usage: ResourceUsage }> {
const usage = await this.getUsage(npub);
if (usage.diskUsage + additionalBytes > usage.maxDiskQuota) {
return {
allowed: false,
reason: `Disk quota exceeded (${this.formatBytes(usage.diskUsage)}/${this.formatBytes(usage.maxDiskQuota)})`,
usage
};
}
return { allowed: true, usage };
}
/**
* Invalidate cache for a user (call after repo operations)
*/
invalidateCache(npub: string): void {
this.cache.delete(npub);
}
/**
* Calculate directory size recursively
*/
private calculateDirSize(dirPath: string): number {
try {
let size = 0;
const stats = statSync(dirPath);
if (stats.isFile()) {
return stats.size;
}
      if (stats.isDirectory()) {
        // Simplified synchronous walk. For large repos, consider caching the
        // result or shelling out to `du` for better performance.
        try {
          const entries = readdirSync(dirPath);
          for (const entry of entries) {
            try {
              size += this.calculateDirSize(join(dirPath, entry));
            } catch {
              // Ignore errors (permissions, symlinks, etc.)
            }
          }
        } catch {
          // Can't read directory
        }
      }
return size;
} catch {
return 0;
}
}
/**
* Check if directory exists
*/
  private async dirExists(path: string): Promise<boolean> {
    try {
      const { stat } = await import('fs/promises');
      const stats = await stat(path);
      return stats.isDirectory();
    } catch {
      return false;
    }
  }
/**
* Format bytes to human-readable string
*/
private formatBytes(bytes: number): string {
if (bytes === 0) return '0 B';
const k = 1024;
const sizes = ['B', 'KB', 'MB', 'GB', 'TB'];
const i = Math.floor(Math.log(bytes) / Math.log(k));
return Math.round(bytes / Math.pow(k, i) * 100) / 100 + ' ' + sizes[i];
}
}

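The quota-exceeded messages rely on `formatBytes`; here is the same rounding logic as a standalone function, showing how the default 10 GB quota (`10737418240` bytes) renders:

```typescript
// Standalone copy of the byte-formatting logic used in quota messages:
// pick the largest 1024-based unit, round to two decimals.
function formatBytes(bytes: number): string {
  if (bytes === 0) return '0 B';
  const k = 1024;
  const sizes = ['B', 'KB', 'MB', 'GB', 'TB'];
  const i = Math.floor(Math.log(bytes) / Math.log(k));
  return Math.round((bytes / Math.pow(k, i)) * 100) / 100 + ' ' + sizes[i];
}

formatBytes(1536);        // '1.5 KB'
formatBytes(10737418240); // '10 GB' (the default MAX_DISK_QUOTA_PER_USER)
```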
168
src/routes/api/git/[...path]/+server.ts

@ -9,7 +9,7 @@ import { RepoManager } from '$lib/services/git/repo-manager.js';
import { nip19 } from 'nostr-tools'; import { nip19 } from 'nostr-tools';
import { spawn, execSync } from 'child_process'; import { spawn, execSync } from 'child_process';
import { existsSync } from 'fs'; import { existsSync } from 'fs';
import { join } from 'path'; import { join, resolve } from 'path';
import { DEFAULT_NOSTR_RELAYS } from '$lib/config.js'; import { DEFAULT_NOSTR_RELAYS } from '$lib/config.js';
import { NostrClient } from '$lib/services/nostr/nostr-client.js'; import { NostrClient } from '$lib/services/nostr/nostr-client.js';
import { KIND } from '$lib/types/nostr.js'; import { KIND } from '$lib/types/nostr.js';
@ -17,6 +17,7 @@ import type { NostrEvent } from '$lib/types/nostr.js';
import { verifyNIP98Auth } from '$lib/services/nostr/nip98-auth.js'; import { verifyNIP98Auth } from '$lib/services/nostr/nip98-auth.js';
import { OwnershipTransferService } from '$lib/services/nostr/ownership-transfer-service.js'; import { OwnershipTransferService } from '$lib/services/nostr/ownership-transfer-service.js';
import { MaintainerService } from '$lib/services/nostr/maintainer-service.js'; import { MaintainerService } from '$lib/services/nostr/maintainer-service.js';
import { auditLogger } from '$lib/services/security/audit-logger.js';
const repoRoot = process.env.GIT_REPO_ROOT || '/repos'; const repoRoot = process.env.GIT_REPO_ROOT || '/repos';
const repoManager = new RepoManager(repoRoot); const repoManager = new RepoManager(repoRoot);
@ -131,8 +132,14 @@ export const GET: RequestHandler = async ({ params, url, request }) => {
return error(400, 'Invalid npub format'); return error(400, 'Invalid npub format');
} }
// Get repository path // Get repository path with security validation
const repoPath = join(repoRoot, npub, `${repoName}.git`); const repoPath = join(repoRoot, npub, `${repoName}.git`);
// Security: Ensure the resolved path is within repoRoot to prevent path traversal
const resolvedPath = resolve(repoPath);
const resolvedRoot = resolve(repoRoot);
if (!resolvedPath.startsWith(resolvedRoot + '/') && resolvedPath !== resolvedRoot) {
return error(403, 'Invalid repository path');
}
if (!repoManager.repoExists(repoPath)) { if (!repoManager.repoExists(repoPath)) {
return error(404, 'Repository not found'); return error(404, 'Repository not found');
} }
@ -179,6 +186,15 @@ export const GET: RequestHandler = async ({ params, url, request }) => {
// Verify user can view the repo // Verify user can view the repo
const canView = await maintainerService.canView(authResult.pubkey || null, originalOwnerPubkey, repoName); const canView = await maintainerService.canView(authResult.pubkey || null, originalOwnerPubkey, repoName);
if (!canView) { if (!canView) {
const clientIp = request.headers.get('x-forwarded-for') || request.headers.get('x-real-ip') || 'unknown';
auditLogger.logRepoAccess(
authResult.pubkey || null,
clientIp,
'clone',
`${npub}/${repoName}`,
'denied',
'Insufficient permissions'
);
return error(403, 'You do not have permission to access this private repository.'); return error(403, 'You do not have permission to access this private repository.');
} }
} }
@ -190,14 +206,18 @@ export const GET: RequestHandler = async ({ params, url, request }) => {
} }
// Build PATH_INFO // Build PATH_INFO
// For info/refs, git-http-backend expects: /{npub}/{repo-name}.git/info/refs // Security: Since we're setting GIT_PROJECT_ROOT to the specific repo path,
// For other operations: /{npub}/{repo-name}.git/{git-path} // PATH_INFO should be relative to that repo (just the git operation path)
const pathInfo = gitPath ? `/${npub}/${repoName}.git/${gitPath}` : `/${npub}/${repoName}.git/info/refs`; // For info/refs: /info/refs
// For other operations: /{git-path}
const pathInfo = gitPath ? `/${gitPath}` : `/info/refs`;
// Set up environment variables for git-http-backend // Set up environment variables for git-http-backend
// Security: Use the specific repository path, not the entire repoRoot
// This limits git-http-backend's view to only this repository
const envVars = { const envVars = {
...process.env, ...process.env,
GIT_PROJECT_ROOT: repoRoot, GIT_PROJECT_ROOT: resolve(repoPath), // Use specific repo path, not repoRoot
GIT_HTTP_EXPORT_ALL: '1', GIT_HTTP_EXPORT_ALL: '1',
REQUEST_METHOD: request.method, REQUEST_METHOD: request.method,
PATH_INFO: pathInfo, PATH_INFO: pathInfo,
@ -207,13 +227,35 @@ export const GET: RequestHandler = async ({ params, url, request }) => {
HTTP_USER_AGENT: request.headers.get('User-Agent') || '', HTTP_USER_AGENT: request.headers.get('User-Agent') || '',
}; };
// Execute git-http-backend // Execute git-http-backend with timeout and security hardening
const clientIp = request.headers.get('x-forwarded-for') || request.headers.get('x-real-ip') || 'unknown';
const operation = service === 'git-upload-pack' || gitPath === 'git-upload-pack' ? 'fetch' : 'clone';
return new Promise((resolve) => { return new Promise((resolve) => {
// Security: Set timeout for git operations (5 minutes max)
const timeoutMs = 5 * 60 * 1000;
let timeoutId: NodeJS.Timeout;
const gitProcess = spawn(gitHttpBackend, [], { const gitProcess = spawn(gitHttpBackend, [], {
env: envVars, env: envVars,
stdio: ['pipe', 'pipe', 'pipe'] stdio: ['pipe', 'pipe', 'pipe'],
// Security: Don't inherit parent's environment fully
shell: false
}); });
timeoutId = setTimeout(() => {
gitProcess.kill('SIGTERM');
auditLogger.logRepoAccess(
originalOwnerPubkey,
clientIp,
operation,
`${npub}/${repoName}`,
'failure',
'Operation timeout'
);
resolve(error(504, 'Git operation timeout'));
}, timeoutMs);
const chunks: Buffer[] = []; const chunks: Buffer[] = [];
let errorOutput = ''; let errorOutput = '';
@ -226,6 +268,30 @@ export const GET: RequestHandler = async ({ params, url, request }) => {
}); });
gitProcess.on('close', (code) => { gitProcess.on('close', (code) => {
clearTimeout(timeoutId);
// Log audit entry after operation completes
if (code === 0) {
// Success: operation completed successfully
auditLogger.logRepoAccess(
originalOwnerPubkey,
clientIp,
operation,
`${npub}/${repoName}`,
'success'
);
} else {
// Failure: operation failed
auditLogger.logRepoAccess(
originalOwnerPubkey,
clientIp,
operation,
`${npub}/${repoName}`,
'failure',
errorOutput || 'Unknown error'
);
}
if (code !== 0 && chunks.length === 0) { if (code !== 0 && chunks.length === 0) {
resolve(error(500, `git-http-backend error: ${errorOutput || 'Unknown error'}`)); resolve(error(500, `git-http-backend error: ${errorOutput || 'Unknown error'}`));
return; return;
@ -253,6 +319,16 @@ export const GET: RequestHandler = async ({ params, url, request }) => {
}); });
gitProcess.on('error', (err) => { gitProcess.on('error', (err) => {
clearTimeout(timeoutId);
// Log audit entry for process error
auditLogger.logRepoAccess(
originalOwnerPubkey,
clientIp,
operation,
`${npub}/${repoName}`,
'failure',
`Process error: ${err.message}`
);
resolve(error(500, `Failed to execute git-http-backend: ${err.message}`)); resolve(error(500, `Failed to execute git-http-backend: ${err.message}`));
}); });
}); });
@ -281,8 +357,14 @@ export const POST: RequestHandler = async ({ params, url, request }) => {
return error(400, 'Invalid npub format'); return error(400, 'Invalid npub format');
} }
// Get repository path // Get repository path with security validation
const repoPath = join(repoRoot, npub, `${repoName}.git`); const repoPath = join(repoRoot, npub, `${repoName}.git`);
// Security: Ensure the resolved path is within repoRoot to prevent path traversal
const resolvedPath = resolve(repoPath);
const resolvedRoot = resolve(repoRoot);
if (!resolvedPath.startsWith(resolvedRoot + '/') && resolvedPath !== resolvedRoot) {
return error(403, 'Invalid repository path');
}
if (!repoManager.repoExists(repoPath)) { if (!repoManager.repoExists(repoPath)) {
return error(404, 'Repository not found'); return error(404, 'Repository not found');
} }
@ -326,12 +408,16 @@ export const POST: RequestHandler = async ({ params, url, request }) => {
} }
// Build PATH_INFO // Build PATH_INFO
const pathInfo = gitPath ? `/${npub}/${repoName}.git/${gitPath}` : `/${npub}/${repoName}.git`; // Security: Since we're setting GIT_PROJECT_ROOT to the specific repo path,
// PATH_INFO should be relative to that repo (just the git operation path)
const pathInfo = gitPath ? `/${gitPath}` : `/`;
// Set up environment variables for git-http-backend // Set up environment variables for git-http-backend
// Security: Use the specific repository path, not the entire repoRoot
// This limits git-http-backend's view to only this repository
const envVars = { const envVars = {
...process.env, ...process.env,
GIT_PROJECT_ROOT: repoRoot, GIT_PROJECT_ROOT: resolve(repoPath), // Use specific repo path, not repoRoot
GIT_HTTP_EXPORT_ALL: '1', GIT_HTTP_EXPORT_ALL: '1',
REQUEST_METHOD: request.method, REQUEST_METHOD: request.method,
PATH_INFO: pathInfo, PATH_INFO: pathInfo,
@ -341,13 +427,35 @@ export const POST: RequestHandler = async ({ params, url, request }) => {
HTTP_USER_AGENT: request.headers.get('User-Agent') || '', HTTP_USER_AGENT: request.headers.get('User-Agent') || '',
}; };
// Execute git-http-backend // Execute git-http-backend with timeout and security hardening
const clientIp = request.headers.get('x-forwarded-for') || request.headers.get('x-real-ip') || 'unknown';
const operation = gitPath === 'git-receive-pack' || path.includes('git-receive-pack') ? 'push' : 'fetch';
return new Promise((resolve) => { return new Promise((resolve) => {
// Security: Set timeout for git operations (5 minutes max)
const timeoutMs = 5 * 60 * 1000;
let timeoutId: NodeJS.Timeout;
const gitProcess = spawn(gitHttpBackend, [], { const gitProcess = spawn(gitHttpBackend, [], {
env: envVars, env: envVars,
stdio: ['pipe', 'pipe', 'pipe'] stdio: ['pipe', 'pipe', 'pipe'],
// Security: Don't inherit parent's environment fully
shell: false
}); });
timeoutId = setTimeout(() => {
gitProcess.kill('SIGTERM');
auditLogger.logRepoAccess(
currentOwnerPubkey,
clientIp,
operation,
`${npub}/${repoName}`,
'failure',
'Operation timeout'
);
resolve(error(504, 'Git operation timeout'));
}, timeoutMs);
const chunks: Buffer[] = []; const chunks: Buffer[] = [];
let errorOutput = ''; let errorOutput = '';
@ -364,6 +472,30 @@ export const POST: RequestHandler = async ({ params, url, request }) => {
}); });
gitProcess.on('close', async (code) => { gitProcess.on('close', async (code) => {
clearTimeout(timeoutId);
// Log audit entry after operation completes
if (code === 0) {
// Success: operation completed successfully
auditLogger.logRepoAccess(
currentOwnerPubkey,
clientIp,
operation,
`${npub}/${repoName}`,
'success'
);
} else {
// Failure: operation failed
auditLogger.logRepoAccess(
currentOwnerPubkey,
clientIp,
operation,
`${npub}/${repoName}`,
'failure',
errorOutput || 'Git operation failed'
);
}
// If this was a successful push, sync to other remotes // If this was a successful push, sync to other remotes
if (code === 0 && (gitPath === 'git-receive-pack' || path.includes('git-receive-pack'))) { if (code === 0 && (gitPath === 'git-receive-pack' || path.includes('git-receive-pack'))) {
try { try {
@ -408,6 +540,16 @@ export const POST: RequestHandler = async ({ params, url, request }) => {
}); });
gitProcess.on('error', (err) => { gitProcess.on('error', (err) => {
clearTimeout(timeoutId);
// Log audit entry for process error
auditLogger.logRepoAccess(
currentOwnerPubkey,
clientIp,
operation,
`${npub}/${repoName}`,
'failure',
`Process error: ${err.message}`
);
resolve(error(500, `Failed to execute git-http-backend: ${err.message}`)); resolve(error(500, `Failed to execute git-http-backend: ${err.message}`));
}); });
}); });

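The timeout handling added to both handlers follows a common pattern: start a timer alongside `spawn`, send SIGTERM when it fires, and clear the timer on `close` or `error`. A standalone sketch of that pattern (the helper name `runWithTimeout` is hypothetical, and POSIX `sleep` stands in for `git-http-backend`):

```typescript
import { spawn } from 'node:child_process';

// Run a command, killing it with SIGTERM if it outlives the deadline.
function runWithTimeout(
  command: string,
  args: string[],
  timeoutMs: number
): Promise<{ timedOut: boolean; code: number | null }> {
  return new Promise((resolvePromise) => {
    const child = spawn(command, args, { stdio: 'ignore', shell: false });
    let timedOut = false;
    const timer = setTimeout(() => {
      timedOut = true;
      child.kill('SIGTERM');
    }, timeoutMs);
    child.on('close', (code) => {
      clearTimeout(timer); // always clear, or the process keeps the event loop alive
      resolvePromise({ timedOut, code });
    });
    child.on('error', () => {
      clearTimeout(timer);
      resolvePromise({ timedOut, code: null });
    });
  });
}

// runWithTimeout('sleep', ['10'], 100) resolves with timedOut === true
```

Clearing the timer in both the `close` and `error` handlers matters: a leaked timer would fire after the response was already sent and attempt to kill a finished process.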
127
src/routes/api/repos/[npub]/[repo]/file/+server.ts

@ -10,6 +10,7 @@ import { MaintainerService } from '$lib/services/nostr/maintainer-service.js';
import { DEFAULT_NOSTR_RELAYS } from '$lib/config.js'; import { DEFAULT_NOSTR_RELAYS } from '$lib/config.js';
import { nip19 } from 'nostr-tools'; import { nip19 } from 'nostr-tools';
import { verifyNIP98Auth } from '$lib/services/nostr/nip98-auth.js'; import { verifyNIP98Auth } from '$lib/services/nostr/nip98-auth.js';
import { auditLogger } from '$lib/services/security/audit-logger.js';
const repoRoot = process.env.GIT_REPO_ROOT || '/repos'; const repoRoot = process.env.GIT_REPO_ROOT || '/repos';
const fileManager = new FileManager(repoRoot); const fileManager = new FileManager(repoRoot);
@@ -45,11 +46,43 @@ export const GET: RequestHandler = async ({ params, url, request }: { params: {
    const canView = await maintainerService.canView(userPubkey || null, repoOwnerPubkey, repo);
    if (!canView) {
      const clientIp = request.headers.get('x-forwarded-for') || request.headers.get('x-real-ip') || 'unknown';
      auditLogger.logFileOperation(
        userPubkey || null,
        clientIp,
        'read',
        `${npub}/${repo}`,
        filePath,
        'denied',
        'Insufficient permissions'
      );
      return error(403, 'This repository is private. Only owners and maintainers can view it.');
    }

    const clientIp = request.headers.get('x-forwarded-for') || request.headers.get('x-real-ip') || 'unknown';
    try {
      const fileContent = await fileManager.getFileContent(npub, repo, filePath, ref);
      auditLogger.logFileOperation(
        userPubkey || null,
        clientIp,
        'read',
        `${npub}/${repo}`,
        filePath,
        'success'
      );
      return json(fileContent);
    } catch (err) {
      auditLogger.logFileOperation(
        userPubkey || null,
        clientIp,
        'read',
        `${npub}/${repo}`,
        filePath,
        'failure',
        err instanceof Error ? err.message : String(err)
      );
      throw err;
    }
  } catch (err) {
    console.error('Error reading file:', err);
    return error(500, err instanceof Error ? err.message : 'Failed to read file');
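Both branches above read the client IP with the same inline header chain. A small helper would make the repeated pattern explicit; note that `x-forwarded-for` may carry a comma-separated proxy chain, so only its first entry is the original client. This is a sketch, not part of the commit:

```typescript
// Hypothetical helper; the route handlers above inline this logic instead.
// Only trust these headers when the app runs behind a proxy you control,
// since clients can spoof them otherwise.
function getClientIp(headers: { get(name: string): string | null }): string {
  const forwarded = headers.get('x-forwarded-for');
  if (forwarded) {
    // "client, proxy1, proxy2" -> take the left-most (original client) hop
    const first = forwarded.split(',')[0].trim();
    if (first) return first;
  }
  return headers.get('x-real-ip') || 'unknown';
}
```

With `getClientIp(request.headers)` the audit calls would no longer repeat the fallback chain in every handler.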
@@ -138,34 +171,78 @@ export const POST: RequestHandler = async ({ params, url, request }: { params: {
    // Explicitly ignore nsecKey from client requests - it's a security risk
    // Server-side signing should use NOSTRGIT_SECRET_KEY environment variable instead

    const clientIp = request.headers.get('x-forwarded-for') || request.headers.get('x-real-ip') || 'unknown';

    if (action === 'delete') {
      try {
        await fileManager.deleteFile(
          npub,
          repo,
          path,
          commitMessage,
          authorName,
          authorEmail,
          branch || 'main',
          Object.keys(signingOptions).length > 0 ? signingOptions : undefined
        );
        auditLogger.logFileOperation(
          userPubkeyHex,
          clientIp,
          'delete',
          `${npub}/${repo}`,
          path,
          'success'
        );
        return json({ success: true, message: 'File deleted and committed' });
      } catch (err) {
        auditLogger.logFileOperation(
          userPubkeyHex,
          clientIp,
          'delete',
          `${npub}/${repo}`,
          path,
          'failure',
          err instanceof Error ? err.message : String(err)
        );
        throw err;
      }
    } else if (action === 'create' || content !== undefined) {
      if (content === undefined) {
        return error(400, 'Content is required for create/update operations');
      }
      try {
        await fileManager.writeFile(
          npub,
          repo,
          path,
          content,
          commitMessage,
          authorName,
          authorEmail,
          branch || 'main',
          Object.keys(signingOptions).length > 0 ? signingOptions : undefined
        );
        auditLogger.logFileOperation(
          userPubkeyHex,
          clientIp,
          action === 'create' ? 'create' : 'write',
          `${npub}/${repo}`,
          path,
          'success'
        );
        return json({ success: true, message: 'File saved and committed' });
      } catch (err) {
        auditLogger.logFileOperation(
          userPubkeyHex,
          clientIp,
          action === 'create' ? 'create' : 'write',
          `${npub}/${repo}`,
          path,
          'failure',
          err instanceof Error ? err.message : String(err)
        );
        throw err;
      }
    } else {
      return error(400, 'Invalid action or missing content');
    }

28
src/routes/api/repos/[npub]/[repo]/fork/+server.ts

@@ -16,11 +16,14 @@ import { exec } from 'child_process';
import { promisify } from 'util';
import { existsSync } from 'fs';
import { join } from 'path';
import { ResourceLimits } from '$lib/services/security/resource-limits.js';
import { auditLogger } from '$lib/services/security/audit-logger.js';

const execAsync = promisify(exec);
const repoRoot = process.env.GIT_REPO_ROOT || '/repos';
const repoManager = new RepoManager(repoRoot);
const nostrClient = new NostrClient(DEFAULT_NOSTR_RELAYS);
const resourceLimits = new ResourceLimits(repoRoot);

/**
 * Retry publishing an event with exponential backoff
@@ -93,6 +96,20 @@ export const POST: RequestHandler = async ({ params, request }) => {
    return error(400, 'Invalid npub format');
  }

  // Check resource limits before forking
  const resourceCheck = await resourceLimits.canCreateRepo(userNpub);
  if (!resourceCheck.allowed) {
    const clientIp = request.headers.get('x-forwarded-for') || request.headers.get('x-real-ip') || 'unknown';
    auditLogger.logRepoFork(
      userPubkeyHex,
      `${npub}/${repo}`,
      `${userNpub}/${forkRepoName}`,
      'denied',
      resourceCheck.reason
    );
    return error(403, resourceCheck.reason || 'Resource limit exceeded');
  }

  // Decode user pubkey if needed
  let userPubkeyHex = userPubkey;
  try {
@ -139,8 +156,19 @@ export const POST: RequestHandler = async ({ params, request }) => {
} }
// Clone the repository // Clone the repository
const clientIp = request.headers.get('x-forwarded-for') || request.headers.get('x-real-ip') || 'unknown';
auditLogger.logRepoFork(
userPubkeyHex,
`${npub}/${repo}`,
`${userNpub}/${forkRepoName}`,
'success'
);
await execAsync(`git clone --bare "${originalRepoPath}" "${forkRepoPath}"`); await execAsync(`git clone --bare "${originalRepoPath}" "${forkRepoPath}"`);
// Invalidate resource limit cache after creating repo
resourceLimits.invalidateCache(userNpub);
// Create fork announcement // Create fork announcement
const gitDomain = process.env.GIT_DOMAIN || 'localhost:6543'; const gitDomain = process.env.GIT_DOMAIN || 'localhost:6543';
const protocol = gitDomain.startsWith('localhost') ? 'http' : 'https'; const protocol = gitDomain.startsWith('localhost') ? 'http' : 'https';
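The `canCreateRepo` / `invalidateCache` pair above implies a cached per-user repository count that must be refreshed after each fork. A minimal sketch of that contract (class name, method names, and the limit value are assumptions; the real implementation in `src/lib/services/security/resource-limits.ts` would count directories under `GIT_REPO_ROOT`):

```typescript
// Hypothetical sketch of the quota check; not the actual ResourceLimits class.
class RepoQuotaSketch {
  private cachedCounts = new Map<string, number>();

  constructor(private maxReposPerUser: number) {}

  // Stand-in for the disk scan the real service would perform and cache.
  primeCache(npub: string, count: number): void {
    this.cachedCounts.set(npub, count);
  }

  // Called after a fork/create so the next check re-counts from disk.
  invalidateCache(npub: string): void {
    this.cachedCounts.delete(npub);
  }

  canCreateRepo(npub: string): { allowed: boolean; reason?: string } {
    const count = this.cachedCounts.get(npub) ?? 0;
    if (count >= this.maxReposPerUser) {
      return {
        allowed: false,
        reason: `Repository limit reached (${count}/${this.maxReposPerUser})`
      };
    }
    return { allowed: true };
  }
}
```

The cache avoids a filesystem walk on every request, and invalidating it right after `git clone` keeps the count honest for the next quota check.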
