33 changed files with 5502 additions and 1534 deletions
@ -0,0 +1,257 @@ |
|||||||
|
# Benchmark CPU Usage Optimization |
||||||
|
|
||||||
|
This document describes the CPU optimization settings for the ORLY benchmark suite, specifically tuned for systems with limited CPU resources (6-core/12-thread and lower). |
||||||
|
|
||||||
|
## Problem Statement |
||||||
|
|
||||||
|
The original benchmark implementation was designed for maximum throughput testing, which caused: |
||||||
|
- **CPU saturation**: 95-100% sustained CPU usage across all cores |
||||||
|
- **System instability**: Other services unable to run alongside benchmarks |
||||||
|
- **Thermal throttling**: Long benchmark runs causing CPU frequency reduction |
||||||
|
- **Unrealistic load**: Tight loops not representative of real-world relay usage |
||||||
|
|
||||||
|
## Solution: Aggressive Rate Limiting |
||||||
|
|
||||||
|
The benchmark now implements multi-layered CPU usage controls: |
||||||
|
|
||||||
|
### 1. Reduced Worker Concurrency |
||||||
|
|
||||||
|
**Default Worker Count**: `NumCPU() / 4` (minimum 2) |
||||||
|
|
||||||
|
For a 6-core/12-thread system: |
||||||
|
- Previous: 12 workers |
||||||
|
- **Current: 3 workers** |
||||||
|
|
||||||
|
This 4x reduction dramatically lowers: |
||||||
|
- Goroutine context switching overhead |
||||||
|
- Lock contention on shared resources |
||||||
|
- CPU cache thrashing |
||||||
|
|
||||||
|
### 2. Per-Operation Delays |
||||||
|
|
||||||
|
All benchmark operations now include mandatory delays to prevent CPU saturation: |
||||||
|
|
||||||
|
| Operation Type | Delay | Rationale | |
||||||
|
|---------------|-------|-----------| |
||||||
|
| Event writes | 500µs | Simulates network latency and client pacing | |
||||||
|
| Queries | 1ms | Queries are CPU-intensive, need more spacing | |
||||||
|
| Concurrent writes | 500µs | Balanced for mixed workloads | |
||||||
|
| Burst writes | 500µs | Prevents CPU spikes during bursts | |
||||||
|
|
||||||
|
### 3. Implementation Locations |
||||||
|
|
||||||
|
#### Main Benchmark (Badger backend) |
||||||
|
|
||||||
|
**Peak Throughput Test** ([main.go:471-473](main.go#L471-L473)): |
||||||
|
```go |
||||||
|
const eventDelay = 500 * time.Microsecond |
||||||
|
time.Sleep(eventDelay) // After each event save |
||||||
|
``` |
||||||
|
|
||||||
|
**Burst Pattern Test** ([main.go:599-600](main.go#L599-L600)): |
||||||
|
```go |
||||||
|
const eventDelay = 500 * time.Microsecond |
||||||
|
time.Sleep(eventDelay) // In worker loop |
||||||
|
``` |
||||||
|
|
||||||
|
**Query Test** ([main.go:899](main.go#L899)): |
||||||
|
```go |
||||||
|
time.Sleep(1 * time.Millisecond) // After each query |
||||||
|
``` |
||||||
|
|
||||||
|
**Concurrent Query/Store** ([main.go:900, 1068](main.go#L900)): |
||||||
|
```go |
||||||
|
time.Sleep(1 * time.Millisecond) // Readers |
||||||
|
time.Sleep(500 * time.Microsecond) // Writers |
||||||
|
``` |
||||||
|
|
||||||
|
#### BenchmarkAdapter (DGraph/Neo4j backends) |
||||||
|
|
||||||
|
**Peak Throughput** ([benchmark_adapter.go:58](benchmark_adapter.go#L58)): |
||||||
|
```go |
||||||
|
const eventDelay = 500 * time.Microsecond |
||||||
|
``` |
||||||
|
|
||||||
|
**Burst Pattern** ([benchmark_adapter.go:142](benchmark_adapter.go#L142)): |
||||||
|
```go |
||||||
|
const eventDelay = 500 * time.Microsecond |
||||||
|
``` |
||||||
|
|
||||||
|
## Expected CPU Usage |
||||||
|
|
||||||
|
### Before Optimization |
||||||
|
- **Workers**: 12 (on 12-thread system) |
||||||
|
- **Delays**: None or minimal |
||||||
|
- **CPU Usage**: 95-100% sustained |
||||||
|
- **System Impact**: Severe - other processes starved |
||||||
|
|
||||||
|
### After Optimization |
||||||
|
- **Workers**: 3 (on 12-thread system) |
||||||
|
- **Delays**: 500µs-1ms per operation |
||||||
|
- **Expected CPU Usage**: 40-60% average, 70% peak |
||||||
|
- **System Impact**: Minimal - plenty of headroom for other processes |
||||||
|
|
||||||
|
## Performance Impact |
||||||
|
|
||||||
|
### Throughput Reduction |
||||||
|
The aggressive rate limiting will reduce benchmark throughput: |
||||||
|
|
||||||
|
**Before** (unrealistic, CPU-bound): |
||||||
|
- ~50,000 events/second with 12 workers |
||||||
|
|
||||||
|
**After** (realistic, rate-limited): |
||||||
|
- ~5,000-10,000 events/second with 3 workers |
||||||
|
- More representative of real-world relay load |
||||||
|
- Network latency and client pacing simulated |
||||||
|
|
||||||
|
### Latency Accuracy |
||||||
|
**Improved**: With lower CPU contention, latency measurements are more accurate: |
||||||
|
- Less queueing delay in database operations |
||||||
|
- More consistent response times |
||||||
|
- Better P95/P99 metric reliability |
||||||
|
|
||||||
|
## Tuning Guide |
||||||
|
|
||||||
|
If you need to adjust CPU usage further: |
||||||
|
|
||||||
|
### Further Reduce CPU (< 40%) |
||||||
|
|
||||||
|
1. **Reduce workers**: |
||||||
|
```bash |
||||||
|
./benchmark --workers 2 # Half of default |
||||||
|
``` |
||||||
|
|
||||||
|
2. **Increase delays** in code: |
||||||
|
```go |
||||||
|
// Change from 500µs to 1ms for writes |
||||||
|
const eventDelay = 1 * time.Millisecond |
||||||
|
|
||||||
|
// Change from 1ms to 2ms for queries |
||||||
|
time.Sleep(2 * time.Millisecond) |
||||||
|
``` |
||||||
|
|
||||||
|
3. **Reduce event count**: |
||||||
|
```bash |
||||||
|
./benchmark --events 5000 # Shorter test runs |
||||||
|
``` |
||||||
|
|
||||||
|
### Increase CPU (for faster testing) |
||||||
|
|
||||||
|
1. **Increase workers**: |
||||||
|
```bash |
||||||
|
./benchmark --workers 6 # More concurrency |
||||||
|
``` |
||||||
|
|
||||||
|
2. **Decrease delays** in code: |
||||||
|
```go |
||||||
|
// Change from 500µs to 100µs |
||||||
|
const eventDelay = 100 * time.Microsecond |
||||||
|
|
||||||
|
// Change from 1ms to 500µs |
||||||
|
time.Sleep(500 * time.Microsecond) |
||||||
|
``` |
||||||
|
|
||||||
|
## Monitoring CPU Usage |
||||||
|
|
||||||
|
### Real-time Monitoring |
||||||
|
|
||||||
|
```bash |
||||||
|
# Terminal 1: Run benchmark |
||||||
|
cd cmd/benchmark |
||||||
|
./benchmark --workers 3 --events 10000 |
||||||
|
|
||||||
|
# Terminal 2: Monitor CPU |
||||||
|
watch -n 1 'ps aux | grep benchmark | grep -v grep | awk "{print \$3\" %CPU\"}"' |
||||||
|
``` |
||||||
|
|
||||||
|
### With htop (recommended) |
||||||
|
|
||||||
|
```bash |
||||||
|
# Install htop if needed |
||||||
|
sudo apt install htop |
||||||
|
|
||||||
|
# Run htop and filter for benchmark process |
||||||
|
htop -p $(pgrep -f benchmark) |
||||||
|
``` |
||||||
|
|
||||||
|
### System-wide CPU Usage |
||||||
|
|
||||||
|
```bash |
||||||
|
# Check overall system load |
||||||
|
mpstat 1 |
||||||
|
|
||||||
|
# Or with sar |
||||||
|
sar -u 1 |
||||||
|
``` |
||||||
|
|
||||||
|
## Docker Compose Considerations |
||||||
|
|
||||||
|
When running the full benchmark suite in Docker Compose: |
||||||
|
|
||||||
|
### Resource Limits |
||||||
|
|
||||||
|
The compose file should limit CPU allocation: |
||||||
|
|
||||||
|
```yaml |
||||||
|
services: |
||||||
|
benchmark-runner: |
||||||
|
deploy: |
||||||
|
resources: |
||||||
|
limits: |
||||||
|
cpus: '4' # Limit to 4 CPU cores |
||||||
|
``` |
||||||
|
|
||||||
|
### Sequential vs Parallel |
||||||
|
|
||||||
|
Current implementation runs benchmarks **sequentially** to avoid overwhelming the system. |
||||||
|
Each relay is tested one at a time, ensuring: |
||||||
|
- Consistent baseline for comparisons |
||||||
|
- No CPU competition between tests |
||||||
|
- Reliable latency measurements |
||||||
|
|
||||||
|
## Best Practices |
||||||
|
|
||||||
|
1. **Always monitor CPU during first run** to verify settings work for your system |
||||||
|
2. **Close other applications** during benchmarking for consistent results |
||||||
|
3. **Use consistent worker counts** across test runs for fair comparisons |
||||||
|
4. **Document your settings** if you modify delay constants |
||||||
|
5. **Test with small event counts first** (--events 1000) to verify CPU usage |
||||||
|
|
||||||
|
## Realistic Workload Simulation |
||||||
|
|
||||||
|
The delays aren't just for CPU management - they simulate real-world conditions: |
||||||
|
|
||||||
|
- **500µs write delay**: Typical network round-trip time for local clients |
||||||
|
- **1ms query delay**: Client thinking time between queries |
||||||
|
- **3 workers**: Simulates 3 concurrent users/clients |
||||||
|
- **Burst patterns**: Models social media posting patterns (busy hours vs quiet periods) |
||||||
|
|
||||||
|
This makes benchmark results more applicable to production relay deployment planning. |
||||||
|
|
||||||
|
## System Requirements |
||||||
|
|
||||||
|
### Minimum |
||||||
|
- 4 CPU cores (2 physical cores with hyperthreading) |
||||||
|
- 8GB RAM |
||||||
|
- SSD storage for database |
||||||
|
|
||||||
|
### Recommended |
||||||
|
- 6+ CPU cores |
||||||
|
- 16GB RAM |
||||||
|
- NVMe SSD |
||||||
|
|
||||||
|
### For Full Suite (Docker Compose) |
||||||
|
- 8+ CPU cores (allows multiple relays + benchmark runner) |
||||||
|
- 32GB RAM (Neo4j, DGraph are memory-hungry) |
||||||
|
- Fast SSD with 100GB+ free space |
||||||
|
|
||||||
|
## Conclusion |
||||||
|
|
||||||
|
These aggressive CPU optimizations ensure the benchmark suite: |
||||||
|
- ✅ Runs reliably on modest hardware |
||||||
|
- ✅ Doesn't interfere with other system processes |
||||||
|
- ✅ Produces realistic, production-relevant metrics |
||||||
|
- ✅ Completes without thermal throttling |
||||||
|
- ✅ Allows fair comparison across different relay implementations |
||||||
|
|
||||||
|
The trade-off is longer test duration, but the results are far more valuable for actual relay deployment planning. |
||||||
@ -0,0 +1,37 @@ |
|||||||
|
version: "3.9" |
||||||
|
|
||||||
|
services: |
||||||
|
neo4j: |
||||||
|
image: neo4j:5.15-community |
||||||
|
container_name: orly-benchmark-neo4j |
||||||
|
ports: |
||||||
|
- "7474:7474" # HTTP |
||||||
|
- "7687:7687" # Bolt |
||||||
|
environment: |
||||||
|
- NEO4J_AUTH=neo4j/benchmark123 |
||||||
|
- NEO4J_server_memory_heap_initial__size=2G |
||||||
|
- NEO4J_server_memory_heap_max__size=4G |
||||||
|
- NEO4J_server_memory_pagecache_size=2G |
||||||
|
- NEO4J_dbms_security_procedures_unrestricted=apoc.* |
||||||
|
- NEO4J_dbms_security_procedures_allowlist=apoc.* |
||||||
|
- NEO4JLABS_PLUGINS=["apoc"] |
||||||
|
volumes: |
||||||
|
- neo4j-data:/data |
||||||
|
- neo4j-logs:/logs |
||||||
|
networks: |
||||||
|
- orly-benchmark |
||||||
|
healthcheck: |
||||||
|
test: ["CMD-SHELL", "cypher-shell -u neo4j -p benchmark123 'RETURN 1;' || exit 1"] |
||||||
|
interval: 10s |
||||||
|
timeout: 5s |
||||||
|
retries: 10 |
||||||
|
start_period: 40s |
||||||
|
|
||||||
|
networks: |
||||||
|
orly-benchmark: |
||||||
|
name: orly-benchmark-network |
||||||
|
driver: bridge |
||||||
|
|
||||||
|
volumes: |
||||||
|
neo4j-data: |
||||||
|
neo4j-logs: |
||||||
@ -0,0 +1,135 @@ |
|||||||
|
package main |
||||||
|
|
||||||
|
import ( |
||||||
|
"context" |
||||||
|
"fmt" |
||||||
|
"log" |
||||||
|
"os" |
||||||
|
"time" |
||||||
|
|
||||||
|
"next.orly.dev/pkg/database" |
||||||
|
_ "next.orly.dev/pkg/neo4j" // Import to register neo4j factory
|
||||||
|
) |
||||||
|
|
||||||
|
// Neo4jBenchmark wraps a Benchmark with Neo4j-specific setup
|
||||||
|
type Neo4jBenchmark struct { |
||||||
|
config *BenchmarkConfig |
||||||
|
docker *Neo4jDocker |
||||||
|
database database.Database |
||||||
|
bench *BenchmarkAdapter |
||||||
|
} |
||||||
|
|
||||||
|
// NewNeo4jBenchmark creates a new Neo4j benchmark instance
|
||||||
|
func NewNeo4jBenchmark(config *BenchmarkConfig) (*Neo4jBenchmark, error) { |
||||||
|
// Create Docker manager
|
||||||
|
docker, err := NewNeo4jDocker() |
||||||
|
if err != nil { |
||||||
|
return nil, fmt.Errorf("failed to create Neo4j docker manager: %w", err) |
||||||
|
} |
||||||
|
|
||||||
|
// Start Neo4j container
|
||||||
|
if err := docker.Start(); err != nil { |
||||||
|
return nil, fmt.Errorf("failed to start Neo4j: %w", err) |
||||||
|
} |
||||||
|
|
||||||
|
// Set environment variables for Neo4j connection
|
||||||
|
os.Setenv("ORLY_NEO4J_URI", "bolt://localhost:7687") |
||||||
|
os.Setenv("ORLY_NEO4J_USER", "neo4j") |
||||||
|
os.Setenv("ORLY_NEO4J_PASSWORD", "benchmark123") |
||||||
|
|
||||||
|
// Create database instance using Neo4j backend
|
||||||
|
ctx := context.Background() |
||||||
|
cancel := func() {} |
||||||
|
db, err := database.NewDatabase(ctx, cancel, "neo4j", config.DataDir, "warn") |
||||||
|
if err != nil { |
||||||
|
docker.Stop() |
||||||
|
return nil, fmt.Errorf("failed to create Neo4j database: %w", err) |
||||||
|
} |
||||||
|
|
||||||
|
// Wait for database to be ready
|
||||||
|
fmt.Println("Waiting for Neo4j database to be ready...") |
||||||
|
select { |
||||||
|
case <-db.Ready(): |
||||||
|
fmt.Println("Neo4j database is ready") |
||||||
|
case <-time.After(30 * time.Second): |
||||||
|
db.Close() |
||||||
|
docker.Stop() |
||||||
|
return nil, fmt.Errorf("Neo4j database failed to become ready") |
||||||
|
} |
||||||
|
|
||||||
|
// Create adapter to use Database interface with Benchmark
|
||||||
|
adapter := NewBenchmarkAdapter(config, db) |
||||||
|
|
||||||
|
neo4jBench := &Neo4jBenchmark{ |
||||||
|
config: config, |
||||||
|
docker: docker, |
||||||
|
database: db, |
||||||
|
bench: adapter, |
||||||
|
} |
||||||
|
|
||||||
|
return neo4jBench, nil |
||||||
|
} |
||||||
|
|
||||||
|
// Close closes the Neo4j benchmark and stops Docker container
|
||||||
|
func (ngb *Neo4jBenchmark) Close() { |
||||||
|
fmt.Println("Closing Neo4j benchmark...") |
||||||
|
|
||||||
|
if ngb.database != nil { |
||||||
|
ngb.database.Close() |
||||||
|
} |
||||||
|
|
||||||
|
if ngb.docker != nil { |
||||||
|
if err := ngb.docker.Stop(); err != nil { |
||||||
|
log.Printf("Error stopping Neo4j Docker: %v", err) |
||||||
|
} |
||||||
|
} |
||||||
|
} |
||||||
|
|
||||||
|
// RunSuite runs the benchmark suite on Neo4j
|
||||||
|
func (ngb *Neo4jBenchmark) RunSuite() { |
||||||
|
fmt.Println("\n╔════════════════════════════════════════════════════════╗") |
||||||
|
fmt.Println("║ NEO4J BACKEND BENCHMARK SUITE ║") |
||||||
|
fmt.Println("╚════════════════════════════════════════════════════════╝") |
||||||
|
|
||||||
|
// Run benchmark tests
|
||||||
|
fmt.Printf("\n=== Starting Neo4j benchmark ===\n") |
||||||
|
|
||||||
|
fmt.Printf("RunPeakThroughputTest (Neo4j)..\n") |
||||||
|
ngb.bench.RunPeakThroughputTest() |
||||||
|
fmt.Println("Wiping database between tests...") |
||||||
|
ngb.database.Wipe() |
||||||
|
time.Sleep(10 * time.Second) |
||||||
|
|
||||||
|
fmt.Printf("RunBurstPatternTest (Neo4j)..\n") |
||||||
|
ngb.bench.RunBurstPatternTest() |
||||||
|
fmt.Println("Wiping database between tests...") |
||||||
|
ngb.database.Wipe() |
||||||
|
time.Sleep(10 * time.Second) |
||||||
|
|
||||||
|
fmt.Printf("RunMixedReadWriteTest (Neo4j)..\n") |
||||||
|
ngb.bench.RunMixedReadWriteTest() |
||||||
|
fmt.Println("Wiping database between tests...") |
||||||
|
ngb.database.Wipe() |
||||||
|
time.Sleep(10 * time.Second) |
||||||
|
|
||||||
|
fmt.Printf("RunQueryTest (Neo4j)..\n") |
||||||
|
ngb.bench.RunQueryTest() |
||||||
|
fmt.Println("Wiping database between tests...") |
||||||
|
ngb.database.Wipe() |
||||||
|
time.Sleep(10 * time.Second) |
||||||
|
|
||||||
|
fmt.Printf("RunConcurrentQueryStoreTest (Neo4j)..\n") |
||||||
|
ngb.bench.RunConcurrentQueryStoreTest() |
||||||
|
|
||||||
|
fmt.Printf("\n=== Neo4j benchmark completed ===\n\n") |
||||||
|
} |
||||||
|
|
||||||
|
// GenerateReport generates the benchmark report
|
||||||
|
func (ngb *Neo4jBenchmark) GenerateReport() { |
||||||
|
ngb.bench.GenerateReport() |
||||||
|
} |
||||||
|
|
||||||
|
// GenerateAsciidocReport generates asciidoc format report
|
||||||
|
func (ngb *Neo4jBenchmark) GenerateAsciidocReport() { |
||||||
|
ngb.bench.GenerateAsciidocReport() |
||||||
|
} |
||||||
@ -0,0 +1,147 @@ |
|||||||
|
package main |
||||||
|
|
||||||
|
import ( |
||||||
|
"context" |
||||||
|
"fmt" |
||||||
|
"os" |
||||||
|
"os/exec" |
||||||
|
"path/filepath" |
||||||
|
"time" |
||||||
|
) |
||||||
|
|
||||||
|
// Neo4jDocker manages a Neo4j instance via Docker Compose
|
||||||
|
type Neo4jDocker struct { |
||||||
|
composeFile string |
||||||
|
projectName string |
||||||
|
} |
||||||
|
|
||||||
|
// NewNeo4jDocker creates a new Neo4j Docker manager
|
||||||
|
func NewNeo4jDocker() (*Neo4jDocker, error) { |
||||||
|
// Look for docker-compose-neo4j.yml in current directory or cmd/benchmark
|
||||||
|
composeFile := "docker-compose-neo4j.yml" |
||||||
|
if _, err := os.Stat(composeFile); os.IsNotExist(err) { |
||||||
|
// Try in cmd/benchmark directory
|
||||||
|
composeFile = filepath.Join("cmd", "benchmark", "docker-compose-neo4j.yml") |
||||||
|
} |
||||||
|
|
||||||
|
return &Neo4jDocker{ |
||||||
|
composeFile: composeFile, |
||||||
|
projectName: "orly-benchmark-neo4j", |
||||||
|
}, nil |
||||||
|
} |
||||||
|
|
||||||
|
// Start starts the Neo4j Docker container
|
||||||
|
func (d *Neo4jDocker) Start() error { |
||||||
|
fmt.Println("Starting Neo4j Docker container...") |
||||||
|
|
||||||
|
// Pull image first
|
||||||
|
pullCmd := exec.Command("docker-compose", |
||||||
|
"-f", d.composeFile, |
||||||
|
"-p", d.projectName, |
||||||
|
"pull", |
||||||
|
) |
||||||
|
pullCmd.Stdout = os.Stdout |
||||||
|
pullCmd.Stderr = os.Stderr |
||||||
|
if err := pullCmd.Run(); err != nil { |
||||||
|
return fmt.Errorf("failed to pull Neo4j image: %w", err) |
||||||
|
} |
||||||
|
|
||||||
|
// Start containers
|
||||||
|
upCmd := exec.Command("docker-compose", |
||||||
|
"-f", d.composeFile, |
||||||
|
"-p", d.projectName, |
||||||
|
"up", "-d", |
||||||
|
) |
||||||
|
upCmd.Stdout = os.Stdout |
||||||
|
upCmd.Stderr = os.Stderr |
||||||
|
if err := upCmd.Run(); err != nil { |
||||||
|
return fmt.Errorf("failed to start Neo4j container: %w", err) |
||||||
|
} |
||||||
|
|
||||||
|
fmt.Println("Waiting for Neo4j to be healthy...") |
||||||
|
if err := d.waitForHealthy(); err != nil { |
||||||
|
return err |
||||||
|
} |
||||||
|
|
||||||
|
fmt.Println("Neo4j is ready!") |
||||||
|
return nil |
||||||
|
} |
||||||
|
|
||||||
|
// waitForHealthy waits for Neo4j to become healthy
|
||||||
|
func (d *Neo4jDocker) waitForHealthy() error { |
||||||
|
timeout := 120 * time.Second |
||||||
|
deadline := time.Now().Add(timeout) |
||||||
|
|
||||||
|
containerName := "orly-benchmark-neo4j" |
||||||
|
|
||||||
|
for time.Now().Before(deadline) { |
||||||
|
// Check container health status
|
||||||
|
checkCmd := exec.Command("docker", "inspect", |
||||||
|
"--format={{.State.Health.Status}}", |
||||||
|
containerName, |
||||||
|
) |
||||||
|
output, err := checkCmd.Output() |
||||||
|
if err == nil && string(output) == "healthy\n" { |
||||||
|
return nil |
||||||
|
} |
||||||
|
|
||||||
|
time.Sleep(2 * time.Second) |
||||||
|
} |
||||||
|
|
||||||
|
return fmt.Errorf("Neo4j failed to become healthy within %v", timeout) |
||||||
|
} |
||||||
|
|
||||||
|
// Stop stops and removes the Neo4j Docker container
|
||||||
|
func (d *Neo4jDocker) Stop() error { |
||||||
|
ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second) |
||||||
|
defer cancel() |
||||||
|
|
||||||
|
// Get logs before stopping (useful for debugging)
|
||||||
|
logsCmd := exec.CommandContext(ctx, "docker-compose", |
||||||
|
"-f", d.composeFile, |
||||||
|
"-p", d.projectName, |
||||||
|
"logs", "--tail=50", |
||||||
|
) |
||||||
|
logsCmd.Stdout = os.Stdout |
||||||
|
logsCmd.Stderr = os.Stderr |
||||||
|
_ = logsCmd.Run() // Ignore errors
|
||||||
|
|
||||||
|
fmt.Println("Stopping Neo4j Docker container...") |
||||||
|
|
||||||
|
// Stop and remove containers
|
||||||
|
downCmd := exec.Command("docker-compose", |
||||||
|
"-f", d.composeFile, |
||||||
|
"-p", d.projectName, |
||||||
|
"down", "-v", |
||||||
|
) |
||||||
|
downCmd.Stdout = os.Stdout |
||||||
|
downCmd.Stderr = os.Stderr |
||||||
|
if err := downCmd.Run(); err != nil { |
||||||
|
return fmt.Errorf("failed to stop Neo4j container: %w", err) |
||||||
|
} |
||||||
|
|
||||||
|
return nil |
||||||
|
} |
||||||
|
|
||||||
|
// GetBoltEndpoint returns the Neo4j Bolt endpoint
|
||||||
|
func (d *Neo4jDocker) GetBoltEndpoint() string { |
||||||
|
return "bolt://localhost:7687" |
||||||
|
} |
||||||
|
|
||||||
|
// IsRunning returns whether Neo4j is running
|
||||||
|
func (d *Neo4jDocker) IsRunning() bool { |
||||||
|
checkCmd := exec.Command("docker", "ps", "--filter", "name=orly-benchmark-neo4j", "--format", "{{.Names}}") |
||||||
|
output, err := checkCmd.Output() |
||||||
|
return err == nil && len(output) > 0 |
||||||
|
} |
||||||
|
|
||||||
|
// Logs returns the logs from Neo4j container
|
||||||
|
func (d *Neo4jDocker) Logs(tail int) (string, error) { |
||||||
|
logsCmd := exec.Command("docker-compose", |
||||||
|
"-f", d.composeFile, |
||||||
|
"-p", d.projectName, |
||||||
|
"logs", "--tail", fmt.Sprintf("%d", tail), |
||||||
|
) |
||||||
|
output, err := logsCmd.CombinedOutput() |
||||||
|
return string(output), err |
||||||
|
} |
||||||
@ -0,0 +1,176 @@ |
|||||||
|
================================================================ |
||||||
|
NOSTR RELAY BENCHMARK AGGREGATE REPORT |
||||||
|
================================================================ |
||||||
|
Generated: 2025-11-19T06:13:40+00:00 |
||||||
|
Benchmark Configuration: |
||||||
|
Events per test: 50000 |
||||||
|
Concurrent workers: 24 |
||||||
|
Test duration: 60s |
||||||
|
|
||||||
|
Relays tested: 8 |
||||||
|
|
||||||
|
================================================================ |
||||||
|
SUMMARY BY RELAY |
||||||
|
================================================================ |
||||||
|
|
||||||
|
Relay: next-orly-badger |
||||||
|
---------------------------------------- |
||||||
|
Status: COMPLETED |
||||||
|
Events/sec: 2911.52 |
||||||
|
Events/sec: 0.00 |
||||||
|
Events/sec: 2911.52 |
||||||
|
Success Rate: 23.2% |
||||||
|
Success Rate: 0.0% |
||||||
|
Success Rate: 50.0% |
||||||
|
Avg Latency: 3.938925ms |
||||||
|
Bottom 10% Avg Latency: 1.115318ms |
||||||
|
Avg Latency: 0s |
||||||
|
P95 Latency: 4.624387ms |
||||||
|
P95 Latency: 0s |
||||||
|
P95 Latency: 112.915µs |
||||||
|
|
||||||
|
Relay: next-orly-dgraph |
||||||
|
---------------------------------------- |
||||||
|
Status: COMPLETED |
||||||
|
Events/sec: 2661.66 |
||||||
|
Events/sec: 0.00 |
||||||
|
Events/sec: 2661.66 |
||||||
|
Success Rate: 23.2% |
||||||
|
Success Rate: 0.0% |
||||||
|
Success Rate: 50.0% |
||||||
|
Avg Latency: 4.795769ms |
||||||
|
Bottom 10% Avg Latency: 1.212562ms |
||||||
|
Avg Latency: 0s |
||||||
|
P95 Latency: 6.029522ms |
||||||
|
P95 Latency: 0s |
||||||
|
P95 Latency: 115.35µs |
||||||
|
|
||||||
|
Relay: next-orly-neo4j |
||||||
|
---------------------------------------- |
||||||
|
Status: COMPLETED |
||||||
|
Events/sec: 2827.54 |
||||||
|
Events/sec: 0.00 |
||||||
|
Events/sec: 2827.54 |
||||||
|
Success Rate: 23.2% |
||||||
|
Success Rate: 0.0% |
||||||
|
Success Rate: 50.0% |
||||||
|
Avg Latency: 4.203722ms |
||||||
|
Bottom 10% Avg Latency: 1.124184ms |
||||||
|
Avg Latency: 0s |
||||||
|
P95 Latency: 4.568189ms |
||||||
|
P95 Latency: 0s |
||||||
|
P95 Latency: 112.755µs |
||||||
|
|
||||||
|
Relay: khatru-sqlite |
||||||
|
---------------------------------------- |
||||||
|
Status: COMPLETED |
||||||
|
Events/sec: 2840.91 |
||||||
|
Events/sec: 0.00 |
||||||
|
Events/sec: 2840.91 |
||||||
|
Success Rate: 23.2% |
||||||
|
Success Rate: 0.0% |
||||||
|
Success Rate: 50.0% |
||||||
|
Avg Latency: 4.23095ms |
||||||
|
Bottom 10% Avg Latency: 1.142932ms |
||||||
|
Avg Latency: 0s |
||||||
|
P95 Latency: 4.703046ms |
||||||
|
P95 Latency: 0s |
||||||
|
P95 Latency: 113.897µs |
||||||
|
|
||||||
|
Relay: khatru-badger |
||||||
|
---------------------------------------- |
||||||
|
Status: COMPLETED |
||||||
|
Events/sec: 2885.30 |
||||||
|
Events/sec: 0.00 |
||||||
|
Events/sec: 2885.30 |
||||||
|
Success Rate: 23.2% |
||||||
|
Success Rate: 0.0% |
||||||
|
Success Rate: 50.0% |
||||||
|
Avg Latency: 3.985846ms |
||||||
|
Bottom 10% Avg Latency: 1.120349ms |
||||||
|
Avg Latency: 0s |
||||||
|
P95 Latency: 4.23797ms |
||||||
|
P95 Latency: 0s |
||||||
|
P95 Latency: 114.277µs |
||||||
|
|
||||||
|
Relay: relayer-basic |
||||||
|
---------------------------------------- |
||||||
|
Status: COMPLETED |
||||||
|
Events/sec: 2707.76 |
||||||
|
Events/sec: 0.00 |
||||||
|
Events/sec: 2707.76 |
||||||
|
Success Rate: 23.2% |
||||||
|
Success Rate: 0.0% |
||||||
|
Success Rate: 50.0% |
||||||
|
Avg Latency: 4.657987ms |
||||||
|
Bottom 10% Avg Latency: 1.266467ms |
||||||
|
Avg Latency: 0s |
||||||
|
P95 Latency: 5.603449ms |
||||||
|
P95 Latency: 0s |
||||||
|
P95 Latency: 112.123µs |
||||||
|
|
||||||
|
Relay: strfry |
||||||
|
---------------------------------------- |
||||||
|
Status: COMPLETED |
||||||
|
Events/sec: 2841.22 |
||||||
|
Events/sec: 0.00 |
||||||
|
Events/sec: 2841.22 |
||||||
|
Success Rate: 23.2% |
||||||
|
Success Rate: 0.0% |
||||||
|
Success Rate: 50.0% |
||||||
|
Avg Latency: 4.088506ms |
||||||
|
Bottom 10% Avg Latency: 1.135387ms |
||||||
|
Avg Latency: 0s |
||||||
|
P95 Latency: 4.517428ms |
||||||
|
P95 Latency: 0s |
||||||
|
P95 Latency: 113.396µs |
||||||
|
|
||||||
|
Relay: nostr-rs-relay |
||||||
|
---------------------------------------- |
||||||
|
Status: COMPLETED |
||||||
|
Events/sec: 2883.32 |
||||||
|
Events/sec: 0.00 |
||||||
|
Events/sec: 2883.32 |
||||||
|
Success Rate: 23.2% |
||||||
|
Success Rate: 0.0% |
||||||
|
Success Rate: 50.0% |
||||||
|
Avg Latency: 4.044321ms |
||||||
|
Bottom 10% Avg Latency: 1.103637ms |
||||||
|
Avg Latency: 0s |
||||||
|
P95 Latency: 4.602719ms |
||||||
|
P95 Latency: 0s |
||||||
|
P95 Latency: 114.679µs |
||||||
|
|
||||||
|
|
||||||
|
================================================================ |
||||||
|
DETAILED RESULTS |
||||||
|
================================================================ |
||||||
|
|
||||||
|
Individual relay reports are available in: |
||||||
|
- /reports/run_20251119_054648/khatru-badger_results.txt |
||||||
|
- /reports/run_20251119_054648/khatru-sqlite_results.txt |
||||||
|
- /reports/run_20251119_054648/next-orly-badger_results.txt |
||||||
|
- /reports/run_20251119_054648/next-orly-dgraph_results.txt |
||||||
|
- /reports/run_20251119_054648/next-orly-neo4j_results.txt |
||||||
|
- /reports/run_20251119_054648/nostr-rs-relay_results.txt |
||||||
|
- /reports/run_20251119_054648/relayer-basic_results.txt |
||||||
|
- /reports/run_20251119_054648/strfry_results.txt |
||||||
|
|
||||||
|
================================================================ |
||||||
|
BENCHMARK COMPARISON TABLE |
||||||
|
================================================================ |
||||||
|
|
||||||
|
Relay Status Peak Tput/s Avg Latency Success Rate |
||||||
|
---- ------ ----------- ----------- ------------ |
||||||
|
next-orly-badger OK 2911.52 3.938925ms 23.2% |
||||||
|
next-orly-dgraph OK 2661.66 4.795769ms 23.2% |
||||||
|
next-orly-neo4j OK 2827.54 4.203722ms 23.2% |
||||||
|
khatru-sqlite OK 2840.91 4.23095ms 23.2% |
||||||
|
khatru-badger OK 2885.30 3.985846ms 23.2% |
||||||
|
relayer-basic OK 2707.76 4.657987ms 23.2% |
||||||
|
strfry OK 2841.22 4.088506ms 23.2% |
||||||
|
nostr-rs-relay OK 2883.32 4.044321ms 23.2% |
||||||
|
|
||||||
|
================================================================ |
||||||
|
End of Report |
||||||
|
================================================================ |
||||||
@ -0,0 +1,505 @@ |
|||||||
|
package policy |
||||||
|
|
||||||
|
import ( |
||||||
|
"testing" |
||||||
|
|
||||||
|
"next.orly.dev/pkg/encoders/hex" |
||||||
|
"next.orly.dev/pkg/interfaces/signer/p8k" |
||||||
|
) |
||||||
|
|
||||||
|
// TestReadAllowLogic tests the correct semantics of ReadAllow:
|
||||||
|
// ReadAllow should control WHO can read events of a kind,
|
||||||
|
// not which event authors can be read.
|
||||||
|
func TestReadAllowLogic(t *testing.T) { |
||||||
|
// Set up: Create 3 different users
|
||||||
|
// - alice: will author an event
|
||||||
|
// - bob: will be allowed to read (in ReadAllow list)
|
||||||
|
// - charlie: will NOT be allowed to read (not in ReadAllow list)
|
||||||
|
|
||||||
|
aliceSigner, alicePubkey := generateTestKeypair(t) |
||||||
|
_, bobPubkey := generateTestKeypair(t) |
||||||
|
_, charliePubkey := generateTestKeypair(t) |
||||||
|
|
||||||
|
// Create an event authored by Alice (kind 30166)
|
||||||
|
aliceEvent := createTestEvent(t, aliceSigner, "server heartbeat", 30166) |
||||||
|
|
||||||
|
// Create policy: Only Bob can READ kind 30166 events
|
||||||
|
policy := &P{ |
||||||
|
DefaultPolicy: "allow", |
||||||
|
Rules: map[int]Rule{ |
||||||
|
30166: { |
||||||
|
Description: "Private server heartbeat events", |
||||||
|
ReadAllow: []string{hex.Enc(bobPubkey)}, // Only Bob can read
|
||||||
|
}, |
||||||
|
}, |
||||||
|
} |
||||||
|
|
||||||
|
// Test 1: Bob (who is in ReadAllow) should be able to READ Alice's event
|
||||||
|
t.Run("allowed_reader_can_read", func(t *testing.T) { |
||||||
|
allowed, err := policy.CheckPolicy("read", aliceEvent, bobPubkey, "127.0.0.1") |
||||||
|
if err != nil { |
||||||
|
t.Fatalf("Unexpected error: %v", err) |
||||||
|
} |
||||||
|
if !allowed { |
||||||
|
t.Error("Bob should be allowed to READ Alice's event (Bob is in ReadAllow list)") |
||||||
|
} |
||||||
|
}) |
||||||
|
|
||||||
|
// Test 2: Charlie (who is NOT in ReadAllow) should NOT be able to READ Alice's event
|
||||||
|
t.Run("disallowed_reader_cannot_read", func(t *testing.T) { |
||||||
|
allowed, err := policy.CheckPolicy("read", aliceEvent, charliePubkey, "127.0.0.1") |
||||||
|
if err != nil { |
||||||
|
t.Fatalf("Unexpected error: %v", err) |
||||||
|
} |
||||||
|
if allowed { |
||||||
|
t.Error("Charlie should NOT be allowed to READ Alice's event (Charlie is not in ReadAllow list)") |
||||||
|
} |
||||||
|
}) |
||||||
|
|
||||||
|
// Test 3: Alice (the author) should NOT be able to READ her own event if she's not in ReadAllow
|
||||||
|
t.Run("author_not_in_readallow_cannot_read", func(t *testing.T) { |
||||||
|
allowed, err := policy.CheckPolicy("read", aliceEvent, alicePubkey, "127.0.0.1") |
||||||
|
if err != nil { |
||||||
|
t.Fatalf("Unexpected error: %v", err) |
||||||
|
} |
||||||
|
if allowed { |
||||||
|
t.Error("Alice should NOT be allowed to READ her own event (Alice is not in ReadAllow list)") |
||||||
|
} |
||||||
|
}) |
||||||
|
|
||||||
|
// Test 4: Unauthenticated user should NOT be able to READ
|
||||||
|
t.Run("unauthenticated_cannot_read", func(t *testing.T) { |
||||||
|
allowed, err := policy.CheckPolicy("read", aliceEvent, nil, "127.0.0.1") |
||||||
|
if err != nil { |
||||||
|
t.Fatalf("Unexpected error: %v", err) |
||||||
|
} |
||||||
|
if allowed { |
||||||
|
t.Error("Unauthenticated user should NOT be allowed to READ (not in ReadAllow list)") |
||||||
|
} |
||||||
|
}) |
||||||
|
} |
||||||
|
|
||||||
|
// TestReadDenyLogic tests the correct semantics of ReadDeny:
|
||||||
|
// ReadDeny should control WHO cannot read events of a kind,
|
||||||
|
// not which event authors cannot be read.
|
||||||
|
func TestReadDenyLogic(t *testing.T) { |
||||||
|
// Set up: Create 3 different users
|
||||||
|
aliceSigner, alicePubkey := generateTestKeypair(t) |
||||||
|
_, bobPubkey := generateTestKeypair(t) |
||||||
|
_, charliePubkey := generateTestKeypair(t) |
||||||
|
|
||||||
|
// Create an event authored by Alice
|
||||||
|
aliceEvent := createTestEvent(t, aliceSigner, "test content", 1) |
||||||
|
|
||||||
|
// Create policy: Charlie cannot READ kind 1 events (but others can)
|
||||||
|
policy := &P{ |
||||||
|
DefaultPolicy: "allow", |
||||||
|
Rules: map[int]Rule{ |
||||||
|
1: { |
||||||
|
Description: "Test events", |
||||||
|
ReadDeny: []string{hex.Enc(charliePubkey)}, // Charlie cannot read
|
||||||
|
}, |
||||||
|
}, |
||||||
|
} |
||||||
|
|
||||||
|
// Test 1: Bob (who is NOT in ReadDeny) should be able to READ Alice's event
|
||||||
|
t.Run("non_denied_reader_can_read", func(t *testing.T) { |
||||||
|
allowed, err := policy.CheckPolicy("read", aliceEvent, bobPubkey, "127.0.0.1") |
||||||
|
if err != nil { |
||||||
|
t.Fatalf("Unexpected error: %v", err) |
||||||
|
} |
||||||
|
if !allowed { |
||||||
|
t.Error("Bob should be allowed to READ Alice's event (Bob is not in ReadDeny list)") |
||||||
|
} |
||||||
|
}) |
||||||
|
|
||||||
|
// Test 2: Charlie (who IS in ReadDeny) should NOT be able to READ Alice's event
|
||||||
|
t.Run("denied_reader_cannot_read", func(t *testing.T) { |
||||||
|
allowed, err := policy.CheckPolicy("read", aliceEvent, charliePubkey, "127.0.0.1") |
||||||
|
if err != nil { |
||||||
|
t.Fatalf("Unexpected error: %v", err) |
||||||
|
} |
||||||
|
if allowed { |
||||||
|
t.Error("Charlie should NOT be allowed to READ Alice's event (Charlie is in ReadDeny list)") |
||||||
|
} |
||||||
|
}) |
||||||
|
|
||||||
|
// Test 3: Alice (the author, not in ReadDeny) should be able to READ her own event
|
||||||
|
t.Run("author_not_denied_can_read", func(t *testing.T) { |
||||||
|
allowed, err := policy.CheckPolicy("read", aliceEvent, alicePubkey, "127.0.0.1") |
||||||
|
if err != nil { |
||||||
|
t.Fatalf("Unexpected error: %v", err) |
||||||
|
} |
||||||
|
if !allowed { |
||||||
|
t.Error("Alice should be allowed to READ her own event (Alice is not in ReadDeny list)") |
||||||
|
} |
||||||
|
}) |
||||||
|
} |
||||||
|
|
||||||
|
// TestSamplePolicyFromUser tests the exact policy configuration provided by the user
|
||||||
|
func TestSamplePolicyFromUser(t *testing.T) { |
||||||
|
policyJSON := []byte(`{ |
||||||
|
"kind": { |
||||||
|
"whitelist": [4678, 10306, 30520, 30919, 30166] |
||||||
|
}, |
||||||
|
"rules": { |
||||||
|
"4678": { |
||||||
|
"description": "Zenotp message events", |
||||||
|
"write_allow": [ |
||||||
|
"04eeb1ed409c0b9205e722f8bf1780f553b61876ef323aff16c9f80a9d8ee9f5", |
||||||
|
"e4101949fb0367c72f5105fc9bd810cde0e0e0f950da26c1f47a6af5f77ded31", |
||||||
|
"3f5fefcdc3fb41f3b299732acad7dc9c3649e8bde97d4f238380dde547b5e0e0" |
||||||
|
], |
||||||
|
"privileged": true |
||||||
|
}, |
||||||
|
"10306": { |
||||||
|
"description": "End user whitelist change requests", |
||||||
|
"read_allow": [ |
||||||
|
"04eeb1ed409c0b9205e722f8bf1780f553b61876ef323aff16c9f80a9d8ee9f5" |
||||||
|
], |
||||||
|
"privileged": true |
||||||
|
}, |
||||||
|
"30520": { |
||||||
|
"description": "End user whitelist events", |
||||||
|
"write_allow": [ |
||||||
|
"04eeb1ed409c0b9205e722f8bf1780f553b61876ef323aff16c9f80a9d8ee9f5" |
||||||
|
], |
||||||
|
"privileged": true |
||||||
|
}, |
||||||
|
"30919": { |
||||||
|
"description": "Customer indexing events", |
||||||
|
"write_allow": [ |
||||||
|
"04eeb1ed409c0b9205e722f8bf1780f553b61876ef323aff16c9f80a9d8ee9f5" |
||||||
|
], |
||||||
|
"privileged": true |
||||||
|
}, |
||||||
|
"30166": { |
||||||
|
"description": "Private server heartbeat events", |
||||||
|
"write_allow": [ |
||||||
|
"4d13154d82477a2d2e07a5c0d52def9035fdf379ae87cd6f0a5fb87801a4e5e4", |
||||||
|
"e400106ed10310ea28b039e81824265434bf86ece58722655c7a98f894406112" |
||||||
|
], |
||||||
|
"read_allow": [ |
||||||
|
"04eeb1ed409c0b9205e722f8bf1780f553b61876ef323aff16c9f80a9d8ee9f5", |
||||||
|
"4d13154d82477a2d2e07a5c0d52def9035fdf379ae87cd6f0a5fb87801a4e5e4", |
||||||
|
"e400106ed10310ea28b039e81824265434bf86ece58722655c7a98f894406112" |
||||||
|
] |
||||||
|
} |
||||||
|
} |
||||||
|
}`) |
||||||
|
|
||||||
|
policy, err := New(policyJSON) |
||||||
|
if err != nil { |
||||||
|
t.Fatalf("Failed to create policy: %v", err) |
||||||
|
} |
||||||
|
|
||||||
|
// Define the test users
|
||||||
|
adminPubkeyHex := "04eeb1ed409c0b9205e722f8bf1780f553b61876ef323aff16c9f80a9d8ee9f5" |
||||||
|
server1PubkeyHex := "4d13154d82477a2d2e07a5c0d52def9035fdf379ae87cd6f0a5fb87801a4e5e4" |
||||||
|
server2PubkeyHex := "e400106ed10310ea28b039e81824265434bf86ece58722655c7a98f894406112" |
||||||
|
|
||||||
|
adminPubkey, _ := hex.Dec(adminPubkeyHex) |
||||||
|
server1Pubkey, _ := hex.Dec(server1PubkeyHex) |
||||||
|
server2Pubkey, _ := hex.Dec(server2PubkeyHex) |
||||||
|
|
||||||
|
// Create a random user not in any allow list
|
||||||
|
randomSigner, randomPubkey := generateTestKeypair(t) |
||||||
|
|
||||||
|
// Test Kind 30166 (Private server heartbeat events)
|
||||||
|
t.Run("kind_30166_read_access", func(t *testing.T) { |
||||||
|
// We can't sign with the exact pubkey without the private key,
|
||||||
|
// so we'll create a generic event and manually set the pubkey for testing
|
||||||
|
heartbeatEvent := createTestEvent(t, randomSigner, "heartbeat data", 30166) |
||||||
|
heartbeatEvent.Pubkey = server1Pubkey // Set to server1's pubkey
|
||||||
|
|
||||||
|
// Test 1: Admin (in read_allow) should be able to READ the heartbeat
|
||||||
|
allowed, err := policy.CheckPolicy("read", heartbeatEvent, adminPubkey, "127.0.0.1") |
||||||
|
if err != nil { |
||||||
|
t.Fatalf("Unexpected error: %v", err) |
||||||
|
} |
||||||
|
if !allowed { |
||||||
|
t.Error("Admin should be allowed to READ kind 30166 events (admin is in read_allow list)") |
||||||
|
} |
||||||
|
|
||||||
|
// Test 2: Server1 (in read_allow) should be able to READ the heartbeat
|
||||||
|
allowed, err = policy.CheckPolicy("read", heartbeatEvent, server1Pubkey, "127.0.0.1") |
||||||
|
if err != nil { |
||||||
|
t.Fatalf("Unexpected error: %v", err) |
||||||
|
} |
||||||
|
if !allowed { |
||||||
|
t.Error("Server1 should be allowed to READ kind 30166 events (server1 is in read_allow list)") |
||||||
|
} |
||||||
|
|
||||||
|
// Test 3: Server2 (in read_allow) should be able to READ the heartbeat
|
||||||
|
allowed, err = policy.CheckPolicy("read", heartbeatEvent, server2Pubkey, "127.0.0.1") |
||||||
|
if err != nil { |
||||||
|
t.Fatalf("Unexpected error: %v", err) |
||||||
|
} |
||||||
|
if !allowed { |
||||||
|
t.Error("Server2 should be allowed to READ kind 30166 events (server2 is in read_allow list)") |
||||||
|
} |
||||||
|
|
||||||
|
// Test 4: Random user (NOT in read_allow) should NOT be able to READ the heartbeat
|
||||||
|
allowed, err = policy.CheckPolicy("read", heartbeatEvent, randomPubkey, "127.0.0.1") |
||||||
|
if err != nil { |
||||||
|
t.Fatalf("Unexpected error: %v", err) |
||||||
|
} |
||||||
|
if allowed { |
||||||
|
t.Error("Random user should NOT be allowed to READ kind 30166 events (not in read_allow list)") |
||||||
|
} |
||||||
|
|
||||||
|
// Test 5: Unauthenticated user should NOT be able to READ (privileged + read_allow)
|
||||||
|
allowed, err = policy.CheckPolicy("read", heartbeatEvent, nil, "127.0.0.1") |
||||||
|
if err != nil { |
||||||
|
t.Fatalf("Unexpected error: %v", err) |
||||||
|
} |
||||||
|
if allowed { |
||||||
|
t.Error("Unauthenticated user should NOT be allowed to READ kind 30166 events (privileged)") |
||||||
|
} |
||||||
|
}) |
||||||
|
|
||||||
|
// Test Kind 10306 (End user whitelist change requests)
|
||||||
|
t.Run("kind_10306_read_access", func(t *testing.T) { |
||||||
|
// Create an event authored by a random user
|
||||||
|
requestEvent := createTestEvent(t, randomSigner, "whitelist change request", 10306) |
||||||
|
// Add admin to p tag to satisfy privileged requirement
|
||||||
|
addPTag(requestEvent, adminPubkey) |
||||||
|
|
||||||
|
// Test 1: Admin (in read_allow) should be able to READ the request
|
||||||
|
allowed, err := policy.CheckPolicy("read", requestEvent, adminPubkey, "127.0.0.1") |
||||||
|
if err != nil { |
||||||
|
t.Fatalf("Unexpected error: %v", err) |
||||||
|
} |
||||||
|
if !allowed { |
||||||
|
t.Error("Admin should be allowed to READ kind 10306 events (admin is in read_allow list)") |
||||||
|
} |
||||||
|
|
||||||
|
// Test 2: Server1 (NOT in read_allow for kind 10306) should NOT be able to READ
|
||||||
|
// Even though server1 might be allowed for kind 30166
|
||||||
|
allowed, err = policy.CheckPolicy("read", requestEvent, server1Pubkey, "127.0.0.1") |
||||||
|
if err != nil { |
||||||
|
t.Fatalf("Unexpected error: %v", err) |
||||||
|
} |
||||||
|
if allowed { |
||||||
|
t.Error("Server1 should NOT be allowed to READ kind 10306 events (not in read_allow list for this kind)") |
||||||
|
} |
||||||
|
|
||||||
|
// Test 3: Random user should NOT be able to READ
|
||||||
|
allowed, err = policy.CheckPolicy("read", requestEvent, randomPubkey, "127.0.0.1") |
||||||
|
if err != nil { |
||||||
|
t.Fatalf("Unexpected error: %v", err) |
||||||
|
} |
||||||
|
if allowed { |
||||||
|
t.Error("Random user should NOT be allowed to READ kind 10306 events (not in read_allow list)") |
||||||
|
} |
||||||
|
}) |
||||||
|
} |
||||||
|
|
||||||
|
// TestReadAllowWithPrivileged tests interaction between read_allow and privileged
|
||||||
|
func TestReadAllowWithPrivileged(t *testing.T) { |
||||||
|
aliceSigner, alicePubkey := generateTestKeypair(t) |
||||||
|
_, bobPubkey := generateTestKeypair(t) |
||||||
|
_, charliePubkey := generateTestKeypair(t) |
||||||
|
|
||||||
|
// Create policy: Kind 100 is privileged AND has read_allow
|
||||||
|
policy := &P{ |
||||||
|
DefaultPolicy: "allow", |
||||||
|
Rules: map[int]Rule{ |
||||||
|
100: { |
||||||
|
Description: "Privileged with read_allow", |
||||||
|
Privileged: true, |
||||||
|
ReadAllow: []string{hex.Enc(bobPubkey)}, // Only Bob can read
|
||||||
|
}, |
||||||
|
}, |
||||||
|
} |
||||||
|
|
||||||
|
// Create event authored by Alice, with Bob in p tag
|
||||||
|
ev := createTestEvent(t, aliceSigner, "secret message", 100) |
||||||
|
addPTag(ev, bobPubkey) |
||||||
|
|
||||||
|
// Test 1: Bob (in ReadAllow AND in p tag) should be able to READ
|
||||||
|
t.Run("bob_in_readallow_and_ptag", func(t *testing.T) { |
||||||
|
allowed, err := policy.CheckPolicy("read", ev, bobPubkey, "127.0.0.1") |
||||||
|
if err != nil { |
||||||
|
t.Fatalf("Unexpected error: %v", err) |
||||||
|
} |
||||||
|
if !allowed { |
||||||
|
t.Error("Bob should be allowed to READ (in ReadAllow AND satisfies privileged)") |
||||||
|
} |
||||||
|
}) |
||||||
|
|
||||||
|
// Test 2: Alice (author, but NOT in ReadAllow) should NOT be able to READ
|
||||||
|
// Even though she's the author (privileged check would pass), ReadAllow takes precedence
|
||||||
|
t.Run("alice_author_but_not_in_readallow", func(t *testing.T) { |
||||||
|
allowed, err := policy.CheckPolicy("read", ev, alicePubkey, "127.0.0.1") |
||||||
|
if err != nil { |
||||||
|
t.Fatalf("Unexpected error: %v", err) |
||||||
|
} |
||||||
|
if allowed { |
||||||
|
t.Error("Alice should NOT be allowed to READ (not in ReadAllow list, even though she's the author)") |
||||||
|
} |
||||||
|
}) |
||||||
|
|
||||||
|
// Test 3: Charlie (NOT in ReadAllow, NOT in p tag) should NOT be able to READ
|
||||||
|
t.Run("charlie_not_authorized", func(t *testing.T) { |
||||||
|
allowed, err := policy.CheckPolicy("read", ev, charliePubkey, "127.0.0.1") |
||||||
|
if err != nil { |
||||||
|
t.Fatalf("Unexpected error: %v", err) |
||||||
|
} |
||||||
|
if allowed { |
||||||
|
t.Error("Charlie should NOT be allowed to READ (not in ReadAllow)") |
||||||
|
} |
||||||
|
}) |
||||||
|
|
||||||
|
// Test 4: Create event with Charlie in p tag but Charlie not in ReadAllow
|
||||||
|
evWithCharlie := createTestEvent(t, aliceSigner, "message for charlie", 100) |
||||||
|
addPTag(evWithCharlie, charliePubkey) |
||||||
|
|
||||||
|
t.Run("charlie_in_ptag_but_not_readallow", func(t *testing.T) { |
||||||
|
allowed, err := policy.CheckPolicy("read", evWithCharlie, charliePubkey, "127.0.0.1") |
||||||
|
if err != nil { |
||||||
|
t.Fatalf("Unexpected error: %v", err) |
||||||
|
} |
||||||
|
if allowed { |
||||||
|
t.Error("Charlie should NOT be allowed to READ (privileged check passes but not in ReadAllow)") |
||||||
|
} |
||||||
|
}) |
||||||
|
} |
||||||
|
|
||||||
|
// TestReadAllowWriteAllowIndependent verifies that read_allow and write_allow are independent
|
||||||
|
func TestReadAllowWriteAllowIndependent(t *testing.T) { |
||||||
|
aliceSigner, alicePubkey := generateTestKeypair(t) |
||||||
|
bobSigner, bobPubkey := generateTestKeypair(t) |
||||||
|
_, charliePubkey := generateTestKeypair(t) |
||||||
|
|
||||||
|
// Create policy:
|
||||||
|
// - Alice can WRITE
|
||||||
|
// - Bob can READ
|
||||||
|
// - Charlie can do neither
|
||||||
|
policy := &P{ |
||||||
|
DefaultPolicy: "allow", |
||||||
|
Rules: map[int]Rule{ |
||||||
|
200: { |
||||||
|
Description: "Write/Read separation test", |
||||||
|
WriteAllow: []string{hex.Enc(alicePubkey)}, // Only Alice can write
|
||||||
|
ReadAllow: []string{hex.Enc(bobPubkey)}, // Only Bob can read
|
||||||
|
}, |
||||||
|
}, |
||||||
|
} |
||||||
|
|
||||||
|
// Alice creates an event
|
||||||
|
aliceEvent := createTestEvent(t, aliceSigner, "alice's message", 200) |
||||||
|
|
||||||
|
// Test 1: Alice can WRITE her own event
|
||||||
|
t.Run("alice_can_write", func(t *testing.T) { |
||||||
|
allowed, err := policy.CheckPolicy("write", aliceEvent, alicePubkey, "127.0.0.1") |
||||||
|
if err != nil { |
||||||
|
t.Fatalf("Unexpected error: %v", err) |
||||||
|
} |
||||||
|
if !allowed { |
||||||
|
t.Error("Alice should be allowed to WRITE (in WriteAllow)") |
||||||
|
} |
||||||
|
}) |
||||||
|
|
||||||
|
// Test 2: Alice CANNOT READ her own event (not in ReadAllow)
|
||||||
|
t.Run("alice_cannot_read", func(t *testing.T) { |
||||||
|
allowed, err := policy.CheckPolicy("read", aliceEvent, alicePubkey, "127.0.0.1") |
||||||
|
if err != nil { |
||||||
|
t.Fatalf("Unexpected error: %v", err) |
||||||
|
} |
||||||
|
if allowed { |
||||||
|
t.Error("Alice should NOT be allowed to READ (not in ReadAllow, even though she wrote it)") |
||||||
|
} |
||||||
|
}) |
||||||
|
|
||||||
|
// Bob creates an event (will be denied on write)
|
||||||
|
bobEvent := createTestEvent(t, bobSigner, "bob's message", 200) |
||||||
|
|
||||||
|
// Test 3: Bob CANNOT WRITE (not in WriteAllow)
|
||||||
|
t.Run("bob_cannot_write", func(t *testing.T) { |
||||||
|
allowed, err := policy.CheckPolicy("write", bobEvent, bobPubkey, "127.0.0.1") |
||||||
|
if err != nil { |
||||||
|
t.Fatalf("Unexpected error: %v", err) |
||||||
|
} |
||||||
|
if allowed { |
||||||
|
t.Error("Bob should NOT be allowed to WRITE (not in WriteAllow)") |
||||||
|
} |
||||||
|
}) |
||||||
|
|
||||||
|
// Test 4: Bob CAN READ Alice's event (in ReadAllow)
|
||||||
|
t.Run("bob_can_read", func(t *testing.T) { |
||||||
|
allowed, err := policy.CheckPolicy("read", aliceEvent, bobPubkey, "127.0.0.1") |
||||||
|
if err != nil { |
||||||
|
t.Fatalf("Unexpected error: %v", err) |
||||||
|
} |
||||||
|
if !allowed { |
||||||
|
t.Error("Bob should be allowed to READ Alice's event (in ReadAllow)") |
||||||
|
} |
||||||
|
}) |
||||||
|
|
||||||
|
// Test 5: Charlie cannot write or read
|
||||||
|
t.Run("charlie_cannot_write_or_read", func(t *testing.T) { |
||||||
|
// Create an event authored by Charlie
|
||||||
|
charlieSigner := p8k.MustNew() |
||||||
|
charlieSigner.Generate() |
||||||
|
charlieEvent := createTestEvent(t, charlieSigner, "charlie's message", 200) |
||||||
|
charlieEvent.Pubkey = charliePubkey // Set to Charlie's pubkey
|
||||||
|
|
||||||
|
// Charlie's event should be denied for write (Charlie not in WriteAllow)
|
||||||
|
allowed, err := policy.CheckPolicy("write", charlieEvent, charliePubkey, "127.0.0.1") |
||||||
|
if err != nil { |
||||||
|
t.Fatalf("Unexpected error: %v", err) |
||||||
|
} |
||||||
|
if allowed { |
||||||
|
t.Error("Charlie should NOT be allowed to WRITE events of kind 200 (not in WriteAllow)") |
||||||
|
} |
||||||
|
|
||||||
|
// Charlie should not be able to READ Alice's event (not in ReadAllow)
|
||||||
|
allowed, err = policy.CheckPolicy("read", aliceEvent, charliePubkey, "127.0.0.1") |
||||||
|
if err != nil { |
||||||
|
t.Fatalf("Unexpected error: %v", err) |
||||||
|
} |
||||||
|
if allowed { |
||||||
|
t.Error("Charlie should NOT be allowed to READ (not in ReadAllow)") |
||||||
|
} |
||||||
|
}) |
||||||
|
} |
||||||
|
|
||||||
|
// TestReadAccessEdgeCases tests edge cases like nil pubkeys
|
||||||
|
func TestReadAccessEdgeCases(t *testing.T) { |
||||||
|
aliceSigner, _ := generateTestKeypair(t) |
||||||
|
|
||||||
|
policy := &P{ |
||||||
|
DefaultPolicy: "allow", |
||||||
|
Rules: map[int]Rule{ |
||||||
|
300: { |
||||||
|
Description: "Test edge cases", |
||||||
|
ReadAllow: []string{"somepubkey"}, // Non-empty ReadAllow
|
||||||
|
}, |
||||||
|
}, |
||||||
|
} |
||||||
|
|
||||||
|
event := createTestEvent(t, aliceSigner, "test", 300) |
||||||
|
|
||||||
|
// Test 1: Nil loggedInPubkey with ReadAllow should be denied
|
||||||
|
t.Run("nil_pubkey_with_readallow", func(t *testing.T) { |
||||||
|
allowed, err := policy.CheckPolicy("read", event, nil, "127.0.0.1") |
||||||
|
if err != nil { |
||||||
|
t.Fatalf("Unexpected error: %v", err) |
||||||
|
} |
||||||
|
if allowed { |
||||||
|
t.Error("Nil pubkey should NOT be allowed when ReadAllow is set") |
||||||
|
} |
||||||
|
}) |
||||||
|
|
||||||
|
// Test 2: Verify hex.Enc(nil) doesn't accidentally match anything
|
||||||
|
t.Run("hex_enc_nil_no_match", func(t *testing.T) { |
||||||
|
emptyStringHex := hex.Enc(nil) |
||||||
|
t.Logf("hex.Enc(nil) = %q (len=%d)", emptyStringHex, len(emptyStringHex)) |
||||||
|
|
||||||
|
// Verify it's empty string
|
||||||
|
if emptyStringHex != "" { |
||||||
|
t.Errorf("Expected hex.Enc(nil) to be empty string, got %q", emptyStringHex) |
||||||
|
} |
||||||
|
}) |
||||||
|
} |
||||||
Loading…
Reference in new issue