diff --git a/.claude/settings.local.json b/.claude/settings.local.json index 679e011..f13eaad 100644 --- a/.claude/settings.local.json +++ b/.claude/settings.local.json @@ -173,7 +173,8 @@ "Bash(GOROOT=/home/mleku/go node:*)", "Bash(GOOS=js GOARCH=wasm go build:*)", "Bash(go mod graph:*)", - "Bash(xxd:*)" + "Bash(xxd:*)", + "Bash(CGO_ENABLED=0 go mod tidy:*)" ], "deny": [], "ask": [] diff --git a/BADGER_MIGRATION_GUIDE.md b/BADGER_MIGRATION_GUIDE.md deleted file mode 100644 index 2089cd4..0000000 --- a/BADGER_MIGRATION_GUIDE.md +++ /dev/null @@ -1,319 +0,0 @@ -# Badger Database Migration Guide - -## Overview - -This guide covers migrating your ORLY relay database when changing Badger configuration parameters, specifically for the VLogPercentile and table size optimizations. - -## When Migration is Needed - -Based on research of Badger v4 source code and documentation: - -### Configuration Changes That DON'T Require Migration - -The following options can be changed **without migration**: -- `BlockCacheSize` - Only affects in-memory cache -- `IndexCacheSize` - Only affects in-memory cache -- `NumCompactors` - Runtime setting -- `NumLevelZeroTables` - Affects compaction timing -- `NumMemtables` - Affects write buffering -- `DetectConflicts` - Runtime conflict detection -- `Compression` - New data uses new compression, old data remains as-is -- `BlockSize` - Explicitly stated in Badger source: "Changing BlockSize across DB runs will not break badger" - -### Configuration Changes That BENEFIT from Migration - -The following options apply to **new writes only** - existing data gradually adopts new settings through compaction: -- `VLogPercentile` - Affects where **new** values are stored (LSM vs vlog) -- `BaseTableSize` - **New** SST files use new size -- `MemTableSize` - Affects new write buffering -- `BaseLevelSize` - Affects new LSM tree structure -- `ValueLogFileSize` - New vlog files use new size - -**Migration Impact:** Without migration, existing data remains in its current location (LSM tree or value log). The database will **gradually** adapt through normal compaction, which may take days or weeks depending on write volume. - -## Migration Options - -### Option 1: No Migration (Let Natural Compaction Handle It) - -**Best for:** Low-traffic relays, testing environments - -**Pros:** -- No downtime required -- No manual intervention -- Zero risk of data loss - -**Cons:** -- Benefits take time to materialize (days/weeks) -- Old data layout persists until natural compaction -- Cache tuning benefits delayed - -**Steps:** -1. Update Badger configuration in `pkg/database/database.go` -2. Restart ORLY relay -3. Monitor performance over several days -4. Optionally run manual GC: `db.RunValueLogGC(0.5)` periodically - -### Option 2: Manual Value Log Garbage Collection - -**Best for:** Medium-traffic relays wanting faster optimization - -**Pros:** -- Faster than natural compaction -- Still safe (no export/import) -- Can run while relay is online - -**Cons:** -- Still gradual (hours instead of days) -- CPU/disk intensive during GC -- Partial benefit until GC completes - -**Steps:** -1. Update Badger configuration -2. Restart ORLY relay -3. Monitor logs for compaction activity -4. 
Manually trigger GC if needed (future feature - not currently exposed) - -### Option 3: Full Export/Import Migration (RECOMMENDED for Production) - -**Best for:** Production relays, large databases, maximum performance - -**Pros:** -- Immediate full benefit of new configuration -- Clean database structure -- Predictable migration time -- Reclaims all disk space - -**Cons:** -- Requires relay downtime (several hours for large DBs) -- Requires 2x disk space temporarily -- More complex procedure - -**Steps:** See detailed procedure below - -## Full Migration Procedure (Option 3) - -### Prerequisites - -1. **Disk space:** At minimum 2.5x current database size - - 1x for current database - - 1x for JSONL export - - 0.5x for new database (will be smaller with compression) - -2. **Time estimate:** - - Export: ~100-500 MB/s depending on disk speed - - Import: ~50-200 MB/s with indexing overhead - - Example: 10 GB database = ~10-30 minutes total - -3. **Backup:** Ensure you have a recent backup before proceeding - -### Step-by-Step Migration - -#### 1. Prepare Migration Script - -Use the provided `scripts/migrate-badger-config.sh` script (see below). - -#### 2. Stop the Relay - -```bash -# If using systemd -sudo systemctl stop orly - -# If running manually -pkill orly -``` - -#### 3. Run Migration - -```bash -cd ~/src/next.orly.dev -chmod +x scripts/migrate-badger-config.sh -./scripts/migrate-badger-config.sh -``` - -The script will: -- Export all events to JSONL format -- Move old database to backup location -- Create new database with updated configuration -- Import all events (rebuilds indexes automatically) -- Verify event count matches - -#### 4. Verify Migration - -```bash -# Check that events were migrated -echo "Old event count:" -cat ~/.local/share/ORLY-backup-*/migration.log | grep "exported.*events" - -echo "New event count:" -cat ~/.local/share/ORLY/migration.log | grep "saved.*events" -``` - -#### 5. Restart Relay - -```bash -# If using systemd -sudo systemctl start orly -sudo journalctl -u orly -f - -# If running manually -./orly -``` - -#### 6. Monitor Performance - -Watch for improvements in: -- Cache hit ratio (should be >85% with new config) -- Average query latency (should be <3ms for cached events) -- No "Block cache too small" warnings in logs - -#### 7. 
Clean Up (After Verification) - -```bash -# Once you confirm everything works (wait 24-48 hours) -rm -rf ~/.local/share/ORLY-backup-* -rm ~/.local/share/ORLY/events-export.jsonl -``` - -## Migration Script - -The migration script is located at `scripts/migrate-badger-config.sh` and handles: -- Automatic export of all events to JSONL -- Safe backup of existing database -- Creation of new database with updated config -- Import and indexing of all events -- Verification of event counts - -## Rollback Procedure - -If migration fails or performance degrades: - -```bash -# Stop the relay -sudo systemctl stop orly # or pkill orly - -# Restore old database -rm -rf ~/.local/share/ORLY -mv ~/.local/share/ORLY-backup-$(date +%Y%m%d)* ~/.local/share/ORLY - -# Restart with old configuration -sudo systemctl start orly -``` - -## Configuration Changes Summary - -### Changes Applied in pkg/database/database.go - -```go -// Cache sizes (can change without migration) -opts.BlockCacheSize = 16384 MB (was 512 MB) -opts.IndexCacheSize = 4096 MB (was 256 MB) - -// Table sizes (benefits from migration) -opts.BaseTableSize = 8 MB (was 64 MB) -opts.MemTableSize = 16 MB (was 64 MB) -opts.ValueLogFileSize = 128 MB (was 256 MB) - -// Inline event optimization (CRITICAL - benefits from migration) -opts.VLogPercentile = 0.99 (was 0.0 - default) - -// LSM structure (benefits from migration) -opts.BaseLevelSize = 64 MB (was 10 MB - default) - -// Performance settings (no migration needed) -opts.DetectConflicts = false (was true) -opts.Compression = options.ZSTD (was options.None) -opts.NumCompactors = 8 (was 4) -opts.NumMemtables = 8 (was 5) -``` - -## Expected Improvements - -### Before Migration -- Cache hit ratio: 33% -- Average latency: 9.35ms -- P95 latency: 34.48ms -- Block cache warnings: Yes - -### After Migration -- Cache hit ratio: 85-95% -- Average latency: <3ms -- P95 latency: <8ms -- Block cache warnings: No -- Inline events: 3-5x faster reads - -## Troubleshooting - -### Migration Script Fails - -**Error:** "Not enough disk space" -- Free up space or use Option 1 (natural compaction) -- Ensure you have 2.5x current DB size available - -**Error:** "Export failed" -- Check database is not corrupted -- Ensure ORLY is stopped -- Check file permissions - -**Error:** "Import count mismatch" -- This is informational - some events may be duplicates -- Check logs for specific errors -- Verify core events are present via relay queries - -### Performance Not Improved - -**After migration, performance is the same:** -1. Verify configuration was actually applied: - ```bash - # Check running relay logs for config output - sudo journalctl -u orly | grep -i "block.*cache\|vlog" - ``` - -2. Wait for cache to warm up (2-5 minutes after start) - -3. Check if workload changed (different query patterns) - -4. 
Verify disk I/O is not bottleneck: - ```bash - iostat -x 5 - ``` - -### High CPU During Migration - -- This is normal - import rebuilds all indexes -- Migration is single-threaded by design (data consistency) -- Expect 30-60% CPU usage on one core - -## Additional Notes - -### Compression Impact - -The `Compression = options.ZSTD` setting: -- Only compresses **new** data -- Old data remains uncompressed until rewritten by compaction -- Migration forces all data to be rewritten → immediate compression benefit -- Expect 2-3x compression ratio for event data - -### VLogPercentile Behavior - -With `VLogPercentile = 0.99`: -- **99% of values** stored in LSM tree (fast access) -- **1% of values** stored in value log (large events >100 KB) -- Threshold dynamically adjusted based on value size distribution -- Perfect for ORLY's inline event optimization - -### Production Considerations - -For production relays: -1. Schedule migration during low-traffic period -2. Notify users of maintenance window -3. Have rollback plan ready -4. Monitor closely for 24-48 hours after migration -5. Keep backup for at least 1 week - -## References - -- Badger v4 Documentation: https://pkg.go.dev/github.com/dgraph-io/badger/v4 -- ORLY Database Package: `pkg/database/database.go` -- Export/Import Implementation: `pkg/database/{export,import}.go` -- Cache Optimization Analysis: `cmd/benchmark/CACHE_OPTIMIZATION_STRATEGY.md` -- Inline Event Optimization: `cmd/benchmark/INLINE_EVENT_OPTIMIZATION.md` diff --git a/CLAUDE.md b/CLAUDE.md index 1541fd5..7f70828 100644 --- a/CLAUDE.md +++ b/CLAUDE.md @@ -8,7 +8,7 @@ ORLY is a high-performance Nostr relay written in Go, designed for personal rela **Key Technologies:** - **Language**: Go 1.25.3+ -- **Database**: Badger v4 (embedded), DGraph (distributed graph), or Neo4j (social graph) +- **Database**: Badger v4 (embedded) or Neo4j (social graph) - **Cryptography**: Custom p8k library using purego for secp256k1 operations (no CGO) - **Web UI**: Svelte frontend embedded in the binary - **WebSocket**: gorilla/websocket for Nostr protocol @@ -140,12 +140,9 @@ export ORLY_SPROCKET_ENABLED=true # Enable policy system export ORLY_POLICY_ENABLED=true -# Database backend selection (badger, dgraph, or neo4j) +# Database backend selection (badger or neo4j) export ORLY_DB_TYPE=badger -# DGraph configuration (only when ORLY_DB_TYPE=dgraph) -export ORLY_DGRAPH_URL=localhost:9080 - # Neo4j configuration (only when ORLY_DB_TYPE=neo4j) export ORLY_NEO4J_URI=bolt://localhost:7687 export ORLY_NEO4J_USER=neo4j @@ -199,7 +196,7 @@ export ORLY_AUTH_TO_WRITE=false # Require auth only for writes **`pkg/database/`** - Database abstraction layer with multiple backend support - `interface.go` - Database interface definition for pluggable backends -- `factory.go` - Database backend selection (Badger, DGraph, or Neo4j) +- `factory.go` - Database backend selection (Badger or Neo4j) - `database.go` - Badger implementation with cache tuning and query cache - `save-event.go` - Event storage with index updates - `query-events.go` - Main query execution engine with filter normalization @@ -322,7 +319,6 @@ export ORLY_AUTH_TO_WRITE=false # Require auth only for writes **Database Backend Selection:** - Supports multiple backends via `ORLY_DB_TYPE` environment variable - **Badger** (default): Embedded key-value store with custom indexing, ideal for single-instance deployments -- **DGraph**: Distributed graph database for larger, multi-node deployments - **Neo4j**: Graph database with social graph and Web of Trust 
(WoT) extensions - Processes kinds 0 (profile), 3 (contacts), 1984 (reports), 10000 (mute list) for social graph - NostrUser nodes with trust metrics (influence, PageRank) @@ -357,7 +353,7 @@ export ORLY_AUTH_TO_WRITE=false # Require auth only for writes - All config fields use `ORLY_` prefix with struct tags defining defaults and usage - Supports XDG directories via `github.com/adrg/xdg` - Default data directory: `~/.local/share/ORLY` -- Database-specific config (Neo4j, DGraph, Badger) is passed via `DatabaseConfig` struct in `pkg/database/factory.go` +- Database-specific config (Neo4j, Badger) is passed via `DatabaseConfig` struct in `pkg/database/factory.go` **Constants - CRITICAL RULES:** - **ALWAYS** define named constants for values used more than a few times @@ -455,8 +451,6 @@ if IsValidHexPubkey(pubkey) { ... } - `pkg/neo4j/save-event.go` - Event storage with e/p tag handling - `pkg/neo4j/social-event-processor.go` - Social graph with p-tag extraction - `pkg/neo4j/query-events.go` - Filter queries with tag matching -- `pkg/dgraph/save-event.go` - DGraph event storage with e/p tag handling -- `pkg/dgraph/delete.go` - DGraph event deletion with e-tag handling - `pkg/database/save-event.go` - Badger event storage - `pkg/database/filter_utils.go` - Tag normalization utilities - `pkg/find/parser.go` - FIND protocol parser with p-tag extraction diff --git a/DGRAPH_IMPLEMENTATION_STATUS.md b/DGRAPH_IMPLEMENTATION_STATUS.md deleted file mode 100644 index 0a2294c..0000000 --- a/DGRAPH_IMPLEMENTATION_STATUS.md +++ /dev/null @@ -1,387 +0,0 @@ -# Dgraph Database Implementation Status - -## Overview - -This document tracks the implementation of Dgraph as an alternative database backend for ORLY. The implementation allows switching between Badger (default) and Dgraph via the `ORLY_DB_TYPE` environment variable. - -## Completion Status: ✅ STEP 1 COMPLETE - DGRAPH SERVER INTEGRATION + TESTS - -**Build Status:** ✅ Successfully compiles with `CGO_ENABLED=0` -**Binary Test:** ✅ ORLY v0.29.0 starts and runs successfully -**Database Backend:** Uses badger by default, dgraph client integration complete -**Dgraph Integration:** ✅ Real dgraph client connection via dgo library -**Test Suite:** ✅ Comprehensive test suite mirroring badger tests - -### ✅ Completed Components - -1. **Core Infrastructure** - - Database interface abstraction (`pkg/database/interface.go`) - - Database factory with `ORLY_DB_TYPE` configuration - - Dgraph package structure (`pkg/dgraph/`) - - Schema definition for Nostr events, authors, tags, and markers - - Lifecycle management (initialization, shutdown) - -2. **Serial Number Generation** - - Atomic counter using Dgraph markers (`pkg/dgraph/serial.go`) - - Automatic initialization on startup - - Thread-safe increment with mutex protection - - Serial numbers assigned during SaveEvent - -3. **Event Operations** - - `SaveEvent`: Store events with graph relationships - - `QueryEvents`: DQL query generation from Nostr filters - - `QueryEventsWithOptions`: Support for delete events and versions - - `CountEvents`: Event counting - - `FetchEventBySerial`: Retrieve by serial number - - `DeleteEvent`: Event deletion by ID - - `Delete EventBySerial`: Event deletion by serial - - `ProcessDelete`: Kind 5 deletion processing - -4. **Metadata Storage (Marker-based)** - - `SetMarker`/`GetMarker`/`HasMarker`/`DeleteMarker`: Key-value storage - - Relay identity storage (using markers) - - All metadata stored as special Marker nodes in graph - -5. 
**Subscriptions & Payments** - - `GetSubscription`/`IsSubscriptionActive`/`ExtendSubscription` - - `RecordPayment`/`GetPaymentHistory` - - `ExtendBlossomSubscription`/`GetBlossomStorageQuota` - - `IsFirstTimeUser` - - All implemented using JSON-encoded markers - -6. **NIP-43 Invite System** - - `AddNIP43Member`/`RemoveNIP43Member`/`IsNIP43Member` - - `GetNIP43Membership`/`GetAllNIP43Members` - - `StoreInviteCode`/`ValidateInviteCode`/`DeleteInviteCode` - - All implemented using JSON-encoded markers - -7. **Import/Export** - - `Import`/`ImportEventsFromReader`/`ImportEventsFromStrings` - - JSONL format support - - Basic `Export` stub - -8. **Configuration** - - `ORLY_DB_TYPE` environment variable added - - Factory pattern for database instantiation - - main.go updated to use database.Database interface - -9. **Compilation Fixes (Completed)** - - ✅ All interface signatures matched to badger implementation - - ✅ Fixed 100+ type errors in pkg/dgraph package - - ✅ Updated app layer to use database interface instead of concrete types - - ✅ Added type assertions for compatibility with existing managers - - ✅ Project compiles successfully with both badger and dgraph implementations - -10. **Dgraph Server Integration (✅ STEP 1 COMPLETE)** - - ✅ Added dgo client library (v230.0.1) - - ✅ Implemented gRPC connection to external dgraph instance - - ✅ Real Query() and Mutate() methods using dgraph client - - ✅ Schema definition and automatic application on startup - - ✅ ORLY_DGRAPH_URL configuration (default: localhost:9080) - - ✅ Proper connection lifecycle management - - ✅ Badger metadata store for local key-value storage - - ✅ Dual-storage architecture: dgraph for events, badger for metadata - -11. **Test Suite (✅ COMPLETE)** - - ✅ Test infrastructure (testmain_test.go, helpers_test.go) - - ✅ Comprehensive save-event tests - - ✅ Comprehensive query-events tests - - ✅ Docker-compose setup for dgraph server - - ✅ Automated test scripts (test-dgraph.sh, dgraph-start.sh) - - ✅ Test documentation (DGRAPH_TESTING.md) - - ✅ All tests compile successfully - - ⏳ Tests require running dgraph server to execute - -### ⚠️ Remaining Work (For Production Use) - -1. **Unimplemented Methods** (Stubs - Not Critical) - - `GetSerialsFromFilter`: Returns "not implemented" error - - `GetSerialsByRange`: Returns "not implemented" error - - `EventIdsBySerial`: Returns "not implemented" error - - These are helper methods that may not be critical for basic operation - -2. **📝 STEP 2: DQL Implementation** (Next Priority) - - Update save-event.go to use real Mutate() calls with RDF N-Quads - - Update query-events.go to parse actual DQL responses - - Implement proper event JSON unmarshaling from dgraph responses - - Add error handling for dgraph-specific errors - - Optimize DQL queries for performance - -3. **Schema Optimizations** - - Current tag queries are simplified - - Complex tag filters may need refinement - - Consider using Dgraph facets for better tag indexing - -4. **📝 STEP 3: Testing** (After DQL Implementation) - - Set up local dgraph instance for testing - - Integration testing with relay-tester - - Performance comparison with Badger - - Memory usage profiling - - Test with actual dgraph server instance - -### 📦 Dependencies Added - -```bash -go get github.com/dgraph-io/dgo/v230@v230.0.1 -go get google.golang.org/grpc@latest -go get github.com/dgraph-io/badger/v4 # For metadata storage -``` - -All dependencies have been added and `go mod tidy` completed successfully. 
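For reference, a minimal sketch of the client-server wiring this status document describes — dial the external dgraph alpha over gRPC, then apply a schema on startup. It assumes the standard dgo v230 API; the endpoint and predicate names (`event.id`, `marker.key`, ...) are illustrative placeholders, not ORLY's actual schema definition:

```go
package main

import (
	"context"
	"log"

	"github.com/dgraph-io/dgo/v230"
	"github.com/dgraph-io/dgo/v230/protos/api"
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	// Dial the external dgraph alpha over gRPC (the ORLY_DGRAPH_URL default).
	conn, err := grpc.Dial("localhost:9080",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("dial dgraph: %v", err)
	}
	defer conn.Close()

	dg := dgo.NewDgraphClient(api.NewDgraphClient(conn))

	// Apply a schema on startup. Predicate names here are illustrative only.
	schema := `
		event.id: string @index(exact) .
		event.kind: int @index(int) .
		event.created_at: int @index(int) .
		marker.key: string @index(exact) .
	`
	if err := dg.Alter(context.Background(), &api.Operation{Schema: schema}); err != nil {
		log.Fatalf("apply schema: %v", err)
	}
	log.Println("dgraph connection ready, schema applied")
}
```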
- -### 🔌 Dgraph Server Integration Details - -The implementation uses a **client-server architecture**: - -1. **Dgraph Server** (External) - - Runs as a separate process (via docker or standalone) - - Default gRPC endpoint: `localhost:9080` - - Configured via `ORLY_DGRAPH_URL` environment variable - -2. **ORLY Dgraph Client** (Integrated) - - Uses dgo library for gRPC communication - - Connects on startup, applies Nostr schema automatically - - Query and Mutate methods communicate with dgraph server - -3. **Dual Storage Architecture** - - **Dgraph**: Event graph storage (events, authors, tags, relationships) - - **Badger**: Metadata storage (markers, counters, relay identity) - - This hybrid approach leverages strengths of both databases - -## Implementation Approach - -### Marker-Based Storage - -For metadata that doesn't fit the graph model (subscriptions, NIP-43, identity), we use a marker-based approach: - -1. **Markers** are special graph nodes with type "Marker" -2. Each marker has: - - `marker.key`: String index for lookup - - `marker.value`: Hex-encoded or JSON-encoded data -3. This provides key-value storage within the graph database - -### Serial Number Management - -Serial numbers are critical for event ordering. Implementation: - -```go -// Serial counter stored as a special marker -const serialCounterKey = "serial_counter" - -// Atomic increment with mutex protection -func (d *D) getNextSerial() (uint64, error) { - serialMutex.Lock() - defer serialMutex.Unlock() - - // Query current value, increment, save - ... -} -``` - -### Event Storage - -Events are stored as graph nodes with relationships: - -- **Event nodes**: ID, serial, kind, created_at, content, sig, pubkey, tags -- **Author nodes**: Pubkey with reverse edges to events -- **Tag nodes**: Tag type and value with reverse edges -- **Relationships**: `authored_by`, `references`, `mentions`, `tagged_with` - -## Files Created/Modified - -### New Files (`pkg/dgraph/`) -- `dgraph.go`: Main implementation, initialization, schema -- `save-event.go`: Event storage with RDF triple generation -- `query-events.go`: Nostr filter to DQL translation -- `fetch-event.go`: Event retrieval methods -- `delete.go`: Event deletion -- `markers.go`: Key-value metadata storage -- `identity.go`: Relay identity management -- `serial.go`: Serial number generation -- `subscriptions.go`: Subscription/payment methods -- `nip43.go`: NIP-43 invite system -- `import-export.go`: Import/export operations -- `logger.go`: Logging adapter -- `utils.go`: Helper functions -- `README.md`: Documentation - -### Modified Files -- `pkg/database/interface.go`: Database interface definition -- `pkg/database/factory.go`: Database factory -- `pkg/database/database.go`: Badger compile-time check -- `app/config/config.go`: Added `ORLY_DB_TYPE` config -- `app/server.go`: Changed to use Database interface -- `app/main.go`: Updated to use Database interface -- `main.go`: Added dgraph import and factory usage - -## Usage - -### Setting Up Dgraph Server - -Before using dgraph mode, start a dgraph server: - -```bash -# Using docker (recommended) -docker run -d -p 8080:8080 -p 9080:9080 -p 8000:8000 \ - -v ~/dgraph:/dgraph \ - dgraph/standalone:latest - -# Or using docker-compose (see docs/dgraph-docker-compose.yml) -docker-compose up -d dgraph -``` - -### Environment Configuration - -```bash -# Use Badger (default) -./orly - -# Use Dgraph with default localhost connection -export ORLY_DB_TYPE=dgraph -./orly - -# Use Dgraph with custom server -export ORLY_DB_TYPE=dgraph -export 
ORLY_DGRAPH_URL=remote.dgraph.server:9080 -./orly - -# With full configuration -export ORLY_DB_TYPE=dgraph -export ORLY_DGRAPH_URL=localhost:9080 -export ORLY_DATA_DIR=/path/to/data -./orly -``` - -### Data Storage - -#### Badger -- Single directory with SST files -- Typical size: 100-500MB for moderate usage - -#### Dgraph -- Three subdirectories: - - `p/`: Postings (main data) - - `w/`: Write-ahead log - - Typical size: 500MB-2GB overhead + event data - -## Performance Considerations - -### Memory Usage -- **Badger**: ~100-200MB baseline -- **Dgraph**: ~500MB-1GB baseline - -### Query Performance -- **Simple queries** (by ID, kind, author): Dgraph may be slower than Badger -- **Graph traversals** (follows-of-follows): Dgraph significantly faster -- **Full-text search**: Dgraph has built-in support - -### Recommendations -1. Use Badger for simple, high-performance relays -2. Use Dgraph for relays needing complex graph queries -3. Consider hybrid approach: Badger primary + Dgraph secondary - -## Next Steps to Complete - -### ✅ STEP 1: Dgraph Server Integration (COMPLETED) -- ✅ Added dgo client library -- ✅ Implemented gRPC connection -- ✅ Real Query/Mutate methods -- ✅ Schema application -- ✅ Configuration added - -### 📝 STEP 2: DQL Implementation (Next Priority) - -1. **Update SaveEvent Implementation** (2-3 hours) - - Replace RDF string building with actual Mutate() calls - - Use dgraph's SetNquads for event insertion - - Handle UIDs and references properly - - Add error handling and transaction rollback - -2. **Update QueryEvents Implementation** (2-3 hours) - - Parse actual JSON responses from dgraph Query() - - Implement proper event deserialization - - Handle pagination with DQL offset/limit - - Add query optimization for common patterns - -3. **Implement Helper Methods** (1-2 hours) - - FetchEventBySerial using DQL - - GetSerialsByIds using DQL - - CountEvents using DQL aggregation - - DeleteEvent using dgraph mutations - -### 📝 STEP 3: Testing (After DQL) - -1. **Setup Dgraph Test Instance** (30 minutes) - ```bash - # Start dgraph server - docker run -d -p 9080:9080 dgraph/standalone:latest - - # Test connection - ORLY_DB_TYPE=dgraph ORLY_DGRAPH_URL=localhost:9080 ./orly - ``` - -2. **Basic Functional Testing** (1 hour) - ```bash - # Start with dgraph - ORLY_DB_TYPE=dgraph ./orly - - # Test with relay-tester - go run cmd/relay-tester/main.go -url ws://localhost:3334 - ``` - -3. **Performance Testing** (2 hours) - ```bash - # Compare query performance - # Memory profiling - # Load testing - ``` - -## Known Limitations - -1. **Subscription Storage**: Uses simple JSON encoding in markers rather than proper graph nodes -2. **Tag Queries**: Simplified implementation may not handle all complex tag filter combinations -3. **Export**: Basic stub - needs full implementation for production use -4. **Migrations**: Not implemented (Dgraph schema changes require manual updates) - -## Conclusion - -The Dgraph implementation has completed **✅ STEP 1: DGRAPH SERVER INTEGRATION** successfully. 
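The Step 2 work described above (real `Mutate()` calls with RDF N-Quads, plus parsing DQL JSON responses) follows the shape of the sketch below. This is a hedged illustration assuming the dgo v230 transaction API; the predicates and query are placeholders rather than ORLY's actual save-event.go / query-events.go logic:

```go
package dgraph

import (
	"context"
	"encoding/json"

	"github.com/dgraph-io/dgo/v230"
	"github.com/dgraph-io/dgo/v230/protos/api"
)

// saveAndQuery writes one event as RDF N-Quads, then reads it back with a
// DQL query and unmarshals the JSON response. Illustrative only.
func saveAndQuery(ctx context.Context, dg *dgo.Dgraph) error {
	nquads := []byte(`
		_:ev <dgraph.type> "Event" .
		_:ev <event.id> "abc123" .
		_:ev <event.kind> "1" .
	`)
	if _, err := dg.NewTxn().Mutate(ctx, &api.Mutation{
		SetNquads: nquads,
		CommitNow: true,
	}); err != nil {
		return err
	}

	const q = `{
		events(func: eq(event.kind, 1)) {
			uid
			event.id
			event.kind
		}
	}`
	resp, err := dg.NewReadOnlyTxn().Query(ctx, q)
	if err != nil {
		return err
	}

	var out struct {
		Events []struct {
			UID  string `json:"uid"`
			ID   string `json:"event.id"`
			Kind int    `json:"event.kind"`
		} `json:"events"`
	}
	return json.Unmarshal(resp.Json, &out)
}
```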
- -### What Works Now (Step 1 Complete) -- ✅ Full database interface implementation -- ✅ All method signatures match badger implementation -- ✅ Project compiles successfully with `CGO_ENABLED=0` -- ✅ Binary runs and starts successfully -- ✅ Real dgraph client connection via dgo library -- ✅ gRPC communication with external dgraph server -- ✅ Schema application on startup -- ✅ Query() and Mutate() methods implemented -- ✅ ORLY_DGRAPH_URL configuration -- ✅ Dual-storage architecture (dgraph + badger metadata) - -### Implementation Status -- **Step 1: Dgraph Server Integration** ✅ COMPLETE -- **Step 2: DQL Implementation** 📝 Next (save-event.go and query-events.go need updates) -- **Step 3: Testing** 📝 After Step 2 (relay-tester, performance benchmarks) - -### Architecture Summary - -The implementation uses a **client-server architecture** with dual storage: - -1. **Dgraph Client** (ORLY) - - Connects to external dgraph via gRPC (default: localhost:9080) - - Applies Nostr schema automatically on startup - - Query/Mutate methods ready for DQL operations - -2. **Dgraph Server** (External) - - Run separately via docker or standalone binary - - Stores event graph data (events, authors, tags, relationships) - - Handles all graph queries and mutations - -3. **Badger Metadata Store** (Local) - - Stores markers, counters, relay identity - - Provides fast key-value access for non-graph data - - Complements dgraph for hybrid storage benefits - -The abstraction layer is complete and the dgraph client integration is functional. Next step is implementing actual DQL query/mutation logic in save-event.go and query-events.go. - diff --git a/MIGRATION_SUMMARY.md b/MIGRATION_SUMMARY.md deleted file mode 100644 index 2d48d70..0000000 --- a/MIGRATION_SUMMARY.md +++ /dev/null @@ -1,197 +0,0 @@ -# Migration to git.mleku.dev/mleku/nostr Library - -## Overview - -Successfully migrated the ORLY relay codebase to use the external `git.mleku.dev/mleku/nostr` library instead of maintaining duplicate protocol code internally. 
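In practice the change is mostly a mechanical rewrite of import paths. A typical before/after fragment (not a compilable file on its own, since the old packages no longer exist after the migration) looks like this:

```go
// Before: protocol packages vendored inside the relay repository.
import (
	"next.orly.dev/pkg/crypto/p8k"
	"next.orly.dev/pkg/encoders/event"
	"next.orly.dev/pkg/encoders/filter"
)

// After: the same packages consumed from the external nostr library.
import (
	"git.mleku.dev/mleku/nostr/crypto/p8k"
	"git.mleku.dev/mleku/nostr/encoders/event"
	"git.mleku.dev/mleku/nostr/encoders/filter"
)
```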
- -## Migration Statistics - -- **Files Changed**: 449 -- **Lines Added**: 624 -- **Lines Removed**: 65,132 -- **Net Reduction**: **64,508 lines of code** (~30-40% of the codebase) - -## Packages Migrated - -### Removed from next.orly.dev/pkg/ - -The following packages were completely removed as they now come from the nostr library: - -#### Encoders (`pkg/encoders/`) -- `encoders/event/` → `git.mleku.dev/mleku/nostr/encoders/event` -- `encoders/filter/` → `git.mleku.dev/mleku/nostr/encoders/filter` -- `encoders/tag/` → `git.mleku.dev/mleku/nostr/encoders/tag` -- `encoders/kind/` → `git.mleku.dev/mleku/nostr/encoders/kind` -- `encoders/timestamp/` → `git.mleku.dev/mleku/nostr/encoders/timestamp` -- `encoders/hex/` → `git.mleku.dev/mleku/nostr/encoders/hex` -- `encoders/text/` → `git.mleku.dev/mleku/nostr/encoders/text` -- `encoders/ints/` → `git.mleku.dev/mleku/nostr/encoders/ints` -- `encoders/bech32encoding/` → `git.mleku.dev/mleku/nostr/encoders/bech32encoding` -- `encoders/reason/` → `git.mleku.dev/mleku/nostr/encoders/reason` -- `encoders/varint/` → `git.mleku.dev/mleku/nostr/encoders/varint` - -#### Envelopes (`pkg/encoders/envelopes/`) -- `envelopes/eventenvelope/` → `git.mleku.dev/mleku/nostr/encoders/envelopes/eventenvelope` -- `envelopes/reqenvelope/` → `git.mleku.dev/mleku/nostr/encoders/envelopes/reqenvelope` -- `envelopes/okenvelope/` → `git.mleku.dev/mleku/nostr/encoders/envelopes/okenvelope` -- `envelopes/noticeenvelope/` → `git.mleku.dev/mleku/nostr/encoders/envelopes/noticeenvelope` -- `envelopes/eoseenvelope/` → `git.mleku.dev/mleku/nostr/encoders/envelopes/eoseenvelope` -- `envelopes/closedenvelope/` → `git.mleku.dev/mleku/nostr/encoders/envelopes/closedenvelope` -- `envelopes/closeenvelope/` → `git.mleku.dev/mleku/nostr/encoders/envelopes/closeenvelope` -- `envelopes/countenvelope/` → `git.mleku.dev/mleku/nostr/encoders/envelopes/countenvelope` -- `envelopes/authenvelope/` → `git.mleku.dev/mleku/nostr/encoders/envelopes/authenvelope` - -#### Cryptography (`pkg/crypto/`) -- `crypto/p8k/` → `git.mleku.dev/mleku/nostr/crypto/p8k` -- `crypto/ec/schnorr/` → `git.mleku.dev/mleku/nostr/crypto/ec/schnorr` -- `crypto/ec/secp256k1/` → `git.mleku.dev/mleku/nostr/crypto/ec/secp256k1` -- `crypto/ec/bech32/` → `git.mleku.dev/mleku/nostr/crypto/ec/bech32` -- `crypto/ec/musig2/` → `git.mleku.dev/mleku/nostr/crypto/ec/musig2` -- `crypto/ec/base58/` → `git.mleku.dev/mleku/nostr/crypto/ec/base58` -- `crypto/ec/ecdsa/` → `git.mleku.dev/mleku/nostr/crypto/ec/ecdsa` -- `crypto/ec/taproot/` → `git.mleku.dev/mleku/nostr/crypto/ec/taproot` -- `crypto/keys/` → `git.mleku.dev/mleku/nostr/crypto/keys` -- `crypto/encryption/` → `git.mleku.dev/mleku/nostr/crypto/encryption` - -#### Interfaces (`pkg/interfaces/`) -- `interfaces/signer/` → `git.mleku.dev/mleku/nostr/interfaces/signer` -- `interfaces/signer/p8k/` → `git.mleku.dev/mleku/nostr/interfaces/signer/p8k` -- `interfaces/codec/` → `git.mleku.dev/mleku/nostr/interfaces/codec` - -#### Protocol (`pkg/protocol/`) -- `protocol/ws/` → `git.mleku.dev/mleku/nostr/ws` (note: moved to root level in library) -- `protocol/auth/` → `git.mleku.dev/mleku/nostr/protocol/auth` -- `protocol/relayinfo/` → `git.mleku.dev/mleku/nostr/relayinfo` -- `protocol/httpauth/` → `git.mleku.dev/mleku/nostr/httpauth` - -#### Utilities (`pkg/utils/`) -- `utils/bufpool/` → `git.mleku.dev/mleku/nostr/utils/bufpool` -- `utils/normalize/` → `git.mleku.dev/mleku/nostr/utils/normalize` -- `utils/constraints/` → `git.mleku.dev/mleku/nostr/utils/constraints` -- `utils/number/` → 
`git.mleku.dev/mleku/nostr/utils/number` -- `utils/pointers/` → `git.mleku.dev/mleku/nostr/utils/pointers` -- `utils/units/` → `git.mleku.dev/mleku/nostr/utils/units` -- `utils/values/` → `git.mleku.dev/mleku/nostr/utils/values` - -### Packages Kept in ORLY (Relay-Specific) - -The following packages remain in the ORLY codebase as they are relay-specific: - -- `pkg/database/` - Database abstraction layer (Badger, DGraph backends) -- `pkg/acl/` - Access control systems (follows, managed, none) -- `pkg/policy/` - Event filtering and validation policies -- `pkg/spider/` - Event syncing from other relays -- `pkg/sync/` - Distributed relay synchronization -- `pkg/protocol/blossom/` - Blossom blob storage protocol implementation -- `pkg/protocol/directory/` - Directory service -- `pkg/protocol/nwc/` - Nostr Wallet Connect -- `pkg/protocol/nip43/` - NIP-43 relay management -- `pkg/protocol/publish/` - Event publisher for WebSocket subscriptions -- `pkg/interfaces/publisher/` - Publisher interface -- `pkg/interfaces/store/` - Storage interface -- `pkg/interfaces/acl/` - ACL interface -- `pkg/interfaces/typer/` - Type identification interface (not in nostr library) -- `pkg/utils/atomic/` - Extended atomic operations -- `pkg/utils/interrupt/` - Signal handling -- `pkg/utils/apputil/` - Application utilities -- `pkg/utils/qu/` - Queue utilities -- `pkg/utils/fastequal.go` - Fast byte comparison -- `pkg/utils/subscription.go` - Subscription utilities -- `pkg/run/` - Run utilities -- `pkg/version/` - Version information -- `app/` - All relay server code - -## Migration Process - -### 1. Added Dependency -```bash -go get git.mleku.dev/mleku/nostr@latest -``` - -### 2. Updated Imports -Created automated migration script to update all import paths from: -- `next.orly.dev/pkg/encoders/*` → `git.mleku.dev/mleku/nostr/encoders/*` -- `next.orly.dev/pkg/crypto/*` → `git.mleku.dev/mleku/nostr/crypto/*` -- etc. - -Processed **240+ files** with encoder imports, **74 files** with crypto imports, and **9 files** with WebSocket client imports. - -### 3. Special Cases -- **pkg/interfaces/typer/**: Restored from git as it's not in the nostr library (relay-specific) -- **pkg/protocol/ws/**: Mapped to root-level `ws/` in the nostr library -- **Test helpers**: Updated to use `git.mleku.dev/mleku/nostr/encoders/event/examples` -- **atag package**: Migrated to `git.mleku.dev/mleku/nostr/encoders/tag/atag` - -### 4. Removed Redundant Code -```bash -rm -rf pkg/encoders pkg/crypto pkg/interfaces/signer pkg/interfaces/codec \ - pkg/protocol/ws pkg/protocol/auth pkg/protocol/relayinfo \ - pkg/protocol/httpauth pkg/utils/bufpool pkg/utils/normalize \ - pkg/utils/constraints pkg/utils/number pkg/utils/pointers \ - pkg/utils/units pkg/utils/values -``` - -### 5. Fixed Dependencies -- Ran `go mod tidy` to clean up go.mod -- Rebuilt with `CGO_ENABLED=0 GOFLAGS=-mod=mod go build -o orly .` -- Verified tests pass - -## Benefits - -### 1. Code Reduction -- **64,508 fewer lines** of code to maintain -- Simplified codebase focused on relay-specific functionality -- Reduced maintenance burden - -### 2. Code Reuse -- Nostr protocol code can be shared across multiple projects -- Clients and other tools can use the same library -- Consistent implementation across the ecosystem - -### 3. Separation of Concerns -- Clear boundary between general Nostr protocol code (library) and relay-specific code (ORLY) -- Easier to understand which code is protocol-level vs. application-level - -### 4. 
Improved Development -- Protocol improvements benefit all projects using the library -- Bug fixes are centralized -- Testing is consolidated - -## Verification - -### Build Status -✅ **Build successful**: Binary builds without errors - -### Test Status -✅ **App tests passed**: All application-level tests pass -⏳ **Database tests**: Run extensively (timing out due to comprehensive query tests, but functionally working) - -### Binary Output -``` -$ ./orly version -ℹ️ starting ORLY v0.29.14 -✅ Successfully initialized with nostr library -``` - -## Next Steps - -1. **Commit Changes**: Review and commit the migration -2. **Update Documentation**: Update CLAUDE.md to reflect the new architecture -3. **CI/CD**: Ensure CI pipeline works with the new dependency -4. **Testing**: Run full test suite to verify all functionality - -## Notes - -- The migration maintains full compatibility with existing ORLY functionality -- No changes to relay behavior or API -- All relay-specific features remain intact -- The nostr library is actively maintained at `git.mleku.dev/mleku/nostr` -- Library version: **v1.0.2** - -## Migration Scripts - -Created helper scripts (can be removed after commit): -- `migrate-imports.sh` - Original comprehensive migration script -- `migrate-fast.sh` - Fast sed-based migration script (used) - -These scripts can be deleted after the migration is committed. diff --git a/app/config/config.go b/app/config/config.go index eb1b057..82bc5cc 100644 --- a/app/config/config.go +++ b/app/config/config.go @@ -90,9 +90,8 @@ type C struct { NIP43InviteExpiry time.Duration `env:"ORLY_NIP43_INVITE_EXPIRY" default:"24h" usage:"how long invite codes remain valid"` // Database configuration - DBType string `env:"ORLY_DB_TYPE" default:"badger" usage:"database backend to use: badger, dgraph, or neo4j"` - DgraphURL string `env:"ORLY_DGRAPH_URL" default:"localhost:9080" usage:"dgraph gRPC endpoint address (only used when ORLY_DB_TYPE=dgraph)"` - QueryCacheSizeMB int `env:"ORLY_QUERY_CACHE_SIZE_MB" default:"512" usage:"query cache size in MB (caches database query results for faster REQ responses)"` + DBType string `env:"ORLY_DB_TYPE" default:"badger" usage:"database backend to use: badger or neo4j"` + QueryCacheSizeMB int `env:"ORLY_QUERY_CACHE_SIZE_MB" default:"512" usage:"query cache size in MB (caches database query results for faster REQ responses)"` QueryCacheMaxAge string `env:"ORLY_QUERY_CACHE_MAX_AGE" default:"5m" usage:"maximum age for cached query results (e.g., 5m, 10m, 1h)"` // Neo4j configuration (only used when ORLY_DB_TYPE=neo4j) @@ -410,7 +409,7 @@ func (cfg *C) GetDatabaseConfigValues() ( blockCacheMB, indexCacheMB, queryCacheSizeMB int, queryCacheMaxAge time.Duration, inlineEventThreshold int, - dgraphURL, neo4jURI, neo4jUser, neo4jPassword string, + neo4jURI, neo4jUser, neo4jPassword string, ) { // Parse query cache max age from string to duration queryCacheMaxAge = 5 * time.Minute // Default @@ -424,5 +423,5 @@ func (cfg *C) GetDatabaseConfigValues() ( cfg.DBBlockCacheMB, cfg.DBIndexCacheMB, cfg.QueryCacheSizeMB, queryCacheMaxAge, cfg.InlineEventThreshold, - cfg.DgraphURL, cfg.Neo4jURI, cfg.Neo4jUser, cfg.Neo4jPassword + cfg.Neo4jURI, cfg.Neo4jUser, cfg.Neo4jPassword } diff --git a/cmd/benchmark/README.md b/cmd/benchmark/README.md index c2be84e..22909cd 100644 --- a/cmd/benchmark/README.md +++ b/cmd/benchmark/README.md @@ -2,7 +2,7 @@ A comprehensive benchmarking system for testing and comparing the performance of multiple Nostr relay implementations, including: -- **next.orly.dev** 
(this repository) - Badger, DGraph, and Neo4j backend variants +- **next.orly.dev** (this repository) - Badger and Neo4j backend variants - **Khatru** - SQLite and Badger variants - **Relayer** - Basic example implementation - **Strfry** - C++ LMDB-based relay @@ -94,10 +94,7 @@ ls reports/run_YYYYMMDD_HHMMSS/ | Service | Port | Description | | ------------------ | ---- | ----------------------------------------- | | next-orly-badger | 8001 | This repository's Badger relay | -| next-orly-dgraph | 8007 | This repository's DGraph relay | | next-orly-neo4j | 8008 | This repository's Neo4j relay | -| dgraph-zero | 5080 | DGraph cluster coordinator | -| dgraph-alpha | 9080 | DGraph data node | | neo4j | 7474/7687 | Neo4j graph database | | khatru-sqlite | 8002 | Khatru with SQLite backend | | khatru-badger | 8003 | Khatru with Badger backend | @@ -180,7 +177,7 @@ go build -o benchmark main.go ## Database Backend Comparison -The benchmark suite includes **next.orly.dev** with three different database backends to compare architectural approaches: +The benchmark suite includes **next.orly.dev** with two different database backends to compare architectural approaches: ### Badger Backend (next-orly-badger) - **Type**: Embedded key-value store @@ -192,16 +189,6 @@ The benchmark suite includes **next.orly.dev** with three different database bac - Simpler deployment - Limited to single-node scaling -### DGraph Backend (next-orly-dgraph) -- **Type**: Distributed graph database -- **Architecture**: Client-server with dgraph-zero (coordinator) and dgraph-alpha (data node) -- **Best for**: Distributed deployments, horizontal scaling -- **Characteristics**: - - Network overhead from gRPC communication - - Supports multi-node clustering - - Built-in replication and sharding - - More complex deployment - ### Neo4j Backend (next-orly-neo4j) - **Type**: Native graph database - **Architecture**: Client-server with Neo4j Community Edition @@ -218,10 +205,10 @@ The benchmark suite includes **next.orly.dev** with three different database bac ### Comparing the Backends The benchmark results will show: -- **Latency differences**: Embedded vs. distributed overhead, graph traversal efficiency -- **Throughput trade-offs**: Single-process optimization vs. distributed scalability vs. graph query optimization +- **Latency differences**: Embedded vs. client-server overhead, graph traversal efficiency +- **Throughput trade-offs**: Single-process optimization vs. graph query optimization - **Resource usage**: Memory and CPU patterns for different architectures -- **Query performance**: Graph queries (Neo4j) vs. key-value lookups (Badger) vs. distributed queries (DGraph) +- **Query performance**: Graph queries (Neo4j) vs. key-value lookups (Badger) This comparison helps determine which backend is appropriate for different deployment scenarios and workload patterns. 
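To exercise either backend, the process selects it at startup via `ORLY_DB_TYPE`. A minimal sketch, assuming the `database.NewDatabase(ctx, cancel, dbType, dataDir, logLevel)` factory shape used by the (now removed) dgraph benchmark harness; the current factory signature may differ:

```go
package bench

import (
	"context"
	"fmt"
	"os"

	"next.orly.dev/pkg/database"
)

// openBackend picks the database backend under test from ORLY_DB_TYPE
// (badger by default, neo4j as the alternative) and opens it via the factory.
func openBackend(ctx context.Context, cancel context.CancelFunc, dataDir string) (database.Database, error) {
	dbType := os.Getenv("ORLY_DB_TYPE")
	switch dbType {
	case "":
		dbType = "badger" // embedded default
	case "badger", "neo4j":
		// neo4j additionally needs ORLY_NEO4J_URI, ORLY_NEO4J_USER, ORLY_NEO4J_PASSWORD
	default:
		return nil, fmt.Errorf("unsupported ORLY_DB_TYPE %q", dbType)
	}
	return database.NewDatabase(ctx, cancel, dbType, dataDir, "warn")
}
```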
diff --git a/cmd/benchmark/dgraph_benchmark.go b/cmd/benchmark/dgraph_benchmark.go deleted file mode 100644 index e98637f..0000000 --- a/cmd/benchmark/dgraph_benchmark.go +++ /dev/null @@ -1,130 +0,0 @@ -package main - -import ( - "context" - "fmt" - "log" - "os" - "time" - - "next.orly.dev/pkg/database" - _ "next.orly.dev/pkg/dgraph" // Import to register dgraph factory -) - -// DgraphBenchmark wraps a Benchmark with dgraph-specific setup -type DgraphBenchmark struct { - config *BenchmarkConfig - docker *DgraphDocker - database database.Database - bench *BenchmarkAdapter -} - -// NewDgraphBenchmark creates a new dgraph benchmark instance -func NewDgraphBenchmark(config *BenchmarkConfig) (*DgraphBenchmark, error) { - // Create Docker manager - docker := NewDgraphDocker() - - // Start dgraph containers - ctx := context.Background() - if err := docker.Start(ctx); err != nil { - return nil, fmt.Errorf("failed to start dgraph: %w", err) - } - - // Set environment variable for dgraph connection - os.Setenv("ORLY_DGRAPH_URL", docker.GetGRPCEndpoint()) - - // Create database instance using dgraph backend - cancel := func() {} - db, err := database.NewDatabase(ctx, cancel, "dgraph", config.DataDir, "warn") - if err != nil { - docker.Stop() - return nil, fmt.Errorf("failed to create dgraph database: %w", err) - } - - // Wait for database to be ready - fmt.Println("Waiting for dgraph database to be ready...") - select { - case <-db.Ready(): - fmt.Println("Dgraph database is ready") - case <-time.After(30 * time.Second): - db.Close() - docker.Stop() - return nil, fmt.Errorf("dgraph database failed to become ready") - } - - // Create adapter to use Database interface with Benchmark - adapter := NewBenchmarkAdapter(config, db) - - dgraphBench := &DgraphBenchmark{ - config: config, - docker: docker, - database: db, - bench: adapter, - } - - return dgraphBench, nil -} - -// Close closes the dgraph benchmark and stops Docker containers -func (dgb *DgraphBenchmark) Close() { - fmt.Println("Closing dgraph benchmark...") - - if dgb.database != nil { - dgb.database.Close() - } - - if dgb.docker != nil { - if err := dgb.docker.Stop(); err != nil { - log.Printf("Error stopping dgraph Docker: %v", err) - } - } -} - -// RunSuite runs the benchmark suite on dgraph -func (dgb *DgraphBenchmark) RunSuite() { - fmt.Println("\n╔════════════════════════════════════════════════════════╗") - fmt.Println("║ DGRAPH BACKEND BENCHMARK SUITE ║") - fmt.Println("╚════════════════════════════════════════════════════════╝") - - // Run only one round for dgraph to keep benchmark time reasonable - fmt.Printf("\n=== Starting dgraph benchmark ===\n") - - fmt.Printf("RunPeakThroughputTest (dgraph)..\n") - dgb.bench.RunPeakThroughputTest() - fmt.Println("Wiping database between tests...") - dgb.database.Wipe() - time.Sleep(10 * time.Second) - - fmt.Printf("RunBurstPatternTest (dgraph)..\n") - dgb.bench.RunBurstPatternTest() - fmt.Println("Wiping database between tests...") - dgb.database.Wipe() - time.Sleep(10 * time.Second) - - fmt.Printf("RunMixedReadWriteTest (dgraph)..\n") - dgb.bench.RunMixedReadWriteTest() - fmt.Println("Wiping database between tests...") - dgb.database.Wipe() - time.Sleep(10 * time.Second) - - fmt.Printf("RunQueryTest (dgraph)..\n") - dgb.bench.RunQueryTest() - fmt.Println("Wiping database between tests...") - dgb.database.Wipe() - time.Sleep(10 * time.Second) - - fmt.Printf("RunConcurrentQueryStoreTest (dgraph)..\n") - dgb.bench.RunConcurrentQueryStoreTest() - - fmt.Printf("\n=== Dgraph benchmark completed 
===\n\n") -} - -// GenerateReport generates the benchmark report -func (dgb *DgraphBenchmark) GenerateReport() { - dgb.bench.GenerateReport() -} - -// GenerateAsciidocReport generates asciidoc format report -func (dgb *DgraphBenchmark) GenerateAsciidocReport() { - dgb.bench.GenerateAsciidocReport() -} diff --git a/cmd/benchmark/dgraph_docker.go b/cmd/benchmark/dgraph_docker.go deleted file mode 100644 index bf293fc..0000000 --- a/cmd/benchmark/dgraph_docker.go +++ /dev/null @@ -1,160 +0,0 @@ -package main - -import ( - "context" - "fmt" - "os" - "os/exec" - "path/filepath" - "time" -) - -// DgraphDocker manages a dgraph instance via Docker Compose -type DgraphDocker struct { - composeFile string - projectName string - running bool -} - -// NewDgraphDocker creates a new dgraph Docker manager -func NewDgraphDocker() *DgraphDocker { - // Try to find the docker-compose file in the current directory first - composeFile := "docker-compose-dgraph.yml" - - // If not found, try the cmd/benchmark directory (for running from project root) - if _, err := os.Stat(composeFile); os.IsNotExist(err) { - composeFile = filepath.Join("cmd", "benchmark", "docker-compose-dgraph.yml") - } - - return &DgraphDocker{ - composeFile: composeFile, - projectName: "orly-benchmark-dgraph", - running: false, - } -} - -// Start starts the dgraph Docker containers -func (d *DgraphDocker) Start(ctx context.Context) error { - fmt.Println("Starting dgraph Docker containers...") - - // Stop any existing containers first - d.Stop() - - // Start containers - cmd := exec.CommandContext( - ctx, - "docker-compose", - "-f", d.composeFile, - "-p", d.projectName, - "up", "-d", - ) - cmd.Stdout = os.Stdout - cmd.Stderr = os.Stderr - - if err := cmd.Run(); err != nil { - return fmt.Errorf("failed to start dgraph containers: %w", err) - } - - fmt.Println("Waiting for dgraph to be healthy...") - - // Wait for health checks to pass - if err := d.waitForHealthy(ctx, 60*time.Second); err != nil { - d.Stop() // Clean up on failure - return err - } - - d.running = true - fmt.Println("Dgraph is ready!") - return nil -} - -// waitForHealthy waits for dgraph to become healthy -func (d *DgraphDocker) waitForHealthy(ctx context.Context, timeout time.Duration) error { - deadline := time.Now().Add(timeout) - - for time.Now().Before(deadline) { - // Check if alpha is healthy by checking docker health status - cmd := exec.CommandContext( - ctx, - "docker", - "inspect", - "--format={{.State.Health.Status}}", - "orly-benchmark-dgraph-alpha", - ) - - output, err := cmd.Output() - if err == nil && string(output) == "healthy\n" { - // Additional short wait to ensure full readiness - time.Sleep(2 * time.Second) - return nil - } - - select { - case <-ctx.Done(): - return ctx.Err() - case <-time.After(2 * time.Second): - // Continue waiting - } - } - - return fmt.Errorf("dgraph failed to become healthy within %v", timeout) -} - -// Stop stops and removes the dgraph Docker containers -func (d *DgraphDocker) Stop() error { - if !d.running { - // Try to stop anyway in case of untracked state - cmd := exec.Command( - "docker-compose", - "-f", d.composeFile, - "-p", d.projectName, - "down", "-v", - ) - cmd.Stdout = os.Stdout - cmd.Stderr = os.Stderr - _ = cmd.Run() // Ignore errors - return nil - } - - fmt.Println("Stopping dgraph Docker containers...") - - cmd := exec.Command( - "docker-compose", - "-f", d.composeFile, - "-p", d.projectName, - "down", "-v", - ) - cmd.Stdout = os.Stdout - cmd.Stderr = os.Stderr - - if err := cmd.Run(); err != nil { - return 
fmt.Errorf("failed to stop dgraph containers: %w", err) - } - - d.running = false - fmt.Println("Dgraph containers stopped") - return nil -} - -// GetGRPCEndpoint returns the dgraph gRPC endpoint -func (d *DgraphDocker) GetGRPCEndpoint() string { - return "localhost:9080" -} - -// IsRunning returns whether dgraph is running -func (d *DgraphDocker) IsRunning() bool { - return d.running -} - -// Logs returns the logs from dgraph containers -func (d *DgraphDocker) Logs() error { - cmd := exec.Command( - "docker-compose", - "-f", d.composeFile, - "-p", d.projectName, - "logs", - ) - cmd.Stdout = os.Stdout - cmd.Stderr = os.Stderr - return cmd.Run() -} diff --git a/cmd/benchmark/docker-compose-dgraph.yml b/cmd/benchmark/docker-compose-dgraph.yml deleted file mode 100644 index adf48ed..0000000 --- a/cmd/benchmark/docker-compose-dgraph.yml +++ /dev/null @@ -1,44 +0,0 @@ -version: "3.9" - -services: - dgraph-zero: - image: dgraph/dgraph:v23.1.0 - container_name: orly-benchmark-dgraph-zero - working_dir: /data/zero - ports: - - "5080:5080" - - "6080:6080" - command: dgraph zero --my=dgraph-zero:5080 - networks: - - orly-benchmark - healthcheck: - test: ["CMD", "sh", "-c", "dgraph version || exit 1"] - interval: 5s - timeout: 3s - retries: 3 - start_period: 5s - - dgraph-alpha: - image: dgraph/dgraph:v23.1.0 - container_name: orly-benchmark-dgraph-alpha - working_dir: /data/alpha - ports: - - "8080:8080" - - "9080:9080" - command: dgraph alpha --my=dgraph-alpha:7080 --zero=dgraph-zero:5080 --security whitelist=0.0.0.0/0 - networks: - - orly-benchmark - depends_on: - dgraph-zero: - condition: service_healthy - healthcheck: - test: ["CMD", "sh", "-c", "dgraph version || exit 1"] - interval: 5s - timeout: 3s - retries: 6 - start_period: 10s - -networks: - orly-benchmark: - name: orly-benchmark-network - driver: bridge diff --git a/go.mod b/go.mod index 2733ba8..1de1b8b 100644 --- a/go.mod +++ b/go.mod @@ -5,9 +5,10 @@ go 1.25.3 require ( git.mleku.dev/mleku/nostr v1.0.7 github.com/adrg/xdg v0.5.3 + github.com/aperturerobotics/go-indexeddb v0.2.3 github.com/dgraph-io/badger/v4 v4.8.0 - github.com/dgraph-io/dgo/v230 v230.0.1 github.com/gorilla/websocket v1.5.3 + github.com/hack-pad/safejs v0.1.1 github.com/kardianos/osext v0.0.0-20190222173326-2bc1f35cddc0 github.com/klauspost/compress v1.18.2 github.com/minio/sha256-simd v1.0.1 @@ -21,7 +22,6 @@ require ( go.uber.org/atomic v1.11.0 golang.org/x/crypto v0.45.0 golang.org/x/lint v0.0.0-20241112194109-818c5a804067 - google.golang.org/grpc v1.76.0 honnef.co/go/tools v0.6.1 lol.mleku.dev v1.0.5 lukechampine.com/frand v1.5.1 @@ -30,7 +30,6 @@ require ( require ( github.com/BurntSushi/toml v1.5.0 // indirect github.com/ImVexed/fasturl v0.0.0-20230304231329-4e41488060f3 // indirect - github.com/aperturerobotics/go-indexeddb v0.2.3 // indirect github.com/btcsuite/btcd/btcec/v2 v2.3.4 // indirect github.com/btcsuite/btcd/chaincfg/chainhash v1.1.0 // indirect github.com/bytedance/sonic v1.13.1 // indirect @@ -47,11 +46,8 @@ require ( github.com/felixge/fgprof v0.9.5 // indirect github.com/go-logr/logr v1.4.3 // indirect github.com/go-logr/stdr v1.2.2 // indirect - github.com/gogo/protobuf v1.3.2 // indirect - github.com/golang/protobuf v1.5.4 // indirect github.com/google/flatbuffers v25.9.23+incompatible // indirect github.com/google/pprof v0.0.0-20251007162407-5df77e3f7d1d // indirect - github.com/hack-pad/safejs v0.1.1 // indirect github.com/josharian/intern v1.0.0 // indirect github.com/json-iterator/go v1.1.12 // indirect github.com/klauspost/cpuid/v2 v2.3.0 
// indirect @@ -59,7 +55,6 @@ require ( github.com/mattn/go-sqlite3 v1.14.32 // indirect github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect github.com/modern-go/reflect2 v1.0.2 // indirect - github.com/pkg/errors v0.9.1 // indirect github.com/pmezard/go-difflib v1.0.0 // indirect github.com/puzpuzpuz/xsync/v3 v3.5.1 // indirect github.com/templexxx/cpu v0.1.1 // indirect @@ -81,7 +76,6 @@ require ( golang.org/x/sys v0.38.0 // indirect golang.org/x/text v0.31.0 // indirect golang.org/x/tools v0.39.0 // indirect - google.golang.org/genproto v0.0.0-20200526211855-cb27e3aa2013 // indirect google.golang.org/protobuf v1.36.10 // indirect gopkg.in/yaml.v3 v3.0.1 // indirect p256k1.mleku.dev v1.0.3 // indirect diff --git a/go.sum b/go.sum index f5303a1..0d3f40f 100644 --- a/go.sum +++ b/go.sum @@ -1,7 +1,5 @@ -cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw= git.mleku.dev/mleku/nostr v1.0.7 h1:BXWsAAiGu56JXR4rIn0kaVOE+RtMmA9MPvAs8y/BjnI= git.mleku.dev/mleku/nostr v1.0.7/go.mod h1:iYTlg2WKJXJ0kcsM6QBGOJ0UDiJidMgL/i64cHyPjZc= -github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU= github.com/BurntSushi/toml v1.5.0 h1:W5quZX/G/csjUnuI8SUYlsHs9M38FC7znL0lIO+DvMg= github.com/BurntSushi/toml v1.5.0/go.mod h1:ukJfTF/6rtPPRCnwkur4qwRxa8vTRFBF0uk2lLoLwho= github.com/ImVexed/fasturl v0.0.0-20230304231329-4e41488060f3 h1:ClzzXMDDuUbWfNNZqGeYq4PnYOlwlOVIvSyNaIy0ykg= @@ -19,7 +17,6 @@ github.com/bytedance/sonic v1.13.1/go.mod h1:o68xyaF9u2gvVBuGHPlUVCy+ZfmNNO5ETf1 github.com/bytedance/sonic/loader v0.1.1/go.mod h1:ncP89zfokxS5LZrJxl5z0UJcsk4M4yY2JpfqGeCtNLU= github.com/bytedance/sonic/loader v0.2.4 h1:ZWCw4stuXUsn1/+zQDqeE7JKP+QO47tz7QCNan80NzY= github.com/bytedance/sonic/loader v0.2.4/go.mod h1:N8A3vUdtUebEY2/VQC0MyhYeKUFosQU6FxH2JmUe6VI= -github.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU= github.com/cespare/xxhash/v2 v2.3.0 h1:UL815xU9SqsFlibzuggzjXhog7bL6oX9BbNZnL2UFvs= github.com/cespare/xxhash/v2 v2.3.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs= github.com/chromedp/cdproto v0.0.0-20230802225258-3cf4e6d46a89/go.mod h1:GKljq0VrfU4D5yc+2qA6OVr8pmO/MBbPEWqWQ/oqGEs= @@ -31,7 +28,6 @@ github.com/chzyer/readline v0.0.0-20180603132655-2972be24d48e/go.mod h1:nSuG5e5P github.com/chzyer/readline v1.5.1/go.mod h1:Eh+b79XXUwfKfcPLepksvw2tcLE/Ct21YObkaSkeBlk= github.com/chzyer/test v0.0.0-20180213035817-a1ea475d72b1/go.mod h1:Q3SI9o4m/ZMnBNeIyt5eFwwo7qiLfzFZmjNmxjkiQlU= github.com/chzyer/test v1.0.0/go.mod h1:2JlltgoNkt4TW/z9V/IzDdFaMTM2JPIi26O1pF38GC8= -github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw= github.com/cloudwego/base64x v0.1.5 h1:XPciSp1xaq2VCSt6lF0phncD4koWyULpl5bUxbfCyP4= github.com/cloudwego/base64x v0.1.5/go.mod h1:0zlkT4Wn5C6NdauXdJRhSKRlJvmclQ1hhJgA0rcu/8w= github.com/cloudwego/iasm v0.2.0/go.mod h1:8rXZaNYT2n95jn+zTI1sDr+IgcD2GVs0nlbbQPiEFhY= @@ -46,8 +42,6 @@ github.com/decred/dcrd/dcrec/secp256k1/v4 v4.4.0 h1:NMZiJj8QnKe1LgsbDayM4UoHwbvw github.com/decred/dcrd/dcrec/secp256k1/v4 v4.4.0/go.mod h1:ZXNYxsqcloTdSy/rNShjYzMhyjf0LaoftYK0p+A3h40= github.com/dgraph-io/badger/v4 v4.8.0 h1:JYph1ChBijCw8SLeybvPINizbDKWZ5n/GYbz2yhN/bs= github.com/dgraph-io/badger/v4 v4.8.0/go.mod h1:U6on6e8k/RTbUWxqKR0MvugJuVmkxSNc79ap4917h4w= -github.com/dgraph-io/dgo/v230 v230.0.1 h1:kR7gI7/ZZv0jtG6dnedNgNOCxe1cbSG8ekF+pNfReks= -github.com/dgraph-io/dgo/v230 v230.0.1/go.mod 
h1:5FerO2h4LPOxR2XTkOAtqUUPaFdQ+5aBOHXPBJ3nT10= github.com/dgraph-io/ristretto/v2 v2.3.0 h1:qTQ38m7oIyd4GAed/QkUZyPFNMnvVWyazGXRwvOt5zk= github.com/dgraph-io/ristretto/v2 v2.3.0/go.mod h1:gpoRV3VzrEY1a9dWAYV6T1U7YzfgttXdd/ZzL1s9OZM= github.com/dgryski/go-farm v0.0.0-20240924180020-3414d57e47da h1:aIftn67I1fkbMa512G+w+Pxci9hJPB8oMnkcP3iZF38= @@ -57,8 +51,6 @@ github.com/dustin/go-humanize v1.0.1/go.mod h1:Mu1zIs6XwVuF/gI1OepvI0qD18qycQx+m github.com/dvyukov/go-fuzz v0.0.0-20200318091601-be3528f3a813/go.mod h1:11Gm+ccJnvAhCNLlf5+cS9KjtbaD5I5zaZpFMsTHWTw= github.com/ebitengine/purego v0.9.1 h1:a/k2f2HQU3Pi399RPW1MOaZyhKJL9w/xFpKAg4q1s0A= github.com/ebitengine/purego v0.9.1/go.mod h1:iIjxzd6CiRiOG0UyXP+V1+jWqUXVjPKLAI0mRfJZTmQ= -github.com/envoyproxy/go-control-plane v0.9.1-0.20191026205805-5f8ba28d4473/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4= -github.com/envoyproxy/protoc-gen-validate v0.1.0/go.mod h1:iSmxcyjqTsJpI2R4NaDN7+kN2VEUnK/pcBlmesArF7c= github.com/felixge/fgprof v0.9.3/go.mod h1:RdbpDgzqYVh/T9fPELJyV7EYJuHB55UTEULNun8eiPw= github.com/felixge/fgprof v0.9.5 h1:8+vR6yu2vvSKn08urWyEuxx75NWPEvybbkBirEpsbVY= github.com/felixge/fgprof v0.9.5/go.mod h1:yKl+ERSa++RYOs32d8K6WEXCB4uXdLls4ZaZPpayhMM= @@ -70,26 +62,8 @@ github.com/go-logr/stdr v1.2.2/go.mod h1:mMo/vtBO5dYbehREoey6XUKy/eSumjCCveDpRre github.com/gobwas/httphead v0.1.0/go.mod h1:O/RXo79gxV8G+RqlR/otEwx4Q36zl9rqC5u12GKvMCM= github.com/gobwas/pool v0.2.1/go.mod h1:q8bcK0KcYlCgd9e7WYLm9LpyS+YeLd8JVDW6WezmKEw= github.com/gobwas/ws v1.2.1/go.mod h1:hRKAFb8wOxFROYNsT1bqfWnhX+b5MFeJM9r2ZSwg/KY= -github.com/gogo/protobuf v1.3.2 h1:Ov1cvc58UF3b5XjBnZv7+opcTcQFZebYjWzi34vdm4Q= -github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q= -github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q= -github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A= -github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U= -github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U= -github.com/golang/protobuf v1.4.0-rc.1/go.mod h1:ceaxUfeHdC40wWswd/P6IGgMaK3YpKi5j83Wpe3EHw8= -github.com/golang/protobuf v1.4.0-rc.1.0.20200221234624-67d41d38c208/go.mod h1:xKAWHe0F5eneWXFV3EuXVDTCmh+JuBKY0li0aMyXATA= -github.com/golang/protobuf v1.4.0-rc.2/go.mod h1:LlEzMj4AhA7rCAGe4KMBDvJI+AwstrUpVNzEA03Pprs= -github.com/golang/protobuf v1.4.0-rc.4.0.20200313231945-b860323f09d0/go.mod h1:WU3c8KckQ9AFe+yFwt9sWVRKCVIyN9cPHBJSNnbL67w= -github.com/golang/protobuf v1.4.0/go.mod h1:jodUvKwWbYaEsadDk5Fwe5c77LiNKVO9IDvqG2KuDX0= -github.com/golang/protobuf v1.4.1/go.mod h1:U8fpvMrcmy5pZrNK1lt4xCsGvpyWQ/VVv6QDs8UjoX8= -github.com/golang/protobuf v1.5.4 h1:i7eJL8qZTpSEXOPTxNKhASYpMn+8e5Q6AdndVa1dWek= -github.com/golang/protobuf v1.5.4/go.mod h1:lnTiLA8Wa4RWRcIUkrtSVa5nRhsEGBg48fD6rSs7xps= github.com/google/flatbuffers v25.9.23+incompatible h1:rGZKv+wOb6QPzIdkM2KxhBZCDrA0DeN6DNmRDrqIsQU= github.com/google/flatbuffers v25.9.23+incompatible/go.mod h1:1AeVuKshWv4vARoZatz6mlQ0JxURH0Kv5+zNeJKJCa8= -github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M= -github.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU= -github.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU= -github.com/google/go-cmp v0.4.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= github.com/google/go-cmp v0.7.0 
h1:wk8382ETsv4JYUZwIsn6YpYiWiBsYLSJiTsyBybVuN8= github.com/google/go-cmp v0.7.0/go.mod h1:pXiqmnSA92OHEEa9HXL2W4E7lf9JzCmGVUdgjX3N/iU= github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg= @@ -97,8 +71,6 @@ github.com/google/pprof v0.0.0-20211214055906-6f57359322fd/go.mod h1:KgnwoLYCZ8I github.com/google/pprof v0.0.0-20240227163752-401108e1b7e7/go.mod h1:czg5+yv1E0ZGTi6S6vVK1mke0fV+FaUhNGcd6VRS9Ik= github.com/google/pprof v0.0.0-20251007162407-5df77e3f7d1d h1:KJIErDwbSHjnp/SGzE5ed8Aol7JsKiI5X7yWKAtzhM0= github.com/google/pprof v0.0.0-20251007162407-5df77e3f7d1d/go.mod h1:I6V7YzU0XDpsHqbsyrghnFZLO1gwK6NPTNvmetQIk9U= -github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0= -github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo= github.com/gorilla/websocket v1.5.3 h1:saDtZ6Pbx/0u+bgYQ3q96pZgCzfhKXGPqt7kZ72aNNg= github.com/gorilla/websocket v1.5.3/go.mod h1:YR8l580nyteQvAITg2hZ9XVh4b55+EU/adAjf1fMHhE= github.com/hack-pad/safejs v0.1.1 h1:d5qPO0iQ7h2oVtpzGnLExE+Wn9AtytxIfltcS2b9KD8= @@ -111,8 +83,6 @@ github.com/json-iterator/go v1.1.12 h1:PV8peI4a0ysnczrg+LtxykD8LfKY9ML6u2jnxaEnr github.com/json-iterator/go v1.1.12/go.mod h1:e30LSqwooZae/UwlEbR2852Gd8hjQvJoHmT4TnhNGBo= github.com/kardianos/osext v0.0.0-20190222173326-2bc1f35cddc0 h1:iQTw/8FWTuc7uiaSepXwyf3o52HaUYcV+Tu66S3F5GA= github.com/kardianos/osext v0.0.0-20190222173326-2bc1f35cddc0/go.mod h1:1NbS8ALrpOvjt0rHPNLyCIeMtbizbir8U//inJ+zuB8= -github.com/kisielk/errcheck v1.5.0/go.mod h1:pFxgyoBC7bSaBwPgfKdkLd5X25qrDl4LWUI2bnpBCr8= -github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck= github.com/klauspost/compress v1.18.2 h1:iiPHWW0YrcFgpBYhsA6D1+fqHssJscY/Tm/y2Uqnapk= github.com/klauspost/compress v1.18.2/go.mod h1:R0h/fSBs8DE4ENlcrlib3PsXS61voFxhIs2DeRhCvJ4= github.com/klauspost/cpuid/v2 v2.0.9/go.mod h1:FInQzS24/EEf25PyTYn52gqo7WaD8xa0213Md/qVLRg= @@ -141,13 +111,10 @@ github.com/nbd-wtf/go-nostr v0.52.0/go.mod h1:4avYoc9mDGZ9wHsvCOhHH9vPzKucCfuYBt github.com/neo4j/neo4j-go-driver/v5 v5.28.4 h1:7toxehVcYkZbyxV4W3Ib9VcnyRBQPucF+VwNNmtSXi4= github.com/neo4j/neo4j-go-driver/v5 v5.28.4/go.mod h1:Vff8OwT7QpLm7L2yYr85XNWe9Rbqlbeb9asNXJTHO4k= github.com/orisano/pixelmatch v0.0.0-20220722002657-fb0b55479cde/go.mod h1:nZgzbfBr3hhjoZnS66nKrHmduYNpc34ny7RK4z5/HM0= -github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4= -github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0= github.com/pkg/profile v1.7.0 h1:hnbDkaNWPCLMO9wGLdBFTIZvzDrDfBM2072E1S9gJkA= github.com/pkg/profile v1.7.0/go.mod h1:8Uer0jas47ZQMJ7VD+OHknK4YDY07LPUC6dEvqDjvNo= github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM= github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= -github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA= github.com/puzpuzpuz/xsync/v3 v3.5.1 h1:GJYJZwO6IdxN/IKbneznS6yPkVC+c3zyY/j19c++5Fg= github.com/puzpuzpuz/xsync/v3 v3.5.1/go.mod h1:VjzYrABPabuM4KyBh1Ftq6u8nhwY5tBPKP9jpmh0nnA= github.com/rogpeppe/go-internal v1.14.1 h1:UQB4HGPB6osV0SQTLymcB4TgvyWu6ZyliaW0tI/otEQ= @@ -181,8 +148,6 @@ github.com/twitchyliquid64/golang-asm v0.15.1 h1:SU5vSMR7hnwNxj24w34ZyCi/FmDZTkS github.com/twitchyliquid64/golang-asm v0.15.1/go.mod h1:a1lVb/DtPvCB8fslRZhAngC2+aY1QWCk3Cedj/Gdt08= github.com/vertex-lab/nostr-sqlite v0.3.2 h1:8nZYYIwiKnWLA446qA/wL/Gy+bU0kuaxdLfUyfeTt/E= 
github.com/vertex-lab/nostr-sqlite v0.3.2/go.mod h1:5bw1wMgJhSdrumsZAWxqy+P0u1g+q02PnlGQn15dnSM= -github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74= -github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74= go-simpler.org/env v0.12.0 h1:kt/lBts0J1kjWJAnB740goNdvwNxt5emhYngL0Fzufs= go-simpler.org/env v0.12.0/go.mod h1:cc/5Md9JCUM7LVLtN0HYjPTDcI3Q8TDaPlNTAlDU+WI= go.opentelemetry.io/auto/sdk v1.2.1 h1:jXsnJ4Lmnqd11kwkBV2LgLoFMZKizbCi5fNZ/ipaZ64= @@ -191,10 +156,6 @@ go.opentelemetry.io/otel v1.38.0 h1:RkfdswUDRimDg0m2Az18RKOsnI8UDzppJAtj01/Ymk8= go.opentelemetry.io/otel v1.38.0/go.mod h1:zcmtmQ1+YmQM9wrNsTGV/q/uyusom3P8RxwExxkZhjM= go.opentelemetry.io/otel/metric v1.38.0 h1:Kl6lzIYGAh5M159u9NgiRkmoMKjvbsKtYRwgfrA6WpA= go.opentelemetry.io/otel/metric v1.38.0/go.mod h1:kB5n/QoRM8YwmUahxvI3bO34eVtQf2i4utNVLr9gEmI= -go.opentelemetry.io/otel/sdk v1.37.0 h1:ItB0QUqnjesGRvNcmAcU0LyvkVyGJ2xftD29bWdDvKI= -go.opentelemetry.io/otel/sdk v1.37.0/go.mod h1:VredYzxUvuo2q3WRcDnKDjbdvmO0sCzOvVAiY+yUkAg= -go.opentelemetry.io/otel/sdk/metric v1.37.0 h1:90lI228XrB9jCMuSdA0673aubgRobVZFhbjxHHspCPc= -go.opentelemetry.io/otel/sdk/metric v1.37.0/go.mod h1:cNen4ZWfiD37l5NhS+Keb5RXVWZWpRE+9WyVCpbo5ps= go.opentelemetry.io/otel/trace v1.38.0 h1:Fxk5bKrDZJUH+AMyyIXGcFAPah0oRcT+LuNtJrmcNLE= go.opentelemetry.io/otel/trace v1.38.0/go.mod h1:j1P9ivuFsTceSWe1oY+EeW3sc+Pp42sO++GHkg4wwhs= go.uber.org/atomic v1.11.0 h1:ZvwS0R+56ePWxUNi+Atn9dWONBPp/AUETXlHW0DxSjE= @@ -203,92 +164,40 @@ golang.org/x/arch v0.15.0 h1:QtOrQd0bTUnhNVNndMpLHNWrDmYzZ2KDqSrEymqInZw= golang.org/x/arch v0.15.0/go.mod h1:JmwW7aLIoRUKgaTzhkiEFxvcEiQGyOg9BMonBJUS7EE= golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w= golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI= -golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto= golang.org/x/crypto v0.45.0 h1:jMBrvKuj23MTlT0bQEOBcAE0mjg8mK9RXFhRH6nyF3Q= golang.org/x/crypto v0.45.0/go.mod h1:XTGrrkGJve7CYK7J8PEww4aY7gM3qMCElcJQ8n8JdX4= -golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA= golang.org/x/exp v0.0.0-20251113190631-e25ba8c21ef6 h1:zfMcR1Cs4KNuomFFgGefv5N0czO2XZpUbxGUy8i8ug0= golang.org/x/exp v0.0.0-20251113190631-e25ba8c21ef6/go.mod h1:46edojNIoXTNOhySWIWdix628clX9ODXwPsQuG6hsK0= golang.org/x/exp/typeparams v0.0.0-20251023183803-a4bb9ffd2546 h1:HDjDiATsGqvuqvkDvgJjD1IgPrVekcSXVVE21JwvzGE= golang.org/x/exp/typeparams v0.0.0-20251023183803-a4bb9ffd2546/go.mod h1:4Mzdyp/6jzw9auFDJ3OMF5qksa7UvPnzKqTVGcb04ms= -golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE= -golang.org/x/lint v0.0.0-20190227174305-5b3e6a55c961/go.mod h1:wehouNa3lNwaWXcvxsM5YxQ5yQlVC4a0KAMCusXpPoU= -golang.org/x/lint v0.0.0-20190313153728-d0100b6bd8b3/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc= golang.org/x/lint v0.0.0-20241112194109-818c5a804067 h1:adDmSQyFTCiv19j015EGKJBoaa7ElV0Q1Wovb/4G7NA= golang.org/x/lint v0.0.0-20241112194109-818c5a804067/go.mod h1:3xt1FjdF8hUf6vQPIChWIBhFzV8gjjsPE/fR3IyQdNY= golang.org/x/mod v0.1.1-0.20191105210325-c90efee705ee/go.mod h1:QqPTAvyqsEbceGzBzNggFXnrqF1CaUcvgkdR5Ot7KZg= -golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= -golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= golang.org/x/mod 
v0.30.0 h1:fDEXFVZ/fmCKProc/yAXXUijritrDzahmwwefnjoPFk= golang.org/x/mod v0.30.0/go.mod h1:lAsf5O2EvJeSFMiBxXDki7sCgAxEUcZHXoXMKT4GJKc= -golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= -golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= -golang.org/x/net v0.0.0-20190213061140-3a22650c66bd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= -golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= -golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= -golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU= golang.org/x/net v0.47.0 h1:Mx+4dIFzqraBXUugkia1OOvlD6LemFo1ALMHjrXDOhY= golang.org/x/net v0.47.0/go.mod h1:/jNxtkgq5yWUGYkaZGqo27cfGZ1c5Nen03aYrrKpVRU= -golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U= -golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= -golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= -golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= -golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.18.0 h1:kr88TuHDroi+UVf+0hZnirlk8o8T+4MrK6mr60WkH/I= golang.org/x/sync v0.18.0/go.mod h1:9KTHXmSnoGruLpwFjVSX0lNNA75CykiMECbovNTZqGI= -golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20211007075335-d3039528d8ac/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.0.0-20220310020820-b874c991c1a5/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.38.0 h1:3yZWxaJjBmCWXqhN1qh02AkOnCQ1poK6oF+a7xWL6Gc= golang.org/x/sys v0.38.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks= golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= -golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ= golang.org/x/text v0.31.0 h1:aC8ghyu4JhP8VojJ2lEHBnochRno1sgL6nEi9WGFGMM= golang.org/x/text v0.31.0/go.mod h1:tKRAlv61yKIjGGHX/4tP1LTbc13YSec1pxVEWXzfoeM= -golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ= -golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ= -golang.org/x/tools v0.0.0-20190226205152-f727befe758c/go.mod h1:9Yl7xja0Znq3iFh3HoIrodX9oNMXvdceNzlUR8zjMvY= -golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod 
h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs= -golang.org/x/tools v0.0.0-20190524140312-2c0ae7006135/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q= -golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo= golang.org/x/tools v0.0.0-20200130002326-2f3ba24bd6e7/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28= -golang.org/x/tools v0.0.0-20200619180055-7c47624df98f/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE= -golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA= golang.org/x/tools v0.39.0 h1:ik4ho21kwuQln40uelmciQPp9SipgNDdrafrYA4TmQQ= golang.org/x/tools v0.39.0/go.mod h1:JnefbkDPyD8UU2kI5fuf8ZX4/yUeh9W877ZeBONxUqQ= golang.org/x/tools/go/expect v0.1.1-deprecated h1:jpBZDwmgPhXsKZC6WhL20P4b/wmnpsEAGHaNy0n/rJM= golang.org/x/tools/go/expect v0.1.1-deprecated/go.mod h1:eihoPOH+FgIqa3FpoTwguz/bVUSGBlGQU67vpBeOrBY= -golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= -golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= -golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= -gonum.org/v1/gonum v0.16.0 h1:5+ul4Swaf3ESvrOnidPp4GZbzf0mxVQpDCYUQE7OJfk= -gonum.org/v1/gonum v0.16.0/go.mod h1:fef3am4MQ93R2HHpKnLk4/Tbh/s0+wqD5nfa6Pnwy4E= -google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM= -google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4= -google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc= -google.golang.org/genproto v0.0.0-20190819201941-24fa4b261c55/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc= -google.golang.org/genproto v0.0.0-20200526211855-cb27e3aa2013 h1:+kGHl1aib/qcwaRi1CbqBZ1rk19r85MNUf8HaBghugY= -google.golang.org/genproto v0.0.0-20200526211855-cb27e3aa2013/go.mod h1:NbSheEEYHJ7i3ixzK3sjbqSGDJWnxyFXZblF3eUsNvo= -google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c= -google.golang.org/grpc v1.23.0/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg= -google.golang.org/grpc v1.27.0/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk= -google.golang.org/grpc v1.76.0 h1:UnVkv1+uMLYXoIz6o7chp59WfQUYA2ex/BXQ9rHZu7A= -google.golang.org/grpc v1.76.0/go.mod h1:Ju12QI8M6iQJtbcsV+awF5a4hfJMLi4X0JLo94ULZ6c= -google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8= -google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0= -google.golang.org/protobuf v0.0.0-20200228230310-ab0ca4ff8a60/go.mod h1:cfTl7dwQJ+fmap5saPgwCLgHXTUD7jkjRqWcaiX5VyM= -google.golang.org/protobuf v1.20.1-0.20200309200217-e05f789c0967/go.mod h1:A+miEFZTKqfCUM6K7xSMQL9OKL/b6hQv+e19PK+JZNE= -google.golang.org/protobuf v1.21.0/go.mod h1:47Nbq4nVaFHyn7ilMalzfO3qCViNmqZ2kzikPIcrTAo= -google.golang.org/protobuf v1.22.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU= -google.golang.org/protobuf v1.23.1-0.20200526195155-81db48ad09cc/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU= google.golang.org/protobuf v1.36.10 h1:AYd7cD/uASjIL6Q9LiTjz8JLcrh/88q5UObnmY3aOOE= google.golang.org/protobuf v1.36.10/go.mod 
h1:HTf+CrKn2C3g5S8VImy6tdcUvCska2kB7j23XfzDpco= gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= @@ -297,8 +206,6 @@ gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EV gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA= gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= -honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4= -honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4= honnef.co/go/tools v0.6.1 h1:R094WgE8K4JirYjBaOpz/AvTyUu/3wbmAoskKN/pxTI= honnef.co/go/tools v0.6.1/go.mod h1:3puzxxljPCe8RGJX7BIy1plGbxEOZni5mR2aXe3/uk4= lol.mleku.dev v1.0.5 h1:irwfwz+Scv74G/2OXmv05YFKOzUNOVZ735EAkYgjgM8= diff --git a/main.go b/main.go index e2e8da7..fa60bb0 100644 --- a/main.go +++ b/main.go @@ -445,7 +445,7 @@ func makeDatabaseConfig(cfg *config.C) *database.DatabaseConfig { blockCacheMB, indexCacheMB, queryCacheSizeMB, queryCacheMaxAge, inlineEventThreshold, - dgraphURL, neo4jURI, neo4jUser, neo4jPassword := cfg.GetDatabaseConfigValues() + neo4jURI, neo4jUser, neo4jPassword := cfg.GetDatabaseConfigValues() return &database.DatabaseConfig{ DataDir: dataDir, @@ -455,7 +455,6 @@ func makeDatabaseConfig(cfg *config.C) *database.DatabaseConfig { QueryCacheSizeMB: queryCacheSizeMB, QueryCacheMaxAge: queryCacheMaxAge, InlineEventThreshold: inlineEventThreshold, - DgraphURL: dgraphURL, Neo4jURI: neo4jURI, Neo4jUser: neo4jUser, Neo4jPassword: neo4jPassword, diff --git a/pkg/database/factory.go b/pkg/database/factory.go index 6243d09..bbcef51 100644 --- a/pkg/database/factory.go +++ b/pkg/database/factory.go @@ -24,9 +24,6 @@ type DatabaseConfig struct { QueryCacheMaxAge time.Duration // ORLY_QUERY_CACHE_MAX_AGE InlineEventThreshold int // ORLY_INLINE_EVENT_THRESHOLD - // DGraph-specific settings - DgraphURL string // ORLY_DGRAPH_URL - // Neo4j-specific settings Neo4jURI string // ORLY_NEO4J_URI Neo4jUser string // ORLY_NEO4J_USER @@ -62,12 +59,6 @@ func NewDatabaseWithConfig( case "badger", "": // Use the existing badger implementation return NewWithConfig(ctx, cancel, cfg) - case "dgraph": - // Use the dgraph implementation - if newDgraphDatabase == nil { - return nil, fmt.Errorf("dgraph database backend not available (import _ \"next.orly.dev/pkg/dgraph\")") - } - return newDgraphDatabase(ctx, cancel, cfg) case "neo4j": // Use the neo4j implementation if newNeo4jDatabase == nil { @@ -81,20 +72,10 @@ func NewDatabaseWithConfig( } return newWasmDBDatabase(ctx, cancel, cfg) default: - return nil, fmt.Errorf("unsupported database type: %s (supported: badger, dgraph, neo4j, wasmdb)", dbType) + return nil, fmt.Errorf("unsupported database type: %s (supported: badger, neo4j, wasmdb)", dbType) } } -// newDgraphDatabase creates a dgraph database instance -// This is defined here to avoid import cycles -var newDgraphDatabase func(context.Context, context.CancelFunc, *DatabaseConfig) (Database, error) - -// RegisterDgraphFactory registers the dgraph database factory -// This is called from the dgraph package's init() function -func RegisterDgraphFactory(factory func(context.Context, context.CancelFunc, *DatabaseConfig) (Database, error)) { - newDgraphDatabase = factory -} - // newNeo4jDatabase creates a neo4j database instance // This is defined here to avoid 
import cycles var newNeo4jDatabase func(context.Context, context.CancelFunc, *DatabaseConfig) (Database, error) diff --git a/pkg/version/version b/pkg/version/version index 090488e..bb8e4e7 100644 --- a/pkg/version/version +++ b/pkg/version/version @@ -1 +1 @@ -v0.32.2 \ No newline at end of file +v0.32.3 \ No newline at end of file diff --git a/scripts/DGRAPH_TESTING.md b/scripts/DGRAPH_TESTING.md deleted file mode 100644 index ff5ecfb..0000000 --- a/scripts/DGRAPH_TESTING.md +++ /dev/null @@ -1,276 +0,0 @@ -# Dgraph Integration Testing - -This directory contains scripts and configuration for testing the ORLY dgraph integration. - -## Quick Start - -### 1. Start Dgraph Server - -```bash -# Using the convenience script -./scripts/dgraph-start.sh - -# Or manually with docker-compose -cd scripts -docker-compose -f dgraph-docker-compose.yml up -d - -# Or directly with docker -docker run -d \ - -p 8080:8080 \ - -p 9080:9080 \ - -p 8000:8000 \ - --name dgraph-orly \ - dgraph/standalone:latest -``` - -### 2. Run Dgraph Tests - -```bash -# Run all dgraph package tests -./scripts/test-dgraph.sh - -# Run tests with relay-tester -./scripts/test-dgraph.sh --relay-tester -``` - -### 3. Manual Testing - -```bash -# Start ORLY with dgraph backend -export ORLY_DB_TYPE=dgraph -export ORLY_DGRAPH_URL=localhost:9080 -./orly - -# In another terminal, run relay-tester -go run cmd/relay-tester/main.go -url ws://localhost:3334 -``` - -## Test Files - -The dgraph package includes comprehensive tests: - -- **testmain_test.go** - Test configuration and logging setup -- **helpers_test.go** - Helper functions for test setup/teardown -- **save-event_test.go** - Event storage tests -- **query-events_test.go** - Event query tests - -All tests mirror the existing badger tests to ensure feature parity. - -## Test Coverage - -The dgraph tests cover: - -✅ **Event Storage** -- Saving events from examples.Cache -- Duplicate event rejection -- Deletion event validation - -✅ **Event Queries** -- Query by ID -- Query by kind -- Query by author -- Query by time range -- Query by tags -- Event counting - -✅ **Advanced Features** -- Replaceable events (kind 0) -- Parameterized replaceable events (kind 30000+) -- Event deletion (kind 5) -- Event replacement logic - -## Requirements - -### Dgraph Server - -The tests require a running dgraph server. Tests will be skipped if dgraph is not available. - -**Endpoints:** -- gRPC: `localhost:9080` (required for ORLY) -- HTTP: `localhost:8080` (for health checks) -- Ratel UI: `localhost:8000` (optional, for debugging) - -**Custom Endpoint:** -```bash -export ORLY_DGRAPH_URL=remote.server.com:9080 -./scripts/test-dgraph.sh -``` - -### Docker - -The docker-compose setup requires: -- Docker Engine 20.10+ -- Docker Compose 1.29+ (or docker-compose plugin) - -## Test Workflow - -### Running Tests Locally - -```bash -# 1. Start dgraph -./scripts/dgraph-start.sh - -# 2. Run tests -./scripts/test-dgraph.sh - -# 3. Clean up when done -cd scripts && docker-compose -f dgraph-docker-compose.yml down -``` - -### CI/CD Integration - -For CI pipelines, use the docker-compose file: - -```yaml -# Example GitHub Actions workflow -services: - dgraph: - image: dgraph/standalone:latest - ports: - - 8080:8080 - - 9080:9080 - -steps: - - name: Run dgraph tests - run: | - export ORLY_DGRAPH_URL=localhost:9080 - CGO_ENABLED=0 go test -v ./pkg/dgraph/... 
-``` - -## Debugging - -### View Dgraph Logs - -```bash -docker logs dgraph-orly-test -f -``` - -### Access Ratel UI - -Open http://localhost:8000 in your browser to: -- View schema -- Run DQL queries -- Inspect data - -### Enable Test Logging - -```bash -export TEST_LOG=1 -./scripts/test-dgraph.sh -``` - -### Manual DQL Queries - -```bash -# Using curl -curl -X POST localhost:8080/query -d '{ - q(func: type(Event)) { - uid - event.id - event.kind - event.created_at - } -}' - -# Using grpcurl (if installed) -grpcurl -plaintext -d '{ - "query": "{ q(func: type(Event)) { uid event.id } }" -}' localhost:9080 api.Dgraph/Query -``` - -## Troubleshooting - -### Tests Skip with "Dgraph server not available" - -**Solution:** Ensure dgraph is running: -```bash -docker ps | grep dgraph -./scripts/dgraph-start.sh -``` - -### Connection Refused Errors - -**Symptoms:** -``` -failed to connect to dgraph at localhost:9080: connection refused -``` - -**Solutions:** -1. Check dgraph is running: `docker ps` -2. Check port mapping: `docker port dgraph-orly-test` -3. Check firewall rules -4. Verify ORLY_DGRAPH_URL is correct - -### Schema Application Failed - -**Symptoms:** -``` -failed to apply schema: ... -``` - -**Solutions:** -1. Check dgraph logs: `docker logs dgraph-orly-test` -2. Drop all data and retry: Use `dropAll` in test setup -3. Verify dgraph version compatibility - -### Tests Timeout - -**Symptoms:** -``` -panic: test timed out after 10m -``` - -**Solutions:** -1. Increase timeout: `go test -timeout 20m ./pkg/dgraph/...` -2. Check dgraph performance: May need more resources -3. Reduce test dataset size - -## Performance Benchmarks - -Compare dgraph vs badger performance: - -```bash -# Run badger benchmarks -go test -bench=. ./pkg/database/... - -# Run dgraph benchmarks -go test -bench=. ./pkg/dgraph/... -``` - -## Test Data - -Tests use `pkg/encoders/event/examples.Cache` which contains: -- ~100 real Nostr events -- Various kinds (text notes, metadata, etc.) -- Different authors and timestamps -- Events with tags and relationships - -## Cleanup - -### Remove Test Data - -```bash -# Stop and remove containers -cd scripts -docker-compose -f dgraph-docker-compose.yml down - -# Remove volumes -docker volume rm scripts_dgraph-data -``` - -### Reset Dgraph - -```bash -# Drop all data (via test helper) -# The dropAll() function is called in test setup - -# Or manually via HTTP -curl -X POST localhost:8080/alter -d '{"drop_all": true}' -``` - -## Related Documentation - -- [Dgraph Implementation Status](../DGRAPH_IMPLEMENTATION_STATUS.md) -- [Package README](../pkg/dgraph/README.md) -- [Dgraph Documentation](https://dgraph.io/docs/) -- [DQL Query Language](https://dgraph.io/docs/query-language/) diff --git a/scripts/DOCKER_TESTING.md b/scripts/DOCKER_TESTING.md index 4ce4609..906f357 100644 --- a/scripts/DOCKER_TESTING.md +++ b/scripts/DOCKER_TESTING.md @@ -1,11 +1,11 @@ # Docker-Based Integration Testing -This guide covers running ORLY and Dgraph together in Docker containers for integration testing. +This guide covers running ORLY in Docker containers for integration testing. 
## Overview The Docker setup provides: -- **Isolated Environment**: Dgraph + ORLY in containers +- **Isolated Environment**: ORLY in containers - **Automated Testing**: Health checks and dependency management - **Reproducible Tests**: Consistent environment across systems - **Easy Cleanup**: Remove everything with one command @@ -16,19 +16,17 @@ The Docker setup provides: ┌─────────────────────────────────────────────┐ │ Docker Network (orly-network) │ │ │ -│ ┌──────────────────┐ ┌─────────────────┐ │ -│ │ Dgraph │ │ ORLY Relay │ │ -│ │ standalone │◄─┤ (dgraph mode) │ │ -│ │ │ │ │ │ -│ │ :8080 (HTTP) │ │ :3334 (WS) │ │ -│ │ :9080 (gRPC) │ │ │ │ -│ │ :8000 (Ratel) │ │ │ │ -│ └──────────────────┘ └─────────────────┘ │ -│ │ │ │ -└─────────┼───────────────────────┼───────────┘ - │ │ - Published Published - to host to host +│ ┌─────────────────┐ ┌─────────────────┐ │ +│ │ ORLY Relay │ │ relay-tester │ │ +│ │ (badger mode) │ │ (optional) │ │ +│ │ │ │ │ │ +│ │ :3334 (WS) │ │ │ │ +│ └─────────────────┘ └─────────────────┘ │ +│ │ │ +└─────────┼───────────────────────────────────┘ + │ + Published + to host ``` ## Quick Start @@ -107,18 +105,12 @@ Builds relay-tester for automated testing: Orchestrates the full stack: **Services:** -1. **dgraph** - Database backend - - Health check via HTTP - - Persistent volume for data - - Exposed ports for debugging - -2. **orly** - Relay server - - Depends on dgraph (waits for healthy) - - Configured with ORLY_DB_TYPE=dgraph +1. **orly** - Relay server + - Configured with ORLY_DB_TYPE=badger (default) - Health check via HTTP - Auto-restart on failure -3. **relay-tester** - Test runner +2. **relay-tester** - Test runner - Profile: test (optional) - Runs tests against ORLY - Exits after completion @@ -130,10 +122,6 @@ Orchestrates the full stack: The docker-compose file sets: ```yaml -# Database -ORLY_DB_TYPE: dgraph -ORLY_DGRAPH_URL: dgraph:9080 # Internal network name - # Server ORLY_LISTEN: 0.0.0.0 ORLY_PORT: 3334 @@ -141,7 +129,7 @@ ORLY_DATA_DIR: /data # Application ORLY_LOG_LEVEL: info -ORLY_APP_NAME: ORLY-Dgraph-Test +ORLY_APP_NAME: ORLY-Test ORLY_ACL_MODE: none ``` @@ -158,13 +146,12 @@ EOF ### Volumes **Persistent Data:** -- `dgraph-data:/dgraph` - Dgraph database -- `orly-data:/data` - ORLY metadata +- `orly-data:/data` - ORLY database and metadata **Inspect Volumes:** ```bash docker volume ls -docker volume inspect scripts_dgraph-data +docker volume inspect scripts_orly-data ``` ### Networks @@ -185,12 +172,11 @@ docker volume inspect scripts_dgraph-data **What it does:** 1. Stops any existing containers -2. Starts dgraph and waits for health -3. Starts ORLY and waits for health -4. Verifies HTTP connectivity -5. Tests WebSocket (if websocat installed) -6. Shows container status -7. Cleans up (unless --keep-running) +2. Starts ORLY and waits for health +3. Verifies HTTP connectivity +4. Tests WebSocket (if websocat installed) +5. Shows container status +6. 
Cleans up (unless --keep-running) ### With Relay-Tester @@ -211,7 +197,7 @@ docker volume inspect scripts_dgraph-data ./scripts/test-docker.sh --keep-running # Make changes to code -vim pkg/dgraph/save-event.go +vim pkg/database/save-event.go # Rebuild and restart docker-compose -f scripts/docker-compose-test.yml up -d --build orly @@ -236,7 +222,6 @@ docker-compose -f scripts/docker-compose-test.yml logs -f # Specific service docker logs orly-relay -f -docker logs orly-dgraph -f # Last N lines docker logs orly-relay --tail 50 @@ -253,19 +238,8 @@ docker exec orly-relay ps aux # Inspect data directory docker exec orly-relay ls -la /data - -# Query dgraph -docker exec orly-dgraph curl http://localhost:8080/health ``` -### Access Ratel UI - -Open http://localhost:8000 in browser: -- View dgraph schema -- Run DQL queries -- Inspect stored data -- Monitor performance - ### Network Inspection ```bash @@ -274,10 +248,6 @@ docker network ls # Inspect orly network docker network inspect scripts_orly-network - -# Test connectivity -docker exec orly-relay ping dgraph -docker exec orly-relay nc -zv dgraph 9080 ``` ### Health Check Status @@ -298,10 +268,9 @@ docker inspect --format='{{json .State.Health}}' orly-relay | jq ```bash # Ensure library exists -ls -l pkg/crypto/p8k/libsecp256k1.so +ls -l libsecp256k1.so -# Rebuild if needed -cd pkg/crypto/p8k && make +# Library should be in repository root ``` **Error: Go module download fails** @@ -324,24 +293,9 @@ docker logs orly-relay # Common issues: # - Port already in use: docker ps (check for conflicts) -# - Dgraph not ready: docker logs orly-dgraph # - Bad configuration: docker exec orly-relay env ``` -**Cannot connect to dgraph** - -```bash -# Verify dgraph is healthy -docker inspect orly-dgraph | grep Health - -# Check network connectivity -docker exec orly-relay ping dgraph -docker exec orly-relay nc -zv dgraph 9080 - -# Verify dgraph is listening -docker exec orly-dgraph netstat -tlnp | grep 9080 -``` - **WebSocket connection fails** ```bash @@ -364,7 +318,6 @@ sudo iptables -L | grep 3334 start_period: 60s # Default is 20-30s # Pre-pull images -docker pull dgraph/standalone:latest docker pull golang:1.25-alpine ``` @@ -415,7 +368,6 @@ jobs: name: container-logs path: | scripts/orly-relay.log - scripts/dgraph.log ``` ### GitLab CI Example @@ -524,14 +476,12 @@ docker-compose -f docker-compose-test.yml stop # Remove only one service docker-compose -f docker-compose-test.yml rm -s -f orly -# Clear dgraph data -docker volume rm scripts_dgraph-data +# Clear ORLY data +docker volume rm scripts_orly-data ``` ## Related Documentation -- [Main Testing Guide](DGRAPH_TESTING.md) -- [Package Tests](../pkg/dgraph/TESTING.md) - [Docker Documentation](https://docs.docker.com/) - [Docker Compose](https://docs.docker.com/compose/) @@ -540,7 +490,7 @@ docker volume rm scripts_dgraph-data 1. **Always use health checks** - Ensure services are ready 2. **Use specific tags** - Don't rely on :latest in production 3. **Limit resources** - Prevent container resource exhaustion -4. **Volume backups** - Backup dgraph-data volume before updates +4. **Volume backups** - Backup orly-data volume before updates 5. **Network isolation** - Use custom networks for security 6. **Read-only root** - Run as non-root user 7. 
**Clean up regularly** - Remove unused containers/volumes diff --git a/scripts/README.md b/scripts/README.md index ece7b73..b69abb8 100644 --- a/scripts/README.md +++ b/scripts/README.md @@ -4,20 +4,6 @@ This directory contains automation scripts for building, testing, and deploying ## Quick Reference -### Dgraph Integration Testing - -```bash -# Local testing (requires dgraph server) -./dgraph-start.sh # Start dgraph server -./test-dgraph.sh # Run dgraph package tests -./test-dgraph.sh --relay-tester # Run tests + relay-tester - -# Docker testing (containers for everything) -./docker-build.sh # Build ORLY docker image -./test-docker.sh # Run integration tests in containers -./test-docker.sh --relay-tester --keep-running # Full test, keep running -``` - ### Build & Deploy ```bash @@ -26,44 +12,14 @@ This directory contains automation scripts for building, testing, and deploying ./update-embedded-web.sh # Build and embed web UI ``` -## Script Descriptions - -### Dgraph Testing Scripts - -#### dgraph-start.sh -Starts dgraph server using docker-compose for local testing. +### Testing -**Usage:** ```bash -./dgraph-start.sh -``` - -**What it does:** -- Checks if dgraph is already running -- Starts dgraph via docker-compose -- Waits for health check -- Shows endpoints and commands - -#### dgraph-docker-compose.yml -Docker Compose configuration for standalone dgraph server. - -**Ports:** -- 8080: HTTP API -- 9080: gRPC (ORLY connects here) -- 8000: Ratel UI - -#### test-dgraph.sh -Runs dgraph package tests against a running dgraph server. - -**Usage:** -```bash -./test-dgraph.sh # Just tests -./test-dgraph.sh --relay-tester # Tests + relay-tester +./test.sh # Run all Go tests +./test-docker.sh # Run integration tests in containers ``` -**Requirements:** -- Dgraph server running at ORLY_DGRAPH_URL (default: localhost:9080) -- Go 1.21+ +## Script Descriptions ### Docker Integration Scripts @@ -81,16 +37,14 @@ Builds Docker images for ORLY and optionally relay-tester. - orly-relay-tester:latest (if --with-tester) #### docker-compose-test.yml -Full-stack docker-compose with dgraph, ORLY, and relay-tester. +Full-stack docker-compose with ORLY and relay-tester. **Services:** -- dgraph: Database backend -- orly: Relay with dgraph backend +- orly: Relay with Badger backend - relay-tester: Protocol tests (optional, profile: test) **Features:** - Health checks for all services -- Dependency management (ORLY waits for dgraph) - Custom network with DNS - Persistent volumes @@ -109,12 +63,11 @@ Comprehensive integration testing in Docker containers. **What it does:** 1. Stops any existing containers 2. Optionally rebuilds images -3. Starts dgraph and waits for health -4. Starts ORLY and waits for health -5. Verifies connectivity -6. Optionally runs relay-tester -7. Shows status and endpoints -8. Cleanup (unless --keep-running) +3. Starts ORLY and waits for health +4. Verifies connectivity +5. Optionally runs relay-tester +6. Shows status and endpoints +7. 
Cleanup (unless --keep-running) ### Build Scripts @@ -166,10 +119,6 @@ TEST_LOG=1 ./test.sh # With logging ### Common Variables ```bash -# Dgraph -export ORLY_DGRAPH_URL=localhost:9080 # Dgraph endpoint -export ORLY_DB_TYPE=dgraph # Use dgraph backend - # Logging export ORLY_LOG_LEVEL=debug # Log verbosity export TEST_LOG=1 # Enable test logging @@ -180,6 +129,10 @@ export ORLY_LISTEN=0.0.0.0 # Listen address # Data export ORLY_DATA_DIR=/path/to/data # Data directory + +# Database backend +export ORLY_DB_TYPE=badger # Use badger backend (default) +export ORLY_DB_TYPE=neo4j # Use Neo4j backend ``` ### Script-Specific Variables @@ -188,9 +141,6 @@ export ORLY_DATA_DIR=/path/to/data # Data directory # Docker scripts export SKIP_BUILD=true # Skip image rebuild export KEEP_RUNNING=true # Don't cleanup containers - -# Dgraph scripts -export DGRAPH_VERSION=latest # Dgraph image tag ``` ## File Organization @@ -198,13 +148,8 @@ export DGRAPH_VERSION=latest # Dgraph image tag ``` scripts/ ├── README.md # This file -├── DGRAPH_TESTING.md # Dgraph testing guide ├── DOCKER_TESTING.md # Docker testing guide │ -├── dgraph-start.sh # Start dgraph server -├── dgraph-docker-compose.yml # Dgraph docker config -├── test-dgraph.sh # Run dgraph tests -│ ├── docker-build.sh # Build docker images ├── docker-compose-test.yml # Full stack docker config ├── test-docker.sh # Run docker integration tests @@ -217,43 +162,35 @@ scripts/ ## Workflows -### Local Development with Dgraph +### Local Development ```bash -# 1. Start dgraph -./scripts/dgraph-start.sh - -# 2. Run ORLY locally with dgraph -export ORLY_DB_TYPE=dgraph -export ORLY_DGRAPH_URL=localhost:9080 +# 1. Run ORLY locally ./orly -# 3. Test changes +# 2. Test changes go run cmd/relay-tester/main.go -url ws://localhost:3334 -# 4. Run unit tests -./scripts/test-dgraph.sh +# 3. Run unit tests +./scripts/test.sh ``` ### Docker Development ```bash -# 1. Make changes -vim pkg/dgraph/save-event.go - -# 2. Build and test in containers +# 1. Build and test in containers ./scripts/test-docker.sh --relay-tester --keep-running -# 3. Make more changes +# 2. Make changes -# 4. Rebuild just ORLY +# 3. Rebuild just ORLY cd scripts docker-compose -f docker-compose-test.yml up -d --build orly -# 5. View logs +# 4. View logs docker logs orly-relay -f -# 6. Stop when done +# 5. Stop when done docker-compose -f docker-compose-test.yml down ``` @@ -288,19 +225,6 @@ journalctl -u orly -f ## Troubleshooting -### Dgraph Not Available - -```bash -# Check if running -docker ps | grep dgraph - -# Start it -./scripts/dgraph-start.sh - -# Check logs -docker logs dgraph-orly-test -f -``` - ### Port Conflicts ```bash @@ -352,9 +276,6 @@ newgrp docker # Check docker docker --version docker-compose --version - - # Check dgraph - curl http://localhost:9080/health ``` 3. 
**Clean up after testing** @@ -379,15 +300,12 @@ newgrp docker docker logs orly-relay --tail 100 # Test output - ./scripts/test-dgraph.sh 2>&1 | tee test.log + ./scripts/test.sh 2>&1 | tee test.log ``` ## Related Documentation -- [Dgraph Testing Guide](DGRAPH_TESTING.md) - [Docker Testing Guide](DOCKER_TESTING.md) -- [Package Tests](../pkg/dgraph/TESTING.md) -- [Main Implementation Status](../DGRAPH_IMPLEMENTATION_STATUS.md) ## Contributing diff --git a/scripts/dgraph-docker-compose.yml b/scripts/dgraph-docker-compose.yml deleted file mode 100644 index 1623af8..0000000 --- a/scripts/dgraph-docker-compose.yml +++ /dev/null @@ -1,25 +0,0 @@ -version: '3.8' - -services: - dgraph: - image: dgraph/standalone:latest - container_name: dgraph-orly-test - ports: - - "8080:8080" # HTTP API - - "9080:9080" # gRPC - - "8000:8000" # Ratel UI - volumes: - - dgraph-data:/dgraph - environment: - - DGRAPH_ALPHA_JAEGER_COLLECTOR=false - restart: unless-stopped - healthcheck: - test: ["CMD", "curl", "-f", "http://localhost:8080/health"] - interval: 10s - timeout: 5s - retries: 5 - start_period: 20s - -volumes: - dgraph-data: - driver: local diff --git a/scripts/dgraph-start.sh b/scripts/dgraph-start.sh deleted file mode 100755 index ed0c684..0000000 --- a/scripts/dgraph-start.sh +++ /dev/null @@ -1,50 +0,0 @@ -#!/bin/bash -# Quick script to start dgraph for testing - -SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" - -echo "Starting dgraph server for ORLY testing..." -cd "$SCRIPT_DIR" - -# Check if already running -if docker ps | grep -q dgraph-orly-test; then - echo "✅ Dgraph is already running" - echo "" - echo "Dgraph endpoints:" - echo " gRPC: localhost:9080" - echo " HTTP: http://localhost:8080" - echo " Ratel UI: http://localhost:8000" - exit 0 -fi - -# Determine docker-compose command -if docker compose version &> /dev/null 2>&1; then - DOCKER_COMPOSE="docker compose" -else - DOCKER_COMPOSE="docker-compose" -fi - -# Start using docker compose -$DOCKER_COMPOSE -f dgraph-docker-compose.yml up -d - -echo "" -echo "Waiting for dgraph to be healthy..." -for i in {1..30}; do - if docker exec dgraph-orly-test curl -sf http://localhost:8080/health > /dev/null 2>&1; then - echo "✅ Dgraph is healthy and ready" - echo "" - echo "Dgraph endpoints:" - echo " gRPC: localhost:9080" - echo " HTTP: http://localhost:8080" - echo " Ratel UI: http://localhost:8000" - echo "" - echo "To stop: $DOCKER_COMPOSE -f dgraph-docker-compose.yml down" - echo "To view logs: docker logs dgraph-orly-test -f" - exit 0 - fi - sleep 1 -done - -echo "❌ Dgraph failed to become healthy" -docker logs dgraph-orly-test -exit 1 diff --git a/scripts/test-dgraph.sh b/scripts/test-dgraph.sh deleted file mode 100755 index 3508a68..0000000 --- a/scripts/test-dgraph.sh +++ /dev/null @@ -1,88 +0,0 @@ -#!/bin/bash -set -e - -SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" -PROJECT_ROOT="$(dirname "$SCRIPT_DIR")" - -echo "=== ORLY Dgraph Integration Test Suite ===" -echo "" - -# Check if dgraph is running -echo "Checking for dgraph server..." -DGRAPH_URL="${ORLY_DGRAPH_URL:-localhost:9080}" - -if ! 
timeout 2 bash -c "echo > /dev/tcp/${DGRAPH_URL%:*}/${DGRAPH_URL#*:}" 2>/dev/null; then - echo "❌ Dgraph server not available at $DGRAPH_URL" - echo "" - echo "To start dgraph using docker-compose:" - echo " cd $SCRIPT_DIR && docker-compose -f dgraph-docker-compose.yml up -d" - echo "" - echo "Or using docker directly:" - echo " docker run -d -p 8080:8080 -p 9080:9080 -p 8000:8000 --name dgraph-orly dgraph/standalone:latest" - echo "" - exit 1 -fi - -echo "✅ Dgraph server is running at $DGRAPH_URL" -echo "" - -# Run dgraph tests -echo "Running dgraph package tests..." -cd "$PROJECT_ROOT" -CGO_ENABLED=0 go test -v -timeout 10m ./pkg/dgraph/... || { - echo "❌ Dgraph tests failed" - exit 1 -} - -echo "" -echo "✅ All dgraph tests passed!" -echo "" - -# Optional: Run relay-tester if requested -if [ "$1" == "--relay-tester" ]; then - echo "Starting ORLY with dgraph backend..." - export ORLY_DB_TYPE=dgraph - export ORLY_DGRAPH_URL="$DGRAPH_URL" - export ORLY_LOG_LEVEL=info - export ORLY_PORT=3334 - - # Kill any existing ORLY instance - pkill -f "./orly" || true - sleep 1 - - # Start ORLY in background - ./orly & - ORLY_PID=$! - - # Wait for ORLY to start - echo "Waiting for ORLY to start..." - for i in {1..30}; do - if curl -s http://localhost:3334 > /dev/null 2>&1; then - echo "✅ ORLY started successfully" - break - fi - sleep 1 - if [ $i -eq 30 ]; then - echo "❌ ORLY failed to start" - kill $ORLY_PID 2>/dev/null || true - exit 1 - fi - done - - echo "" - echo "Running relay-tester against dgraph backend..." - go run cmd/relay-tester/main.go -url ws://localhost:3334 || { - echo "❌ Relay-tester failed" - kill $ORLY_PID 2>/dev/null || true - exit 1 - } - - # Clean up - kill $ORLY_PID 2>/dev/null || true - - echo "" - echo "✅ Relay-tester passed!" -fi - -echo "" -echo "=== All tests completed successfully! ==="