31 changed files with 472 additions and 4952 deletions

@@ -1,280 +0,0 @@
# Dgraph Database Implementation for ORLY

This package provides a Dgraph-based implementation of the ORLY database interface, enabling graph-based storage for Nostr events with powerful relationship querying capabilities.

## Status: Step 1 Complete ✅

**Current State:** Dgraph server integration is complete and functional
**Next Step:** DQL query/mutation implementation in save-event.go and query-events.go

## Architecture

### Client-Server Model

The implementation uses a **client-server architecture**:

```
┌─────────────────────────────────────────────┐
│              ORLY Relay Process             │
│                                             │
│  ┌────────────────────────────────────┐     │
│  │  Dgraph Client (pkg/dgraph)        │     │
│  │  - dgo library (gRPC)              │     │
│  │  - Schema management               │─────┼───► Dgraph Server
│  │  - Query/Mutate methods            │     │     (localhost:9080)
│  └────────────────────────────────────┘     │     - Event graph
│                                             │     - Authors, tags
│  ┌────────────────────────────────────┐     │     - Relationships
│  │  Badger Metadata Store             │     │
│  │  - Markers (key-value)             │     │
│  │  - Serial counters                 │     │
│  │  - Relay identity                  │     │
│  └────────────────────────────────────┘     │
└─────────────────────────────────────────────┘
```

### Dual Storage Strategy

1. **Dgraph** (Graph Database)
   - Nostr events and their content
   - Author relationships
   - Tag relationships
   - Event references and mentions
   - Optimized for graph traversals and complex queries

2. **Badger** (Key-Value Store)
   - Metadata markers
   - Serial number counters
   - Relay identity keys
   - Fast key-value operations
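
In code, the split means a save touches both stores. A minimal sketch of the idea, assuming a hypothetical badger-backed `nextSerial` helper (the name is illustrative, not this package's actual API):

```go
// Sketch only: events go to dgraph as RDF N-quads, while the serial
// counter (metadata) lives in the badger store.
func (d *D) saveSketch(ctx context.Context, ev *event.E) error {
	ser, err := d.nextSerial() // hypothetical badger-backed counter
	if err != nil {
		return err
	}
	nquads := fmt.Sprintf(
		"_:ev <event.id> %q .\n_:ev <event.serial> \"%d\" .",
		hex.Enc(ev.ID[:]), ser,
	)
	_, err = d.Mutate(ctx, &api.Mutation{
		SetNquads: []byte(nquads),
		CommitNow: true,
	})
	return err
}
```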

## Setup

### 1. Start Dgraph Server

Using Docker (recommended):

```bash
docker run -d \
  --name dgraph \
  -p 8080:8080 \
  -p 9080:9080 \
  -p 8000:8000 \
  -v ~/dgraph:/dgraph \
  dgraph/standalone:latest
```

### 2. Configure ORLY

```bash
export ORLY_DB_TYPE=dgraph
export ORLY_DGRAPH_URL=localhost:9080  # Optional, this is the default
```

### 3. Run ORLY

```bash
./orly
```

On startup, ORLY will:
1. Connect to the dgraph server via gRPC
2. Apply the Nostr schema automatically
3. Initialize the badger metadata store
4. Initialize the serial number counter
5. Start accepting events

## Schema

The Nostr schema defines the following types:

### Event Nodes
```dql
type Event {
  event.id          # Event ID (string, indexed)
  event.serial      # Sequential number (int, indexed)
  event.kind        # Event kind (int, indexed)
  event.created_at  # Timestamp (int, indexed)
  event.content     # Event content (string)
  event.sig         # Signature (string, indexed)
  event.pubkey      # Author pubkey (string, indexed)
  event.authored_by # -> Author (uid)
  event.references  # -> Events (uid list)
  event.mentions    # -> Events (uid list)
  event.tagged_with # -> Tags (uid list)
}
```

### Author Nodes
```dql
type Author {
  author.pubkey # Pubkey (string, indexed, unique)
  author.events # -> Events (uid list, reverse)
}
```

### Tag Nodes
```dql
type Tag {
  tag.type   # Tag type (string, indexed)
  tag.value  # Tag value (string, indexed + fulltext)
  tag.events # -> Events (uid list, reverse)
}
```

### Marker Nodes (Metadata)
```dql
type Marker {
  marker.key   # Key (string, indexed, unique)
  marker.value # Value (string)
}
```
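
To illustrate how the types connect, a DQL query can start at an author and walk `author.events` to reach that author's events directly (the pubkey below is a placeholder):

```dql
{
  author(func: eq(author.pubkey, "<hex pubkey>")) {
    author.pubkey
    author.events {
      event.id
      event.kind
      event.created_at
    }
  }
}
```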

## Configuration

### Environment Variables

- `ORLY_DB_TYPE=dgraph` - Enable the dgraph database (default: badger)
- `ORLY_DGRAPH_URL=host:port` - Dgraph gRPC endpoint (default: localhost:9080)
- `ORLY_DATA_DIR=/path` - Data directory for metadata storage

### Connection Details

The dgraph client uses **insecure gRPC** by default for local development. For production deployments:

1. Set up TLS certificates for dgraph
2. Modify `pkg/dgraph/dgraph.go` to use `grpc.WithTransportCredentials()` with your certs, as sketched below
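
A minimal sketch of that change, assuming a CA certificate path of your own choosing (the `credentials` calls are standard grpc-go; the path is illustrative):

```go
import "google.golang.org/grpc/credentials"

// In initDgraphClient, replace the insecure credentials with TLS ones.
creds, err := credentials.NewClientTLSFromFile("/etc/dgraph/tls/ca.crt", "")
if err != nil {
	return fmt.Errorf("failed to load TLS credentials: %w", err)
}
conn, err := grpc.Dial(d.dgraphURL, grpc.WithTransportCredentials(creds))
```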

## Implementation Details

### Files

- `dgraph.go` - Main implementation, initialization, lifecycle
- `schema.go` - Schema definition and application
- `save-event.go` - Event storage (TODO: update to use Mutate)
- `query-events.go` - Event queries (TODO: update to parse DQL responses)
- `fetch-event.go` - Event retrieval methods
- `delete.go` - Event deletion
- `markers.go` - Key-value metadata storage (uses badger)
- `serial.go` - Serial number generation (uses badger)
- `subscriptions.go` - Subscription/payment tracking (uses markers)
- `nip43.go` - NIP-43 invite system (uses markers)
- `import-export.go` - Import/export operations
- `logger.go` - Logging adapter

### Key Methods

#### Initialization
```go
d, err := dgraph.New(ctx, cancel, dataDir, logLevel)
```
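
Callers that need the database to be warmed up can block on the `Ready` channel before serving requests:

```go
d, err := dgraph.New(ctx, cancel, dataDir, logLevel)
if err != nil {
	// handle error
}
<-d.Ready() // closes once warmup completes
```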

#### Querying (DQL)
```go
resp, err := d.Query(ctx, dqlQuery)
```

#### Mutations (RDF N-Quads)
```go
mutation := &api.Mutation{SetNquads: []byte(nquads)}
resp, err := d.Mutate(ctx, mutation)
```
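
Query results come back as JSON in `resp.Json` and are decoded with ordinary struct tags whose names match the predicates. A minimal sketch (the query and struct are illustrative, but follow the schema above):

```go
query := `{
  events(func: eq(event.kind, 1), first: 5) {
    event.id
    event.created_at
  }
}`

resp, err := d.Query(ctx, query)
if err != nil {
	return err
}

var result struct {
	Events []struct {
		ID        string `json:"event.id"`
		CreatedAt int64  `json:"event.created_at"`
	} `json:"events"`
}
if err := json.Unmarshal(resp.Json, &result); err != nil {
	return err
}
```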

## Development Status

### ✅ Step 1: Dgraph Server Integration (COMPLETE)

- [x] dgo client library integration
- [x] gRPC connection to external dgraph
- [x] Schema definition and auto-application
- [x] Query() and Mutate() method stubs
- [x] ORLY_DGRAPH_URL configuration
- [x] Dual-storage architecture
- [x] Proper lifecycle management

### 📝 Step 2: DQL Implementation (NEXT)

Priority tasks:

1. **save-event.go** - Replace RDF string building with actual Mutate() calls
2. **query-events.go** - Parse actual JSON responses from Query()
3. **fetch-event.go** - Implement DQL queries for event retrieval
4. **delete.go** - Implement deletion mutations

### 📝 Step 3: Testing (FUTURE)

- Integration testing with relay-tester
- Performance benchmarks vs badger
- Memory profiling
- Production deployment testing

## Troubleshooting

### Connection Refused

```
failed to connect to dgraph at localhost:9080: connection refused
```

**Solution:** Ensure the dgraph server is running:
```bash
docker ps | grep dgraph
docker logs dgraph
```

### Schema Application Failed

```
failed to apply schema: ...
```

**Solution:** Check the dgraph server logs and ensure there are no schema conflicts:
```bash
docker logs dgraph
```

### Binary Not Finding libsecp256k1.so

This is unrelated to dgraph. Ensure:
```bash
export LD_LIBRARY_PATH="${LD_LIBRARY_PATH:+$LD_LIBRARY_PATH:}$(pwd)/pkg/crypto/p8k"
```

## Performance Considerations

### When to Use Dgraph

**Good fit:**
- Complex graph queries (follows-of-follows, social graphs)
- Full-text search requirements
- Advanced filtering and aggregations
- Multi-hop relationship traversals

**Not ideal for:**
- Simple key-value lookups (badger is faster)
- Very high write throughput (badger has lower latency)
- Single-node deployments with simple queries

### Optimization Tips

1. **Indexing**: Ensure frequently queried fields have appropriate indexes
2. **Pagination**: Use offset/limit in DQL queries for large result sets (see the sketch after this list)
3. **Caching**: Consider adding an LRU cache for hot events
4. **Schema Design**: Use reverse edges for efficient relationship traversal
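
As a sketch of tip 2, pagination is just `first`/`offset` arguments on the root function; a loop can walk pages until one comes back empty (page size and predicates are illustrative):

```go
const pageSize = 100

for offset := 0; ; offset += pageSize {
	query := fmt.Sprintf(`{
  events(func: eq(event.kind, 1), orderdesc: event.created_at,
         first: %d, offset: %d) {
    event.id
  }
}`, pageSize, offset)

	resp, err := d.Query(ctx, query)
	if err != nil {
		return err
	}

	var page struct {
		Events []struct {
			ID string `json:"event.id"`
		} `json:"events"`
	}
	if err := json.Unmarshal(resp.Json, &page); err != nil {
		return err
	}
	if len(page.Events) == 0 {
		break // no more results
	}
	// process page.Events ...
}
```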

## Resources

- [Dgraph Documentation](https://dgraph.io/docs/)
- [DQL Query Language](https://dgraph.io/docs/query-language/)
- [dgo Client Library](https://github.com/dgraph-io/dgo)
- [ORLY Implementation Status](../../DGRAPH_IMPLEMENTATION_STATUS.md)

## Contributing

When working on the dgraph implementation:

1. Test changes against a local dgraph instance
2. Update schema.go if adding new node types or predicates
3. Ensure the dual-storage strategy is maintained (dgraph for events, badger for metadata)
4. Add integration tests for new features
5. Update DGRAPH_IMPLEMENTATION_STATUS.md with progress

@@ -1,330 +0,0 @@
# Dgraph Test Suite

This directory contains a comprehensive test suite for the dgraph database implementation, mirroring all tests from the badger implementation to ensure feature parity.

## Test Files

- **testmain_test.go** - Test configuration (logging, setup)
- **helpers_test.go** - Helper functions for test database setup/teardown
- **save-event_test.go** - Event storage tests
- **query-events_test.go** - Event query tests

## Quick Start

### 1. Start Dgraph Server

```bash
# From project root
./scripts/dgraph-start.sh

# Verify it's running
curl http://localhost:8080/health
```

### 2. Run Tests

```bash
# Run all dgraph tests
./scripts/test-dgraph.sh

# Or run manually
export ORLY_DGRAPH_URL=localhost:9080
CGO_ENABLED=0 go test -v ./pkg/dgraph/...

# Run specific test
CGO_ENABLED=0 go test -v -run TestSaveEvents ./pkg/dgraph
```

## Test Coverage

### Event Storage Tests (`save-event_test.go`)

✅ **TestSaveEvents**
- Loads ~100 events from examples.Cache
- Saves all events chronologically
- Verifies no errors during save
- Reports performance metrics

✅ **TestDeletionEventWithETagRejection**
- Creates a regular event
- Attempts to save a deletion event with an e-tag
- Verifies deletion events with e-tags are rejected

✅ **TestSaveExistingEvent**
- Saves an event
- Attempts to save the same event again
- Verifies duplicate events are rejected

### Event Query Tests (`query-events_test.go`)

✅ **TestQueryEventsByID**
- Queries an event by exact ID match
- Verifies a single result is returned
- Verifies the correct event is retrieved

✅ **TestQueryEventsByKind**
- Queries events by kind (e.g., kind 1)
- Verifies all results have the correct kind
- Tests filtering logic

✅ **TestQueryEventsByAuthor**
- Queries events by author pubkey
- Verifies all results are from the correct author
- Tests author filtering

✅ **TestReplaceableEventsAndDeletion**
- Creates a replaceable event (kind 0)
- Creates a newer version
- Verifies only the newer version is returned in general queries
- Creates a deletion event
- Verifies the deleted event is not returned
- Tests replaceable event logic and deletion

✅ **TestParameterizedReplaceableEventsAndDeletion**
- Creates a parameterized replaceable event (kind 30000+)
- Adds a d-tag
- Creates a deletion event with an e-tag
- Verifies the deleted event is not returned
- Tests parameterized replaceable logic

✅ **TestQueryEventsByTimeRange**
- Queries events by since/until timestamps
- Verifies all results are within the time range
- Tests temporal filtering

✅ **TestQueryEventsByTag**
- Finds an event with tags
- Queries by tag key/value
- Verifies all results have the tag
- Tests tag filtering logic

✅ **TestCountEvents**
- Counts all events
- Counts events by kind filter
- Verifies correct counts are returned
- Tests counting functionality

## Test Helpers

### setupTestDB(t *testing.T)

Creates a test dgraph database:

1. **Checks dgraph availability** - Skips the test if the server is not running
2. **Creates a temp directory** - For metadata storage
3. **Initializes the dgraph client** - Connects to the server
4. **Drops all data** - Starts with a clean slate
5. **Loads test events** - From examples.Cache (~100 events)
6. **Sorts chronologically** - Ensures addressable events are processed in order
7. **Saves all events** - Populates the test database

**Returns:** `(*D, []*event.E, context.Context, context.CancelFunc, string)`
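
A typical test body using these helpers looks like this (the assertion is illustrative):

```go
func TestSomething(t *testing.T) {
	db, events, ctx, cancel, tempDir := setupTestDB(t)
	defer cleanupTestDB(t, db, cancel, tempDir)

	if len(events) == 0 {
		t.Fatal("expected example events to be loaded")
	}
	_ = ctx // queries against db would use this context
}
```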

### cleanupTestDB(t, db, cancel, tempDir)

Cleans up after tests:
- Closes the database connection
- Cancels the context
- Removes the temp directory

### skipIfDgraphNotAvailable(t *testing.T)

Checks if dgraph is running and skips the test if it is not available.

## Running Tests

### Prerequisites

1. **Dgraph Server** - Must be running before tests
2. **Go 1.21+** - For running tests
3. **CGO_ENABLED=0** - For a pure Go build

### Test Execution

#### All Tests

```bash
./scripts/test-dgraph.sh
```

#### Specific Test File

```bash
CGO_ENABLED=0 go test -v ./pkg/dgraph -run TestSaveEvents
```

#### With Logging

```bash
export TEST_LOG=1
CGO_ENABLED=0 go test -v ./pkg/dgraph/...
```

#### With Timeout

```bash
CGO_ENABLED=0 go test -v -timeout 10m ./pkg/dgraph/...
```

### Integration Testing

Run the tests plus relay-tester:

```bash
./scripts/test-dgraph.sh --relay-tester
```

This will:
1. Run all dgraph package tests
2. Start ORLY with the dgraph backend
3. Run relay-tester against ORLY
4. Report results

## Test Data

Tests use `pkg/encoders/event/examples.Cache`, which contains:
- ~100 real Nostr events
- Text notes (kind 1)
- Profile metadata (kind 0)
- Various other kinds
- Events with tags, references, mentions
- Multiple authors and timestamps

This ensures tests cover realistic scenarios.

## Debugging Tests

### View Test Output

```bash
CGO_ENABLED=0 go test -v ./pkg/dgraph/... 2>&1 | tee test-output.log
```

### Check Dgraph State

```bash
# View data via Ratel UI
open http://localhost:8000

# Query via HTTP
curl -X POST localhost:8080/query -d '{
  events(func: type(Event), first: 10) {
    uid
    event.id
    event.kind
    event.created_at
  }
}'
```

### Enable Dgraph Logging

```bash
docker logs dgraph-orly-test -f
```

## Test Failures

### "Dgraph server not available"

**Cause:** Dgraph is not running

**Fix:**
```bash
./scripts/dgraph-start.sh
```

### Connection Timeouts

**Cause:** The dgraph server is overloaded or there are network issues

**Fix:**
- Increase the test timeout: `go test -timeout 20m`
- Check dgraph resources: `docker stats dgraph-orly-test`
- Restart dgraph: `docker restart dgraph-orly-test`

### Schema Errors

**Cause:** Schema conflicts or a version mismatch

**Fix:**
- Drop all data: Tests call `dropAll()` automatically
- Check the dgraph version: `docker exec dgraph-orly-test dgraph version`

### Test Hangs

**Cause:** Deadlock or infinite loop

**Fix:**
- Send SIGQUIT: `kill -QUIT <test-pid>`
- View the goroutine dump
- Check the dgraph logs

## Continuous Integration

### GitHub Actions Example

```yaml
name: Dgraph Tests

on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest

    services:
      dgraph:
        image: dgraph/standalone:latest
        ports:
          - 8080:8080
          - 9080:9080
        options: >-
          --health-cmd "curl -f http://localhost:8080/health"
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5

    steps:
      - uses: actions/checkout@v3

      - name: Set up Go
        uses: actions/setup-go@v4
        with:
          go-version: '1.21'

      - name: Run dgraph tests
        env:
          ORLY_DGRAPH_URL: localhost:9080
        run: |
          CGO_ENABLED=0 go test -v -timeout 10m ./pkg/dgraph/...
```

## Performance Benchmarks

Compare with badger:

```bash
# Badger benchmarks
go test -bench=. -benchmem ./pkg/database/...

# Dgraph benchmarks
go test -bench=. -benchmem ./pkg/dgraph/...
```

## Related Documentation

- [Main Testing Guide](../../scripts/DGRAPH_TESTING.md)
- [Implementation Status](../../DGRAPH_IMPLEMENTATION_STATUS.md)
- [Package README](README.md)

## Contributing

When adding new tests:

1. **Mirror badger tests** - Ensure feature parity
2. **Use the test helpers** - setupTestDB() and cleanupTestDB()
3. **Skip if unavailable** - Call skipIfDgraphNotAvailable(t)
4. **Clean up resources** - Always defer cleanupTestDB()
5. **Test chronologically** - Sort events by timestamp for addressable events
6. **Verify behavior** - Don't just check for no errors; verify correctness

@@ -1,262 +0,0 @@
package dgraph

import (
	"context"
	"encoding/json"
	"fmt"
	"time"

	"github.com/dgraph-io/dgo/v230/protos/api"
	"next.orly.dev/pkg/database/indexes/types"
	"git.mleku.dev/mleku/nostr/encoders/event"
	"git.mleku.dev/mleku/nostr/encoders/hex"
)

// DeleteEvent deletes an event by its ID.
func (d *D) DeleteEvent(c context.Context, eid []byte) error {
	idStr := hex.Enc(eid)

	// Find the event's UID.
	query := fmt.Sprintf(`{
		event(func: eq(event.id, %q)) {
			uid
		}
	}`, idStr)

	resp, err := d.Query(c, query)
	if err != nil {
		return fmt.Errorf("failed to find event for deletion: %w", err)
	}

	// Parse the UID.
	var result struct {
		Event []struct {
			UID string `json:"uid"`
		} `json:"event"`
	}

	if err = unmarshalJSON(resp.Json, &result); err != nil {
		return err
	}

	if len(result.Event) == 0 {
		return nil // Event doesn't exist
	}

	// Delete the event node.
	mutation := &api.Mutation{
		DelNquads: []byte(fmt.Sprintf("<%s> * * .", result.Event[0].UID)),
		CommitNow: true,
	}

	if _, err = d.Mutate(c, mutation); err != nil {
		return fmt.Errorf("failed to delete event: %w", err)
	}

	return nil
}

// DeleteEventBySerial deletes an event by its serial number.
func (d *D) DeleteEventBySerial(c context.Context, ser *types.Uint40, ev *event.E) error {
	serial := ser.Get()

	// Find the event's UID.
	query := fmt.Sprintf(`{
		event(func: eq(event.serial, %d)) {
			uid
		}
	}`, serial)

	resp, err := d.Query(c, query)
	if err != nil {
		return fmt.Errorf("failed to find event for deletion: %w", err)
	}

	// Parse the UID.
	var result struct {
		Event []struct {
			UID string `json:"uid"`
		} `json:"event"`
	}

	if err = unmarshalJSON(resp.Json, &result); err != nil {
		return err
	}

	if len(result.Event) == 0 {
		return nil // Event doesn't exist
	}

	// Delete the event node.
	mutation := &api.Mutation{
		DelNquads: []byte(fmt.Sprintf("<%s> * * .", result.Event[0].UID)),
		CommitNow: true,
	}

	if _, err = d.Mutate(c, mutation); err != nil {
		return fmt.Errorf("failed to delete event: %w", err)
	}

	return nil
}

// DeleteExpired removes events that have passed their expiration time (NIP-40).
func (d *D) DeleteExpired() {
	// Query for events that have an "expiration" tag.
	// NIP-40: events should have a tag ["expiration", "<unix timestamp>"].
	query := `{
		events(func: has(event.tags)) {
			uid
			event.id
			event.tags
			event.created_at
		}
	}`

	resp, err := d.Query(context.Background(), query)
	if err != nil {
		d.Logger.Errorf("failed to query events for expiration: %v", err)
		return
	}

	var result struct {
		Events []struct {
			UID       string `json:"uid"`
			ID        string `json:"event.id"`
			Tags      string `json:"event.tags"`
			CreatedAt int64  `json:"event.created_at"`
		} `json:"events"`
	}

	if err = unmarshalJSON(resp.Json, &result); err != nil {
		d.Logger.Errorf("failed to parse events for expiration: %v", err)
		return
	}

	now := time.Now().Unix()
	deletedCount := 0

	for _, ev := range result.Events {
		// Parse the tags.
		if ev.Tags == "" {
			continue
		}

		var tags [][]string
		if err := json.Unmarshal([]byte(ev.Tags), &tags); err != nil {
			continue
		}

		// Look for an expiration tag.
		var expirationTime int64
		for _, tag := range tags {
			if len(tag) >= 2 && tag[0] == "expiration" {
				// Parse the expiration timestamp.
				if _, err := fmt.Sscanf(tag[1], "%d", &expirationTime); err != nil {
					continue
				}
				break
			}
		}

		// If an expiration time was found and has passed, delete the event.
		if expirationTime > 0 && now > expirationTime {
			mutation := &api.Mutation{
				DelNquads: []byte(fmt.Sprintf("<%s> * * .", ev.UID)),
				CommitNow: true,
			}

			if _, err := d.Mutate(context.Background(), mutation); err != nil {
				d.Logger.Warningf("failed to delete expired event %s: %v", ev.ID, err)
			} else {
				deletedCount++
			}
		}
	}

	if deletedCount > 0 {
		d.Logger.Infof("deleted %d expired events", deletedCount)
	}
}

// ProcessDelete processes a kind 5 deletion event.
func (d *D) ProcessDelete(ev *event.E, admins [][]byte) (err error) {
	if ev.Kind != 5 {
		return fmt.Errorf("event is not a deletion event (kind 5)")
	}

	// Extract the event IDs to delete from the tags.
	for _, tag := range *ev.Tags {
		if len(tag.T) >= 2 && string(tag.T[0]) == "e" {
			eventID := tag.T[1]

			// Verify the deletion is authorized (author must match or be admin).
			if err = d.CheckForDeleted(ev, admins); err != nil {
				continue
			}

			// Delete the event.
			if err = d.DeleteEvent(context.Background(), eventID); err != nil {
				// Log the error but continue with other deletions.
				d.Logger.Errorf("failed to delete event %s: %v", hex.Enc(eventID), err)
			}
		}
	}

	return nil
}

// CheckForDeleted checks if an event has been deleted.
func (d *D) CheckForDeleted(ev *event.E, admins [][]byte) (err error) {
	// Query for delete events (kind 5) that reference this event.
	evID := hex.Enc(ev.ID[:])

	// The reference edge is stored under the event.references predicate
	// (see schema.go); alias it so the JSON key matches the struct below.
	query := fmt.Sprintf(`{
		deletes(func: eq(event.kind, 5)) @filter(eq(event.pubkey, %q)) {
			uid
			event.pubkey
			references: event.references @filter(eq(event.id, %q)) {
				event.id
			}
		}
	}`, hex.Enc(ev.Pubkey), evID)

	resp, err := d.Query(context.Background(), query)
	if err != nil {
		return fmt.Errorf("failed to check for deletions: %w", err)
	}

	var result struct {
		Deletes []struct {
			UID        string `json:"uid"`
			Pubkey     string `json:"event.pubkey"`
			References []struct {
				ID string `json:"event.id"`
			} `json:"references"`
		} `json:"deletes"`
	}

	if err = unmarshalJSON(resp.Json, &result); err != nil {
		return err
	}

	// Check whether any delete events reference this event.
	for _, del := range result.Deletes {
		if len(del.References) > 0 {
			// Check if the deletion is from the author or an admin.
			delPubkey, _ := hex.Dec(del.Pubkey)
			if string(delPubkey) == string(ev.Pubkey) {
				return fmt.Errorf("event has been deleted by author")
			}

			// Check the admins.
			for _, admin := range admins {
				if string(delPubkey) == string(admin) {
					return fmt.Errorf("event has been deleted by admin")
				}
			}
		}
	}

	return nil
}

@@ -1,295 +0,0 @@
// Package dgraph provides a Dgraph-based implementation of the database interface.
// This is a simplified implementation for testing - full dgraph integration to be completed later.
package dgraph

import (
	"context"
	"encoding/json"
	"fmt"
	"os"
	"path/filepath"

	"github.com/dgraph-io/dgo/v230"
	"github.com/dgraph-io/dgo/v230/protos/api"
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	"lol.mleku.dev"
	"lol.mleku.dev/chk"
	"next.orly.dev/pkg/database"
	"git.mleku.dev/mleku/nostr/encoders/event"
	"git.mleku.dev/mleku/nostr/encoders/filter"
	"next.orly.dev/pkg/utils/apputil"
)

// D implements the database.Database interface using Dgraph as the storage backend.
type D struct {
	ctx     context.Context
	cancel  context.CancelFunc
	dataDir string
	Logger  *logger

	// Dgraph client connection
	client *dgo.Dgraph
	conn   *grpc.ClientConn

	// Configuration
	dgraphURL           string
	enableGraphQL       bool
	enableIntrospection bool

	ready chan struct{} // Closed when database is ready to serve requests
}

// Ensure D implements the database.Database interface at compile time.
var _ database.Database = (*D)(nil)

// init registers the dgraph database factory.
func init() {
	database.RegisterDgraphFactory(func(
		ctx context.Context,
		cancel context.CancelFunc,
		cfg *database.DatabaseConfig,
	) (database.Database, error) {
		return NewWithConfig(ctx, cancel, cfg)
	})
}

// NewWithConfig creates a new Dgraph-based database instance with full configuration.
// Configuration is passed from the centralized app config via DatabaseConfig.
func NewWithConfig(
	ctx context.Context, cancel context.CancelFunc, cfg *database.DatabaseConfig,
) (
	d *D, err error,
) {
	// Apply defaults for empty values.
	dgraphURL := cfg.DgraphURL
	if dgraphURL == "" {
		dgraphURL = "localhost:9080"
	}

	d = &D{
		ctx:                 ctx,
		cancel:              cancel,
		dataDir:             cfg.DataDir,
		Logger:              NewLogger(lol.GetLogLevel(cfg.LogLevel), cfg.DataDir),
		dgraphURL:           dgraphURL,
		enableGraphQL:       false,
		enableIntrospection: false,
		ready:               make(chan struct{}),
	}

	// Ensure the data directory exists.
	if err = os.MkdirAll(cfg.DataDir, 0755); chk.E(err) {
		return
	}

	// Ensure the directory structure.
	dummyFile := filepath.Join(cfg.DataDir, "dummy.sst")
	if err = apputil.EnsureDir(dummyFile); chk.E(err) {
		return
	}

	// Initialize the dgraph client connection.
	if err = d.initDgraphClient(); chk.E(err) {
		return
	}

	// Apply the Nostr schema to dgraph.
	if err = d.applySchema(ctx); chk.E(err) {
		return
	}

	// Initialize the serial counter.
	if err = d.initSerialCounter(); chk.E(err) {
		return
	}

	// Start the warmup goroutine to signal when the database is ready.
	go d.warmup()

	// Set up the shutdown handler.
	go func() {
		<-d.ctx.Done()
		d.cancel()
		if d.conn != nil {
			d.conn.Close()
		}
	}()

	return
}

// New creates a new Dgraph-based database instance with default configuration.
// This is provided for backward compatibility with existing callers (tests, etc.).
// For full configuration control, use NewWithConfig instead.
func New(
	ctx context.Context, cancel context.CancelFunc, dataDir, logLevel string,
) (
	d *D, err error,
) {
	cfg := &database.DatabaseConfig{
		DataDir:  dataDir,
		LogLevel: logLevel,
	}
	return NewWithConfig(ctx, cancel, cfg)
}

// initDgraphClient establishes the connection to the dgraph server.
func (d *D) initDgraphClient() error {
	d.Logger.Infof("connecting to dgraph at %s", d.dgraphURL)

	// Establish the gRPC connection.
	conn, err := grpc.Dial(d.dgraphURL, grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		return fmt.Errorf("failed to connect to dgraph at %s: %w", d.dgraphURL, err)
	}

	d.conn = conn
	d.client = dgo.NewDgraphClient(api.NewDgraphClient(conn))

	d.Logger.Infof("successfully connected to dgraph")
	return nil
}

// Query executes a DQL query against dgraph.
func (d *D) Query(ctx context.Context, query string) (*api.Response, error) {
	txn := d.client.NewReadOnlyTxn()
	defer txn.Discard(ctx)

	resp, err := txn.Query(ctx, query)
	if err != nil {
		return nil, fmt.Errorf("dgraph query failed: %w", err)
	}

	return resp, nil
}

// Mutate executes a mutation against dgraph.
func (d *D) Mutate(ctx context.Context, mutation *api.Mutation) (*api.Response, error) {
	txn := d.client.NewTxn()
	defer txn.Discard(ctx)

	resp, err := txn.Mutate(ctx, mutation)
	if err != nil {
		return nil, fmt.Errorf("dgraph mutation failed: %w", err)
	}

	// Only commit if CommitNow is false (the mutation didn't auto-commit).
	if !mutation.CommitNow {
		if err := txn.Commit(ctx); err != nil {
			return nil, fmt.Errorf("dgraph commit failed: %w", err)
		}
	}

	return resp, nil
}

// Path returns the data directory path.
func (d *D) Path() string { return d.dataDir }

// Init initializes the database with a given path (no-op, path set in New).
func (d *D) Init(path string) (err error) {
	// Path already set in New().
	return nil
}

// Sync flushes pending writes (Dgraph handles persistence automatically).
func (d *D) Sync() (err error) {
	return nil
}

// Close closes the database.
func (d *D) Close() (err error) {
	d.cancel()
	if d.conn != nil {
		if e := d.conn.Close(); e != nil {
			err = e
		}
	}
	return
}

// Wipe removes all data.
func (d *D) Wipe() (err error) {
	// Drop all data in Dgraph using Alter.
	op := &api.Operation{
		DropOp: api.Operation_DATA,
	}

	if err = d.client.Alter(context.Background(), op); err != nil {
		return fmt.Errorf("failed to drop dgraph data: %w", err)
	}

	// Remove the data directory.
	if err = os.RemoveAll(d.dataDir); chk.E(err) {
		return
	}

	return nil
}

// SetLogLevel sets the logging level.
func (d *D) SetLogLevel(level string) {
	// d.Logger.SetLevel(lol.GetLogLevel(level))
}

// EventIdsBySerial retrieves the serials of events in a serial range.
func (d *D) EventIdsBySerial(start uint64, count int) (
	evs []uint64, err error,
) {
	// Query for events in the specified serial range, ascending so the
	// page starts at the requested serial.
	query := fmt.Sprintf(`{
		events(func: ge(event.serial, %d), orderasc: event.serial, first: %d) {
			event.serial
		}
	}`, start, count)

	resp, err := d.Query(context.Background(), query)
	if err != nil {
		return nil, fmt.Errorf("failed to query event IDs by serial: %w", err)
	}

	var result struct {
		Events []struct {
			Serial int64 `json:"event.serial"`
		} `json:"events"`
	}

	if err = json.Unmarshal(resp.Json, &result); err != nil {
		return nil, err
	}

	evs = make([]uint64, 0, len(result.Events))
	for _, ev := range result.Events {
		evs = append(evs, uint64(ev.Serial))
	}

	return evs, nil
}

// RunMigrations runs database migrations (no-op for dgraph).
func (d *D) RunMigrations() {
	// No-op for dgraph.
}

// Ready returns a channel that closes when the database is ready to serve requests.
// This allows callers to wait for database warmup to complete.
func (d *D) Ready() <-chan struct{} {
	return d.ready
}

// warmup performs database warmup operations and closes the ready channel when complete.
// For Dgraph, warmup ensures the connection is healthy and the schema is applied.
func (d *D) warmup() {
	defer close(d.ready)

	// The dgraph connection and schema are already verified during
	// initialization, so there is nothing further to wait for here.
	d.Logger.Infof("dgraph database warmup complete, ready to serve requests")
}

// Query-cache stubs: the dgraph backend does not implement query caching yet,
// so these satisfy the interface and always report a miss.
func (d *D) GetCachedJSON(f *filter.F) ([][]byte, bool)             { return nil, false }
func (d *D) CacheMarshaledJSON(f *filter.F, marshaledJSON [][]byte) {}
func (d *D) GetCachedEvents(f *filter.F) (event.S, bool)            { return nil, false }
func (d *D) CacheEvents(f *filter.F, events event.S)                {}
func (d *D) InvalidateQueryCache()                                  {}

@@ -1,392 +0,0 @@
package dgraph

import (
	"context"
	"encoding/json"
	"fmt"
	"strings"

	"next.orly.dev/pkg/database"
	"next.orly.dev/pkg/database/indexes/types"
	"git.mleku.dev/mleku/nostr/encoders/event"
	"git.mleku.dev/mleku/nostr/encoders/hex"
	"git.mleku.dev/mleku/nostr/encoders/tag"
	"next.orly.dev/pkg/interfaces/store"
)

// FetchEventBySerial retrieves an event by its serial number.
func (d *D) FetchEventBySerial(ser *types.Uint40) (ev *event.E, err error) {
	serial := ser.Get()

	query := fmt.Sprintf(`{
		event(func: eq(event.serial, %d)) {
			event.id
			event.kind
			event.created_at
			event.content
			event.sig
			event.pubkey
			event.tags
		}
	}`, serial)

	resp, err := d.Query(context.Background(), query)
	if err != nil {
		return nil, fmt.Errorf("failed to fetch event by serial: %w", err)
	}

	evs, err := d.parseEventsFromResponse(resp.Json)
	if err != nil {
		return nil, err
	}

	if len(evs) == 0 {
		return nil, fmt.Errorf("event not found")
	}

	return evs[0], nil
}

// FetchEventsBySerials retrieves multiple events by their serial numbers.
func (d *D) FetchEventsBySerials(serials []*types.Uint40) (
	events map[uint64]*event.E, err error,
) {
	if len(serials) == 0 {
		return make(map[uint64]*event.E), nil
	}

	// Build a filter for multiple serials using OR conditions.
	serialConditions := make([]string, len(serials))
	for i, ser := range serials {
		serialConditions[i] = fmt.Sprintf("eq(event.serial, %d)", ser.Get())
	}
	serialFilter := strings.Join(serialConditions, " OR ")

	// Query with proper batch filtering.
	query := fmt.Sprintf(`{
		events(func: has(event.serial)) @filter(%s) {
			event.id
			event.kind
			event.created_at
			event.content
			event.sig
			event.pubkey
			event.tags
			event.serial
		}
	}`, serialFilter)

	resp, err := d.Query(context.Background(), query)
	if err != nil {
		return nil, fmt.Errorf("failed to fetch events by serials: %w", err)
	}

	// Parse the response including serial numbers.
	var result struct {
		Events []struct {
			ID        string `json:"event.id"`
			Kind      int    `json:"event.kind"`
			CreatedAt int64  `json:"event.created_at"`
			Content   string `json:"event.content"`
			Sig       string `json:"event.sig"`
			Pubkey    string `json:"event.pubkey"`
			Tags      string `json:"event.tags"`
			Serial    int64  `json:"event.serial"`
		} `json:"events"`
	}

	if err = json.Unmarshal(resp.Json, &result); err != nil {
		return nil, err
	}

	// Map events by their serial numbers.
	events = make(map[uint64]*event.E)
	for _, ev := range result.Events {
		// Decode the hex strings.
		id, err := hex.Dec(ev.ID)
		if err != nil {
			continue
		}
		sig, err := hex.Dec(ev.Sig)
		if err != nil {
			continue
		}
		pubkey, err := hex.Dec(ev.Pubkey)
		if err != nil {
			continue
		}

		// Parse the tags from JSON.
		var tags tag.S
		if ev.Tags != "" {
			if err := json.Unmarshal([]byte(ev.Tags), &tags); err != nil {
				continue
			}
		}

		// Create the event.
		e := &event.E{
			Kind:      uint16(ev.Kind),
			CreatedAt: ev.CreatedAt,
			Content:   []byte(ev.Content),
			Tags:      &tags,
		}

		// Copy the fixed-size arrays.
		copy(e.ID[:], id)
		copy(e.Sig[:], sig)
		copy(e.Pubkey[:], pubkey)

		events[uint64(ev.Serial)] = e
	}

	return events, nil
}

// GetSerialById retrieves the serial number for an event ID.
func (d *D) GetSerialById(id []byte) (ser *types.Uint40, err error) {
	idStr := hex.Enc(id)

	query := fmt.Sprintf(`{
		event(func: eq(event.id, %q)) {
			event.serial
		}
	}`, idStr)

	resp, err := d.Query(context.Background(), query)
	if err != nil {
		return nil, fmt.Errorf("failed to get serial by ID: %w", err)
	}

	var result struct {
		Event []struct {
			Serial int64 `json:"event.serial"`
		} `json:"event"`
	}

	if err = json.Unmarshal(resp.Json, &result); err != nil {
		return nil, err
	}

	if len(result.Event) == 0 {
		return nil, fmt.Errorf("event not found")
	}

	ser = &types.Uint40{}
	ser.Set(uint64(result.Event[0].Serial))

	return ser, nil
}

// GetSerialsByIds retrieves serial numbers for multiple event IDs.
func (d *D) GetSerialsByIds(ids *tag.T) (
	serials map[string]*types.Uint40, err error,
) {
	serials = make(map[string]*types.Uint40)

	if len(ids.T) == 0 {
		return serials, nil
	}

	// Build a batch query for all IDs at once.
	idConditions := make([]string, 0, len(ids.T))
	idMap := make(map[string][]byte) // Map hex ID to original bytes

	for _, idBytes := range ids.T {
		if len(idBytes) > 0 {
			idStr := hex.Enc(idBytes)
			idConditions = append(idConditions, fmt.Sprintf("eq(event.id, %q)", idStr))
			idMap[idStr] = idBytes
		}
	}

	if len(idConditions) == 0 {
		return serials, nil
	}

	// Create a single query with OR conditions.
	idFilter := strings.Join(idConditions, " OR ")
	query := fmt.Sprintf(`{
		events(func: has(event.id)) @filter(%s) {
			event.id
			event.serial
		}
	}`, idFilter)

	resp, err := d.Query(context.Background(), query)
	if err != nil {
		return nil, fmt.Errorf("failed to batch query serials by IDs: %w", err)
	}

	var result struct {
		Events []struct {
			ID     string `json:"event.id"`
			Serial int64  `json:"event.serial"`
		} `json:"events"`
	}

	if err = json.Unmarshal(resp.Json, &result); err != nil {
		return nil, err
	}

	// Map the results back.
	for _, ev := range result.Events {
		serial := types.Uint40{}
		serial.Set(uint64(ev.Serial))
		serials[ev.ID] = &serial
	}

	return serials, nil
}

// GetSerialsByIdsWithFilter retrieves serials with a filter function.
func (d *D) GetSerialsByIdsWithFilter(
	ids *tag.T, fn func(ev *event.E, ser *types.Uint40) bool,
) (serials map[string]*types.Uint40, err error) {
	serials = make(map[string]*types.Uint40)

	if fn == nil {
		// No filter, just return all.
		return d.GetSerialsByIds(ids)
	}

	// With a filter we need to fetch the events themselves.
	for _, id := range ids.T {
		if len(id) > 0 {
			serial, err := d.GetSerialById(id)
			if err != nil {
				continue
			}

			ev, err := d.FetchEventBySerial(serial)
			if err != nil {
				continue
			}

			if fn(ev, serial) {
				serials[string(id)] = serial
			}
		}
	}

	return serials, nil
}

// GetSerialsByRange retrieves serials within a range.
func (d *D) GetSerialsByRange(idx database.Range) (
	serials types.Uint40s, err error,
) {
	// Range represents a byte-prefix range for index scanning. For dgraph, we
	// convert this to a query on indexed fields; the range is typically used
	// for scanning event IDs or other hex-encoded keys.
	if len(idx.Start) == 0 && len(idx.End) == 0 {
		return nil, fmt.Errorf("empty range provided")
	}

	startStr := hex.Enc(idx.Start)
	endStr := hex.Enc(idx.End)

	// Query for events with IDs in the specified range.
	query := fmt.Sprintf(`{
		events(func: ge(event.id, %q)) @filter(le(event.id, %q)) {
			event.serial
		}
	}`, startStr, endStr)

	resp, err := d.Query(context.Background(), query)
	if err != nil {
		return nil, fmt.Errorf("failed to query serials by range: %w", err)
	}

	var result struct {
		Events []struct {
			Serial int64 `json:"event.serial"`
		} `json:"events"`
	}

	if err = json.Unmarshal(resp.Json, &result); err != nil {
		return nil, err
	}

	serials = make([]*types.Uint40, 0, len(result.Events))
	for _, ev := range result.Events {
		serial := types.Uint40{}
		serial.Set(uint64(ev.Serial))
		serials = append(serials, &serial)
	}

	return serials, nil
}

// GetFullIdPubkeyBySerial retrieves the ID and pubkey for a serial number.
func (d *D) GetFullIdPubkeyBySerial(ser *types.Uint40) (
	fidpk *store.IdPkTs, err error,
) {
	serial := ser.Get()

	query := fmt.Sprintf(`{
		event(func: eq(event.serial, %d)) {
			event.id
			event.pubkey
			event.created_at
		}
	}`, serial)

	resp, err := d.Query(context.Background(), query)
	if err != nil {
		return nil, fmt.Errorf("failed to get ID and pubkey by serial: %w", err)
	}

	var result struct {
		Event []struct {
			ID        string `json:"event.id"`
			Pubkey    string `json:"event.pubkey"`
			CreatedAt int64  `json:"event.created_at"`
		} `json:"event"`
	}

	if err = json.Unmarshal(resp.Json, &result); err != nil {
		return nil, err
	}

	if len(result.Event) == 0 {
		return nil, fmt.Errorf("event not found")
	}

	id, err := hex.Dec(result.Event[0].ID)
	if err != nil {
		return nil, err
	}

	pubkey, err := hex.Dec(result.Event[0].Pubkey)
	if err != nil {
		return nil, err
	}

	fidpk = &store.IdPkTs{
		Id:  id,
		Pub: pubkey,
		Ts:  result.Event[0].CreatedAt,
		Ser: serial,
	}

	return fidpk, nil
}

// GetFullIdPubkeyBySerials retrieves IDs and pubkeys for multiple serials.
func (d *D) GetFullIdPubkeyBySerials(sers []*types.Uint40) (
	fidpks []*store.IdPkTs, err error,
) {
	fidpks = make([]*store.IdPkTs, 0, len(sers))

	for _, ser := range sers {
		fidpk, err := d.GetFullIdPubkeyBySerial(ser)
		if err != nil {
			continue // Skip errors, continue with others
		}
		fidpks = append(fidpks, fidpk)
	}

	return fidpks, nil
}
@ -1,144 +0,0 @@ |
|||||||
package dgraph |
|
||||||
|
|
||||||
import ( |
|
||||||
"bufio" |
|
||||||
"bytes" |
|
||||||
"context" |
|
||||||
"net" |
|
||||||
"os" |
|
||||||
"sort" |
|
||||||
"testing" |
|
||||||
"time" |
|
||||||
|
|
||||||
"lol.mleku.dev/chk" |
|
||||||
"git.mleku.dev/mleku/nostr/encoders/event" |
|
||||||
"git.mleku.dev/mleku/nostr/encoders/event/examples" |
|
||||||
) |
|
||||||
|
|
||||||
// isDgraphAvailable checks if a dgraph server is running
|
|
||||||
func isDgraphAvailable() bool { |
|
||||||
dgraphURL := os.Getenv("ORLY_DGRAPH_URL") |
|
||||||
if dgraphURL == "" { |
|
||||||
dgraphURL = "localhost:9080" |
|
||||||
} |
|
||||||
|
|
||||||
conn, err := net.DialTimeout("tcp", dgraphURL, 2*time.Second) |
|
||||||
if err != nil { |
|
||||||
return false |
|
	}
	conn.Close()
	return true
}

// skipIfDgraphNotAvailable skips the test if dgraph is not available.
func skipIfDgraphNotAvailable(t *testing.T) {
	if !isDgraphAvailable() {
		dgraphURL := os.Getenv("ORLY_DGRAPH_URL")
		if dgraphURL == "" {
			dgraphURL = "localhost:9080"
		}
		t.Skipf("Dgraph server not available at %s. Start with: docker run -p 9080:9080 dgraph/standalone:latest", dgraphURL)
	}
}

// setupTestDB creates a new test dgraph database and loads example events.
func setupTestDB(t *testing.T) (
	*D, []*event.E, context.Context, context.CancelFunc, string,
) {
	skipIfDgraphNotAvailable(t)

	// Create a temporary directory for metadata storage
	tempDir, err := os.MkdirTemp("", "test-dgraph-*")
	if err != nil {
		t.Fatalf("Failed to create temporary directory: %v", err)
	}

	// Create a context and cancel function for the database
	ctx, cancel := context.WithCancel(context.Background())

	// Initialize the dgraph database
	db, err := New(ctx, cancel, tempDir, "info")
	if err != nil {
		cancel()
		os.RemoveAll(tempDir)
		t.Fatalf("Failed to create dgraph database: %v", err)
	}

	// Drop all data to start fresh
	if err := db.dropAll(ctx); err != nil {
		db.Close()
		cancel()
		os.RemoveAll(tempDir)
		t.Fatalf("Failed to drop all data: %v", err)
	}

	// Create a scanner to read events from examples.Cache
	scanner := bufio.NewScanner(bytes.NewBuffer(examples.Cache))
	scanner.Buffer(make([]byte, 0, 1_000_000_000), 1_000_000_000)

	var events []*event.E

	// First, collect all events from examples.Cache
	for scanner.Scan() {
		b := scanner.Bytes()
		ev := event.New()

		// Unmarshal the event
		if _, err = ev.Unmarshal(b); chk.E(err) {
			ev.Free()
			db.Close()
			cancel()
			os.RemoveAll(tempDir)
			t.Fatal(err)
		}

		events = append(events, ev)
	}

	// Check for scanner errors
	if err = scanner.Err(); err != nil {
		db.Close()
		cancel()
		os.RemoveAll(tempDir)
		t.Fatalf("Scanner error: %v", err)
	}

	// Sort events by CreatedAt to ensure addressable events are processed in
	// chronological order
	sort.Slice(events, func(i, j int) bool {
		return events[i].CreatedAt < events[j].CreatedAt
	})

	// Count the number of events processed
	eventCount := 0

	// Now process each event in chronological order
	for _, ev := range events {
		// Save the event to the database
		if _, err = db.SaveEvent(ctx, ev); err != nil {
			db.Close()
			cancel()
			os.RemoveAll(tempDir)
			t.Fatalf("Failed to save event #%d: %v", eventCount+1, err)
		}

		eventCount++
	}

	t.Logf("Successfully saved %d events to dgraph database", eventCount)

	return db, events, ctx, cancel, tempDir
}

// cleanupTestDB cleans up the test database.
func cleanupTestDB(t *testing.T, db *D, cancel context.CancelFunc, tempDir string) {
	if db != nil {
		db.Close()
	}
	if cancel != nil {
		cancel()
	}
	if tempDir != "" {
		os.RemoveAll(tempDir)
	}
}
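// Editor's sketch (not part of the original file): a minimal test showing how
// setupTestDB and cleanupTestDB compose. It uses only the helpers defined
// above; the assumption is that the example-event corpus is non-empty.
func TestDatabaseLifecycle(t *testing.T) {
	db, events, _, cancel, tempDir := setupTestDB(t)
	defer cleanupTestDB(t, db, cancel, tempDir)

	if len(events) == 0 {
		t.Fatal("expected example events to be loaded")
	}
	t.Logf("database ready with %d events", len(events))
}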
@ -1,44 +0,0 @@
package dgraph

import (
	"fmt"

	"git.mleku.dev/mleku/nostr/crypto/keys"
)

// Relay identity methods.
// We use the marker system to store the relay's private key.

const relayIdentityMarkerKey = "relay_identity_secret"

// GetRelayIdentitySecret retrieves the relay's identity secret key.
func (d *D) GetRelayIdentitySecret() (skb []byte, err error) {
	return d.GetMarker(relayIdentityMarkerKey)
}

// SetRelayIdentitySecret sets the relay's identity secret key.
func (d *D) SetRelayIdentitySecret(skb []byte) error {
	return d.SetMarker(relayIdentityMarkerKey, skb)
}

// GetOrCreateRelayIdentitySecret retrieves the relay identity, generating and
// storing a new one if none exists yet.
func (d *D) GetOrCreateRelayIdentitySecret() (skb []byte, err error) {
	skb, err = d.GetRelayIdentitySecret()
	if err == nil {
		return skb, nil
	}

	// Generate new identity
	skb, err = keys.GenerateSecretKey()
	if err != nil {
		return nil, fmt.Errorf("failed to generate identity: %w", err)
	}

	// Store it
	if err = d.SetRelayIdentitySecret(skb); err != nil {
		return nil, fmt.Errorf("failed to store identity: %w", err)
	}

	d.Logger.Infof("generated new relay identity")
	return skb, nil
}
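// Editor's sketch (not part of the original file): typical startup use,
// ensuring an identity exists before the relay begins serving. It calls only
// the methods defined above.
func ensureRelayIdentity(d *D) error {
	skb, err := d.GetOrCreateRelayIdentitySecret()
	if err != nil {
		return fmt.Errorf("relay identity unavailable: %w", err)
	}
	d.Logger.Infof("relay identity loaded (%d bytes)", len(skb))
	return nil
}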
@ -1,171 +0,0 @@
package dgraph

import (
	"bufio"
	"context"
	"encoding/json"
	"fmt"
	"io"
	"strings"

	"git.mleku.dev/mleku/nostr/encoders/event"
	"git.mleku.dev/mleku/nostr/encoders/hex"
)

// Import imports events from a reader (JSONL format).
func (d *D) Import(rr io.Reader) {
	d.ImportEventsFromReader(context.Background(), rr)
}

// Export exports events to a writer (JSONL format).
func (d *D) Export(c context.Context, w io.Writer, pubkeys ...[]byte) {
	// Build query based on whether pubkeys are specified
	var query string

	if len(pubkeys) > 0 {
		// Build pubkey filter
		pubkeyStrs := make([]string, len(pubkeys))
		for i, pk := range pubkeys {
			pubkeyStrs[i] = fmt.Sprintf("eq(event.pubkey, %q)", hex.Enc(pk))
		}
		pubkeyFilter := strings.Join(pubkeyStrs, " OR ")

		query = fmt.Sprintf(`{
			events(func: has(event.id)) @filter(%s) {
				event.id
				event.kind
				event.created_at
				event.content
				event.sig
				event.pubkey
				event.tags
			}
		}`, pubkeyFilter)
	} else {
		// Export all events
		query = `{
			events(func: has(event.id)) {
				event.id
				event.kind
				event.created_at
				event.content
				event.sig
				event.pubkey
				event.tags
			}
		}`
	}

	// Execute query
	resp, err := d.Query(c, query)
	if err != nil {
		d.Logger.Errorf("failed to query events for export: %v", err)
		fmt.Fprintf(w, "# Error: failed to query events: %v\n", err)
		return
	}

	// Parse events
	evs, err := d.parseEventsFromResponse(resp.Json)
	if err != nil {
		d.Logger.Errorf("failed to parse events for export: %v", err)
		fmt.Fprintf(w, "# Error: failed to parse events: %v\n", err)
		return
	}

	// Write header comment
	fmt.Fprintf(w, "# Exported %d events from dgraph\n", len(evs))

	// Write each event as JSONL
	count := 0
	for _, ev := range evs {
		jsonData, err := json.Marshal(ev)
		if err != nil {
			d.Logger.Warningf("failed to marshal event: %v", err)
			continue
		}

		if _, err := fmt.Fprintf(w, "%s\n", jsonData); err != nil {
			d.Logger.Errorf("failed to write event: %v", err)
			return
		}

		count++
		if count%1000 == 0 {
			d.Logger.Infof("exported %d events", count)
		}
	}

	d.Logger.Infof("export complete: %d events written", count)
}

// ImportEventsFromReader imports events from a reader.
func (d *D) ImportEventsFromReader(ctx context.Context, rr io.Reader) error {
	scanner := bufio.NewScanner(rr)
	scanner.Buffer(make([]byte, 1024*1024), 10*1024*1024) // 10MB max line size

	count := 0
	for scanner.Scan() {
		line := scanner.Bytes()
		if len(line) == 0 {
			continue
		}

		// Skip comments
		if line[0] == '#' {
			continue
		}

		// Parse event
		ev := &event.E{}
		if err := json.Unmarshal(line, ev); err != nil {
			d.Logger.Warningf("failed to parse event: %v", err)
			continue
		}

		// Save event
		if _, err := d.SaveEvent(ctx, ev); err != nil {
			d.Logger.Warningf("failed to import event: %v", err)
			continue
		}

		count++
		if count%1000 == 0 {
			d.Logger.Infof("imported %d events", count)
		}
	}

	if err := scanner.Err(); err != nil {
		return fmt.Errorf("scanner error: %w", err)
	}

	d.Logger.Infof("import complete: %d events", count)
	return nil
}

// ImportEventsFromStrings imports events from JSON strings, optionally
// checking each one against a write policy before saving.
func (d *D) ImportEventsFromStrings(
	ctx context.Context,
	eventJSONs []string,
	policyManager interface {
		CheckPolicy(action string, ev *event.E, pubkey []byte, remote string) (bool, error)
	},
) error {
	for _, eventJSON := range eventJSONs {
		ev := &event.E{}
		if err := json.Unmarshal([]byte(eventJSON), ev); err != nil {
			continue
		}

		// Check policy if manager is provided
		if policyManager != nil {
			if allowed, err := policyManager.CheckPolicy("write", ev, ev.Pubkey[:], "import"); err != nil || !allowed {
				continue
			}
		}

		// Save event
		if _, err := d.SaveEvent(ctx, ev); err != nil {
			d.Logger.Warningf("failed to import event: %v", err)
		}
	}

	return nil
}
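// Editor's sketch (not part of the original file): a round trip through the
// JSONL pipeline above, copying all events from one database to another. The
// in-memory buffer (requiring the "bytes" import) is an illustrative choice;
// a pipe would avoid buffering the whole export.
func copyEvents(ctx context.Context, src, dst *D) {
	var buf bytes.Buffer
	src.Export(ctx, &buf)
	dst.Import(&buf)
}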
@ -1,783 +0,0 @@
# Dgraph Integration Guide for ORLY Relay

This document outlines how to integrate Dgraph as an embedded graph database within the ORLY Nostr relay, enabling advanced querying capabilities beyond standard Nostr REQ filters.

## Table of Contents

1. [Overview](#overview)
2. [Architecture](#architecture)
3. [Embedding Dgraph as a Goroutine](#embedding-dgraph-as-a-goroutine)
4. [Internal Query Interface](#internal-query-interface)
5. [GraphQL Endpoint Setup](#graphql-endpoint-setup)
6. [Schema Design](#schema-design)
7. [Integration Points](#integration-points)
8. [Performance Considerations](#performance-considerations)

## Overview

### What Dgraph Provides

Dgraph is a distributed graph database that can be embedded into Go applications. For ORLY, it offers:

- **Graph Queries**: Traverse relationships between events, authors, and tags
- **GraphQL API**: External access to relay data with complex queries
- **DQL (Dgraph Query Language)**: Internal programmatic queries
- **Real-time Updates**: Live query subscriptions
- **Advanced Filtering**: Complex multi-hop queries impossible with Nostr REQ

### Why Integrate?

Nostr REQ filters are limited to:
- Single-author or tag-based queries
- Time range filters
- Kind filters
- Simple AND/OR combinations

Dgraph enables:
- "Find all events from users followed by my follows" (a 2-hop social graph query; see the DQL sketch below)
- "Show threads where Alice replied to Bob who replied to Carol"
- "Find all events tagged with #bitcoin by authors in my Web of Trust"
- Complex graph analytics on social networks
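As an illustrative sketch, the first query above looks roughly like this in DQL. The predicate names (`author.pubkey`, `authored_by`, `references`) are the ones this guide defines later and are assumptions until the schema is actually applied:

```dql
{
  two_hop(func: eq(author.pubkey, "<my-pubkey-hex>")) {
    # my kind-3 contact lists
    ~authored_by @filter(eq(event.kind, "3")) {
      # authors my follows reference
      references {
        # their recent events
        ~authored_by (orderdesc: event.created_at, first: 20) {
          event.id
          event.content
        }
      }
    }
  }
}
```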
## Architecture

### Dgraph Components

```
┌────────────────────────────────────────────────────────┐
│                      ORLY Relay                        │
│                                                        │
│  ┌──────────────┐         ┌─────────────────────────┐  │
│  │  HTTP API    │◄────────┤  GraphQL Endpoint       │  │
│  │  (existing)  │         │  (new - external)       │  │
│  └──────────────┘         └─────────────────────────┘  │
│         │                          │                   │
│         ▼                          ▼                   │
│  ┌──────────────────────────────────────────────────┐  │
│  │            Event Ingestion Layer                 │  │
│  │            - Save to Badger (existing)           │  │
│  │            - Sync to Dgraph (new)                │  │
│  └──────────────────────────────────────────────────┘  │
│         │                          │                   │
│         ▼                          ▼                   │
│  ┌────────────┐          ┌─────────────────┐           │
│  │  Badger    │          │  Dgraph Engine  │           │
│  │  (events)  │          │  (graph index)  │           │
│  └────────────┘          └─────────────────┘           │
│                                   │                    │
│                          ┌────────┴────────┐           │
│                          │                 │           │
│                          ▼                 ▼           │
│                    ┌──────────┐      ┌──────────┐      │
│                    │  Badger  │      │ RaftWAL  │      │
│                    │(postings)│      │  (WAL)   │      │
│                    └──────────┘      └──────────┘      │
└────────────────────────────────────────────────────────┘
```

### Storage Strategy

**Dual Storage Approach:**

1. **Badger (Primary)**: Continue using the existing Badger database for:
   - Fast event retrieval by ID
   - Time-based queries
   - Author-based queries
   - Tag-based queries
   - Kind-based queries

2. **Dgraph (Secondary)**: Use for:
   - Graph relationship queries
   - Complex multi-hop traversals
   - Social graph analytics
   - Web of Trust calculations

**Data Sync**: Events are written to both stores, but Dgraph contains:
- Event nodes (ID, kind, created_at, content)
- Author nodes (pubkey)
- Tag nodes (tag values)
- Relationships (authored_by, tagged_with, replies_to, mentions, etc.)

## Embedding Dgraph as a Goroutine

### Initialization Pattern

Based on dgraph's embedded mode (`worker/embedded.go` and `worker/server_state.go`); the import list below is trimmed to what this snippet actually uses, with `fmt` and `math` added since the code references them:

```go
package dgraph

import (
	"context"
	"fmt"
	"math"

	"github.com/dgraph-io/badger/v4"
	"github.com/dgraph-io/dgraph/graphql/admin"
	"github.com/dgraph-io/dgraph/posting"
	"github.com/dgraph-io/dgraph/schema"
	"github.com/dgraph-io/dgraph/worker"
	"github.com/dgraph-io/dgraph/x"
	"github.com/dgraph-io/ristretto/z"
)

// Manager handles the embedded Dgraph instance
type Manager struct {
	ctx    context.Context
	cancel context.CancelFunc

	// Dgraph components
	pstore   *badger.DB          // Postings store
	walstore *worker.DiskStorage // Write-ahead log

	// GraphQL servers
	mainServer  admin.IServeGraphQL
	adminServer admin.IServeGraphQL
	healthStore *admin.GraphQLHealthStore

	// Lifecycle
	closer       *z.Closer
	serverCloser *z.Closer
}

// Config holds Dgraph configuration
type Config struct {
	DataDir    string
	PostingDir string
	WALDir     string

	// Performance tuning
	PostingCacheMB int64
	MutationsMode  string

	// Network
	GraphQLPort int
	AdminPort   int

	// Feature flags
	EnableGraphQL       bool
	EnableIntrospection bool
}

// New creates a new embedded Dgraph manager
func New(ctx context.Context, cfg *Config) (*Manager, error) {
	ctx, cancel := context.WithCancel(ctx)

	m := &Manager{
		ctx:          ctx,
		cancel:       cancel,
		closer:       z.NewCloser(1),
		serverCloser: z.NewCloser(3),
	}

	// Initialize storage
	if err := m.initStorage(cfg); err != nil {
		return nil, err
	}

	// Initialize Dgraph components
	if err := m.initDgraph(cfg); err != nil {
		return nil, err
	}

	// Setup GraphQL endpoints
	if cfg.EnableGraphQL {
		if err := m.setupGraphQL(cfg); err != nil {
			return nil, err
		}
	}

	return m, nil
}

// initStorage opens Badger databases for postings and WAL
func (m *Manager) initStorage(cfg *Config) error {
	// Open postings store (Dgraph's main data)
	opts := badger.DefaultOptions(cfg.PostingDir).
		WithNumVersionsToKeep(math.MaxInt32).
		WithNamespaceOffset(x.NamespaceOffset)

	var err error
	m.pstore, err = badger.OpenManaged(opts)
	if err != nil {
		return fmt.Errorf("failed to open postings store: %w", err)
	}

	// Open WAL store
	m.walstore, err = worker.InitStorage(cfg.WALDir)
	if err != nil {
		m.pstore.Close()
		return fmt.Errorf("failed to open WAL: %w", err)
	}

	return nil
}

// initDgraph initializes Dgraph worker components
func (m *Manager) initDgraph(cfg *Config) error {
	// Initialize server state
	worker.State.Pstore = m.pstore
	worker.State.WALstore = m.walstore
	worker.State.FinishCh = make(chan struct{})

	// Initialize schema and posting layers
	schema.Init(m.pstore)
	posting.Init(m.pstore, cfg.PostingCacheMB, true)
	worker.Init(m.pstore)

	// For embedded/lite mode without Raft
	worker.InitForLite(m.pstore)

	return nil
}

// setupGraphQL initializes GraphQL servers
func (m *Manager) setupGraphQL(cfg *Config) error {
	globalEpoch := make(map[uint64]*uint64)

	// Create GraphQL servers
	m.mainServer, m.adminServer, m.healthStore = admin.NewServers(
		cfg.EnableIntrospection,
		globalEpoch,
		m.serverCloser,
	)

	return nil
}

// Start launches Dgraph in goroutines
func (m *Manager) Start() error {
	// Start worker server (internal gRPC)
	go worker.RunServer(false)

	return nil
}

// Stop gracefully shuts down Dgraph
func (m *Manager) Stop() error {
	m.cancel()

	// Signal shutdown
	m.closer.SignalAndWait()
	m.serverCloser.SignalAndWait()

	// Close databases
	if m.walstore != nil {
		m.walstore.Close()
	}
	if m.pstore != nil {
		m.pstore.Close()
	}

	return nil
}
```
### Integration with ORLY Main

In `app/main.go`:

```go
import (
	"next.orly.dev/pkg/dgraph"
)

type Listener struct {
	// ... existing fields ...

	dgraphManager *dgraph.Manager
}

func (l *Listener) init(ctx context.Context, cfg *config.C) (err error) {
	// ... existing initialization ...

	// Initialize Dgraph if enabled
	if cfg.DgraphEnabled {
		dgraphCfg := &dgraph.Config{
			DataDir:             cfg.DgraphDataDir,
			PostingDir:          filepath.Join(cfg.DgraphDataDir, "p"),
			WALDir:              filepath.Join(cfg.DgraphDataDir, "w"),
			PostingCacheMB:      cfg.DgraphCacheMB,
			EnableGraphQL:       cfg.DgraphGraphQL,
			EnableIntrospection: cfg.DgraphIntrospection,
			GraphQLPort:         cfg.DgraphGraphQLPort,
		}

		l.dgraphManager, err = dgraph.New(ctx, dgraphCfg)
		if err != nil {
			return fmt.Errorf("failed to initialize dgraph: %w", err)
		}

		if err = l.dgraphManager.Start(); err != nil {
			return fmt.Errorf("failed to start dgraph: %w", err)
		}

		log.I.F("dgraph manager started successfully")
	}

	// ... rest of initialization ...
}
```

## Internal Query Interface

### Direct Query Execution

Dgraph provides `edgraph.Server{}.QueryNoGrpc()` for internal queries:

```go
package dgraph

import (
	"context"

	"github.com/dgraph-io/dgo/v230/protos/api"
	"github.com/dgraph-io/dgraph/edgraph"
)

// Query executes a DQL query internally
func (m *Manager) Query(ctx context.Context, query string) (*api.Response, error) {
	server := &edgraph.Server{}

	req := &api.Request{
		Query: query,
	}

	return server.QueryNoGrpc(ctx, req)
}

// Mutate applies a mutation to the graph
func (m *Manager) Mutate(ctx context.Context, mutation *api.Mutation) (*api.Response, error) {
	server := &edgraph.Server{}

	req := &api.Request{
		Mutations: []*api.Mutation{mutation},
		CommitNow: true,
	}

	return server.QueryNoGrpc(ctx, req)
}
```

### Example: Adding Events to Graph

```go
// AddEvent indexes a Nostr event in the graph
func (m *Manager) AddEvent(ctx context.Context, event *event.E) error {
	// Build RDF triples for the event
	nquads := buildEventNQuads(event)

	mutation := &api.Mutation{
		SetNquads: []byte(nquads),
		CommitNow: true,
	}

	_, err := m.Mutate(ctx, mutation)
	return err
}

func buildEventNQuads(event *event.E) string {
	var nquads strings.Builder

	eventID := hex.EncodeToString(event.ID[:])
	authorPubkey := hex.EncodeToString(event.Pubkey[:])

	// Event node. Integer values are formatted with %d inside quotes; %q on an
	// int would produce a character literal rather than a number.
	// NOTE: blank nodes ("_:x") mint new UIDs on every mutation; production
	// code would use upserts keyed on event.id to avoid duplicate nodes.
	nquads.WriteString(fmt.Sprintf("_:%s <dgraph.type> \"Event\" .\n", eventID))
	nquads.WriteString(fmt.Sprintf("_:%s <event.id> %q .\n", eventID, eventID))
	nquads.WriteString(fmt.Sprintf("_:%s <event.kind> \"%d\" .\n", eventID, event.Kind))
	nquads.WriteString(fmt.Sprintf("_:%s <event.created_at> \"%d\" .\n", eventID, event.CreatedAt))
	nquads.WriteString(fmt.Sprintf("_:%s <event.content> %q .\n", eventID, event.Content))

	// Author relationship
	nquads.WriteString(fmt.Sprintf("_:%s <authored_by> _:%s .\n", eventID, authorPubkey))
	nquads.WriteString(fmt.Sprintf("_:%s <dgraph.type> \"Author\" .\n", authorPubkey))
	nquads.WriteString(fmt.Sprintf("_:%s <author.pubkey> %q .\n", authorPubkey, authorPubkey))

	// Tag relationships
	for _, tag := range event.Tags {
		if len(tag) >= 2 {
			tagType := string(tag[0])
			tagValue := string(tag[1])

			switch tagType {
			case "e": // Event reference
				nquads.WriteString(fmt.Sprintf("_:%s <references> _:%s .\n", eventID, tagValue))
			case "p": // Pubkey mention
				nquads.WriteString(fmt.Sprintf("_:%s <mentions> _:%s .\n", eventID, tagValue))
			case "t": // Hashtag
				tagID := "tag_" + tagValue
				nquads.WriteString(fmt.Sprintf("_:%s <tagged_with> _:%s .\n", eventID, tagID))
				nquads.WriteString(fmt.Sprintf("_:%s <dgraph.type> \"Tag\" .\n", tagID))
				nquads.WriteString(fmt.Sprintf("_:%s <tag.value> %q .\n", tagID, tagValue))
			}
		}
	}

	return nquads.String()
}
```
### Example: Query Social Graph

```go
// FindFollowsOfFollows returns events from the 2-hop social network
func (m *Manager) FindFollowsOfFollows(ctx context.Context, pubkey []byte) ([]*event.E, error) {
	pubkeyHex := hex.EncodeToString(pubkey)

	query := fmt.Sprintf(`{
		follows_of_follows(func: eq(author.pubkey, %q)) {
			# My follows (kind 3)
			~authored_by @filter(eq(event.kind, "3")) {
				# Their follows
				references {
					# Events from their follows
					~authored_by {
						event.id
						event.kind
						event.created_at
						event.content
						authored_by {
							author.pubkey
						}
					}
				}
			}
		}
	}`, pubkeyHex)

	resp, err := m.Query(ctx, query)
	if err != nil {
		return nil, err
	}

	// Parse response and convert to Nostr events
	return parseEventsFromDgraphResponse(resp.Json)
}
```

## GraphQL Endpoint Setup

### Exposing GraphQL via HTTP

Add GraphQL handlers to the existing HTTP mux in `app/server.go`:

```go
// setupGraphQLEndpoints adds Dgraph GraphQL endpoints
func (s *Server) setupGraphQLEndpoints() {
	if s.dgraphManager == nil {
		return
	}

	// Main GraphQL endpoint for queries
	s.mux.HandleFunc("/graphql", func(w http.ResponseWriter, r *http.Request) {
		// Extract namespace (for multi-tenancy)
		namespace := x.ExtractNamespaceHTTP(r)

		// Lazy load schema
		admin.LazyLoadSchema(namespace)

		// Serve GraphQL
		s.dgraphManager.MainServer().HTTPHandler().ServeHTTP(w, r)
	})

	// Admin endpoint for schema updates
	s.mux.HandleFunc("/admin", func(w http.ResponseWriter, r *http.Request) {
		namespace := x.ExtractNamespaceHTTP(r)
		admin.LazyLoadSchema(namespace)
		s.dgraphManager.AdminServer().HTTPHandler().ServeHTTP(w, r)
	})

	// Health check
	s.mux.HandleFunc("/graphql/health", func(w http.ResponseWriter, r *http.Request) {
		health := s.dgraphManager.HealthStore()
		if health.IsGraphQLReady() {
			w.WriteHeader(http.StatusOK)
			w.Write([]byte("GraphQL is ready"))
		} else {
			w.WriteHeader(http.StatusServiceUnavailable)
			w.Write([]byte("GraphQL is not ready"))
		}
	})
}
```

### GraphQL Resolver Integration

The manager needs to expose the GraphQL servers:

```go
// MainServer returns the main GraphQL server
func (m *Manager) MainServer() admin.IServeGraphQL {
	return m.mainServer
}

// AdminServer returns the admin GraphQL server
func (m *Manager) AdminServer() admin.IServeGraphQL {
	return m.adminServer
}

// HealthStore returns the health check store
func (m *Manager) HealthStore() *admin.GraphQLHealthStore {
	return m.healthStore
}
```
## Schema Design

### Dgraph Schema for Nostr Events

```graphql
# Types (design sketch)
type Event {
  id: String! @id @index(exact)
  kind: Int! @index(int)
  created_at: Int! @index(int)
  content: String @index(fulltext)
  sig: String

  # Relationships
  authored_by: Author! @reverse
  references: [Event] @reverse
  mentions: [Author] @reverse
  tagged_with: [Tag] @reverse
  replies_to: Event @reverse
}

type Author {
  pubkey: String! @id @index(exact)

  # Relationships
  events: [Event] @reverse
  follows: [Author] @reverse
  followed_by: [Author] @reverse

  # Computed/cached fields
  follower_count: Int
  following_count: Int
  event_count: Int
}

type Tag {
  value: String! @id @index(exact, term, fulltext)
  type: String @index(exact)

  # Relationships
  events: [Event] @reverse
  usage_count: Int
}

# Indexes for efficient queries
<event.kind>: int @index .
<event.created_at>: int @index .
<event.content>: string @index(fulltext) .
<author.pubkey>: string @index(exact) .
<tag.value>: string @index(exact, term, fulltext) .
```

### Setting the Schema

In DQL, predicates are declared (with their indexes) at the top level and type declarations only list which predicates each type carries; schema changes are applied through an Alter operation, not a mutation:

```go
func (m *Manager) SetSchema(ctx context.Context) error {
	schemaStr := `
		event.id: string @index(exact) .
		event.kind: int @index(int) .
		event.created_at: int @index(int) .
		event.content: string @index(fulltext) .
		authored_by: uid @reverse .
		references: [uid] @reverse .
		mentions: [uid] @reverse .
		tagged_with: [uid] @reverse .
		author.pubkey: string @index(exact) .
		tag.value: string @index(exact, term, fulltext) .

		type Event {
			event.id
			event.kind
			event.created_at
			event.content
			authored_by
			references
			mentions
			tagged_with
		}

		type Author {
			author.pubkey
		}

		type Tag {
			tag.value
		}
	`

	// Schema is applied via Alter; passing schema text as SetNquads would fail.
	server := &edgraph.Server{}
	_, err := server.Alter(ctx, &api.Operation{Schema: schemaStr})
	return err
}
```
## Integration Points

### Event Ingestion Hook

Modify `pkg/database/save-event.go` to sync events to Dgraph:

```go
func (d *D) SaveEvent(ctx context.Context, ev *event.E) (exists bool, err error) {
	// ... existing Badger save logic ...

	// Sync to Dgraph if enabled
	if d.dgraphManager != nil {
		go func() {
			if err := d.dgraphManager.AddEvent(context.Background(), ev); err != nil {
				log.E.F("failed to sync event to dgraph: %v", err)
			}
		}()
	}

	return
}
```

### Query Interface Extension

Add GraphQL query support alongside Nostr REQ:

```go
// app/handle-graphql.go

func (s *Server) handleGraphQLQuery(w http.ResponseWriter, r *http.Request) {
	if s.dgraphManager == nil {
		http.Error(w, "GraphQL not enabled", http.StatusNotImplemented)
		return
	}

	// Read GraphQL query from request
	var req struct {
		Query     string                 `json:"query"`
		Variables map[string]interface{} `json:"variables"`
	}

	if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}

	// Execute via Dgraph
	gqlReq := &schema.Request{
		Query:     req.Query,
		Variables: req.Variables,
	}

	namespace := x.ExtractNamespaceHTTP(r)
	resp := s.dgraphManager.MainServer().ResolveWithNs(r.Context(), namespace, gqlReq)

	// Return response
	w.Header().Set("Content-Type", "application/json")
	json.NewEncoder(w).Encode(resp)
}
```

## Performance Considerations

### Memory Usage

- **Dgraph Overhead**: ~500MB-1GB baseline
- **Posting Cache**: Configurable (recommend 25% of available RAM)
- **WAL**: Disk-based, minimal memory impact

### Storage Requirements

- **Badger (Postings)**: ~2-3x event data size (compressed)
- **WAL**: ~1.5x mutation data (compacted periodically)
- **Total**: Estimate 4-5x your Nostr event storage

### Query Performance

- **Graph Traversals**: O(edges); typically sub-100ms for 2-3 hops
- **Full-text Search**: O(log n) with indexes
- **Time-range Queries**: O(log n) with int indexes
- **Complex Joins**: Can be expensive; use pagination

### Optimization Strategies

1. **Selective Indexing**: Only index events that need graph queries (e.g., kinds 1, 3, 6, 7)
2. **Async Writes**: Don't block event saves on Dgraph sync
3. **Read-through Cache**: Query Badger first for simple lookups
4. **Batch Mutations**: Accumulate mutations and apply in batches (see the sketch after this list)
5. **Schema Optimization**: Only index fields you'll query
6. **Pagination**: Always use `first:` and `after:` in GraphQL queries
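A minimal sketch of the batching strategy, building on the `Manager.Mutate` method shown earlier. The mutex-guarded buffer and the idea of flushing on a timer or size threshold are editorial assumptions, not code from the repository (imports: `context`, `sync`, and the `api` package used above):

```go
// MutationBatcher accumulates N-Quads and applies them in a single mutation
// when flushed, trading a little write latency for throughput.
type MutationBatcher struct {
	mu      sync.Mutex
	pending []byte
	mgr     *Manager
}

// Add appends N-Quads to the pending buffer.
func (b *MutationBatcher) Add(nquads string) {
	b.mu.Lock()
	b.pending = append(b.pending, nquads...)
	b.mu.Unlock()
}

// Flush commits everything accumulated so far in one mutation.
func (b *MutationBatcher) Flush(ctx context.Context) error {
	b.mu.Lock()
	nquads := b.pending
	b.pending = nil
	b.mu.Unlock()
	if len(nquads) == 0 {
		return nil
	}
	_, err := b.mgr.Mutate(ctx, &api.Mutation{SetNquads: nquads, CommitNow: true})
	return err
}
```

A caller would `Add` the N-Quads from each ingested event and `Flush` periodically (for example from a ticker goroutine), so thousands of small writes collapse into a few commits.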
### Monitoring

```go
// Add metrics
var (
	dgraphQueriesTotal   = prometheus.NewCounter(...)
	dgraphQueryDuration  = prometheus.NewHistogram(...)
	dgraphMutationsTotal = prometheus.NewCounter(...)
	dgraphErrors         = prometheus.NewCounter(...)
)

// Wrap queries with instrumentation
func (m *Manager) Query(ctx context.Context, query string) (*api.Response, error) {
	start := time.Now()
	defer func() {
		dgraphQueriesTotal.Inc()
		dgraphQueryDuration.Observe(time.Since(start).Seconds())
	}()

	resp, err := m.query(ctx, query)
	if err != nil {
		dgraphErrors.Inc()
	}
	return resp, err
}
```

## Alternative: Lightweight Graph Library

Given Dgraph's complexity and resource requirements, consider these alternatives:

### cayley (Google's graph database)

```bash
go get github.com/cayleygraph/cayley
```

- Lighter weight (~50MB overhead)
- Multiple backend support (Badger, Memory, SQL)
- Simpler API
- Good for smaller graphs (<10M nodes)

### badger-graph (Custom Implementation)

Build a custom graph layer on top of the existing Badger store:

```go
// Simplified graph index using Badger directly
type GraphIndex struct {
	db *badger.DB
}

// AddEdge stores an edge as subject -> predicate -> object
func (g *GraphIndex) AddEdge(subject, predicate, object string) error {
	key := fmt.Sprintf("edge:%s:%s:%s", subject, predicate, object)
	return g.db.Update(func(txn *badger.Txn) error {
		return txn.Set([]byte(key), []byte{})
	})
}

// GetEdges returns the objects of all edges with the given subject and
// predicate by iterating over the key prefix.
func (g *GraphIndex) GetEdges(subject, predicate string) ([]string, error) {
	prefix := []byte(fmt.Sprintf("edge:%s:%s:", subject, predicate))
	var objects []string
	err := g.db.View(func(txn *badger.Txn) error {
		it := txn.NewIterator(badger.DefaultIteratorOptions)
		defer it.Close()
		for it.Seek(prefix); it.ValidForPrefix(prefix); it.Next() {
			key := it.Item().Key()
			objects = append(objects, string(key[len(prefix):]))
		}
		return nil
	})
	return objects, err
}
```

This avoids Dgraph's overhead while providing basic graph functionality.

## Conclusion

Embedding Dgraph in ORLY enables powerful graph queries that extend far beyond Nostr's REQ filters. However, it comes with significant complexity and resource requirements. Consider:

- **Full Dgraph**: For production relays with advanced query needs
- **Cayley**: For medium-sized relays with moderate graph needs
- **Custom Badger-Graph**: For lightweight graph indexing with minimal overhead

Choose based on your specific use case, expected load, and query complexity requirements.
@ -1,68 +0,0 @@
package dgraph

import (
	"fmt"
	"runtime"
	"strings"

	"go.uber.org/atomic"
	"lol.mleku.dev"
	"lol.mleku.dev/log"
)

// NewLogger creates a new dgraph logger.
func NewLogger(logLevel int, label string) (l *logger) {
	l = &logger{Label: label}
	l.Level.Store(int32(logLevel))
	return
}

type logger struct {
	Level atomic.Int32
	Label string
}

// SetLogLevel atomically adjusts the log level to the given log level code.
func (l *logger) SetLogLevel(level int) {
	l.Level.Store(int32(level))
}

// Errorf is the log printer for error-level messages.
func (l *logger) Errorf(s string, i ...interface{}) {
	if l.Level.Load() >= lol.Error {
		s = l.Label + ": " + s
		txt := fmt.Sprintf(s, i...)
		_, file, line, _ := runtime.Caller(2)
		log.E.F("%s\n%s:%d", strings.TrimSpace(txt), file, line)
	}
}

// Warningf is the log printer for warning-level messages.
func (l *logger) Warningf(s string, i ...interface{}) {
	if l.Level.Load() >= lol.Warn {
		s = l.Label + ": " + s
		txt := fmt.Sprintf(s, i...)
		_, file, line, _ := runtime.Caller(2)
		log.W.F("%s\n%s:%d", strings.TrimSpace(txt), file, line)
	}
}

// Infof is the log printer for info-level messages.
func (l *logger) Infof(s string, i ...interface{}) {
	if l.Level.Load() >= lol.Info {
		s = l.Label + ": " + s
		txt := fmt.Sprintf(s, i...)
		_, file, line, _ := runtime.Caller(2)
		log.I.F("%s\n%s:%d", strings.TrimSpace(txt), file, line)
	}
}

// Debugf is the log printer for debug-level messages.
func (l *logger) Debugf(s string, i ...interface{}) {
	if l.Level.Load() >= lol.Debug {
		s = l.Label + ": " + s
		txt := fmt.Sprintf(s, i...)
		_, file, line, _ := runtime.Caller(2)
		log.D.F("%s\n%s:%d", strings.TrimSpace(txt), file, line)
	}
}
@ -1,120 +0,0 @@
package dgraph

import (
	"context"
	"encoding/json"
	"fmt"

	"github.com/dgraph-io/dgo/v230/protos/api"
	"git.mleku.dev/mleku/nostr/encoders/hex"
)

// Markers provide metadata key-value storage using Dgraph predicates.
// We store markers as special nodes with type "Marker".

// SetMarker sets a metadata marker.
//
// NOTE: blank nodes ("_:...") mint a new UID on every mutation, so repeated
// calls for the same key accumulate marker nodes and GetMarker reads the
// first match; a DQL upsert would be needed for true update-in-place.
func (d *D) SetMarker(key string, value []byte) error {
	// Create a marker node
	markerID := "marker_" + key
	valueHex := hex.Enc(value)

	nquads := fmt.Sprintf(`
		_:%s <dgraph.type> "Marker" .
		_:%s <marker.key> %q .
		_:%s <marker.value> %q .
	`, markerID, markerID, key, markerID, valueHex)

	mutation := &api.Mutation{
		SetNquads: []byte(nquads),
		CommitNow: true,
	}

	if _, err := d.Mutate(context.Background(), mutation); err != nil {
		return fmt.Errorf("failed to set marker: %w", err)
	}

	return nil
}

// GetMarker retrieves a metadata marker.
func (d *D) GetMarker(key string) (value []byte, err error) {
	query := fmt.Sprintf(`{
		marker(func: eq(marker.key, %q)) {
			marker.value
		}
	}`, key)

	resp, err := d.Query(context.Background(), query)
	if err != nil {
		return nil, fmt.Errorf("failed to get marker: %w", err)
	}

	var result struct {
		Marker []struct {
			Value string `json:"marker.value"`
		} `json:"marker"`
	}

	if err = json.Unmarshal(resp.Json, &result); err != nil {
		return nil, fmt.Errorf("failed to parse marker response: %w", err)
	}

	if len(result.Marker) == 0 {
		return nil, fmt.Errorf("marker not found: %s", key)
	}

	// Decode hex value
	value, err = hex.Dec(result.Marker[0].Value)
	if err != nil {
		return nil, fmt.Errorf("failed to decode marker value: %w", err)
	}

	return value, nil
}

// HasMarker checks if a marker exists.
func (d *D) HasMarker(key string) bool {
	_, err := d.GetMarker(key)
	return err == nil
}

// DeleteMarker removes a metadata marker.
func (d *D) DeleteMarker(key string) error {
	// Find the marker's UID
	query := fmt.Sprintf(`{
		marker(func: eq(marker.key, %q)) {
			uid
		}
	}`, key)

	resp, err := d.Query(context.Background(), query)
	if err != nil {
		return fmt.Errorf("failed to find marker: %w", err)
	}

	var result struct {
		Marker []struct {
			UID string `json:"uid"`
		} `json:"marker"`
	}

	if err = json.Unmarshal(resp.Json, &result); err != nil {
		return fmt.Errorf("failed to parse marker query: %w", err)
	}

	if len(result.Marker) == 0 {
		return nil // Marker doesn't exist
	}

	// Delete the marker node
	mutation := &api.Mutation{
		DelNquads: []byte(fmt.Sprintf("<%s> * * .", result.Marker[0].UID)),
		CommitNow: true,
	}

	if _, err = d.Mutate(context.Background(), mutation); err != nil {
		return fmt.Errorf("failed to delete marker: %w", err)
	}

	return nil
}
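// Editor's sketch (not part of the original file): an upsert form of SetMarker
// that updates an existing marker node instead of creating a new one each
// call. It assumes a request runner that accepts a combined query-plus-
// mutation api.Request (the standard Dgraph upsert shape); the runRequest
// helper named here is hypothetical. If no node matches, the uid(m) triple is
// a no-op, so the create case still needs the blank-node path above (or a
// conditional @if mutation).
func (d *D) setMarkerUpsert(key string, value []byte) error {
	req := &api.Request{
		Query: fmt.Sprintf(`query { m as var(func: eq(marker.key, %q)) }`, key),
		Mutations: []*api.Mutation{{
			SetNquads: []byte(fmt.Sprintf(
				"uid(m) <marker.value> %q .", hex.Enc(value))),
		}},
		CommitNow: true,
	}
	_, err := d.runRequest(context.Background(), req) // hypothetical helper
	return err
}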
@ -1,211 +0,0 @@
package dgraph

import (
	"encoding/json"
	"fmt"
	"time"

	"git.mleku.dev/mleku/nostr/encoders/hex"
	"next.orly.dev/pkg/database"
)

// NIP-43 invite-based ACL methods.
// Simplified implementation using marker-based storage.

// AddNIP43Member adds a member using an invite code.
func (d *D) AddNIP43Member(pubkey []byte, inviteCode string) error {
	key := "nip43_" + hex.Enc(pubkey)

	member := database.NIP43Membership{
		InviteCode: inviteCode,
		AddedAt:    time.Now(),
	}
	copy(member.Pubkey[:], pubkey)

	data, err := json.Marshal(member)
	if err != nil {
		return fmt.Errorf("failed to marshal membership: %w", err)
	}

	// Also add to the members list
	if err := d.addToMembersList(pubkey); err != nil {
		return err
	}

	return d.SetMarker(key, data)
}

// RemoveNIP43Member removes a member.
func (d *D) RemoveNIP43Member(pubkey []byte) error {
	key := "nip43_" + hex.Enc(pubkey)

	// Remove from the members list
	if err := d.removeFromMembersList(pubkey); err != nil {
		return err
	}

	return d.DeleteMarker(key)
}

// IsNIP43Member checks if a pubkey is a member; a failed lookup is treated as
// "not a member" rather than surfaced as an error.
func (d *D) IsNIP43Member(pubkey []byte) (isMember bool, err error) {
	_, err = d.GetNIP43Membership(pubkey)
	return err == nil, nil
}

// GetNIP43Membership retrieves membership information.
func (d *D) GetNIP43Membership(pubkey []byte) (*database.NIP43Membership, error) {
	key := "nip43_" + hex.Enc(pubkey)

	data, err := d.GetMarker(key)
	if err != nil {
		return nil, err
	}

	var member database.NIP43Membership
	if err := json.Unmarshal(data, &member); err != nil {
		return nil, fmt.Errorf("failed to unmarshal membership: %w", err)
	}

	return &member, nil
}

// GetAllNIP43Members retrieves all member pubkeys.
func (d *D) GetAllNIP43Members() ([][]byte, error) {
	data, err := d.GetMarker("nip43_members_list")
	if err != nil {
		return nil, nil // No members = empty list
	}

	var members []string
	if err := json.Unmarshal(data, &members); err != nil {
		return nil, fmt.Errorf("failed to unmarshal members list: %w", err)
	}

	result := make([][]byte, 0, len(members))
	for _, hexPubkey := range members {
		pubkey, err := hex.Dec(hexPubkey)
		if err != nil {
			continue
		}
		result = append(result, pubkey)
	}

	return result, nil
}

// StoreInviteCode stores an invite code with an expiration time.
func (d *D) StoreInviteCode(code string, expiresAt time.Time) error {
	key := "invite_" + code

	inviteData := map[string]interface{}{
		"code":      code,
		"expiresAt": expiresAt,
	}

	data, err := json.Marshal(inviteData)
	if err != nil {
		return fmt.Errorf("failed to marshal invite: %w", err)
	}

	return d.SetMarker(key, data)
}

// ValidateInviteCode checks if an invite code exists and has not expired.
func (d *D) ValidateInviteCode(code string) (valid bool, err error) {
	key := "invite_" + code

	data, err := d.GetMarker(key)
	if err != nil {
		return false, nil // Code doesn't exist
	}

	var inviteData map[string]interface{}
	if err := json.Unmarshal(data, &inviteData); err != nil {
		return false, fmt.Errorf("failed to unmarshal invite: %w", err)
	}

	// Check expiration
	if expiresStr, ok := inviteData["expiresAt"].(string); ok {
		expiresAt, err := time.Parse(time.RFC3339, expiresStr)
		if err == nil && time.Now().After(expiresAt) {
			return false, nil // Expired
		}
	}

	return true, nil
}

// DeleteInviteCode removes an invite code.
func (d *D) DeleteInviteCode(code string) error {
	key := "invite_" + code
	return d.DeleteMarker(key)
}

// PublishNIP43MembershipEvent publishes a membership event.
func (d *D) PublishNIP43MembershipEvent(kind int, pubkey []byte) error {
	// This would require publishing an actual Nostr event.
	// For now, just log it.
	d.Logger.Infof("would publish NIP-43 event kind %d for %s", kind, hex.Enc(pubkey))
	return nil
}

// Helper functions

func (d *D) addToMembersList(pubkey []byte) error {
	data, err := d.GetMarker("nip43_members_list")

	var members []string
	if err == nil {
		if err := json.Unmarshal(data, &members); err != nil {
			return fmt.Errorf("failed to unmarshal members list: %w", err)
		}
	}

	hexPubkey := hex.Enc(pubkey)

	// Check if already in the list
	for _, member := range members {
		if member == hexPubkey {
			return nil // Already in list
		}
	}

	members = append(members, hexPubkey)

	data, err = json.Marshal(members)
	if err != nil {
		return fmt.Errorf("failed to marshal members list: %w", err)
	}

	return d.SetMarker("nip43_members_list", data)
}

func (d *D) removeFromMembersList(pubkey []byte) error {
	data, err := d.GetMarker("nip43_members_list")
	if err != nil {
		return nil // List doesn't exist
	}

	var members []string
	if err := json.Unmarshal(data, &members); err != nil {
		return fmt.Errorf("failed to unmarshal members list: %w", err)
	}

	hexPubkey := hex.Enc(pubkey)

	// Remove from the list
	newMembers := make([]string, 0, len(members))
	for _, member := range members {
		if member != hexPubkey {
			newMembers = append(newMembers, member)
		}
	}

	data, err = json.Marshal(newMembers)
	if err != nil {
		return fmt.Errorf("failed to marshal members list: %w", err)
	}

	return d.SetMarker("nip43_members_list", data)
}
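// Editor's sketch (not part of the original file): the invite redemption flow
// composed from the methods above. Single-use semantics (deleting the code
// after redemption) is an assumption, not something NIP-43 or this package
// mandates.
func (d *D) redeemInvite(code string, pubkey []byte) error {
	valid, err := d.ValidateInviteCode(code)
	if err != nil {
		return err
	}
	if !valid {
		return fmt.Errorf("invalid or expired invite code")
	}
	if err := d.AddNIP43Member(pubkey, code); err != nil {
		return err
	}
	// Treat codes as single-use.
	return d.DeleteInviteCode(code)
}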
@ -1,376 +0,0 @@
package dgraph

import (
	"context"
	"encoding/json"
	"fmt"
	"strings"

	"git.mleku.dev/mleku/nostr/encoders/event"
	"git.mleku.dev/mleku/nostr/encoders/filter"
	"git.mleku.dev/mleku/nostr/encoders/hex"
	"git.mleku.dev/mleku/nostr/encoders/tag"
	"next.orly.dev/pkg/database/indexes/types"
	"next.orly.dev/pkg/interfaces/store"
)

// QueryEvents retrieves events matching the given filter.
func (d *D) QueryEvents(c context.Context, f *filter.F) (evs event.S, err error) {
	return d.QueryEventsWithOptions(c, f, false, false)
}

// QueryAllVersions retrieves all versions of events matching the filter.
func (d *D) QueryAllVersions(c context.Context, f *filter.F) (evs event.S, err error) {
	return d.QueryEventsWithOptions(c, f, false, true)
}

// QueryEventsWithOptions retrieves events with specific options.
// (showAllVersions is accepted for interface compatibility but is not yet used
// by the DQL query builder.)
func (d *D) QueryEventsWithOptions(
	c context.Context, f *filter.F, includeDeleteEvents bool, showAllVersions bool,
) (evs event.S, err error) {
	// Build DQL query from Nostr filter
	query := d.buildDQLQuery(f, includeDeleteEvents)

	// Execute query
	resp, err := d.Query(c, query)
	if err != nil {
		return nil, fmt.Errorf("failed to execute query: %w", err)
	}

	// Parse response
	evs, err = d.parseEventsFromResponse(resp.Json)
	if err != nil {
		return nil, fmt.Errorf("failed to parse events: %w", err)
	}

	return evs, nil
}

// buildDQLQuery constructs a DQL query from a Nostr filter.
func (d *D) buildDQLQuery(f *filter.F, includeDeleteEvents bool) string {
	return d.buildDQLQueryWithFields(f, includeDeleteEvents, []string{
		"uid",
		"event.id",
		"event.kind",
		"event.created_at",
		"event.content",
		"event.sig",
		"event.pubkey",
		"event.tags",
	})
}

// buildDQLQueryWithFields constructs a DQL query with custom field selection.
func (d *D) buildDQLQueryWithFields(f *filter.F, includeDeleteEvents bool, fields []string) string {
	var conditions []string
	var funcQuery string

	// IDs filter
	if len(f.Ids.T) > 0 {
		idConditions := make([]string, len(f.Ids.T))
		for i, id := range f.Ids.T {
			// Handle prefix matching
			if len(id) < 64 {
				// Prefix search
				idConditions[i] = fmt.Sprintf("regexp(event.id, /^%s/)", hex.Enc(id))
			} else {
				idConditions[i] = fmt.Sprintf("eq(event.id, %q)", hex.Enc(id))
			}
		}
		if len(idConditions) == 1 {
			funcQuery = idConditions[0]
		} else {
			conditions = append(conditions, "("+strings.Join(idConditions, " OR ")+")")
		}
	}

	// Authors filter
	if len(f.Authors.T) > 0 {
		authorConditions := make([]string, len(f.Authors.T))
		for i, author := range f.Authors.T {
			// Handle prefix matching
			if len(author) < 64 {
				authorConditions[i] = fmt.Sprintf("regexp(event.pubkey, /^%s/)", hex.Enc(author))
			} else {
				authorConditions[i] = fmt.Sprintf("eq(event.pubkey, %q)", hex.Enc(author))
			}
		}
		if funcQuery == "" && len(authorConditions) == 1 {
			funcQuery = authorConditions[0]
		} else {
			conditions = append(conditions, "("+strings.Join(authorConditions, " OR ")+")")
		}
	}

	// Kinds filter
	if len(f.Kinds.K) > 0 {
		kindConditions := make([]string, len(f.Kinds.K))
		for i, kind := range f.Kinds.K {
			kindConditions[i] = fmt.Sprintf("eq(event.kind, %d)", kind)
		}
		conditions = append(conditions, "("+strings.Join(kindConditions, " OR ")+")")
	}

	// Time range filters
	if f.Since != nil {
		conditions = append(conditions, fmt.Sprintf("ge(event.created_at, %d)", f.Since.V))
	}
	if f.Until != nil {
		conditions = append(conditions, fmt.Sprintf("le(event.created_at, %d)", f.Until.V))
	}

	// Tag filters
	for _, tagValues := range *f.Tags {
		if len(tagValues.T) > 0 {
			tagConditions := make([]string, len(tagValues.T))
			for i, tagValue := range tagValues.T {
				// This is a simplified tag query - in production you'd want to use facets
				tagConditions[i] = fmt.Sprintf("eq(tag.value, %q)", string(tagValue))
			}
			conditions = append(conditions, "("+strings.Join(tagConditions, " OR ")+")")
		}
	}

	// Exclude delete events unless requested
	if !includeDeleteEvents {
		conditions = append(conditions, "NOT eq(event.kind, 5)")
	}

	// Build the final query
	if funcQuery == "" {
		funcQuery = "has(event.id)"
	}

	filterStr := ""
	if len(conditions) > 0 {
		filterStr = " @filter(" + strings.Join(conditions, " AND ") + ")"
	}

	// Add ordering and limit; f.Limit is a pointer and must be both nil-checked
	// and dereferenced when formatting.
	orderBy := ", orderdesc: event.created_at"
	limitStr := ""
	if f.Limit != nil && *f.Limit > 0 {
		limitStr = fmt.Sprintf(", first: %d", *f.Limit)
	}

	// Build field list
	fieldStr := strings.Join(fields, "\n\t\t\t")

	// The @filter block belongs after the closing paren of the root function,
	// not inside its argument list.
	query := fmt.Sprintf(`{
		events(func: %s%s%s)%s {
			%s
		}
	}`, funcQuery, orderBy, limitStr, filterStr, fieldStr)

	return query
}
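// Editor's note (not part of the original file): for a filter with one
// full-length author, kind 1, and limit 20, buildDQLQuery above yields a
// query shaped like:
//
//	{
//		events(func: eq(event.pubkey, "ab..ef"), orderdesc: event.created_at, first: 20) @filter((eq(event.kind, 1)) AND NOT eq(event.kind, 5)) {
//			uid
//			event.id
//			event.kind
//			event.created_at
//			event.content
//			event.sig
//			event.pubkey
//			event.tags
//		}
//	}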
|
|
||||||
// parseEventsFromResponse converts Dgraph JSON response to Nostr events
|
|
||||||
func (d *D) parseEventsFromResponse(jsonData []byte) ([]*event.E, error) { |
|
||||||
var result struct { |
|
||||||
Events []struct { |
|
||||||
UID string `json:"uid"` |
|
||||||
ID string `json:"event.id"` |
|
||||||
Kind int `json:"event.kind"` |
|
||||||
CreatedAt int64 `json:"event.created_at"` |
|
||||||
Content string `json:"event.content"` |
|
||||||
Sig string `json:"event.sig"` |
|
||||||
Pubkey string `json:"event.pubkey"` |
|
||||||
Tags string `json:"event.tags"` |
|
||||||
} `json:"events"` |
|
||||||
} |
|
||||||
|
|
||||||
if err := json.Unmarshal(jsonData, &result); err != nil { |
|
||||||
return nil, err |
|
||||||
} |
|
||||||
|
|
||||||
events := make([]*event.E, 0, len(result.Events)) |
|
||||||
for _, ev := range result.Events { |
|
||||||
// Decode hex strings
|
|
||||||
id, err := hex.Dec(ev.ID) |
|
||||||
if err != nil { |
|
||||||
continue |
|
||||||
} |
|
||||||
sig, err := hex.Dec(ev.Sig) |
|
||||||
if err != nil { |
|
||||||
continue |
|
||||||
} |
|
||||||
pubkey, err := hex.Dec(ev.Pubkey) |
|
||||||
if err != nil { |
|
||||||
continue |
|
||||||
} |
|
||||||
|
|
||||||
// Parse tags from JSON
|
|
||||||
var tags tag.S |
|
||||||
if ev.Tags != "" { |
|
||||||
if err := json.Unmarshal([]byte(ev.Tags), &tags); err != nil { |
|
||||||
continue |
|
||||||
} |
|
||||||
} |
|
||||||
|
|
||||||
// Create event
|
|
||||||
e := &event.E{ |
|
||||||
Kind: uint16(ev.Kind), |
|
||||||
CreatedAt: ev.CreatedAt, |
|
||||||
Content: []byte(ev.Content), |
|
||||||
Tags: &tags, |
|
||||||
} |
|
||||||
|
|
||||||
// Copy fixed-size arrays
|
|
||||||
copy(e.ID[:], id) |
|
||||||
copy(e.Sig[:], sig) |
|
||||||
copy(e.Pubkey[:], pubkey) |
|
||||||
|
|
||||||
events = append(events, e) |
|
||||||
} |
|
||||||
|
|
||||||
return events, nil |
|
||||||
} |

// QueryDeleteEventsByTargetId retrieves delete events targeting a specific event ID
func (d *D) QueryDeleteEventsByTargetId(c context.Context, targetEventId []byte) (
	evs event.S, err error,
) {
	targetIDStr := hex.Enc(targetEventId)

	// Query for kind 5 events that reference this event
	query := fmt.Sprintf(`{
	events(func: eq(event.kind, 5)) {
		uid
		event.id
		event.kind
		event.created_at
		event.content
		event.sig
		event.pubkey
		event.tags
		references @filter(eq(event.id, %q)) {
			event.id
		}
	}
}`, targetIDStr)

	resp, err := d.Query(c, query)
	if err != nil {
		return nil, fmt.Errorf("failed to query delete events: %w", err)
	}

	evs, err = d.parseEventsFromResponse(resp.Json)
	if err != nil {
		return nil, fmt.Errorf("failed to parse delete events: %w", err)
	}

	return evs, nil
}

// QueryForSerials retrieves event serials matching a filter
func (d *D) QueryForSerials(c context.Context, f *filter.F) (
	serials types.Uint40s, err error,
) {
	// Build query requesting only serial numbers
	query := d.buildDQLQueryWithFields(f, false, []string{"event.serial"})

	resp, err := d.Query(c, query)
	if err != nil {
		return nil, fmt.Errorf("failed to query serials: %w", err)
	}

	var result struct {
		Events []struct {
			Serial int64 `json:"event.serial"`
		} `json:"events"`
	}

	if err = json.Unmarshal(resp.Json, &result); err != nil {
		return nil, err
	}

	serials = make([]*types.Uint40, 0, len(result.Events))
	for _, ev := range result.Events {
		serial := types.Uint40{}
		serial.Set(uint64(ev.Serial))
		serials = append(serials, &serial)
	}

	return serials, nil
}

// QueryForIds retrieves event IDs matching a filter
func (d *D) QueryForIds(c context.Context, f *filter.F) (
	idPkTs []*store.IdPkTs, err error,
) {
	// Build query requesting only ID, pubkey, created_at, serial
	query := d.buildDQLQueryWithFields(f, false, []string{
		"event.id",
		"event.pubkey",
		"event.created_at",
		"event.serial",
	})

	resp, err := d.Query(c, query)
	if err != nil {
		return nil, fmt.Errorf("failed to query IDs: %w", err)
	}

	var result struct {
		Events []struct {
			ID        string `json:"event.id"`
			Pubkey    string `json:"event.pubkey"`
			CreatedAt int64  `json:"event.created_at"`
			Serial    int64  `json:"event.serial"`
		} `json:"events"`
	}

	if err = json.Unmarshal(resp.Json, &result); err != nil {
		return nil, err
	}

	idPkTs = make([]*store.IdPkTs, 0, len(result.Events))
	for _, ev := range result.Events {
		id, err := hex.Dec(ev.ID)
		if err != nil {
			continue
		}
		pubkey, err := hex.Dec(ev.Pubkey)
		if err != nil {
			continue
		}
		idPkTs = append(idPkTs, &store.IdPkTs{
			Id:  id,
			Pub: pubkey,
			Ts:  ev.CreatedAt,
			Ser: uint64(ev.Serial),
		})
	}

	return idPkTs, nil
}

// CountEvents counts events matching a filter
func (d *D) CountEvents(c context.Context, f *filter.F) (
	count int, approximate bool, err error,
) {
	// Build query requesting only count
	query := d.buildDQLQueryWithFields(f, false, []string{"count(uid)"})

	resp, err := d.Query(c, query)
	if err != nil {
		return 0, false, fmt.Errorf("failed to count events: %w", err)
	}

	var result struct {
		Events []struct {
			Count int `json:"count"`
		} `json:"events"`
	}

	if err = json.Unmarshal(resp.Json, &result); err != nil {
		return 0, false, err
	}

	if len(result.Events) > 0 {
		count = result.Events[0].Count
	}

	return count, false, nil
}
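With `count(uid)` as the only requested field, the generated query reduces to a Dgraph aggregation of roughly this shape (illustrative):

```dql
{
	events(func: has(event.id) @filter(NOT eq(event.kind, 5)), orderdesc: event.created_at) {
		count(uid)
	}
}
```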
@@ -1,517 +0,0 @@
package dgraph

import (
	"fmt"
	"testing"

	"lol.mleku.dev/chk"
	"git.mleku.dev/mleku/nostr/encoders/event"
	"git.mleku.dev/mleku/nostr/encoders/filter"
	"git.mleku.dev/mleku/nostr/encoders/hex"
	"git.mleku.dev/mleku/nostr/encoders/kind"
	"git.mleku.dev/mleku/nostr/encoders/tag"
	"git.mleku.dev/mleku/nostr/encoders/timestamp"
	"git.mleku.dev/mleku/nostr/interfaces/signer/p8k"
	"next.orly.dev/pkg/utils"
)

func TestQueryEventsByID(t *testing.T) {
	db, events, ctx, cancel, tempDir := setupTestDB(t)
	defer cleanupTestDB(t, db, cancel, tempDir)

	// Test QueryEvents with an ID filter
	testEvent := events[3]

	evs, err := db.QueryEvents(
		ctx, &filter.F{
			Ids: tag.NewFromBytesSlice(testEvent.ID),
		},
	)
	if err != nil {
		t.Fatalf("Failed to query events by ID: %v", err)
	}

	// Verify we got exactly one event
	if len(evs) != 1 {
		t.Fatalf("Expected 1 event, got %d", len(evs))
	}

	// Verify it's the correct event
	if !utils.FastEqual(evs[0].ID, testEvent.ID) {
		t.Fatalf(
			"Event ID doesn't match. Got %x, expected %x", evs[0].ID,
			testEvent.ID,
		)
	}
}

func TestQueryEventsByKind(t *testing.T) {
	db, _, ctx, cancel, tempDir := setupTestDB(t)
	defer cleanupTestDB(t, db, cancel, tempDir)

	// Test querying by kind
	testKind := kind.New(1) // Kind 1 is typically text notes
	kindFilter := kind.NewS(testKind)

	evs, err := db.QueryEvents(
		ctx, &filter.F{
			Kinds: kindFilter,
			Tags:  tag.NewS(),
		},
	)
	if err != nil {
		t.Fatalf("Failed to query events by kind: %v", err)
	}

	// Verify we got results
	if len(evs) == 0 {
		t.Fatal("Expected events with kind 1, but got none")
	}

	// Verify all events have the correct kind
	for i, ev := range evs {
		if ev.Kind != testKind.K {
			t.Fatalf(
				"Event %d has incorrect kind. Got %d, expected %d", i,
				ev.Kind, testKind.K,
			)
		}
	}
}

func TestQueryEventsByAuthor(t *testing.T) {
	db, events, ctx, cancel, tempDir := setupTestDB(t)
	defer cleanupTestDB(t, db, cancel, tempDir)

	// Test querying by author
	authorFilter := tag.NewFromBytesSlice(events[1].Pubkey)

	evs, err := db.QueryEvents(
		ctx, &filter.F{
			Authors: authorFilter,
		},
	)
	if err != nil {
		t.Fatalf("Failed to query events by author: %v", err)
	}

	// Verify we got results
	if len(evs) == 0 {
		t.Fatal("Expected events from author, but got none")
	}

	// Verify all events have the correct author
	for i, ev := range evs {
		if !utils.FastEqual(ev.Pubkey, events[1].Pubkey) {
			t.Fatalf(
				"Event %d has incorrect author. Got %x, expected %x",
				i, ev.Pubkey, events[1].Pubkey,
			)
		}
	}
}

func TestReplaceableEventsAndDeletion(t *testing.T) {
	db, events, ctx, cancel, tempDir := setupTestDB(t)
	defer cleanupTestDB(t, db, cancel, tempDir)

	// Create a signer
	sign := p8k.MustNew()
	if err := sign.Generate(); chk.E(err) {
		t.Fatal(err)
	}

	// Create a replaceable event
	replaceableEvent := event.New()
	replaceableEvent.Kind = kind.ProfileMetadata.K        // Kind 0 is replaceable
	replaceableEvent.Pubkey = events[0].Pubkey            // Use the same pubkey as an existing event
	replaceableEvent.CreatedAt = timestamp.Now().V - 7200 // 2 hours ago
	replaceableEvent.Content = []byte("Original profile")
	replaceableEvent.Tags = tag.NewS()
	replaceableEvent.Sign(sign)

	// Save the replaceable event
	if _, err := db.SaveEvent(ctx, replaceableEvent); err != nil {
		t.Fatalf("Failed to save replaceable event: %v", err)
	}

	// Create a newer version of the replaceable event
	newerEvent := event.New()
	newerEvent.Kind = kind.ProfileMetadata.K        // Same kind
	newerEvent.Pubkey = replaceableEvent.Pubkey     // Same pubkey
	newerEvent.CreatedAt = timestamp.Now().V - 3600 // 1 hour ago (newer than the original)
	newerEvent.Content = []byte("Updated profile")
	newerEvent.Tags = tag.NewS()
	newerEvent.Sign(sign)

	// Save the newer event
	if _, err := db.SaveEvent(ctx, newerEvent); err != nil {
		t.Fatalf("Failed to save newer event: %v", err)
	}

	// Query for the original event by ID
	evs, err := db.QueryEvents(
		ctx, &filter.F{
			Ids: tag.NewFromAny(replaceableEvent.ID),
		},
	)
	if err != nil {
		t.Fatalf("Failed to query for replaced event by ID: %v", err)
	}

	// Verify the original event is still found (it's kept but not returned in general queries)
	if len(evs) != 1 {
		t.Fatalf("Expected 1 event when querying for replaced event by ID, got %d", len(evs))
	}

	// Verify it's the original event
	if !utils.FastEqual(evs[0].ID, replaceableEvent.ID) {
		t.Fatalf(
			"Event ID doesn't match when querying for replaced event. Got %x, expected %x",
			evs[0].ID, replaceableEvent.ID,
		)
	}

	// Query for all events of this kind and pubkey
	kindFilter := kind.NewS(kind.ProfileMetadata)
	authorFilter := tag.NewFromAny(replaceableEvent.Pubkey)

	evs, err = db.QueryEvents(
		ctx, &filter.F{
			Kinds:   kindFilter,
			Authors: authorFilter,
		},
	)
	if err != nil {
		t.Fatalf("Failed to query for replaceable events: %v", err)
	}

	// Verify we got only one event (the latest one)
	if len(evs) != 1 {
		t.Fatalf(
			"Expected 1 event when querying for replaceable events, got %d",
			len(evs),
		)
	}

	// Verify it's the newer event
	if !utils.FastEqual(evs[0].ID, newerEvent.ID) {
		t.Fatalf(
			"Event ID doesn't match when querying for replaceable events. Got %x, expected %x",
			evs[0].ID, newerEvent.ID,
		)
	}

	// Test deletion events
	// Create a deletion event that references the replaceable event
	deletionEvent := event.New()
	deletionEvent.Kind = kind.Deletion.K           // Kind 5 is deletion
	deletionEvent.Pubkey = replaceableEvent.Pubkey // Same pubkey as the event being deleted
	deletionEvent.CreatedAt = timestamp.Now().V    // Current time
	deletionEvent.Content = []byte("Deleting the replaceable event")
	deletionEvent.Tags = tag.NewS()

	// Add an e-tag referencing the replaceable event
	*deletionEvent.Tags = append(
		*deletionEvent.Tags,
		tag.NewFromAny("e", hex.Enc(replaceableEvent.ID)),
	)

	// Sign after the tags are final so the signature covers them
	deletionEvent.Sign(sign)

	// Save the deletion event
	if _, err = db.SaveEvent(ctx, deletionEvent); err != nil {
		t.Fatalf("Failed to save deletion event: %v", err)
	}

	// Query for all events of this kind and pubkey again
	evs, err = db.QueryEvents(
		ctx, &filter.F{
			Kinds:   kindFilter,
			Authors: authorFilter,
		},
	)
	if err != nil {
		t.Fatalf(
			"Failed to query for replaceable events after deletion: %v", err,
		)
	}

	// Verify we still get the newer event (deletion should only affect the original event)
	if len(evs) != 1 {
		t.Fatalf(
			"Expected 1 event when querying for replaceable events after deletion, got %d",
			len(evs),
		)
	}

	// Verify it's still the newer event
	if !utils.FastEqual(evs[0].ID, newerEvent.ID) {
		t.Fatalf(
			"Event ID doesn't match after deletion. Got %x, expected %x",
			evs[0].ID, newerEvent.ID,
		)
	}

	// Query for the original event by ID
	evs, err = db.QueryEvents(
		ctx, &filter.F{
			Ids: tag.NewFromBytesSlice(replaceableEvent.ID),
		},
	)
	if err != nil {
		t.Fatalf("Failed to query for deleted event by ID: %v", err)
	}

	// Verify the original event is not found (it was deleted)
	if len(evs) != 0 {
		t.Fatalf("Expected 0 events when querying for deleted event by ID, got %d", len(evs))
	}
}

func TestParameterizedReplaceableEventsAndDeletion(t *testing.T) {
	db, events, ctx, cancel, tempDir := setupTestDB(t)
	defer cleanupTestDB(t, db, cancel, tempDir)

	sign := p8k.MustNew()
	if err := sign.Generate(); chk.E(err) {
		t.Fatal(err)
	}

	// Create a parameterized replaceable event
	paramEvent := event.New()
	paramEvent.Kind = 30000                         // Kind 30000+ is parameterized replaceable
	paramEvent.Pubkey = events[0].Pubkey            // Use the same pubkey as an existing event
	paramEvent.CreatedAt = timestamp.Now().V - 7200 // 2 hours ago
	paramEvent.Content = []byte("Original parameterized event")
	paramEvent.Tags = tag.NewS()
	// Add a d-tag
	*paramEvent.Tags = append(
		*paramEvent.Tags, tag.NewFromAny([]byte{'d'}, []byte("test-d-tag")),
	)
	paramEvent.Sign(sign)

	// Save the parameterized replaceable event
	if _, err := db.SaveEvent(ctx, paramEvent); err != nil {
		t.Fatalf("Failed to save parameterized replaceable event: %v", err)
	}

	// Create a deletion event using e-tag
	paramDeletionEvent := event.New()
	paramDeletionEvent.Kind = kind.Deletion.K        // Kind 5 is deletion
	paramDeletionEvent.Pubkey = paramEvent.Pubkey    // Same pubkey as the event being deleted
	paramDeletionEvent.CreatedAt = timestamp.Now().V // Current time
	paramDeletionEvent.Content = []byte("Deleting the parameterized replaceable event with e-tag")
	paramDeletionEvent.Tags = tag.NewS()
	// Add an e-tag referencing the parameterized replaceable event
	*paramDeletionEvent.Tags = append(
		*paramDeletionEvent.Tags,
		tag.NewFromAny("e", []byte(hex.Enc(paramEvent.ID))),
	)
	paramDeletionEvent.Sign(sign)

	// Save the parameterized deletion event with e-tag
	if _, err := db.SaveEvent(ctx, paramDeletionEvent); err != nil {
		t.Fatalf(
			"Failed to save parameterized deletion event with e-tag: %v", err,
		)
	}

	// Query for parameterized events
	paramKindFilter := kind.NewS(kind.New(paramEvent.Kind))
	paramAuthorFilter := tag.NewFromBytesSlice(paramEvent.Pubkey)

	evs, err := db.QueryEvents(
		ctx, &filter.F{
			Kinds:   paramKindFilter,
			Authors: paramAuthorFilter,
		},
	)
	if err != nil {
		t.Fatalf(
			"Failed to query for parameterized replaceable events after deletion: %v",
			err,
		)
	}

	// Debug output
	fmt.Printf("Got %d events after deletion\n", len(evs))
	for i, ev := range evs {
		fmt.Printf(
			"Event %d: kind=%d, pubkey=%s\n",
			i, ev.Kind, hex.Enc(ev.Pubkey),
		)
	}

	// Verify we get no events (since the only one was deleted)
	if len(evs) != 0 {
		t.Fatalf(
			"Expected 0 events when querying for deleted parameterized replaceable events, got %d",
			len(evs),
		)
	}

	// Query for the parameterized event by ID
	evs, err = db.QueryEvents(
		ctx, &filter.F{
			Ids: tag.NewFromBytesSlice(paramEvent.ID),
		},
	)
	if err != nil {
		t.Fatalf(
			"Failed to query for deleted parameterized event by ID: %v", err,
		)
	}

	// Verify the deleted event is not found when querying by ID
	if len(evs) != 0 {
		t.Fatalf(
			"Expected 0 events when querying for deleted parameterized event by ID, got %d",
			len(evs),
		)
	}
}

func TestQueryEventsByTimeRange(t *testing.T) {
	db, events, ctx, cancel, tempDir := setupTestDB(t)
	defer cleanupTestDB(t, db, cancel, tempDir)

	// Test querying by time range.
	// Use the timestamp from the middle event as a reference.
	middleIndex := len(events) / 2
	middleEvent := events[middleIndex]

	// Create a timestamp range that includes events before and after the middle event
	sinceTime := new(timestamp.T)
	sinceTime.V = middleEvent.CreatedAt - 3600 // 1 hour before middle event

	untilTime := new(timestamp.T)
	untilTime.V = middleEvent.CreatedAt + 3600 // 1 hour after middle event

	evs, err := db.QueryEvents(
		ctx, &filter.F{
			Since: sinceTime,
			Until: untilTime,
		},
	)
	if err != nil {
		t.Fatalf("Failed to query events by time range: %v", err)
	}

	// Verify we got results
	if len(evs) == 0 {
		t.Fatal("Expected events in time range, but got none")
	}

	// Verify all events are within the time range
	for i, ev := range evs {
		if ev.CreatedAt < sinceTime.V || ev.CreatedAt > untilTime.V {
			t.Fatalf(
				"Event %d is outside the time range. Got %d, expected between %d and %d",
				i, ev.CreatedAt, sinceTime.V, untilTime.V,
			)
		}
	}
}

func TestQueryEventsByTag(t *testing.T) {
	db, events, ctx, cancel, tempDir := setupTestDB(t)
	defer cleanupTestDB(t, db, cancel, tempDir)

	// Find an event with tags to use for testing
	var testTagEvent *event.E
	for _, ev := range events {
		if ev.Tags != nil && ev.Tags.Len() > 0 {
			// Find a tag with at least 2 elements and first element of length 1
			for _, tag := range *ev.Tags {
				if tag.Len() >= 2 && len(tag.Key()) == 1 {
					testTagEvent = ev
					break
				}
			}
			if testTagEvent != nil {
				break
			}
		}
	}

	if testTagEvent == nil {
		t.Skip("No suitable event with tags found for testing")
	}

	// Get the first tag with at least 2 elements and first element of length 1
	var testTag *tag.T
	for _, tag := range *testTagEvent.Tags {
		if tag.Len() >= 2 && len(tag.Key()) == 1 {
			testTag = tag
			break
		}
	}

	// Create a tags filter with the test tag
	tagsFilter := tag.NewS(testTag)

	evs, err := db.QueryEvents(
		ctx, &filter.F{
			Tags: tagsFilter,
		},
	)
	if err != nil {
		t.Fatalf("Failed to query events by tag: %v", err)
	}

	// Verify we got results
	if len(evs) == 0 {
		t.Fatal("Expected events with tag, but got none")
	}

	// Verify all events have the tag
	for i, ev := range evs {
		var hasTag bool
		for _, tag := range *ev.Tags {
			if tag.Len() >= 2 && len(tag.Key()) == 1 {
				if utils.FastEqual(tag.Key(), testTag.Key()) &&
					utils.FastEqual(tag.Value(), testTag.Value()) {
					hasTag = true
					break
				}
			}
		}
		if !hasTag {
			t.Fatalf("Event %d does not have the expected tag", i)
		}
	}
}

func TestCountEvents(t *testing.T) {
	db, _, ctx, cancel, tempDir := setupTestDB(t)
	defer cleanupTestDB(t, db, cancel, tempDir)

	// Test counting all events
	count, _, err := db.CountEvents(ctx, &filter.F{})
	if err != nil {
		t.Fatalf("Failed to count events: %v", err)
	}

	// Verify we got a non-zero count
	if count == 0 {
		t.Fatal("Expected non-zero event count, but got 0")
	}

	t.Logf("Total events in database: %d", count)

	// Test counting events by kind
	testKind := kind.New(1)
	kindFilter := kind.NewS(testKind)

	count, _, err = db.CountEvents(
		ctx, &filter.F{
			Kinds: kindFilter,
		},
	)
	if err != nil {
		t.Fatalf("Failed to count events by kind: %v", err)
	}

	t.Logf("Events with kind 1: %d", count)
}
@@ -1,183 +0,0 @@
package dgraph

import (
	"context"
	"encoding/json"
	"fmt"
	"strings"

	"github.com/dgraph-io/dgo/v230/protos/api"
	"next.orly.dev/pkg/database/indexes/types"
	"git.mleku.dev/mleku/nostr/encoders/event"
	"git.mleku.dev/mleku/nostr/encoders/filter"
	"git.mleku.dev/mleku/nostr/encoders/hex"
)

// SaveEvent stores a Nostr event in the Dgraph database.
// It creates event nodes and relationships for authors, tags, and references.
func (d *D) SaveEvent(c context.Context, ev *event.E) (exists bool, err error) {
	eventID := hex.Enc(ev.ID[:])

	// Check if event already exists
	query := fmt.Sprintf(`{
	event(func: eq(event.id, %q)) {
		uid
		event.id
	}
}`, eventID)

	resp, err := d.Query(c, query)
	if err != nil {
		return false, fmt.Errorf("failed to check event existence: %w", err)
	}

	// Parse response to check if event exists
	var result struct {
		Event []map[string]interface{} `json:"event"`
	}
	if err = json.Unmarshal(resp.Json, &result); err != nil {
		return false, fmt.Errorf("failed to parse query response: %w", err)
	}

	if len(result.Event) > 0 {
		return true, nil // Event already exists
	}

	// Get next serial number
	serial, err := d.getNextSerial()
	if err != nil {
		return false, fmt.Errorf("failed to get serial number: %w", err)
	}

	// Build N-Quads for the event with serial number
	nquads := d.buildEventNQuads(ev, serial)

	// Store the event
	mutation := &api.Mutation{
		SetNquads: []byte(nquads),
		CommitNow: true,
	}

	if _, err = d.Mutate(c, mutation); err != nil {
		return false, fmt.Errorf("failed to save event: %w", err)
	}

	return false, nil
}

// buildEventNQuads constructs RDF triples for a Nostr event
func (d *D) buildEventNQuads(ev *event.E, serial uint64) string {
	var nquads strings.Builder

	eventID := hex.Enc(ev.ID[:])
	authorPubkey := hex.Enc(ev.Pubkey)

	// Event node
	nquads.WriteString(fmt.Sprintf("_:%s <dgraph.type> \"Event\" .\n", eventID))
	nquads.WriteString(fmt.Sprintf("_:%s <event.id> %q .\n", eventID, eventID))
	nquads.WriteString(fmt.Sprintf("_:%s <event.serial> \"%d\"^^<xs:int> .\n", eventID, serial))
	nquads.WriteString(fmt.Sprintf("_:%s <event.kind> \"%d\"^^<xs:int> .\n", eventID, ev.Kind))
	nquads.WriteString(fmt.Sprintf("_:%s <event.created_at> \"%d\"^^<xs:int> .\n", eventID, int64(ev.CreatedAt)))
	nquads.WriteString(fmt.Sprintf("_:%s <event.content> %q .\n", eventID, ev.Content))
	nquads.WriteString(fmt.Sprintf("_:%s <event.sig> %q .\n", eventID, hex.Enc(ev.Sig[:])))
	nquads.WriteString(fmt.Sprintf("_:%s <event.pubkey> %q .\n", eventID, authorPubkey))

	// Serialize tags as JSON string for storage
	tagsJSON, _ := json.Marshal(ev.Tags)
	nquads.WriteString(fmt.Sprintf("_:%s <event.tags> %q .\n", eventID, string(tagsJSON)))

	// Author relationship
	nquads.WriteString(fmt.Sprintf("_:%s <authored_by> _:%s .\n", eventID, authorPubkey))
	nquads.WriteString(fmt.Sprintf("_:%s <dgraph.type> \"Author\" .\n", authorPubkey))
	nquads.WriteString(fmt.Sprintf("_:%s <author.pubkey> %q .\n", authorPubkey, authorPubkey))

	// Tag relationships
	for _, tag := range *ev.Tags {
		if len(tag.T) >= 2 {
			tagType := string(tag.T[0])
			tagValue := string(tag.T[1])

			switch tagType {
			case "e": // Event reference
				nquads.WriteString(fmt.Sprintf("_:%s <references> _:%s .\n", eventID, tagValue))
			case "p": // Pubkey mention
				nquads.WriteString(fmt.Sprintf("_:%s <mentions> _:%s .\n", eventID, tagValue))
				// Ensure mentioned author exists
				nquads.WriteString(fmt.Sprintf("_:%s <dgraph.type> \"Author\" .\n", tagValue))
				nquads.WriteString(fmt.Sprintf("_:%s <author.pubkey> %q .\n", tagValue, tagValue))
			default: // Hashtags ("t") and all other tag types become Tag nodes
				tagID := "tag_" + tagType + "_" + tagValue
				nquads.WriteString(fmt.Sprintf("_:%s <tagged_with> _:%s .\n", eventID, tagID))
				nquads.WriteString(fmt.Sprintf("_:%s <dgraph.type> \"Tag\" .\n", tagID))
				nquads.WriteString(fmt.Sprintf("_:%s <tag.type> %q .\n", tagID, tagType))
				nquads.WriteString(fmt.Sprintf("_:%s <tag.value> %q .\n", tagID, tagValue))
			}
		}
	}

	return nquads.String()
}
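As an illustration, a kind-1 note with one `t` tag serializes to triples along these lines (hex values abbreviated, serial and timestamp hypothetical):

```
_:5c83..e1 <dgraph.type> "Event" .
_:5c83..e1 <event.id> "5c83..e1" .
_:5c83..e1 <event.serial> "42"^^<xs:int> .
_:5c83..e1 <event.kind> "1"^^<xs:int> .
_:5c83..e1 <event.created_at> "1700000000"^^<xs:int> .
_:5c83..e1 <authored_by> _:9f3d..07 .
_:9f3d..07 <dgraph.type> "Author" .
_:5c83..e1 <tagged_with> _:tag_t_nostr .
_:tag_t_nostr <dgraph.type> "Tag" .
_:tag_t_nostr <tag.type> "t" .
_:tag_t_nostr <tag.value> "nostr" .
```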

// GetSerialsFromFilter returns event serials matching a filter
func (d *D) GetSerialsFromFilter(f *filter.F) (serials types.Uint40s, err error) {
	// Use QueryForSerials which already implements the proper filter logic
	return d.QueryForSerials(context.Background(), f)
}

// WouldReplaceEvent checks if an event would replace existing events
func (d *D) WouldReplaceEvent(ev *event.E) (bool, types.Uint40s, error) {
	// Check for replaceable events (kinds 0, 3, and 10000-19999)
	isReplaceable := ev.Kind == 0 || ev.Kind == 3 || (ev.Kind >= 10000 && ev.Kind < 20000)
	if !isReplaceable {
		return false, nil, nil
	}

	// Query for existing events with same kind and pubkey
	authorPubkey := hex.Enc(ev.Pubkey)
	query := fmt.Sprintf(`{
	events(func: eq(event.pubkey, %q)) @filter(eq(event.kind, %d)) {
		uid
		event.serial
		event.created_at
	}
}`, authorPubkey, ev.Kind)

	resp, err := d.Query(context.Background(), query)
	if err != nil {
		return false, nil, fmt.Errorf("failed to query replaceable events: %w", err)
	}

	var result struct {
		Events []struct {
			UID       string `json:"uid"`
			Serial    int64  `json:"event.serial"`
			CreatedAt int64  `json:"event.created_at"`
		} `json:"events"`
	}
	if err = json.Unmarshal(resp.Json, &result); err != nil {
		return false, nil, fmt.Errorf("failed to parse query response: %w", err)
	}

	// Check if our event is newer
	evTime := int64(ev.CreatedAt)
	var serials types.Uint40s
	wouldReplace := false

	for _, existing := range result.Events {
		if existing.CreatedAt < evTime {
			wouldReplace = true
			serial := types.Uint40{}
			serial.Set(uint64(existing.Serial))
			serials = append(serials, &serial)
		}
	}

	return wouldReplace, serials, nil
}
@@ -1,253 +0,0 @@
package dgraph

import (
	"bufio"
	"bytes"
	"context"
	"os"
	"sort"
	"testing"
	"time"

	"lol.mleku.dev/chk"
	"lol.mleku.dev/errorf"
	"git.mleku.dev/mleku/nostr/encoders/event"
	"git.mleku.dev/mleku/nostr/encoders/event/examples"
	"git.mleku.dev/mleku/nostr/encoders/hex"
	"git.mleku.dev/mleku/nostr/encoders/kind"
	"git.mleku.dev/mleku/nostr/encoders/tag"
	"git.mleku.dev/mleku/nostr/encoders/timestamp"
	"git.mleku.dev/mleku/nostr/interfaces/signer/p8k"
)

// TestSaveEvents tests saving all events from examples.Cache to the dgraph database
// to verify there are no errors during the saving process.
func TestSaveEvents(t *testing.T) {
	skipIfDgraphNotAvailable(t)

	// Create a temporary directory for metadata
	tempDir, err := os.MkdirTemp("", "test-dgraph-*")
	if err != nil {
		t.Fatalf("Failed to create temporary directory: %v", err)
	}
	defer os.RemoveAll(tempDir)

	// Create a context and cancel function for the database
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()

	// Initialize the dgraph database
	db, err := New(ctx, cancel, tempDir, "info")
	if err != nil {
		t.Fatalf("Failed to create dgraph database: %v", err)
	}
	defer db.Close()

	// Drop all data to start fresh
	if err := db.dropAll(ctx); err != nil {
		t.Fatalf("Failed to drop all data: %v", err)
	}

	// Create a scanner to read events from examples.Cache
	scanner := bufio.NewScanner(bytes.NewBuffer(examples.Cache))
	scanner.Buffer(make([]byte, 0, 1_000_000_000), 1_000_000_000)

	// Collect all events first
	var events []*event.E
	var original int
	for scanner.Scan() {
		chk.E(scanner.Err())
		b := scanner.Bytes()
		original += len(b)
		ev := event.New()

		// Unmarshal the event
		if _, err = ev.Unmarshal(b); chk.E(err) {
			t.Fatal(err)
		}

		events = append(events, ev)
	}

	// Sort events by timestamp to ensure addressable events are processed in order
	sort.Slice(events, func(i, j int) bool {
		return events[i].CreatedAt < events[j].CreatedAt
	})

	// Count the number of events processed
	eventCount := 0
	now := time.Now()

	// Process each event in chronological order
	for _, ev := range events {
		// Save the event to the database
		if _, err = db.SaveEvent(ctx, ev); err != nil {
			t.Fatalf("Failed to save event #%d: %v", eventCount+1, err)
		}
		eventCount++
	}

	// Check for scanner errors
	if err = scanner.Err(); err != nil {
		t.Fatalf("Scanner error: %v", err)
	}

	dur := time.Since(now)
	t.Logf(
		"Successfully saved %d events (%d bytes) to dgraph in %v (%v/ev; %.2f ev/s)",
		eventCount,
		original,
		dur,
		dur/time.Duration(eventCount),
		float64(time.Second)/float64(dur/time.Duration(eventCount)),
	)
}

// TestDeletionEventWithETagRejection tests that a deletion event with an "e" tag is rejected.
func TestDeletionEventWithETagRejection(t *testing.T) {
	skipIfDgraphNotAvailable(t)

	// Create a temporary directory for metadata
	tempDir, err := os.MkdirTemp("", "test-dgraph-*")
	if err != nil {
		t.Fatalf("Failed to create temporary directory: %v", err)
	}
	defer os.RemoveAll(tempDir)

	// Create a context and cancel function for the database
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()

	// Initialize the dgraph database
	db, err := New(ctx, cancel, tempDir, "info")
	if err != nil {
		t.Fatalf("Failed to create dgraph database: %v", err)
	}
	defer db.Close()

	// Drop all data to start fresh
	if err := db.dropAll(ctx); err != nil {
		t.Fatalf("Failed to drop all data: %v", err)
	}

	// Create a signer
	sign := p8k.MustNew()
	if err := sign.Generate(); chk.E(err) {
		t.Fatal(err)
	}

	// Create a regular event
	regularEvent := event.New()
	regularEvent.Kind = kind.TextNote.K
	regularEvent.Pubkey = sign.Pub()
	regularEvent.CreatedAt = timestamp.Now().V - 3600 // 1 hour ago
	regularEvent.Content = []byte("Regular event")
	regularEvent.Tags = tag.NewS()
	regularEvent.Sign(sign)

	// Save the regular event
	if _, err := db.SaveEvent(ctx, regularEvent); err != nil {
		t.Fatalf("Failed to save regular event: %v", err)
	}

	// Create a deletion event with an "e" tag referencing the regular event
	deletionEvent := event.New()
	deletionEvent.Kind = kind.Deletion.K
	deletionEvent.Pubkey = sign.Pub()
	deletionEvent.CreatedAt = timestamp.Now().V // Current time
	deletionEvent.Content = []byte("Deleting the regular event")
	deletionEvent.Tags = tag.NewS()

	// Add an e-tag referencing the regular event
	*deletionEvent.Tags = append(
		*deletionEvent.Tags,
		tag.NewFromAny("e", hex.Enc(regularEvent.ID)),
	)

	deletionEvent.Sign(sign)

	// Check if this is a deletion event with "e" tags
	if deletionEvent.Kind == kind.Deletion.K && deletionEvent.Tags.GetFirst([]byte{'e'}) != nil {
		// In this test, we want to reject deletion events with "e" tags
		err = errorf.E("deletion events referencing other events with 'e' tag are not allowed")
	} else {
		// Try to save the deletion event
		_, err = db.SaveEvent(ctx, deletionEvent)
	}

	if err == nil {
		t.Fatal("Expected deletion event with e-tag to be rejected, but it was accepted")
	}

	// Verify the error message
	expectedError := "deletion events referencing other events with 'e' tag are not allowed"
	if err.Error() != expectedError {
		t.Fatalf(
			"Expected error message '%s', got '%s'", expectedError, err.Error(),
		)
	}
}

// TestSaveExistingEvent tests that attempting to save an event that already exists
// returns an error.
func TestSaveExistingEvent(t *testing.T) {
	skipIfDgraphNotAvailable(t)

	// Create a temporary directory for metadata
	tempDir, err := os.MkdirTemp("", "test-dgraph-*")
	if err != nil {
		t.Fatalf("Failed to create temporary directory: %v", err)
	}
	defer os.RemoveAll(tempDir)

	// Create a context and cancel function for the database
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()

	// Initialize the dgraph database
	db, err := New(ctx, cancel, tempDir, "info")
	if err != nil {
		t.Fatalf("Failed to create dgraph database: %v", err)
	}
	defer db.Close()

	// Drop all data to start fresh
	if err := db.dropAll(ctx); err != nil {
		t.Fatalf("Failed to drop all data: %v", err)
	}

	// Create a signer
	sign := p8k.MustNew()
	if err := sign.Generate(); chk.E(err) {
		t.Fatal(err)
	}

	// Create an event
	ev := event.New()
	ev.Kind = kind.TextNote.K
	ev.Pubkey = sign.Pub()
	ev.CreatedAt = timestamp.Now().V
	ev.Content = []byte("Test event")
	ev.Tags = tag.NewS()
	ev.Sign(sign)

	// Save the event for the first time
	if _, err := db.SaveEvent(ctx, ev); err != nil {
		t.Fatalf("Failed to save event: %v", err)
	}

	// Try to save the same event again, it should be rejected
	_, err = db.SaveEvent(ctx, ev)
	if err == nil {
		t.Fatal("Expected error when saving an existing event, but got nil")
	}

	// Verify the error message contains indication of duplicate
	expectedErrorPrefix := "blocked: event already exists"
	if !bytes.Contains([]byte(err.Error()), []byte(expectedErrorPrefix)) {
		t.Fatalf(
			"Expected error message to contain '%s', got '%s'",
			expectedErrorPrefix, err.Error(),
		)
	}
}
@@ -1,105 +0,0 @@
package dgraph

import (
	"context"
	"fmt"

	"github.com/dgraph-io/dgo/v230/protos/api"
)

// NostrSchema defines the Dgraph schema for Nostr events
const NostrSchema = `
# Event node type
type Event {
	event.id
	event.serial
	event.kind
	event.created_at
	event.content
	event.sig
	event.pubkey
	event.authored_by
	event.references
	event.mentions
	event.tagged_with
}

# Author node type
type Author {
	author.pubkey
	author.events
}

# Tag node type
type Tag {
	tag.type
	tag.value
	tag.events
}

# Marker node type (for key-value metadata)
type Marker {
	marker.key
	marker.value
}

# Event fields
event.id: string @index(exact) @upsert .
event.serial: int @index(int) .
event.kind: int @index(int) .
event.created_at: int @index(int) .
event.content: string .
event.sig: string @index(exact) .
event.pubkey: string @index(exact) .

# Event relationships
event.authored_by: uid @reverse .
event.references: [uid] @reverse .
event.mentions: [uid] @reverse .
event.tagged_with: [uid] @reverse .

# Author fields
author.pubkey: string @index(exact) @upsert .
author.events: [uid] @count @reverse .

# Tag fields
tag.type: string @index(exact) .
tag.value: string @index(exact, fulltext) .
tag.events: [uid] @count @reverse .

# Marker fields (key-value storage)
marker.key: string @index(exact) @upsert .
marker.value: string .
`
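The `@reverse` directives make back-edges traversable without a separate index; a sketch of a query that walks from an author to the events mentioning them might look like this (the pubkey is illustrative, and the mutation side must write the same `event.mentions` predicate name for the reverse edge to resolve):

```dql
{
	author(func: eq(author.pubkey, "9f3d..07")) {
		author.pubkey
		~event.mentions {
			event.id
			event.kind
		}
	}
}
```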

// applySchema applies the Nostr schema to the connected Dgraph instance
func (d *D) applySchema(ctx context.Context) error {
	d.Logger.Infof("applying Nostr schema to dgraph")

	op := &api.Operation{
		Schema: NostrSchema,
	}

	if err := d.client.Alter(ctx, op); err != nil {
		return fmt.Errorf("failed to apply schema: %w", err)
	}

	d.Logger.Infof("schema applied successfully")
	return nil
}

// dropAll drops all data from dgraph (useful for testing)
func (d *D) dropAll(ctx context.Context) error {
	d.Logger.Warningf("dropping all data from dgraph")

	op := &api.Operation{
		DropAll: true,
	}

	if err := d.client.Alter(ctx, op); err != nil {
		return fmt.Errorf("failed to drop all data: %w", err)
	}

	// Reapply schema after dropping
	return d.applySchema(ctx)
}
@@ -1,136 +0,0 @@
package dgraph

import (
	"context"
	"encoding/json"
	"fmt"
	"sync"

	"github.com/dgraph-io/dgo/v230/protos/api"
)

// Serial number management.
// We use a special counter node to track the next available serial number.

const serialCounterKey = "serial_counter"

var serialMutex sync.Mutex

// getNextSerial atomically increments and returns the next serial number
func (d *D) getNextSerial() (uint64, error) {
	serialMutex.Lock()
	defer serialMutex.Unlock()

	// Query current serial value
	query := fmt.Sprintf(`{
	counter(func: eq(marker.key, %q)) {
		uid
		marker.value
	}
}`, serialCounterKey)

	resp, err := d.Query(context.Background(), query)
	if err != nil {
		return 0, fmt.Errorf("failed to query serial counter: %w", err)
	}

	var result struct {
		Counter []struct {
			UID   string `json:"uid"`
			Value string `json:"marker.value"`
		} `json:"counter"`
	}

	if err = json.Unmarshal(resp.Json, &result); err != nil {
		return 0, fmt.Errorf("failed to parse serial counter: %w", err)
	}

	var currentSerial uint64 = 1
	var uid string

	if len(result.Counter) > 0 {
		// Parse current serial
		uid = result.Counter[0].UID
		if result.Counter[0].Value != "" {
			fmt.Sscanf(result.Counter[0].Value, "%d", &currentSerial)
		}
	}

	// Increment serial
	nextSerial := currentSerial + 1

	// Update or create counter
	var nquads string
	if uid != "" {
		// Update existing counter
		nquads = fmt.Sprintf(`<%s> <marker.value> "%d" .`, uid, nextSerial)
	} else {
		// Create new counter
		nquads = fmt.Sprintf(`
		_:counter <dgraph.type> "Marker" .
		_:counter <marker.key> %q .
		_:counter <marker.value> "%d" .
		`, serialCounterKey, nextSerial)
	}

	mutation := &api.Mutation{
		SetNquads: []byte(nquads),
		CommitNow: true,
	}

	if _, err = d.Mutate(context.Background(), mutation); err != nil {
		return 0, fmt.Errorf("failed to update serial counter: %w", err)
	}

	return currentSerial, nil
}
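The mutex above only serializes callers within one relay process; across processes the read-increment-write is not atomic, because the query and the mutation commit separately. A sketch of a cross-process-safe variant using a single dgo transaction (reusing the query and N-Quads built above; on conflict the commit aborts and the caller retries):

```go
txn := d.client.NewTxn()
defer txn.Discard(context.Background())

resp, err := txn.Query(context.Background(), query) // counter query from above
if err != nil {
	return 0, err
}
// ... parse currentSerial and uid from resp.Json exactly as above ...
if _, err = txn.Mutate(context.Background(), &api.Mutation{
	SetNquads: []byte(nquads), // counter update from above
}); err != nil {
	return 0, err
}
if err = txn.Commit(context.Background()); err != nil {
	return 0, err // dgo reports an aborted txn here; retry the whole sequence
}
```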

// initSerialCounter initializes the serial counter if it doesn't exist
func (d *D) initSerialCounter() error {
	query := fmt.Sprintf(`{
	counter(func: eq(marker.key, %q)) {
		uid
	}
}`, serialCounterKey)

	resp, err := d.Query(context.Background(), query)
	if err != nil {
		return fmt.Errorf("failed to check serial counter: %w", err)
	}

	var result struct {
		Counter []struct {
			UID string `json:"uid"`
		} `json:"counter"`
	}

	if err = json.Unmarshal(resp.Json, &result); err != nil {
		return fmt.Errorf("failed to parse counter check: %w", err)
	}

	// Counter already exists
	if len(result.Counter) > 0 {
		return nil
	}

	// Initialize counter at 1
	nquads := fmt.Sprintf(`
	_:counter <dgraph.type> "Marker" .
	_:counter <marker.key> %q .
	_:counter <marker.value> "1" .
	`, serialCounterKey)

	mutation := &api.Mutation{
		SetNquads: []byte(nquads),
		CommitNow: true,
	}

	if _, err = d.Mutate(context.Background(), mutation); err != nil {
		return fmt.Errorf("failed to initialize serial counter: %w", err)
	}

	d.Logger.Infof("initialized serial counter")
	return nil
}
@@ -1,188 +0,0 @@
package dgraph

import (
	"encoding/json"
	"fmt"
	"time"

	"next.orly.dev/pkg/database"
	"git.mleku.dev/mleku/nostr/encoders/hex"
)

// Subscription and payment methods.
// Simplified implementation using marker-based storage.
// For production, these should use proper graph nodes with relationships.

// GetSubscription retrieves subscription information for a pubkey
func (d *D) GetSubscription(pubkey []byte) (*database.Subscription, error) {
	key := "sub_" + hex.Enc(pubkey)
	data, err := d.GetMarker(key)
	if err != nil {
		return nil, err
	}

	var sub database.Subscription
	if err := json.Unmarshal(data, &sub); err != nil {
		return nil, fmt.Errorf("failed to unmarshal subscription: %w", err)
	}

	return &sub, nil
}

// IsSubscriptionActive checks if a pubkey has an active subscription
func (d *D) IsSubscriptionActive(pubkey []byte) (bool, error) {
	sub, err := d.GetSubscription(pubkey)
	if err != nil {
		return false, nil // No subscription = not active
	}

	return sub.PaidUntil.After(time.Now()), nil
}

// ExtendSubscription extends a subscription by the specified number of days
func (d *D) ExtendSubscription(pubkey []byte, days int) error {
	key := "sub_" + hex.Enc(pubkey)

	// Get existing subscription or create new
	var sub database.Subscription
	data, err := d.GetMarker(key)
	if err == nil {
		if err := json.Unmarshal(data, &sub); err != nil {
			return fmt.Errorf("failed to unmarshal subscription: %w", err)
		}
	} else {
		// New subscription - set trial period
		sub.TrialEnd = time.Now()
		sub.PaidUntil = time.Now()
	}

	// Extend expiration
	if sub.PaidUntil.Before(time.Now()) {
		sub.PaidUntil = time.Now()
	}
	sub.PaidUntil = sub.PaidUntil.Add(time.Duration(days) * 24 * time.Hour)

	// Save
	data, err = json.Marshal(sub)
	if err != nil {
		return fmt.Errorf("failed to marshal subscription: %w", err)
	}

	return d.SetMarker(key, data)
}
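A minimal usage sketch (assuming `db` is an initialized `*D` and `pubkey` is the raw 32-byte key): extend by 30 days, after which the active check simply compares `PaidUntil` against the clock:

```go
if err := db.ExtendSubscription(pubkey, 30); err != nil {
	return err
}
active, _ := db.IsSubscriptionActive(pubkey)
// active stays true until 30 days from now (or from the previous
// PaidUntil, if the subscription was still running)
```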

// RecordPayment records a payment for subscription extension
func (d *D) RecordPayment(
	pubkey []byte, amount int64, invoice, preimage string,
) error {
	// Store payment in payments list
	key := "payments_" + hex.Enc(pubkey)

	var payments []database.Payment
	data, err := d.GetMarker(key)
	if err == nil {
		if err := json.Unmarshal(data, &payments); err != nil {
			return fmt.Errorf("failed to unmarshal payments: %w", err)
		}
	}

	payment := database.Payment{
		Amount:    amount,
		Timestamp: time.Now(),
		Invoice:   invoice,
		Preimage:  preimage,
	}

	payments = append(payments, payment)

	data, err = json.Marshal(payments)
	if err != nil {
		return fmt.Errorf("failed to marshal payments: %w", err)
	}

	return d.SetMarker(key, data)
}

// GetPaymentHistory retrieves payment history for a pubkey
func (d *D) GetPaymentHistory(pubkey []byte) ([]database.Payment, error) {
	key := "payments_" + hex.Enc(pubkey)

	data, err := d.GetMarker(key)
	if err != nil {
		return nil, nil // No payments = empty list
	}

	var payments []database.Payment
	if err := json.Unmarshal(data, &payments); err != nil {
		return nil, fmt.Errorf("failed to unmarshal payments: %w", err)
	}

	return payments, nil
}

// ExtendBlossomSubscription extends a Blossom storage subscription
func (d *D) ExtendBlossomSubscription(
	pubkey []byte, tier string, storageMB int64, daysExtended int,
) error {
	key := "blossom_" + hex.Enc(pubkey)

	// Simple implementation - just store tier and expiry
	data := map[string]interface{}{
		"tier":      tier,
		"storageMB": storageMB,
		"extended":  daysExtended,
		"updated":   time.Now(),
	}

	jsonData, err := json.Marshal(data)
	if err != nil {
		return fmt.Errorf("failed to marshal blossom subscription: %w", err)
	}

	return d.SetMarker(key, jsonData)
}

// GetBlossomStorageQuota retrieves the storage quota for a pubkey
func (d *D) GetBlossomStorageQuota(pubkey []byte) (quotaMB int64, err error) {
	key := "blossom_" + hex.Enc(pubkey)

	data, err := d.GetMarker(key)
	if err != nil {
		return 0, nil // No subscription = 0 quota
	}

	var result map[string]interface{}
	if err := json.Unmarshal(data, &result); err != nil {
		return 0, fmt.Errorf("failed to unmarshal blossom data: %w", err)
	}

	// Default quota based on tier - simplified
	if tier, ok := result["tier"].(string); ok {
		switch tier {
		case "basic":
			return 100, nil
		case "premium":
			return 1000, nil
		default:
			return 10, nil
		}
	}

	return 0, nil
}

// IsFirstTimeUser checks if a pubkey is a first-time user
func (d *D) IsFirstTimeUser(pubkey []byte) (bool, error) {
	// Check if they have any subscription or payment history
	sub, _ := d.GetSubscription(pubkey)
	if sub != nil {
		return false, nil
	}

	payments, _ := d.GetPaymentHistory(pubkey)
	if len(payments) > 0 {
		return false, nil
	}

	return true, nil
}
@@ -1,30 +0,0 @@
package dgraph

import (
	"io"
	"os"
	"testing"

	"lol.mleku.dev"
	"lol.mleku.dev/log"
)

func TestMain(m *testing.M) {
	// Disable all logging during tests unless explicitly enabled
	if os.Getenv("TEST_LOG") == "" {
		// Set log level to Off to suppress all logs
		lol.SetLogLevel("off")
		// Also redirect output to discard
		lol.Writer = io.Discard
		// Disable all log printers
		log.T = lol.GetNullPrinter()
		log.D = lol.GetNullPrinter()
		log.I = lol.GetNullPrinter()
		log.W = lol.GetNullPrinter()
		log.E = lol.GetNullPrinter()
		log.F = lol.GetNullPrinter()
	}

	// Run tests
	os.Exit(m.Run())
}
@@ -1,10 +0,0 @@
package dgraph

import (
	"encoding/json"
)

// unmarshalJSON is a helper to unmarshal JSON with error handling
func unmarshalJSON(data []byte, v interface{}) error {
	return json.Unmarshal(data, v)
}
@@ -0,0 +1,103 @@
// Package neo4j provides hex utilities for normalizing pubkeys and event IDs.
//
// The nostr library applies binary optimization to e/p tags, storing 64-character
// hex strings as 33-byte binary (32 bytes + null terminator). This file provides
// utilities to ensure all pubkeys and event IDs stored in Neo4j are in consistent
// lowercase hex format.
package neo4j

import (
	"strings"

	"git.mleku.dev/mleku/nostr/encoders/hex"
	"git.mleku.dev/mleku/nostr/encoders/tag"
)

// Tag binary encoding constants (matching the nostr library)
const (
	// BinaryEncodedLen is the length of a binary-encoded 32-byte hash with null terminator
	BinaryEncodedLen = 33
	// HexEncodedLen is the length of a hex-encoded 32-byte hash (pubkey or event ID)
	HexEncodedLen = 64
	// HashLen is the raw length of a hash (pubkey/event ID)
	HashLen = 32
)

// IsBinaryEncoded checks if a value is stored in the nostr library's binary-optimized format
func IsBinaryEncoded(val []byte) bool {
	return len(val) == BinaryEncodedLen && val[HashLen] == 0
}

// NormalizePubkeyHex ensures a pubkey/event ID is in lowercase hex format.
// It handles:
//   - Binary-encoded values (33 bytes with null terminator) -> converts to lowercase hex
//   - Uppercase hex strings -> converts to lowercase
//   - Already lowercase hex -> returns as-is
//
// This should be used for all pubkeys and event IDs before storing in Neo4j
// to prevent duplicate nodes due to case differences.
func NormalizePubkeyHex(val []byte) string {
	// Handle binary-encoded values from the nostr library
	if IsBinaryEncoded(val) {
		// Convert binary to lowercase hex
		return hex.Enc(val[:HashLen])
	}

	// Handle hex strings (may be uppercase from external sources)
	if len(val) == HexEncodedLen {
		return strings.ToLower(string(val))
	}

	// For other lengths (possibly prefixes), lowercase the hex
	return strings.ToLower(string(val))
}
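A quick sketch of what normalization does with each input shape (values illustrative; `hex.Enc` already emits lowercase):

```go
raw := make([]byte, 32)               // a 32-byte pubkey
bin := append(raw, 0)                 // the nostr library's 33-byte binary form
_ = NormalizePubkeyHex(bin)           // -> 64 lowercase hex characters
up := []byte(strings.ToUpper(hex.Enc(raw)))
_ = NormalizePubkeyHex(up)            // -> the same 64 lowercase hex characters
```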

// ExtractPTagValue extracts a pubkey from a p-tag, handling binary encoding.
// Returns lowercase hex string suitable for Neo4j storage.
// Returns empty string if the tag doesn't have a valid value.
func ExtractPTagValue(t *tag.T) string {
	if t == nil || len(t.T) < 2 {
		return ""
	}

	// Use ValueHex() which properly handles both binary and hex formats
	hexVal := t.ValueHex()
	if len(hexVal) == 0 {
		return ""
	}

	// Ensure lowercase (ValueHex returns the library's encoding, which is lowercase,
	// but we normalize anyway for safety with external data)
	return strings.ToLower(string(hexVal))
}

// ExtractETagValue extracts an event ID from an e-tag, handling binary encoding.
// Returns lowercase hex string suitable for Neo4j storage.
// Returns empty string if the tag doesn't have a valid value.
func ExtractETagValue(t *tag.T) string {
	if t == nil || len(t.T) < 2 {
		return ""
	}

	// Use ValueHex() which properly handles both binary and hex formats
	hexVal := t.ValueHex()
	if len(hexVal) == 0 {
		return ""
	}

	// Ensure lowercase
	return strings.ToLower(string(hexVal))
}

// IsValidHexPubkey checks if a string is a valid 64-character hex pubkey
func IsValidHexPubkey(s string) bool {
	if len(s) != HexEncodedLen {
		return false
	}
	for _, c := range s {
		if !((c >= '0' && c <= '9') || (c >= 'a' && c <= 'f') || (c >= 'A' && c <= 'F')) {
			return false
		}
	}
	return true
}