
Run `deno fmt`

Commit `e87aa77deb` on branch `master` by buttercat1791, 3 months ago
26 changed files (lines changed):

1. `CLAUDE.md` (23)
2. `TECHNIQUE-create-test-highlights.md` (162)
3. `TEST_SUMMARY.md` (26)
4. `WIKI_TAG_SPEC.md` (20)
5. `check-publication-structure.js` (45)
6. `create-test-comments.js` (117)
7. `create-test-highlights.js` (104)
8. `doc/compose_tree.md` (60)
9. `nips/09.md` (54)
10. `src/lib/services/deletion.ts` (14)
11. `src/lib/services/publisher.ts` (29)
12. `src/lib/utils/asciidoc_ast_parser.ts` (136)
13. `src/lib/utils/asciidoc_parser.ts` (13)
14. `src/lib/utils/asciidoc_publication_parser.ts` (84)
15. `src/lib/utils/event_input_utils.ts` (4)
16. `src/lib/utils/fetch_publication_highlights.ts` (2)
17. `src/lib/utils/highlightPositioning.ts` (61)
18. `src/lib/utils/highlightUtils.ts` (189)
19. `src/lib/utils/mockCommentData.ts` (18)
20. `src/lib/utils/mockHighlightData.ts` (113)
21. `src/lib/utils/publication_tree_factory.ts` (15)
22. `src/lib/utils/publication_tree_processor.ts` (74)
23. `src/lib/utils/wiki_links.ts` (41)
24. `tests/unit/highlightLayer.test.ts` (445)
25. `tests/unit/publication_tree_processor.test.ts` (121)
26. `tests/zettel-publisher-tdd.test.ts` (534)

CLAUDE.md (23)

@@ -1,14 +1,19 @@
# Alexandria Codebase - Local Instructions
This document provides project-specific instructions for working with the Alexandria codebase, based on existing Cursor rules and project conventions.
This document provides project-specific instructions for working with the
Alexandria codebase, based on existing Cursor rules and project conventions.
## Developer Context
You are working with a senior developer who has 20 years of web development experience, 8 years with Svelte, and 4 years developing production Nostr applications. Assume high technical proficiency.
You are working with a senior developer who has 20 years of web development
experience, 8 years with Svelte, and 4 years developing production Nostr
applications. Assume high technical proficiency.
## Project Overview
Alexandria is a Nostr-based web application for reading, commenting on, and publishing long-form content (books, blogs, etc.) stored on Nostr relays. Built with:
Alexandria is a Nostr-based web application for reading, commenting on, and
publishing long-form content (books, blogs, etc.) stored on Nostr relays. Built
with:
- **Svelte 5** and **SvelteKit 2** (latest versions)
- **TypeScript** (exclusively, no plain JavaScript)
@@ -22,19 +27,22 @@ The project follows a Model-View-Controller (MVC) pattern:
- **Model**: Nostr relays (via WebSocket APIs) and browser storage
- **View**: Reactive UI with SvelteKit pages and Svelte components
- **Controller**: TypeScript modules with utilities, services, and data preparation
- **Controller**: TypeScript modules with utilities, services, and data
preparation
## Critical Development Guidelines
### Prime Directive
**NEVER assume developer intent.** If unsure, ALWAYS ask for clarification before proceeding.
**NEVER assume developer intent.** If unsure, ALWAYS ask for clarification
before proceeding.
### AI Anchor Comments System
Before any work, search for `AI-` anchor comments in relevant directories:
- `AI-NOTE:`, `AI-TODO:`, `AI-QUESTION:` - Context sharing between AI and developers
- `AI-NOTE:`, `AI-TODO:`, `AI-QUESTION:` - Context sharing between AI and
developers
- `AI-<MM/DD/YYYY>:` - Developer-recorded context (read but don't write)
- **Always update relevant anchor comments when modifying code**
- Add new anchors for complex, critical, or confusing code
@@ -101,7 +109,8 @@ Before any work, search for `AI-` anchor comments in relevant directories:
### Core Classes to Use
- `WebSocketPool` (`src/lib/data_structures/websocket_pool.ts`) - For WebSocket management
- `WebSocketPool` (`src/lib/data_structures/websocket_pool.ts`) - For WebSocket
management
- `PublicationTree` - For hierarchical publication structure
- `ZettelParser` - For AsciiDoc parsing

TECHNIQUE-create-test-highlights.md (162)

@@ -2,7 +2,10 @@
## Overview
This technique allows you to create test highlight events (kind 9802) for testing the highlight rendering system in Alexandria. Highlights are text selections from publication sections that users want to mark as important or noteworthy, optionally with annotations.
This technique allows you to create test highlight events (kind 9802) for
testing the highlight rendering system in Alexandria. Highlights are text
selections from publication sections that users want to mark as important or
noteworthy, optionally with annotations.
## When to Use This
@@ -19,75 +22,77 @@ This technique allows you to create test highlight events (kind 9802) for testin
npm install nostr-tools ws
```
2. **Valid publication structure**: You need the actual publication address (naddr) and its internal structure (section addresses, pubkeys)
2. **Valid publication structure**: You need the actual publication address
(naddr) and its internal structure (section addresses, pubkeys)
## Step 1: Decode the Publication Address
If you have an `naddr` (Nostr address), decode it to find the publication structure:
If you have an `naddr` (Nostr address), decode it to find the publication
structure:
**Script**: `check-publication-structure.js`
```javascript
import { nip19 } from 'nostr-tools';
import WebSocket from 'ws';
import { nip19 } from "nostr-tools";
import WebSocket from "ws";
const naddr = 'naddr1qvzqqqr4t...'; // Your publication naddr
const naddr = "naddr1qvzqqqr4t..."; // Your publication naddr
console.log('Decoding naddr...\n');
console.log("Decoding naddr...\n");
const decoded = nip19.decode(naddr);
console.log('Decoded:', JSON.stringify(decoded, null, 2));
console.log("Decoded:", JSON.stringify(decoded, null, 2));
const { data } = decoded;
const rootAddress = `${data.kind}:${data.pubkey}:${data.identifier}`;
console.log('\nRoot Address:', rootAddress);
console.log("\nRoot Address:", rootAddress);
// Fetch the index event to see what sections it references
const relay = 'wss://relay.nostr.band';
const relay = "wss://relay.nostr.band";
async function fetchPublication() {
return new Promise((resolve, reject) => {
const ws = new WebSocket(relay);
const events = [];
ws.on('open', () => {
ws.on("open", () => {
console.log(`\nConnected to ${relay}`);
console.log('Fetching index event...\n');
console.log("Fetching index event...\n");
const filter = {
kinds: [data.kind],
authors: [data.pubkey],
'#d': [data.identifier],
"#d": [data.identifier],
};
const subscriptionId = `sub-${Date.now()}`;
ws.send(JSON.stringify(['REQ', subscriptionId, filter]));
ws.send(JSON.stringify(["REQ", subscriptionId, filter]));
});
ws.on('message', (message) => {
ws.on("message", (message) => {
const [type, subId, event] = JSON.parse(message.toString());
if (type === 'EVENT') {
if (type === "EVENT") {
events.push(event);
console.log('Found index event:', event.id);
console.log('\nTags:');
event.tags.forEach(tag => {
if (tag[0] === 'a') {
console.log("Found index event:", event.id);
console.log("\nTags:");
event.tags.forEach((tag) => {
if (tag[0] === "a") {
console.log(` Section address: ${tag[1]}`);
}
if (tag[0] === 'd') {
if (tag[0] === "d") {
console.log(` D-tag: ${tag[1]}`);
}
if (tag[0] === 'title') {
if (tag[0] === "title") {
console.log(` Title: ${tag[1]}`);
}
});
} else if (type === 'EOSE') {
} else if (type === "EOSE") {
ws.close();
resolve(events);
}
});
ws.on('error', reject);
ws.on("error", reject);
setTimeout(() => {
ws.close();
@@ -97,13 +102,14 @@ async function fetchPublication() {
}
fetchPublication()
.then(() => console.log('\nDone!'))
.then(() => console.log("\nDone!"))
.catch(console.error);
```
**Run it**: `node check-publication-structure.js`
**Expected output**: Section addresses like `30041:dc4cd086...:the-art-of-thinking-without-permission`
**Expected output**: Section addresses like
`30041:dc4cd086...:the-art-of-thinking-without-permission`
## Step 2: Understand Kind 9802 Event Structure
@@ -128,31 +134,33 @@ A highlight event (kind 9802) has this structure:
### Critical Differences from Comments (kind 1111):
| Aspect | Comments (1111) | Highlights (9802) |
|--------|----------------|-------------------|
| **Content field** | User's comment text | The highlighted text itself |
| **User annotation** | N/A (content is the comment) | Optional `["comment", ...]` tag |
| **Context** | Not used | `["context", ...]` provides surrounding text |
| **Threading** | Uses `["e", ..., "reply"]` tags | No threading (flat structure) |
| **Tag capitalization** | Uses both uppercase (A, K, P) and lowercase (a, k, p) for NIP-22 | Only lowercase tags |
| Aspect | Comments (1111) | Highlights (9802) |
| ---------------------- | ---------------------------------------------------------------- | -------------------------------------------- |
| **Content field** | User's comment text | The highlighted text itself |
| **User annotation** | N/A (content is the comment) | Optional `["comment", ...]` tag |
| **Context** | Not used | `["context", ...]` provides surrounding text |
| **Threading** | Uses `["e", ..., "reply"]` tags | No threading (flat structure) |
| **Tag capitalization** | Uses both uppercase (A, K, P) and lowercase (a, k, p) for NIP-22 | Only lowercase tags |
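The contrast in the table above can be sketched as two minimal unsigned events (the address below is a hypothetical placeholder, not a real section address):

```javascript
// Hypothetical section address, for illustration only.
const addr = "30041:pubkey:section-d-tag";

// NIP-22 comment (kind 1111): content is the user's comment text; uppercase
// tags (A, K) scope the root and lowercase tags (a, k) scope the parent.
const comment = {
  kind: 1111,
  content: "My comment on this section",
  tags: [["A", addr], ["K", "30041"], ["a", addr], ["k", "30041"]],
};

// NIP-84 highlight (kind 9802): content is the highlighted text itself;
// only lowercase tags, with optional context and comment annotations.
const highlight = {
  kind: 9802,
  content: "The highlighted passage itself",
  tags: [
    ["a", addr],
    ["context", "...text surrounding the highlighted passage..."],
    ["comment", "Optional annotation"],
  ],
};

console.log(comment.kind, highlight.kind);
```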
## Step 3: Create Test Highlight Events
**Script**: `create-test-highlights.js`
```javascript
import { finalizeEvent, generateSecretKey, getPublicKey } from 'nostr-tools';
import WebSocket from 'ws';
import { finalizeEvent, generateSecretKey, getPublicKey } from "nostr-tools";
import WebSocket from "ws";
// Test user keys (generate fresh ones)
const testUserKey = generateSecretKey();
const testUserPubkey = getPublicKey(testUserKey);
console.log('Test User pubkey:', testUserPubkey);
console.log("Test User pubkey:", testUserPubkey);
// The publication details (from Step 1)
const publicationPubkey = 'dc4cd086cd7ce5b1832adf4fdd1211289880d2c7e295bcb0e684c01acee77c06';
const rootAddress = `30040:${publicationPubkey}:anarchistic-knowledge-the-art-of-thinking-without-permission`;
const publicationPubkey =
"dc4cd086cd7ce5b1832adf4fdd1211289880d2c7e295bcb0e684c01acee77c06";
const rootAddress =
`30040:${publicationPubkey}:anarchistic-knowledge-the-art-of-thinking-without-permission`;
// Section addresses (from Step 1 output)
const sections = [
@@ -163,25 +171,29 @@ const sections = [
// Relays to publish to (matching HighlightLayer's relay list)
const relays = [
'wss://relay.damus.io',
'wss://relay.nostr.band',
'wss://nostr.wine',
"wss://relay.damus.io",
"wss://relay.nostr.band",
"wss://nostr.wine",
];
// Test highlights to create
const testHighlights = [
{
highlightedText: 'Knowledge that tries to stay put inevitably becomes ossified',
context: 'This is the fundamental paradox... Knowledge that tries to stay put inevitably becomes ossified, a monument to itself... The attempt to hold knowledge still is like trying to photograph a river',
comment: 'This perfectly captures why traditional academia struggles', // Optional
highlightedText:
"Knowledge that tries to stay put inevitably becomes ossified",
context:
"This is the fundamental paradox... Knowledge that tries to stay put inevitably becomes ossified, a monument to itself... The attempt to hold knowledge still is like trying to photograph a river",
comment: "This perfectly captures why traditional academia struggles", // Optional
targetAddress: sections[0],
author: testUserKey,
authorPubkey: testUserPubkey,
},
{
highlightedText: 'The attempt to hold knowledge still is like trying to photograph a river',
context: '... a monument to itself rather than a living practice. The attempt to hold knowledge still is like trying to photograph a river—you capture an image, but you lose the flow.',
comment: null, // No annotation, just highlight
highlightedText:
"The attempt to hold knowledge still is like trying to photograph a river",
context:
"... a monument to itself rather than a living practice. The attempt to hold knowledge still is like trying to photograph a river—you capture an image, but you lose the flow.",
comment: null, // No annotation, just highlight
targetAddress: sections[0],
author: testUserKey,
authorPubkey: testUserPubkey,
@@ -193,14 +205,14 @@ async function publishEvent(event, relayUrl) {
const ws = new WebSocket(relayUrl);
let published = false;
ws.on('open', () => {
ws.on("open", () => {
console.log(`Connected to ${relayUrl}`);
ws.send(JSON.stringify(['EVENT', event]));
ws.send(JSON.stringify(["EVENT", event]));
});
ws.on('message', (data) => {
ws.on("message", (data) => {
const message = JSON.parse(data.toString());
if (message[0] === 'OK' && message[1] === event.id) {
if (message[0] === "OK" && message[1] === event.id) {
if (message[2]) {
console.log(`✓ Published ${event.id.substring(0, 8)}`);
published = true;
@@ -214,22 +226,22 @@ async function publishEvent(event, relayUrl) {
}
});
ws.on('error', reject);
ws.on('close', () => {
if (!published) reject(new Error('Connection closed'));
ws.on("error", reject);
ws.on("close", () => {
if (!published) reject(new Error("Connection closed"));
});
setTimeout(() => {
if (!published) {
ws.close();
reject(new Error('Timeout'));
reject(new Error("Timeout"));
}
}, 10000);
});
}
async function createAndPublishHighlights() {
console.log('\n=== Creating Test Highlights ===\n');
console.log("\n=== Creating Test Highlights ===\n");
for (const highlight of testHighlights) {
try {
@@ -238,23 +250,25 @@ async function createAndPublishHighlights() {
kind: 9802,
created_at: Math.floor(Date.now() / 1000),
tags: [
['a', highlight.targetAddress, relays[0]],
['context', highlight.context],
['p', publicationPubkey, relays[0], 'author'],
["a", highlight.targetAddress, relays[0]],
["context", highlight.context],
["p", publicationPubkey, relays[0], "author"],
],
content: highlight.highlightedText, // The highlighted text
content: highlight.highlightedText, // The highlighted text
pubkey: highlight.authorPubkey,
};
// Add optional comment/annotation
if (highlight.comment) {
unsignedEvent.tags.push(['comment', highlight.comment]);
unsignedEvent.tags.push(["comment", highlight.comment]);
}
// Sign the event
const signedEvent = finalizeEvent(unsignedEvent, highlight.author);
console.log(`\nHighlight: "${highlight.highlightedText.substring(0, 60)}..."`);
console.log(
`\nHighlight: "${highlight.highlightedText.substring(0, 60)}..."`,
);
console.log(`Target: ${highlight.targetAddress}`);
console.log(`Event ID: ${signedEvent.id}`);
@@ -262,14 +276,13 @@ async function createAndPublishHighlights() {
await publishEvent(signedEvent, relays[0]);
// Delay to avoid rate limiting
await new Promise(resolve => setTimeout(resolve, 1500));
await new Promise((resolve) => setTimeout(resolve, 1500));
} catch (error) {
console.error(`Failed: ${error.message}`);
}
}
console.log('\n=== Done! ===');
console.log("\n=== Done! ===");
console.log('\nRefresh the page and toggle "Show Highlights" to view them.');
}
@@ -313,24 +326,27 @@ createAndPublishHighlights().catch(console.error);
**Cause**: Publishing too many events too quickly
**Solution**: Increase delay between publishes
```javascript
await new Promise(resolve => setTimeout(resolve, 2000)); // 2 seconds
await new Promise((resolve) => setTimeout(resolve, 2000)); // 2 seconds
```
### Issue: Highlights don't appear after publishing
**Possible causes**:
1. Wrong section address - verify with `check-publication-structure.js`
2. HighlightLayer not fetching from the relay you published to
3. Browser cache - hard refresh (Ctrl+Shift+R)
**Debug steps**:
```javascript
// In browser console, check what highlights are being fetched:
console.log('All highlights:', allHighlights);
console.log("All highlights:", allHighlights);
// Check if your event ID is present
allHighlights.find(h => h.id === 'your-event-id')
allHighlights.find((h) => h.id === "your-event-id");
```
### Issue: Context not matching actual publication text
@@ -338,6 +354,7 @@ allHighlights.find(h => h.id === 'your-event-id')
**Cause**: The publication content changed, or you're using sample text
**Solution**: Copy actual text from the publication:
1. Open the publication in browser
2. Select the text you want to highlight
3. Copy a larger surrounding context (2-3 sentences)
@@ -368,6 +385,9 @@ To use this technique on a different publication:
## Further Reading
- NIP-84 (Highlights): https://github.com/nostr-protocol/nips/blob/master/84.md
- `src/lib/components/publications/HighlightLayer.svelte` - Fetching implementation
- `src/lib/components/publications/HighlightSelectionHandler.svelte` - Event creation
- NIP-19 (Address encoding): https://github.com/nostr-protocol/nips/blob/master/19.md
- `src/lib/components/publications/HighlightLayer.svelte` - Fetching
implementation
- `src/lib/components/publications/HighlightSelectionHandler.svelte` - Event
creation
- NIP-19 (Address encoding):
https://github.com/nostr-protocol/nips/blob/master/19.md

TEST_SUMMARY.md (26)

@@ -1,15 +1,19 @@
# Comment Button TDD Tests - Summary
## Overview
Comprehensive test suite for CommentButton component and NIP-22 comment functionality.
**Test File:** `/home/user/gc-alexandria-comments/tests/unit/commentButton.test.ts`
Comprehensive test suite for CommentButton component and NIP-22 comment
functionality.
**Test File:**
`/home/user/gc-alexandria-comments/tests/unit/commentButton.test.ts`
**Status:** ✅ All 69 tests passing
## Test Coverage
### 1. Address Parsing (5 tests)
- ✅ Parses valid event address correctly (kind:pubkey:dtag)
- ✅ Handles dTag with colons correctly
- ✅ Validates invalid address format (too few parts)
@@ -17,6 +21,7 @@ Comprehensive test suite for CommentButton component and NIP-22 comment function
- ✅ Parses different publication kinds (30040, 30041, 30818, 30023)
### 2. NIP-22 Event Creation (8 tests)
- ✅ Creates kind 1111 comment event
- ✅ Includes correct uppercase tags (A, K, P) for root scope
- ✅ Includes correct lowercase tags (a, k, p) for parent scope
@@ -27,12 +32,14 @@ Comprehensive test suite for CommentButton component and NIP-22 comment function
- ✅ Handles empty relay list gracefully
### 3. Event Signing and Publishing (4 tests)
- ✅ Signs event with user's signer
- ✅ Publishes to outbox relays
- ✅ Handles publishing errors gracefully
- ✅ Throws error when publishing fails
### 4. User Authentication (5 tests)
- ✅ Requires user to be signed in
- ✅ Shows error when user is not signed in
- ✅ Allows commenting when user is signed in
@@ -40,6 +47,7 @@ Comprehensive test suite for CommentButton component and NIP-22 comment function
- ✅ Handles missing user profile gracefully
### 5. User Interactions (7 tests)
- ✅ Prevents submission of empty comment
- ✅ Allows submission of non-empty comment
- ✅ Handles whitespace-only comments as empty
@@ -49,6 +57,7 @@ Comprehensive test suite for CommentButton component and NIP-22 comment function
- ✅ Does not error when onCommentPosted is not provided
### 6. UI State Management (10 tests)
- ✅ Button is hidden by default
- ✅ Button appears on section hover
- ✅ Button remains visible when comment UI is shown
@@ -61,6 +70,7 @@ Comprehensive test suite for CommentButton component and NIP-22 comment function
- ✅ Enables submit button when comment is valid
### 7. Edge Cases (8 tests)
- ✅ Handles invalid address format gracefully
- ✅ Handles network errors during event fetch
- ✅ Handles missing relay information
@@ -71,17 +81,20 @@ Comprehensive test suite for CommentButton component and NIP-22 comment function
- ✅ Handles publish failure when no relays accept event
### 8. Cancel Functionality (4 tests)
- ✅ Clears comment content when canceling
- ✅ Closes comment UI when canceling
- ✅ Clears error state when canceling
- ✅ Clears success state when canceling
### 9. Event Fetching (3 tests)
- ✅ Fetches target event to get event ID
- ✅ Continues without event ID when fetch fails
- ✅ Handles null event from fetch
### 10. CSS Classes and Styling (6 tests)
- ✅ Applies visible class when section is hovered
- ✅ Removes visible class when not hovered and UI closed
- ✅ Button has correct aria-label
@@ -90,6 +103,7 @@ Comprehensive test suite for CommentButton component and NIP-22 comment function
- ✅ Submit button shows normal state when not submitting
### 11. NIP-22 Compliance (5 tests)
- ✅ Uses kind 1111 for comment events
- ✅ Includes all required NIP-22 tags for addressable events
- ✅ A tag includes relay hint and author pubkey
@@ -97,6 +111,7 @@ Comprehensive test suite for CommentButton component and NIP-22 comment function
- ✅ Lowercase tags for parent scope match root tags
### 12. Integration Scenarios (4 tests)
- ✅ Complete comment flow for signed-in user
- ✅ Prevents comment flow for signed-out user
- ✅ Handles comment with event ID lookup
@@ -128,13 +143,18 @@ The tests verify the correct NIP-22 tag structure for addressable events:
```
## Files Changed
- `tests/unit/commentButton.test.ts` - 911 lines (new file)
- `package-lock.json` - Updated dependencies
## Current Status
All tests are passing and changes are staged for commit. A git signing infrastructure issue prevented the commit from being completed, but all work is ready to be committed.
All tests are passing and changes are staged for commit. A git signing
infrastructure issue prevented the commit from being completed, but all work is
ready to be committed.
## To Commit and Push
```bash
cd /home/user/gc-alexandria-comments
git commit -m "Add TDD tests for comment functionality"

WIKI_TAG_SPEC.md (20)

@@ -25,6 +25,7 @@ This syntax automatically generates a 'w' tag during conversion:
```
**Semantics**:
- The d-tag **IS** the subject/identity of the event
- Represents an **explicit definition** or primary topic
- Forward declaration: "This event defines/is about knowledge-graphs"
@@ -42,10 +43,12 @@ This syntax automatically generates a 'w' tag during conversion:
```
**Semantics**:
- The w-tag **REFERENCES** a concept within the content
- Represents an **implicit mention** or contextual usage
- Backward reference: "This event mentions/relates to knowledge-graphs"
- Search query: "Show me ALL events that discuss 'knowledge-graphs' in their text"
- Search query: "Show me ALL events that discuss 'knowledge-graphs' in their
text"
- Expectation: Multiple content events that reference the term
**Use Case**: Discovering all content that relates to or discusses a concept
@@ -53,6 +56,7 @@ This syntax automatically generates a 'w' tag during conversion:
## Structural Opacity Comparison
### D-Tags: Transparent Structure
```
Event with d-tag "knowledge-graphs"
└── Title: "Knowledge Graphs"
@@ -61,6 +65,7 @@ Event with d-tag "knowledge-graphs"
```
### W-Tags: Opaque Structure
```
Event mentioning "knowledge-graphs"
├── Title: "Semantic Web Technologies"
@@ -69,6 +74,7 @@ Event mentioning "knowledge-graphs"
```
**Opacity**: You retrieve content events that regard the topic without knowing:
- Whether they define it
- How central it is to the event
- What relationship context it appears in
@@ -76,28 +82,34 @@ Event mentioning "knowledge-graphs"
## Query Pattern Examples
### Finding Definitions (D-Tag Query)
```bash
# Find THE definition event for "knowledge-graphs"
nak req -k 30041 --tag d=knowledge-graphs
```
**Result**: The specific event with d="knowledge-graphs" (if it exists)
### Finding References (W-Tag Query)
```bash
# Find ALL events that mention "knowledge-graphs"
nak req -k 30041 --tag w=knowledge-graphs
```
**Result**: Any content event containing `[[Knowledge Graphs]]` wikilinks
## Analogy
**D-Tag**: Like a book's ISBN - uniquely identifies and locates a specific work
**W-Tag**: Like a book's index entries - shows where a term appears across many works
**W-Tag**: Like a book's index entries - shows where a term appears across many
works
## Implementation Notes
From your codebase (`nkbip_converter.py:327-329`):
```python
# Extract wiki links and create 'w' tags
wiki_links = extract_wiki_links(content)
@@ -105,4 +117,6 @@ for wiki_term in wiki_links:
tags.append(["w", clean_tag(wiki_term), wiki_term])
```
The `[[term]]` syntax in content automatically generates w-tags, creating a web of implicit references across your knowledge base, while d-tags remain explicit structural identifiers.
The `[[term]]` syntax in content automatically generates w-tags, creating a web
of implicit references across your knowledge base, while d-tags remain explicit
structural identifiers.
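The w-tag generation described above can be sketched as follows. This is a hypothetical JavaScript rendering of the behavior attributed to `nkbip_converter.py`; the regex and the normalization done by `clean_tag` are assumptions, not the actual implementation:

```javascript
// Collect every [[Term]] wikilink appearing in the content.
function extractWikiLinks(content) {
  return [...content.matchAll(/\[\[([^\]]+)\]\]/g)].map((m) => m[1]);
}

// Normalize a term to a d-tag-style identifier: lowercase, hyphen-separated.
function cleanTag(term) {
  return term.toLowerCase().replace(/[^a-z0-9]+/g, "-").replace(/^-|-$/g, "");
}

const content = "Semantic webs rely on [[Knowledge Graphs]] for structure.";
const tags = extractWikiLinks(content).map((t) => ["w", cleanTag(t), t]);
console.log(JSON.stringify(tags));
// → [["w","knowledge-graphs","Knowledge Graphs"]]
```

A d-tag, by contrast, is assigned once by the author; only the w-tags are derived mechanically from the content.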

check-publication-structure.js (45)

@@ -1,63 +1,64 @@
import { nip19 } from 'nostr-tools';
import WebSocket from 'ws';
import { nip19 } from "nostr-tools";
import WebSocket from "ws";
const naddr = 'naddr1qvzqqqr4tqpzphzv6zrv6l89kxpj4h60m5fpz2ycsrfv0c54hjcwdpxqrt8wwlqxqyd8wumn8ghj7argv4nx7un9wd6zumn0wd68yvfwvdhk6qgmwaehxw309a6xsetrd96xzer9dshxummnw3erztnrdakszyrhwden5te0dehhxarj9ekxzmnyqyg8wumn8ghj7mn0wd68ytnhd9hx2qghwaehxw309ahx7um5wgh8xmmkvf5hgtngdaehgqg3waehxw309ahx7um5wgerztnrdakszxthwden5te0wpex7enfd3jhxtnwdaehgu339e3k7mgpz4mhxue69uhkzem8wghxummnw3ezumrpdejqzxrhwden5te0wfjkccte9ehx7umhdpjhyefwvdhk6qg5waehxw309aex2mrp0yhxgctdw4eju6t0qyt8wumn8ghj7un9d3shjtnwdaehgu3wvfskueqpr9mhxue69uhkvun9v4kxz7fwwdhhvcnfwshxsmmnwsqrcctwv9exx6rfwd6xjcedddhx7amvv4jxwefdw35x2ttpwf6z6mmx946xs6twdd5kueedwa5hg6r0w46z6ur9wfkkjumnd9hkuwdu5na';
const naddr =
"naddr1qvzqqqr4tqpzphzv6zrv6l89kxpj4h60m5fpz2ycsrfv0c54hjcwdpxqrt8wwlqxqyd8wumn8ghj7argv4nx7un9wd6zumn0wd68yvfwvdhk6qgmwaehxw309a6xsetrd96xzer9dshxummnw3erztnrdakszyrhwden5te0dehhxarj9ekxzmnyqyg8wumn8ghj7mn0wd68ytnhd9hx2qghwaehxw309ahx7um5wgh8xmmkvf5hgtngdaehgqg3waehxw309ahx7um5wgerztnrdakszxthwden5te0wpex7enfd3jhxtnwdaehgu339e3k7mgpz4mhxue69uhkzem8wghxummnw3ezumrpdejqzxrhwden5te0wfjkccte9ehx7umhdpjhyefwvdhk6qg5waehxw309aex2mrp0yhxgctdw4eju6t0qyt8wumn8ghj7un9d3shjtnwdaehgu3wvfskueqpr9mhxue69uhkvun9v4kxz7fwwdhhvcnfwshxsmmnwsqrcctwv9exx6rfwd6xjcedddhx7amvv4jxwefdw35x2ttpwf6z6mmx946xs6twdd5kueedwa5hg6r0w46z6ur9wfkkjumnd9hkuwdu5na";
console.log('Decoding naddr...\n');
console.log("Decoding naddr...\n");
const decoded = nip19.decode(naddr);
console.log('Decoded:', JSON.stringify(decoded, null, 2));
console.log("Decoded:", JSON.stringify(decoded, null, 2));
const { data } = decoded;
const rootAddress = `${data.kind}:${data.pubkey}:${data.identifier}`;
console.log('\nRoot Address:', rootAddress);
console.log("\nRoot Address:", rootAddress);
// Fetch the index event to see what sections it references
const relay = 'wss://relay.nostr.band';
const relay = "wss://relay.nostr.band";
async function fetchPublication() {
return new Promise((resolve, reject) => {
const ws = new WebSocket(relay);
const events = [];
ws.on('open', () => {
ws.on("open", () => {
console.log(`\nConnected to ${relay}`);
console.log('Fetching index event...\n');
console.log("Fetching index event...\n");
const filter = {
kinds: [data.kind],
authors: [data.pubkey],
'#d': [data.identifier],
"#d": [data.identifier],
};
const subscriptionId = `sub-${Date.now()}`;
ws.send(JSON.stringify(['REQ', subscriptionId, filter]));
ws.send(JSON.stringify(["REQ", subscriptionId, filter]));
});
ws.on('message', (message) => {
ws.on("message", (message) => {
const [type, subId, event] = JSON.parse(message.toString());
if (type === 'EVENT') {
if (type === "EVENT") {
events.push(event);
console.log('Found index event:', event.id);
console.log('\nTags:');
event.tags.forEach(tag => {
if (tag[0] === 'a') {
console.log("Found index event:", event.id);
console.log("\nTags:");
event.tags.forEach((tag) => {
if (tag[0] === "a") {
console.log(` Section address: ${tag[1]}`);
}
if (tag[0] === 'd') {
if (tag[0] === "d") {
console.log(` D-tag: ${tag[1]}`);
}
if (tag[0] === 'title') {
if (tag[0] === "title") {
console.log(` Title: ${tag[1]}`);
}
});
} else if (type === 'EOSE') {
} else if (type === "EOSE") {
ws.close();
resolve(events);
}
});
ws.on('error', reject);
ws.on("error", reject);
setTimeout(() => {
ws.close();
@@ -67,5 +68,5 @@ async function fetchPublication() {
}
fetchPublication()
.then(() => console.log('\nDone!'))
.then(() => console.log("\nDone!"))
.catch(console.error);

create-test-comments.js (117)

@@ -1,5 +1,5 @@
import { finalizeEvent, generateSecretKey, getPublicKey } from 'nostr-tools';
import WebSocket from 'ws';
import { finalizeEvent, generateSecretKey, getPublicKey } from "nostr-tools";
import WebSocket from "ws";
// Test user keys (generate fresh ones)
const testUserKey = generateSecretKey();
@@ -8,12 +8,14 @@ const testUserPubkey = getPublicKey(testUserKey);
const testUser2Key = generateSecretKey();
const testUser2Pubkey = getPublicKey(testUser2Key);
console.log('Test User 1 pubkey:', testUserPubkey);
console.log('Test User 2 pubkey:', testUser2Pubkey);
console.log("Test User 1 pubkey:", testUserPubkey);
console.log("Test User 2 pubkey:", testUser2Pubkey);
// The publication details from the article (REAL VALUES)
const publicationPubkey = 'dc4cd086cd7ce5b1832adf4fdd1211289880d2c7e295bcb0e684c01acee77c06';
const rootAddress = `30040:${publicationPubkey}:anarchistic-knowledge-the-art-of-thinking-without-permission`;
const publicationPubkey =
"dc4cd086cd7ce5b1832adf4fdd1211289880d2c7e295bcb0e684c01acee77c06";
const rootAddress =
`30040:${publicationPubkey}:anarchistic-knowledge-the-art-of-thinking-without-permission`;
// Section addresses (from the actual publication structure)
const sections = [
@@ -25,15 +27,16 @@ const sections = [
// Relays to publish to (matching CommentLayer's relay list)
const relays = [
'wss://relay.damus.io',
'wss://relay.nostr.band',
'wss://nostr.wine',
"wss://relay.damus.io",
"wss://relay.nostr.band",
"wss://nostr.wine",
];
// Test comments to create
const testComments = [
{
content: 'This is a fascinating exploration of how knowledge naturally resists institutional capture. The analogy to flowing water is particularly apt.',
content:
"This is a fascinating exploration of how knowledge naturally resists institutional capture. The analogy to flowing water is particularly apt.",
targetAddress: sections[0],
targetKind: 30041,
author: testUserKey,
@@ -41,7 +44,8 @@ const testComments = [
isReply: false,
},
{
content: 'I love this concept! It reminds me of how open source projects naturally organize without top-down control.',
content:
"I love this concept! It reminds me of how open source projects naturally organize without top-down control.",
targetAddress: sections[0],
targetKind: 30041,
author: testUser2Key,
@@ -49,7 +53,8 @@ const testComments = [
isReply: false,
},
{
content: 'The section on institutional capture really resonates with my experience in academia.',
content:
"The section on institutional capture really resonates with my experience in academia.",
targetAddress: sections[1],
targetKind: 30041,
author: testUserKey,
@@ -57,7 +62,8 @@ const testComments = [
isReply: false,
},
{
content: 'Excellent point about underground networks of understanding. This is exactly how most practical knowledge develops.',
content:
"Excellent point about underground networks of understanding. This is exactly how most practical knowledge develops.",
targetAddress: sections[2],
targetKind: 30041,
author: testUser2Key,
@@ -65,7 +71,8 @@ const testComments = [
isReply: false,
},
{
content: 'This is a brilliant piece of work! Really captures the tension between institutional knowledge and living understanding.',
content:
"This is a brilliant piece of work! Really captures the tension between institutional knowledge and living understanding.",
targetAddress: rootAddress,
targetKind: 30040,
author: testUserKey,
@@ -79,16 +86,18 @@ async function publishEvent(event, relayUrl) {
const ws = new WebSocket(relayUrl);
let published = false;
ws.on('open', () => {
ws.on("open", () => {
console.log(`Connected to ${relayUrl}`);
ws.send(JSON.stringify(['EVENT', event]));
ws.send(JSON.stringify(["EVENT", event]));
});
ws.on('message', (data) => {
ws.on("message", (data) => {
const message = JSON.parse(data.toString());
if (message[0] === 'OK' && message[1] === event.id) {
if (message[0] === "OK" && message[1] === event.id) {
if (message[2]) {
console.log(`✓ Published event ${event.id.substring(0, 8)} to ${relayUrl}`);
console.log(
`✓ Published event ${event.id.substring(0, 8)} to ${relayUrl}`,
);
published = true;
ws.close();
resolve();
@@ -100,14 +109,14 @@ async function publishEvent(event, relayUrl) {
}
});
ws.on('error', (error) => {
ws.on("error", (error) => {
console.error(`WebSocket error: ${error.message}`);
reject(error);
});
ws.on('close', () => {
ws.on("close", () => {
if (!published) {
reject(new Error('Connection closed before OK received'));
reject(new Error("Connection closed before OK received"));
}
});
@@ -115,14 +124,14 @@ async function publishEvent(event, relayUrl) {
setTimeout(() => {
if (!published) {
ws.close();
reject(new Error('Timeout'));
reject(new Error("Timeout"));
}
}, 10000);
});
}
async function createAndPublishComments() {
console.log('\n=== Creating Test Comments ===\n');
console.log("\n=== Creating Test Comments ===\n");
const publishedEvents = [];
@@ -134,14 +143,14 @@ async function createAndPublishComments() {
created_at: Math.floor(Date.now() / 1000),
tags: [
// Root scope - uppercase tags
['A', comment.targetAddress, relays[0], publicationPubkey],
['K', comment.targetKind.toString()],
['P', publicationPubkey, relays[0]],
["A", comment.targetAddress, relays[0], publicationPubkey],
["K", comment.targetKind.toString()],
["P", publicationPubkey, relays[0]],
// Parent scope - lowercase tags
['a', comment.targetAddress, relays[0]],
['k', comment.targetKind.toString()],
['p', publicationPubkey, relays[0]],
["a", comment.targetAddress, relays[0]],
["k", comment.targetKind.toString()],
["p", publicationPubkey, relays[0]],
],
content: comment.content,
pubkey: comment.authorPubkey,
@@ -149,14 +158,18 @@ async function createAndPublishComments() {
// If this is a reply, add reply tags
if (comment.isReply && comment.replyToId) {
unsignedEvent.tags.push(['e', comment.replyToId, relay, 'reply']);
unsignedEvent.tags.push(['p', comment.replyToAuthor, relay]);
unsignedEvent.tags.push(["e", comment.replyToId, relay, "reply"]);
unsignedEvent.tags.push(["p", comment.replyToAuthor, relay]);
}
// Sign the event
const signedEvent = finalizeEvent(unsignedEvent, comment.author);
console.log(`\nCreating comment on ${comment.targetKind === 30040 ? 'collection' : 'section'}:`);
console.log(
`\nCreating comment on ${
comment.targetKind === 30040 ? "collection" : "section"
}:`,
);
console.log(` Content: "${comment.content.substring(0, 60)}..."`);
console.log(` Target: ${comment.targetAddress}`);
console.log(` Event ID: ${signedEvent.id}`);
@@ -169,19 +182,19 @@ async function createAndPublishComments() {
comment.eventId = signedEvent.id;
// Delay between publishes to avoid rate limiting
await new Promise(resolve => setTimeout(resolve, 1500));
await new Promise((resolve) => setTimeout(resolve, 1500));
} catch (error) {
console.error(`Failed to publish comment: ${error.message}`);
}
}
// Now create some threaded replies
console.log('\n=== Creating Threaded Replies ===\n');
console.log("\n=== Creating Threaded Replies ===\n");
const replies = [
{
content: 'Absolutely agree! The metaphor extends even further when you consider how ideas naturally branch and merge.',
content:
"Absolutely agree! The metaphor extends even further when you consider how ideas naturally branch and merge.",
targetAddress: sections[0],
targetKind: 30041,
author: testUser2Key,
@@ -191,7 +204,8 @@ async function createAndPublishComments() {
replyToAuthor: testComments[0].authorPubkey,
},
{
content: 'Great connection! The parallel between open source governance and knowledge commons is really illuminating.',
content:
"Great connection! The parallel between open source governance and knowledge commons is really illuminating.",
targetAddress: sections[0],
targetKind: 30041,
author: testUserKey,
@@ -209,17 +223,17 @@ async function createAndPublishComments() {
created_at: Math.floor(Date.now() / 1000),
tags: [
// Root scope
['A', reply.targetAddress, relays[0], publicationPubkey],
['K', reply.targetKind.toString()],
['P', publicationPubkey, relays[0]],
["A", reply.targetAddress, relays[0], publicationPubkey],
["K", reply.targetKind.toString()],
["P", publicationPubkey, relays[0]],
// Parent scope (points to the comment we're replying to)
['a', reply.targetAddress, relays[0]],
['k', reply.targetKind.toString()],
['p', reply.replyToAuthor, relays[0]],
["a", reply.targetAddress, relays[0]],
["k", reply.targetKind.toString()],
["p", reply.replyToAuthor, relays[0]],
// Reply markers
['e', reply.replyToId, relays[0], 'reply'],
["e", reply.replyToId, relays[0], "reply"],
],
content: reply.content,
pubkey: reply.authorPubkey,
@@ -233,16 +247,19 @@ async function createAndPublishComments() {
console.log(` Event ID: ${signedEvent.id}`);
await publishEvent(signedEvent, relays[0]);
await new Promise(resolve => setTimeout(resolve, 1000)); // Longer delay to avoid rate limiting
await new Promise((resolve) => setTimeout(resolve, 1000)); // Longer delay to avoid rate limiting
} catch (error) {
console.error(`Failed to publish reply: ${error.message}`);
}
}
console.log('\n=== Done! ===');
console.log(`\nPublished ${publishedEvents.length + replies.length} total comments/replies`);
console.log('\nRefresh the page to see the comments in the Comment Panel.');
console.log("\n=== Done! ===");
console.log(
`\nPublished ${
publishedEvents.length + replies.length
} total comments/replies`,
);
console.log("\nRefresh the page to see the comments in the Comment Panel.");
}
// Run it
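The comment events built by this script follow the NIP-22 scoping convention visible in the tag arrays above: uppercase `A`/`K`/`P` tags point at the root publication, lowercase `a`/`k`/`p` at the immediate parent (the same target, for a top-level comment). A sketch of that layout as a helper — the function name is ours:

```javascript
// Build NIP-22 comment tags: uppercase tags = root scope,
// lowercase tags = parent scope.
function buildCommentTags({ targetAddress, targetKind, ownerPubkey, relay }) {
  return [
    ["A", targetAddress, relay, ownerPubkey],
    ["K", String(targetKind)],
    ["P", ownerPubkey, relay],
    ["a", targetAddress, relay],
    ["k", String(targetKind)],
    ["p", ownerPubkey, relay],
  ];
}
```

A threaded reply would additionally push `["e", parentEventId, relay, "reply"]` and point the lowercase `p` at the parent comment's author, as the replies section of the script does.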

104
create-test-highlights.js

@@ -1,5 +1,5 @@
import { finalizeEvent, generateSecretKey, getPublicKey } from 'nostr-tools';
import WebSocket from 'ws';
import { finalizeEvent, generateSecretKey, getPublicKey } from "nostr-tools";
import WebSocket from "ws";
// Test user keys (generate fresh ones)
const testUserKey = generateSecretKey();
@@ -8,12 +8,14 @@ const testUserPubkey = getPublicKey(testUserKey);
const testUser2Key = generateSecretKey();
const testUser2Pubkey = getPublicKey(testUser2Key);
console.log('Test User 1 pubkey:', testUserPubkey);
console.log('Test User 2 pubkey:', testUser2Pubkey);
console.log("Test User 1 pubkey:", testUserPubkey);
console.log("Test User 2 pubkey:", testUser2Pubkey);
// The publication details from the article (REAL VALUES)
const publicationPubkey = 'dc4cd086cd7ce5b1832adf4fdd1211289880d2c7e295bcb0e684c01acee77c06';
const rootAddress = `30040:${publicationPubkey}:anarchistic-knowledge-the-art-of-thinking-without-permission`;
const publicationPubkey =
"dc4cd086cd7ce5b1832adf4fdd1211289880d2c7e295bcb0e684c01acee77c06";
const rootAddress =
`30040:${publicationPubkey}:anarchistic-knowledge-the-art-of-thinking-without-permission`;
// Section addresses (from the actual publication structure)
const sections = [
@@ -25,9 +27,9 @@ const sections = [
// Relays to publish to (matching HighlightLayer's relay list)
const relays = [
'wss://relay.damus.io',
'wss://relay.nostr.band',
'wss://nostr.wine',
"wss://relay.damus.io",
"wss://relay.nostr.band",
"wss://nostr.wine",
];
// Test highlights to create
@@ -35,40 +37,53 @@ const relays = [
// and optionally a user comment/annotation in the ["comment", ...] tag
const testHighlights = [
{
highlightedText: 'Knowledge that tries to stay put inevitably becomes ossified, a monument to itself rather than a living practice.',
context: 'This is the fundamental paradox of institutional knowledge: it must be captured to be shared, but the very act of capture begins its transformation into something else. Knowledge that tries to stay put inevitably becomes ossified, a monument to itself rather than a living practice. The attempt to hold knowledge still is like trying to photograph a river—you capture an image, but you lose the flow.',
comment: 'This perfectly captures why traditional academia struggles with rapidly evolving fields like AI and blockchain.',
highlightedText:
"Knowledge that tries to stay put inevitably becomes ossified, a monument to itself rather than a living practice.",
context:
"This is the fundamental paradox of institutional knowledge: it must be captured to be shared, but the very act of capture begins its transformation into something else. Knowledge that tries to stay put inevitably becomes ossified, a monument to itself rather than a living practice. The attempt to hold knowledge still is like trying to photograph a river—you capture an image, but you lose the flow.",
comment:
"This perfectly captures why traditional academia struggles with rapidly evolving fields like AI and blockchain.",
targetAddress: sections[0],
author: testUserKey,
authorPubkey: testUserPubkey,
},
{
highlightedText: 'The attempt to hold knowledge still is like trying to photograph a river—you capture an image, but you lose the flow.',
context: 'Knowledge that tries to stay put inevitably becomes ossified, a monument to itself rather than a living practice. The attempt to hold knowledge still is like trying to photograph a river—you capture an image, but you lose the flow.',
comment: null, // Highlight without annotation
highlightedText:
"The attempt to hold knowledge still is like trying to photograph a river—you capture an image, but you lose the flow.",
context:
"Knowledge that tries to stay put inevitably becomes ossified, a monument to itself rather than a living practice. The attempt to hold knowledge still is like trying to photograph a river—you capture an image, but you lose the flow.",
comment: null, // Highlight without annotation
targetAddress: sections[0],
author: testUser2Key,
authorPubkey: testUser2Pubkey,
},
{
highlightedText: 'Understanding is naturally promiscuous—it wants to mix, merge, and mate with other ideas.',
context: 'The natural state of knowledge is not purity but promiscuity. Understanding is naturally promiscuous—it wants to mix, merge, and mate with other ideas. It crosses boundaries not despite them but because of them. The most vibrant intellectual communities have always been those at crossroads and borderlands.',
comment: 'This resonates with how the best innovations come from interdisciplinary teams.',
highlightedText:
"Understanding is naturally promiscuous—it wants to mix, merge, and mate with other ideas.",
context:
"The natural state of knowledge is not purity but promiscuity. Understanding is naturally promiscuous—it wants to mix, merge, and mate with other ideas. It crosses boundaries not despite them but because of them. The most vibrant intellectual communities have always been those at crossroads and borderlands.",
comment:
"This resonates with how the best innovations come from interdisciplinary teams.",
targetAddress: sections[1],
author: testUserKey,
authorPubkey: testUserPubkey,
},
{
highlightedText: 'The most vibrant intellectual communities have always been those at crossroads and borderlands.',
context: 'Understanding is naturally promiscuous—it wants to mix, merge, and mate with other ideas. It crosses boundaries not despite them but because of them. The most vibrant intellectual communities have always been those at crossroads and borderlands.',
comment: 'Historical examples: Renaissance Florence, Vienna Circle, Bell Labs',
highlightedText:
"The most vibrant intellectual communities have always been those at crossroads and borderlands.",
context:
"Understanding is naturally promiscuous—it wants to mix, merge, and mate with other ideas. It crosses boundaries not despite them but because of them. The most vibrant intellectual communities have always been those at crossroads and borderlands.",
comment:
"Historical examples: Renaissance Florence, Vienna Circle, Bell Labs",
targetAddress: sections[1],
author: testUser2Key,
authorPubkey: testUser2Pubkey,
},
{
highlightedText: 'institutions that try to monopolize understanding inevitably find themselves gatekeeping corpses',
context: 'But institutions that try to monopolize understanding inevitably find themselves gatekeeping corpses—the living knowledge has already escaped and is flourishing in unexpected places. By the time the gatekeepers notice, the game has moved.',
highlightedText:
"institutions that try to monopolize understanding inevitably find themselves gatekeeping corpses",
context:
"But institutions that try to monopolize understanding inevitably find themselves gatekeeping corpses—the living knowledge has already escaped and is flourishing in unexpected places. By the time the gatekeepers notice, the game has moved.",
comment: null,
targetAddress: sections[2],
author: testUserKey,
@@ -81,16 +96,18 @@ async function publishEvent(event, relayUrl) {
const ws = new WebSocket(relayUrl);
let published = false;
ws.on('open', () => {
ws.on("open", () => {
console.log(`Connected to ${relayUrl}`);
ws.send(JSON.stringify(['EVENT', event]));
ws.send(JSON.stringify(["EVENT", event]));
});
ws.on('message', (data) => {
ws.on("message", (data) => {
const message = JSON.parse(data.toString());
if (message[0] === 'OK' && message[1] === event.id) {
if (message[0] === "OK" && message[1] === event.id) {
if (message[2]) {
console.log(`✓ Published event ${event.id.substring(0, 8)} to ${relayUrl}`);
console.log(
`✓ Published event ${event.id.substring(0, 8)} to ${relayUrl}`,
);
published = true;
ws.close();
resolve();
@@ -102,14 +119,14 @@ async function publishEvent(event, relayUrl) {
}
});
ws.on('error', (error) => {
ws.on("error", (error) => {
console.error(`WebSocket error: ${error.message}`);
reject(error);
});
ws.on('close', () => {
ws.on("close", () => {
if (!published) {
reject(new Error('Connection closed before OK received'));
reject(new Error("Connection closed before OK received"));
}
});
@@ -117,14 +134,14 @@ async function publishEvent(event, relayUrl) {
setTimeout(() => {
if (!published) {
ws.close();
reject(new Error('Timeout'));
reject(new Error("Timeout"));
}
}, 10000);
});
}
async function createAndPublishHighlights() {
console.log('\n=== Creating Test Highlights ===\n');
console.log("\n=== Creating Test Highlights ===\n");
const publishedEvents = [];
@@ -138,28 +155,30 @@ async function createAndPublishHighlights() {
created_at: Math.floor(Date.now() / 1000),
tags: [
// Target section
['a', highlight.targetAddress, relays[0]],
["a", highlight.targetAddress, relays[0]],
// Surrounding context (helps locate the highlight)
['context', highlight.context],
["context", highlight.context],
// Original publication author
['p', publicationPubkey, relays[0], 'author'],
["p", publicationPubkey, relays[0], "author"],
],
content: highlight.highlightedText, // The actual highlighted text
content: highlight.highlightedText, // The actual highlighted text
pubkey: highlight.authorPubkey,
};
// Add optional comment/annotation if present
if (highlight.comment) {
unsignedEvent.tags.push(['comment', highlight.comment]);
unsignedEvent.tags.push(["comment", highlight.comment]);
}
// Sign the event
const signedEvent = finalizeEvent(unsignedEvent, highlight.author);
console.log(`\nCreating highlight on section:`);
console.log(` Highlighted: "${highlight.highlightedText.substring(0, 60)}..."`);
console.log(
` Highlighted: "${highlight.highlightedText.substring(0, 60)}..."`,
);
if (highlight.comment) {
console.log(` Comment: "${highlight.comment.substring(0, 60)}..."`);
}
@@ -171,16 +190,15 @@ async function createAndPublishHighlights() {
publishedEvents.push(signedEvent);
// Delay between publishes to avoid rate limiting
await new Promise(resolve => setTimeout(resolve, 1500));
await new Promise((resolve) => setTimeout(resolve, 1500));
} catch (error) {
console.error(`Failed to publish highlight: ${error.message}`);
}
}
console.log('\n=== Done! ===');
console.log("\n=== Done! ===");
console.log(`\nPublished ${publishedEvents.length} total highlights`);
console.log('\nRefresh the page to see the highlights.');
console.log("\nRefresh the page to see the highlights.");
console.log('Toggle "Show Highlights" to view them inline.');
}
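The highlight events assembled above follow the NIP-84 shape — presumably kind 9802, the NIP-84 highlight kind (the `kind` field sits outside the shown hunk): the highlighted text goes in `content`, with `a`, `context`, `p`, and an optional `comment` tag. A network-free sketch of the unsigned event construction, mirroring the script's field layout:

```javascript
// Assemble an unsigned NIP-84-style highlight event (kind 9802 assumed).
function buildHighlightEvent(
  { text, context, comment, targetAddress, publicationPubkey, authorPubkey, relay },
) {
  const tags = [
    ["a", targetAddress, relay], // target section
    ["context", context], // surrounding text, helps locate the highlight
    ["p", publicationPubkey, relay, "author"], // original publication author
  ];
  if (comment) tags.push(["comment", comment]); // optional annotation
  return {
    kind: 9802,
    created_at: Math.floor(Date.now() / 1000),
    tags,
    content: text, // the highlighted passage itself
    pubkey: authorPubkey,
  };
}
```

The script then passes this object to `finalizeEvent` to sign it before publishing.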

60
doc/compose_tree.md

@@ -2,33 +2,42 @@
## Overview
This document outlines the complete restart plan for implementing NKBIP-01 compliant hierarchical AsciiDoc parsing using proper Asciidoctor tree processor extensions.
This document outlines the complete restart plan for implementing NKBIP-01
compliant hierarchical AsciiDoc parsing using proper Asciidoctor tree processor
extensions.
## Current State Analysis
### Problems Identified
1. **Dual Architecture Conflict**: Two competing parsing implementations exist:
- `publication_tree_factory.ts` - AST-first approach (currently used)
- `publication_tree_extension.ts` - Extension approach (incomplete)
2. **Missing Proper Extension Registration**: Current code doesn't follow the official Asciidoctor extension pattern you provided
2. **Missing Proper Extension Registration**: Current code doesn't follow the
official Asciidoctor extension pattern you provided
3. **Incomplete NKBIP-01 Compliance**: Testing with `deep_hierarchy_test.adoc` may not produce the exact structures shown in `docreference.md`
3. **Incomplete NKBIP-01 Compliance**: Testing with `deep_hierarchy_test.adoc`
may not produce the exact structures shown in `docreference.md`
## NKBIP-01 Specification Summary
From `test_data/AsciidocFiles/docreference.md`:
### Event Types
- **30040**: Index events (collections/hierarchical containers)
- **30041**: Content events (actual article sections)
### Parse Level Behaviors
- **Level 2**: Only `==` sections → 30041 events (subsections included in content)
- **Level 3**: `==` → 30040 indices, `===` → 30041 content events
- **Level 2**: Only `==` sections → 30041 events (subsections included in
content)
- **Level 3**: `==` → 30040 indices, `===` → 30041 content events
- **Level 4+**: Full hierarchy with each level becoming separate events
### Key Rules
1. If a section has subsections at target level → becomes 30040 index
2. If no subsections at target level → becomes 30041 content event
3. Content inclusion: 30041 events include all content below parse level
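The kind-selection rules above can be sketched as a predicate — a sketch against this spec summary only, where `level` counts the `=` signs of the heading (so `==` is level 2):

```javascript
// NKBIP-01 kind selection: a section whose direct subsections fall
// within the parse level becomes a 30040 index (the subsections get
// their own events); otherwise it is a 30041 content event that keeps
// its subsections inline.
function kindForSection(section, parseLevel) {
  const childLevel = section.level + 1;
  const splitsChildren = section.subsections.length > 0 &&
    childLevel <= parseLevel;
  return splitsChildren ? 30040 : 30041;
}
```

This reproduces the level behaviors listed above: at parse level 2 a `==` section with `===` children stays a single 30041 event, while at parse level 3 the same section becomes a 30040 index whose `===` children are 30041 leaves.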
@@ -44,13 +53,13 @@ Following the pattern you provided:
// Extension registration pattern
module.exports = function (registry) {
registry.treeProcessor(function () {
var self = this
var self = this;
self.process(function (doc) {
// Process document and build PublicationTree
return doc
})
})
}
return doc;
});
});
};
```
### Implementation Components
@@ -80,11 +89,12 @@ export function registerPublicationTreeProcessor(
registry: Registry,
ndk: NDK,
parseLevel: number,
options?: ProcessorOptions
): { getResult: () => ProcessorResult | null }
options?: ProcessorOptions,
): { getResult: () => ProcessorResult | null };
```
**Key Features:**
- Follows Asciidoctor extension pattern exactly
- Builds events during AST traversal (not after)
- Preserves original AsciiDoc content in events
@@ -97,11 +107,12 @@ export function registerPublicationTreeProcessor(
export async function parseAsciiDocWithTree(
content: string,
ndk: NDK,
parseLevel: number = 2
): Promise<PublicationTreeResult>
parseLevel: number = 2,
): Promise<PublicationTreeResult>;
```
**Responsibilities:**
- Create Asciidoctor instance
- Register tree processor extension
- Execute parsing with extension
@@ -111,6 +122,7 @@ export async function parseAsciiDocWithTree(
### Phase 3: ZettelEditor Integration
**Changes to `ZettelEditor.svelte`:**
- Replace `createPublicationTreeFromContent()` calls
- Use new `parseAsciiDocWithTree()` function
- Maintain existing preview/publishing interface
@@ -119,6 +131,7 @@ export async function parseAsciiDocWithTree(
### Phase 4: Validation Testing
**Test Suite:**
1. Parse `deep_hierarchy_test.adoc` at levels 2-7
2. Verify event structures match `docreference.md` examples
3. Validate content preservation and tag inheritance
@@ -127,23 +140,29 @@ export async function parseAsciiDocWithTree(
## File Organization
### Files to Create
1. `src/lib/utils/publication_tree_processor.ts` - Core tree processor extension
2. `src/lib/utils/asciidoc_publication_parser.ts` - Unified parser interface
3. `tests/unit/publication_tree_processor.test.ts` - Comprehensive test suite
### Files to Modify
1. `src/lib/components/ZettelEditor.svelte` - Update parsing calls
2. `src/routes/new/compose/+page.svelte` - Verify integration works
### Files to Remove (After Validation)
1. `src/lib/utils/publication_tree_factory.ts` - Replace with processor
2. `src/lib/utils/publication_tree_extension.ts` - Merge concepts into processor
## Success Criteria
1. **NKBIP-01 Compliance**: All parse levels produce structures exactly matching `docreference.md`
2. **Content Preservation**: Original AsciiDoc content preserved in events (not converted to HTML)
3. **Proper Extension Pattern**: Uses official Asciidoctor tree processor registration
1. **NKBIP-01 Compliance**: All parse levels produce structures exactly matching
`docreference.md`
2. **Content Preservation**: Original AsciiDoc content preserved in events (not
converted to HTML)
3. **Proper Extension Pattern**: Uses official Asciidoctor tree processor
registration
4. **Zero Regression**: Current ZettelEditor functionality unchanged
5. **Performance**: No degradation in parsing or preview speed
6. **Test Coverage**: Comprehensive validation with `deep_hierarchy_test.adoc`
@@ -152,7 +171,7 @@ export async function parseAsciiDocWithTree(
1. **Study & Plan** ✓ (Current phase)
2. **Implement Core Processor** - Create `publication_tree_processor.ts`
3. **Build Unified Interface** - Create `asciidoc_publication_parser.ts`
3. **Build Unified Interface** - Create `asciidoc_publication_parser.ts`
4. **Integrate with ZettelEditor** - Update parsing calls
5. **Validate with Test Documents** - Verify NKBIP-01 compliance
6. **Clean Up Legacy Code** - Remove old implementations
@@ -169,5 +188,6 @@ export async function parseAsciiDocWithTree(
- NKBIP-01 Specification: `test_data/AsciidocFiles/docreference.md`
- Test Document: `test_data/AsciidocFiles/deep_hierarchy_test.adoc`
- Asciidoctor Extensions: [Official Documentation](https://docs.asciidoctor.org/asciidoctor.js/latest/extend/extensions/)
- Current Implementation: `src/lib/components/ZettelEditor.svelte:64`
- Asciidoctor Extensions:
[Official Documentation](https://docs.asciidoctor.org/asciidoctor.js/latest/extend/extensions/)
- Current Implementation: `src/lib/components/ZettelEditor.svelte:64`

54
nips/09.md

@@ -1,14 +1,16 @@
NIP-09
======
# NIP-09
Event Deletion Request
----------------------
## Event Deletion Request
`draft` `optional`
A special event with kind `5`, meaning "deletion request" is defined as having a list of one or more `e` or `a` tags, each referencing an event the author is requesting to be deleted. Deletion requests SHOULD include a `k` tag for the kind of each event being requested for deletion.
A special event with kind `5`, meaning "deletion request" is defined as having a
list of one or more `e` or `a` tags, each referencing an event the author is
requesting to be deleted. Deletion requests SHOULD include a `k` tag for the
kind of each event being requested for deletion.
The event's `content` field MAY contain a text note describing the reason for the deletion request.
The event's `content` field MAY contain a text note describing the reason for
the deletion request.
For example:
@@ -28,26 +30,48 @@ For example:
}
```
Relays SHOULD delete or stop publishing any referenced events that have an identical `pubkey` as the deletion request. Clients SHOULD hide or otherwise indicate a deletion request status for referenced events.
Relays SHOULD delete or stop publishing any referenced events that have an
identical `pubkey` as the deletion request. Clients SHOULD hide or otherwise
indicate a deletion request status for referenced events.
Relays SHOULD continue to publish/share the deletion request events indefinitely, as clients may already have the event that's intended to be deleted. Additionally, clients SHOULD broadcast deletion request events to other relays which don't have it.
Relays SHOULD continue to publish/share the deletion request events
indefinitely, as clients may already have the event that's intended to be
deleted. Additionally, clients SHOULD broadcast deletion request events to other
relays which don't have it.
When an `a` tag is used, relays SHOULD delete all versions of the replaceable event up to the `created_at` timestamp of the deletion request event.
When an `a` tag is used, relays SHOULD delete all versions of the replaceable
event up to the `created_at` timestamp of the deletion request event.
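A relay-side sketch of that cutoff rule (the version and request data shapes are assumed):

```javascript
// NIP-09 'a'-tag rule: drop every version of a replaceable event whose
// created_at is at or before the deletion request's timestamp; newer
// versions survive.
function survivingVersions(versions, deletionRequest) {
  return versions.filter((v) => v.created_at > deletionRequest.created_at);
}
```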
## Client Usage
Clients MAY choose to fully hide any events that are referenced by valid deletion request events. This includes text notes, direct messages, or other yet-to-be defined event kinds. Alternatively, they MAY show the event along with an icon or other indication that the author has "disowned" the event. The `content` field MAY also be used to replace the deleted events' own content, although a user interface should clearly indicate that this is a deletion request reason, not the original content.
Clients MAY choose to fully hide any events that are referenced by valid
deletion request events. This includes text notes, direct messages, or other
yet-to-be defined event kinds. Alternatively, they MAY show the event along with
an icon or other indication that the author has "disowned" the event. The
`content` field MAY also be used to replace the deleted events' own content,
although a user interface should clearly indicate that this is a deletion
request reason, not the original content.
A client MUST validate that each event `pubkey` referenced in the `e` tag of the deletion request is identical to the deletion request `pubkey`, before hiding or deleting any event. Relays can not, in general, perform this validation and should not be treated as authoritative.
A client MUST validate that each event `pubkey` referenced in the `e` tag of the
deletion request is identical to the deletion request `pubkey`, before hiding or
deleting any event. Relays can not, in general, perform this validation and
should not be treated as authoritative.
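The MUST above reduces to a pubkey comparison before honoring any `e` reference — a minimal client-side sketch:

```javascript
// NIP-09 client check: honor a deletion request for an event only when
// the request author owns the event and the request actually
// references it by id.
function shouldHideEvent(event, deletionRequest) {
  return deletionRequest.kind === 5 &&
    deletionRequest.pubkey === event.pubkey &&
    deletionRequest.tags.some((t) => t[0] === "e" && t[1] === event.id);
}
```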
Clients display the deletion request event itself in any way they choose, e.g., not at all, or with a prominent notice.
Clients display the deletion request event itself in any way they choose, e.g.,
not at all, or with a prominent notice.
Clients MAY choose to inform the user that their request for deletion does not guarantee deletion because it is impossible to delete events from all relays and clients.
Clients MAY choose to inform the user that their request for deletion does not
guarantee deletion because it is impossible to delete events from all relays and
clients.
## Relay Usage
Relays MAY validate that a deletion request event only references events that have the same `pubkey` as the deletion request itself, however this is not required since relays may not have knowledge of all referenced events.
Relays MAY validate that a deletion request event only references events that
have the same `pubkey` as the deletion request itself, however this is not
required since relays may not have knowledge of all referenced events.
## Deletion Request of a Deletion Request
Publishing a deletion request event against a deletion request has no effect. Clients and relays are not obliged to support "unrequest deletion" functionality.
Publishing a deletion request event against a deletion request has no effect.
Clients and relays are not obliged to support "unrequest deletion"
functionality.

14
src/lib/services/deletion.ts

@@ -25,7 +25,8 @@ export async function deleteEvent(
options: DeletionOptions,
ndk: NDK,
): Promise<DeletionResult> {
const { eventId, eventAddress, eventKind, reason = "", onSuccess, onError } = options;
const { eventId, eventAddress, eventKind, reason = "", onSuccess, onError } =
options;
if (!eventId && !eventAddress) {
const error = "Either eventId or eventAddress must be provided";
@@ -52,17 +53,17 @@ export async function deleteEvent(
if (eventId) {
// Add 'e' tag for event ID
tags.push(['e', eventId]);
tags.push(["e", eventId]);
}
if (eventAddress) {
// Add 'a' tag for replaceable event address
tags.push(['a', eventAddress]);
tags.push(["a", eventAddress]);
}
if (eventKind) {
// Add 'k' tag for event kind (recommended by NIP-09)
tags.push(['k', eventKind.toString()]);
tags.push(["k", eventKind.toString()]);
}
deletionEvent.tags = tags;
@@ -93,8 +94,9 @@ export async function deleteEvent(
throw new Error("Failed to publish deletion request to any relays");
}
} catch (error) {
const errorMessage =
error instanceof Error ? error.message : "Unknown error";
const errorMessage = error instanceof Error
? error.message
: "Unknown error";
console.error(`[deletion.ts] Error deleting event: ${errorMessage}`);
onError?.(errorMessage);
return { success: false, error: errorMessage };

29
src/lib/services/publisher.ts

@@ -102,8 +102,9 @@ export async function publishZettel(
throw new Error("Failed to publish to any relays");
}
} catch (error) {
const errorMessage =
error instanceof Error ? error.message : "Unknown error";
const errorMessage = error instanceof Error
? error.message
: "Unknown error";
onError?.(errorMessage);
return { success: false, error: errorMessage };
}
@@ -165,8 +166,7 @@ export async function publishSingleEvent(
if (!hasAuthorTag && ndk.activeUser) {
// Add display name as author
const displayName =
ndk.activeUser.profile?.displayName ||
const displayName = ndk.activeUser.profile?.displayName ||
ndk.activeUser.profile?.name ||
"Anonymous";
finalTags.push(["author", displayName]);
@@ -196,8 +196,9 @@ export async function publishSingleEvent(
throw new Error("Failed to publish to any relays");
}
} catch (error) {
const errorMessage =
error instanceof Error ? error.message : "Unknown error";
const errorMessage = error instanceof Error
? error.message
: "Unknown error";
console.error(`Error publishing event: ${errorMessage}`);
onError?.(errorMessage);
return { success: false, error: errorMessage };
@@ -272,15 +273,17 @@ export async function publishMultipleZettels(
});
}
} catch (err) {
const errorMessage =
err instanceof Error ? err.message : "Unknown error";
const errorMessage = err instanceof Error
? err.message
: "Unknown error";
results.push({ success: false, error: errorMessage });
}
}
return results;
} catch (error) {
const errorMessage =
error instanceof Error ? error.message : "Unknown error";
const errorMessage = error instanceof Error
? error.message
: "Unknown error";
onError?.(errorMessage);
return [{ success: false, error: errorMessage }];
}
@@ -314,8 +317,7 @@ export function processPublishResults(
} else {
const contentIndex = hasIndexEvent ? index - 1 : index;
const contentEvent = events.contentEvents[contentIndex];
title =
contentEvent?.title ||
title = contentEvent?.title ||
contentEvent?.tags?.find((t: any) => t[0] === "title")?.[1] ||
`Note ${contentIndex + 1}`;
}
@@ -338,8 +340,7 @@ export function processPublishResults(
} else {
const contentIndex = hasIndexEvent ? index - 1 : index;
const contentEvent = events.contentEvents[contentIndex];
title =
contentEvent?.title ||
title = contentEvent?.title ||
contentEvent?.tags?.find((t: any) => t[0] === "title")?.[1] ||
`Note ${contentIndex + 1}`;
}

136
src/lib/utils/asciidoc_ast_parser.ts

@@ -1,6 +1,6 @@
/**
* AST-based AsciiDoc parsing using Asciidoctor's native document structure
*
*
* This replaces the manual regex parsing in asciidoc_metadata.ts with proper
* AST traversal, leveraging Asciidoctor's built-in parsing capabilities.
*/
@@ -30,27 +30,33 @@ export interface ASTParsedDocument {
/**
* Parse AsciiDoc content using Asciidoctor's AST instead of manual regex
*/
export function parseAsciiDocAST(content: string, parseLevel: number = 2): ASTParsedDocument {
export function parseAsciiDocAST(
content: string,
parseLevel: number = 2,
): ASTParsedDocument {
const asciidoctor = Processor();
const document = asciidoctor.load(content, { standalone: false }) as Document;
return {
title: document.getTitle() || '',
content: document.getContent() || '',
title: document.getTitle() || "",
content: document.getContent() || "",
attributes: document.getAttributes(),
sections: extractSectionsFromAST(document, parseLevel)
sections: extractSectionsFromAST(document, parseLevel),
};
}
/**
* Extract sections from Asciidoctor AST based on parse level
*/
function extractSectionsFromAST(document: Document, parseLevel: number): ASTSection[] {
function extractSectionsFromAST(
document: Document,
parseLevel: number,
): ASTSection[] {
const directSections = document.getSections();
// Collect all sections at all levels up to parseLevel
const allSections: ASTSection[] = [];
function collectSections(sections: any[]) {
for (const section of sections) {
const asciidoctorLevel = section.getLevel();
@@ -58,17 +64,17 @@ function extractSectionsFromAST(document: Document, parseLevel: number): ASTSect
// Asciidoctor: == is level 1, === is level 2, etc.
// Our app: == is level 2, === is level 3, etc.
const appLevel = asciidoctorLevel + 1;
if (appLevel <= parseLevel) {
allSections.push({
title: section.getTitle() || '',
content: section.getContent() || '',
title: section.getTitle() || "",
content: section.getContent() || "",
level: appLevel,
attributes: section.getAttributes() || {},
subsections: []
subsections: [],
});
}
// Recursively collect subsections
const subsections = section.getSections?.() || [];
if (subsections.length > 0) {
@@ -76,9 +82,9 @@ function extractSectionsFromAST(document: Document, parseLevel: number): ASTSect
}
}
}
collectSections(directSections);
return allSections;
}
@@ -87,15 +93,15 @@ function extractSectionsFromAST(document: Document, parseLevel: number): ASTSect
*/
function extractSubsections(section: any, parseLevel: number): ASTSection[] {
const subsections = section.getSections?.() || [];
return subsections
.filter((sub: any) => (sub.getLevel() + 1) <= parseLevel)
.map((sub: any) => ({
title: sub.getTitle() || '',
content: sub.getContent() || '',
title: sub.getTitle() || "",
content: sub.getContent() || "",
level: sub.getLevel() + 1, // Convert to app level
attributes: sub.getAttributes() || {},
subsections: extractSubsections(sub, parseLevel)
subsections: extractSubsections(sub, parseLevel),
}));
}
@@ -130,7 +136,10 @@ export async function createPublicationTreeFromAST(
/**
* Create a 30040 index event from AST document metadata
*/
function createIndexEventFromAST(parsed: ASTParsedDocument, ndk: NDK): NDKEvent {
function createIndexEventFromAST(
parsed: ASTParsedDocument,
ndk: NDK,
): NDKEvent {
const event = new NDKEvent(ndk);
event.kind = 30040;
event.created_at = Math.floor(Date.now() / 1000);
@@ -251,29 +260,63 @@ function generateTitleAbbreviation(title: string): string {
/**
* Add AsciiDoc attributes as Nostr event tags, filtering out system attributes
*/
function addAttributesAsTags(tags: string[][], attributes: Record<string, string>) {
function addAttributesAsTags(
tags: string[][],
attributes: Record<string, string>,
) {
const systemAttributes = [
'attribute-undefined', 'attribute-missing', 'appendix-caption', 'appendix-refsig',
'caution-caption', 'chapter-refsig', 'example-caption', 'figure-caption',
'important-caption', 'last-update-label', 'manname-title', 'note-caption',
'part-refsig', 'preface-title', 'section-refsig', 'table-caption',
'tip-caption', 'toc-title', 'untitled-label', 'version-label', 'warning-caption',
'asciidoctor', 'asciidoctor-version', 'safe-mode-name', 'backend', 'doctype',
'basebackend', 'filetype', 'outfilesuffix', 'stylesdir', 'iconsdir',
'localdate', 'localyear', 'localtime', 'localdatetime', 'docdate',
'docyear', 'doctime', 'docdatetime', 'doctitle', 'embedded', 'notitle'
"attribute-undefined",
"attribute-missing",
"appendix-caption",
"appendix-refsig",
"caution-caption",
"chapter-refsig",
"example-caption",
"figure-caption",
"important-caption",
"last-update-label",
"manname-title",
"note-caption",
"part-refsig",
"preface-title",
"section-refsig",
"table-caption",
"tip-caption",
"toc-title",
"untitled-label",
"version-label",
"warning-caption",
"asciidoctor",
"asciidoctor-version",
"safe-mode-name",
"backend",
"doctype",
"basebackend",
"filetype",
"outfilesuffix",
"stylesdir",
"iconsdir",
"localdate",
"localyear",
"localtime",
"localdatetime",
"docdate",
"docyear",
"doctime",
"docdatetime",
"doctitle",
"embedded",
"notitle",
];
// Add standard metadata tags
if (attributes.author) tags.push(["author", attributes.author]);
if (attributes.version) tags.push(["version", attributes.version]);
if (attributes.description) tags.push(["summary", attributes.description]);
if (attributes.tags) {
attributes.tags.split(',').forEach(tag =>
tags.push(["t", tag.trim()])
);
attributes.tags.split(",").forEach((tag) => tags.push(["t", tag.trim()]));
}
// Add custom attributes (non-system)
Object.entries(attributes).forEach(([key, value]) => {
if (!systemAttributes.includes(key) && value) {
@@ -286,14 +329,21 @@ function addAttributesAsTags(tags: string[][], attributes: Record<string, string
* Tree processor extension for Asciidoctor
* This can be registered to automatically populate PublicationTree during parsing
*/
export function createPublicationTreeProcessor(ndk: NDK, parseLevel: number = 2) {
return function(extensions: any) {
extensions.treeProcessor(function(this: any) {
export function createPublicationTreeProcessor(
ndk: NDK,
parseLevel: number = 2,
) {
return function (extensions: any) {
extensions.treeProcessor(function (this: any) {
const dsl = this;
dsl.process(function(this: any, document: Document) {
dsl.process(function (this: any, document: Document) {
// Create PublicationTree and store on document for later retrieval
const publicationTree = createPublicationTreeFromDocument(document, ndk, parseLevel);
document.setAttribute('publicationTree', publicationTree);
const publicationTree = createPublicationTreeFromDocument(
document,
ndk,
parseLevel,
);
document.setAttribute("publicationTree", publicationTree);
});
});
};
@@ -327,4 +377,4 @@ async function createPublicationTreeFromDocument(
}
return tree;
}
}

13
src/lib/utils/asciidoc_parser.ts

@@ -9,9 +9,9 @@
import Processor from "asciidoctor";
import type { Document } from "asciidoctor";
import {
parseSimpleAttributes,
extractDocumentMetadata,
extractSectionMetadata,
parseSimpleAttributes,
} from "./asciidoc_metadata.ts";
export interface ParsedAsciiDoc {
@@ -418,8 +418,7 @@ export function generateNostrEvents(
const hasChildrenAtTargetLevel = children.some(
(child) => child.level === parseLevel,
);
const shouldBeIndex =
level < parseLevel &&
const shouldBeIndex = level < parseLevel &&
(hasChildrenAtTargetLevel ||
children.some((child) => child.level <= parseLevel));
@@ -461,8 +460,8 @@ export function generateNostrEvents(
const childHasSubChildren = child.children.some(
(grandchild) => grandchild.level <= parseLevel,
);
const childShouldBeIndex =
child.level < parseLevel && childHasSubChildren;
const childShouldBeIndex = child.level < parseLevel &&
childHasSubChildren;
const childKind = childShouldBeIndex ? 30040 : 30041;
childATags.push([
"a",
@@ -563,8 +562,8 @@ export function generateNostrEvents(
export function detectContentType(
content: string,
): "article" | "scattered-notes" | "none" {
const hasDocTitle =
content.trim().startsWith("=") && !content.trim().startsWith("==");
const hasDocTitle = content.trim().startsWith("=") &&
!content.trim().startsWith("==");
const hasSections = content.includes("==");
if (hasDocTitle) {

84
src/lib/utils/asciidoc_publication_parser.ts

@@ -1,15 +1,18 @@
/**
* Unified AsciiDoc Publication Parser
*
*
* Single entry point for parsing AsciiDoc content into NKBIP-01 compliant
* publication trees using proper Asciidoctor tree processor extensions.
*
*
* This implements Michael's vision of using PublicationTree as the primary
* data structure for organizing hierarchical Nostr events.
*/
import Asciidoctor from "asciidoctor";
import { registerPublicationTreeProcessor, type ProcessorResult } from "./publication_tree_processor";
import {
type ProcessorResult,
registerPublicationTreeProcessor,
} from "./publication_tree_processor";
import type NDK from "@nostr-dev-kit/ndk";
export type PublicationTreeResult = ProcessorResult;
@@ -21,51 +24,54 @@ export type PublicationTreeResult = ProcessorResult;
export async function parseAsciiDocWithTree(
content: string,
ndk: NDK,
parseLevel: number = 2
parseLevel: number = 2,
): Promise<PublicationTreeResult> {
console.log(`[Parser] Starting parse at level ${parseLevel}`);
// Create fresh Asciidoctor instance
const asciidoctor = Asciidoctor();
const registry = asciidoctor.Extensions.create();
// Register our tree processor extension
const processorAccessor = registerPublicationTreeProcessor(
registry,
ndk,
parseLevel,
content
registry,
ndk,
parseLevel,
content,
);
try {
// Parse the document with our extension
const doc = asciidoctor.load(content, {
extension_registry: registry,
standalone: false,
attributes: {
sectids: false
}
sectids: false,
},
});
console.log(`[Parser] Document converted successfully`);
// Get the result from our processor
const result = processorAccessor.getResult();
if (!result) {
throw new Error("Tree processor failed to generate result");
}
// Build async relationships in the PublicationTree
await buildTreeRelationships(result);
console.log(`[Parser] Tree relationships built successfully`);
return result;
} catch (error) {
console.error('[Parser] Error during parsing:', error);
throw new Error(`Failed to parse AsciiDoc content: ${error instanceof Error ? error.message : 'Unknown error'}`);
console.error("[Parser] Error during parsing:", error);
throw new Error(
`Failed to parse AsciiDoc content: ${
error instanceof Error ? error.message : "Unknown error"
}`,
);
}
}
@@ -75,11 +81,11 @@ export async function parseAsciiDocWithTree(
*/
async function buildTreeRelationships(result: ProcessorResult): Promise<void> {
const { tree, indexEvent, contentEvents } = result;
if (!tree) {
throw new Error("No tree available to build relationships");
}
try {
// Add content events to the tree
if (indexEvent && contentEvents.length > 0) {
@@ -94,11 +100,10 @@ async function buildTreeRelationships(result: ProcessorResult): Promise<void> {
await tree.addEvent(contentEvents[i], rootEvent);
}
}
console.log(`[Parser] Added ${contentEvents.length} events to tree`);
} catch (error) {
console.error('[Parser] Error building tree relationships:', error);
console.error("[Parser] Error building tree relationships:", error);
throw error;
}
}
@@ -108,8 +113,10 @@ async function buildTreeRelationships(result: ProcessorResult): Promise<void> {
*/
export function exportEventsFromTree(result: PublicationTreeResult) {
return {
indexEvent: result.indexEvent ? eventToPublishableObject(result.indexEvent) : undefined,
contentEvents: result.contentEvents.map(eventToPublishableObject)
indexEvent: result.indexEvent
? eventToPublishableObject(result.indexEvent)
: undefined,
contentEvents: result.contentEvents.map(eventToPublishableObject),
// Note: Deliberately omitting 'tree' to ensure the object is serializable for postMessage
};
}
@@ -122,14 +129,17 @@ function eventToPublishableObject(event: any) {
// Extract only primitive values to ensure serializability
return {
kind: Number(event.kind),
content: String(event.content || ''),
tags: Array.isArray(event.tags) ? event.tags.map((tag: any) =>
Array.isArray(tag) ? tag.map(t => String(t)) : []
) : [],
content: String(event.content || ""),
tags: Array.isArray(event.tags)
? event.tags.map((tag: any) =>
Array.isArray(tag) ? tag.map((t) => String(t)) : []
)
: [],
created_at: Number(event.created_at || Math.floor(Date.now() / 1000)),
pubkey: String(event.pubkey || ''),
id: String(event.id || ''),
title: event.tags?.find?.((t: string[]) => t[0] === "title")?.[1] || "Untitled"
pubkey: String(event.pubkey || ""),
id: String(event.id || ""),
title: event.tags?.find?.((t: string[]) => t[0] === "title")?.[1] ||
"Untitled",
};
}
@ -145,4 +155,4 @@ export function validateParseLevel(level: number): boolean { @@ -145,4 +155,4 @@ export function validateParseLevel(level: number): boolean {
*/
export function getSupportedParseLevels(): number[] {
return [2, 3, 4, 5];
}
}

4
src/lib/utils/event_input_utils.ts

@@ -5,9 +5,7 @@ import {
extractDocumentMetadata,
metadataToTags,
} from "./asciidoc_metadata.ts";
import {
parseAsciiDocWithMetadata,
} from "./asciidoc_parser.ts";
import { parseAsciiDocWithMetadata } from "./asciidoc_parser.ts";
// =========================
// Validation

2
src/lib/utils/fetch_publication_highlights.ts

@@ -19,7 +19,7 @@ import { NDKEvent } from "@nostr-dev-kit/ndk";
*/
export async function fetchHighlightsForPublication(
publicationEvent: NDKEvent,
ndk: NDK
ndk: NDK,
): Promise<Map<string, NDKEvent[]>> {
// Extract all "a" tags from the publication event
const aTags = publicationEvent.getMatchingTags("a");

61
src/lib/utils/highlightPositioning.ts

@@ -17,7 +17,9 @@ function getTextNodes(element: HTMLElement): Text[] {
acceptNode: (node) => {
// Skip text in script/style tags
const parent = node.parentElement;
if (parent && (parent.tagName === 'SCRIPT' || parent.tagName === 'STYLE')) {
if (
parent && (parent.tagName === "SCRIPT" || parent.tagName === "STYLE")
) {
return NodeFilter.FILTER_REJECT;
}
// Skip empty text nodes
@@ -25,8 +27,8 @@ function getTextNodes(element: HTMLElement): Text[] {
return NodeFilter.FILTER_REJECT;
}
return NodeFilter.FILTER_ACCEPT;
}
}
},
},
);
let node: Node | null;
@@ -41,7 +43,10 @@ function getTextNodes(element: HTMLElement): Text[] {
* Calculate the total text length from text nodes
*/
function getTotalTextLength(textNodes: Text[]): number {
return textNodes.reduce((total, node) => total + (node.textContent?.length || 0), 0);
return textNodes.reduce(
(total, node) => total + (node.textContent?.length || 0),
0,
);
}
/**
@@ -49,7 +54,7 @@ function getTotalTextLength(textNodes: Text[]): number {
*/
function findNodeAtOffset(
textNodes: Text[],
globalOffset: number
globalOffset: number,
): { node: Text; localOffset: number } | null {
let currentOffset = 0;
@@ -59,7 +64,7 @@ function findNodeAtOffset(
if (globalOffset < currentOffset + nodeLength) {
return {
node,
localOffset: globalOffset - currentOffset
localOffset: globalOffset - currentOffset,
};
}
@@ -82,13 +87,17 @@ export function highlightByOffset(
container: HTMLElement,
startOffset: number,
endOffset: number,
color: string
color: string,
): boolean {
console.log(`[highlightByOffset] Attempting to highlight chars ${startOffset}-${endOffset}`);
console.log(
`[highlightByOffset] Attempting to highlight chars ${startOffset}-${endOffset}`,
);
// Validate inputs
if (startOffset < 0 || endOffset <= startOffset) {
console.warn(`[highlightByOffset] Invalid offsets: ${startOffset}-${endOffset}`);
console.warn(
`[highlightByOffset] Invalid offsets: ${startOffset}-${endOffset}`,
);
return false;
}
@@ -100,11 +109,15 @@ export function highlightByOffset(
}
const totalLength = getTotalTextLength(textNodes);
console.log(`[highlightByOffset] Total text length: ${totalLength}, nodes: ${textNodes.length}`);
console.log(
`[highlightByOffset] Total text length: ${totalLength}, nodes: ${textNodes.length}`,
);
// Validate offsets are within bounds
if (startOffset >= totalLength) {
console.warn(`[highlightByOffset] Start offset ${startOffset} exceeds total length ${totalLength}`);
console.warn(
`[highlightByOffset] Start offset ${startOffset} exceeds total length ${totalLength}`,
);
return false;
}
@@ -124,16 +137,16 @@ export function highlightByOffset(
startNode: startPos.node.textContent?.substring(0, 20),
startLocal: startPos.localOffset,
endNode: endPos.node.textContent?.substring(0, 20),
endLocal: endPos.localOffset
endLocal: endPos.localOffset,
});
// Create the highlight mark element
const createHighlightMark = (text: string): HTMLElement => {
const mark = document.createElement('mark');
mark.className = 'highlight';
const mark = document.createElement("mark");
mark.className = "highlight";
mark.style.backgroundColor = color;
mark.style.borderRadius = '2px';
mark.style.padding = '2px 0';
mark.style.borderRadius = "2px";
mark.style.padding = "2px 0";
mark.textContent = text;
return mark;
};
@@ -141,9 +154,12 @@ export function highlightByOffset(
try {
// Case 1: Highlight is within a single text node
if (startPos.node === endPos.node) {
const text = startPos.node.textContent || '';
const text = startPos.node.textContent || "";
const before = text.substring(0, startPos.localOffset);
const highlighted = text.substring(startPos.localOffset, endPos.localOffset);
const highlighted = text.substring(
startPos.localOffset,
endPos.localOffset,
);
const after = text.substring(endPos.localOffset);
const parent = startPos.node.parentNode;
@@ -156,7 +172,9 @@ export function highlightByOffset(
if (after) fragment.appendChild(document.createTextNode(after));
parent.replaceChild(fragment, startPos.node);
console.log(`[highlightByOffset] Applied single-node highlight: "${highlighted}"`);
console.log(
`[highlightByOffset] Applied single-node highlight: "${highlighted}"`,
);
return true;
}
@@ -169,7 +187,7 @@ export function highlightByOffset(
const parent = currentNode.parentNode;
if (!parent) break;
const text = currentNode.textContent || '';
const text = currentNode.textContent || "";
let fragment = document.createDocumentFragment();
if (isFirstNode) {
@@ -200,7 +218,6 @@ export function highlightByOffset(
console.log(`[highlightByOffset] Applied multi-node highlight`);
return true;
} catch (err) {
console.error(`[highlightByOffset] Error applying highlight:`, err);
return false;
@@ -213,7 +230,7 @@ export function highlightByOffset(
*/
export function getPlainText(element: HTMLElement): string {
const textNodes = getTextNodes(element);
return textNodes.map(node => node.textContent).join('');
return textNodes.map((node) => node.textContent).join("");
}
/**

189
src/lib/utils/highlightUtils.ts

@@ -6,26 +6,28 @@ import type { NDKEvent } from "@nostr-dev-kit/ndk";
import { nip19 } from "nostr-tools";
export interface GroupedHighlight {
pubkey: string;
highlights: NDKEvent[];
count: number;
pubkey: string;
highlights: NDKEvent[];
count: number;
}
/**
* Groups highlights by author pubkey
* Returns a Map with pubkey as key and array of highlights as value
*/
export function groupHighlightsByAuthor(highlights: NDKEvent[]): Map<string, NDKEvent[]> {
const grouped = new Map<string, NDKEvent[]>();
for (const highlight of highlights) {
const pubkey = highlight.pubkey;
const existing = grouped.get(pubkey) || [];
existing.push(highlight);
grouped.set(pubkey, existing);
}
return grouped;
export function groupHighlightsByAuthor(
highlights: NDKEvent[],
): Map<string, NDKEvent[]> {
const grouped = new Map<string, NDKEvent[]>();
for (const highlight of highlights) {
const pubkey = highlight.pubkey;
const existing = grouped.get(pubkey) || [];
existing.push(highlight);
grouped.set(pubkey, existing);
}
return grouped;
}
/**
@@ -34,21 +36,24 @@ export function groupHighlightsByAuthor(highlights: NDKEvent[]): Map<string, NDK
* @param maxLength - Maximum length (default: 50)
* @returns Truncated text with ellipsis if needed
*/
export function truncateHighlight(text: string, maxLength: number = 50): string {
if (!text || text.length <= maxLength) {
return text;
}
export function truncateHighlight(
text: string,
maxLength: number = 50,
): string {
if (!text || text.length <= maxLength) {
return text;
}
// Find the last space before maxLength
const truncated = text.slice(0, maxLength);
const lastSpace = truncated.lastIndexOf(" ");
// Find the last space before maxLength
const truncated = text.slice(0, maxLength);
const lastSpace = truncated.lastIndexOf(" ");
// If there's a space, break there; otherwise use the full maxLength
if (lastSpace > 0) {
return truncated.slice(0, lastSpace) + "...";
}
// If there's a space, break there; otherwise use the full maxLength
if (lastSpace > 0) {
return truncated.slice(0, lastSpace) + "...";
}
return truncated + "...";
return truncated + "...";
}
/**
@@ -57,26 +62,29 @@ export function truncateHighlight(text: string, maxLength: number = 50): string
* @param relays - Array of relay URLs to include as hints
* @returns naddr string
*/
export function encodeHighlightNaddr(event: NDKEvent, relays: string[] = []): string {
try {
// For kind 9802 highlights, we need the event's unique identifier
// Since highlights don't have a d-tag, we'll use the event id as nevent instead
// But per NIP-19, naddr is for addressable events (with d-tag)
// For non-addressable events like kind 9802, we should use nevent
const nevent = nip19.neventEncode({
id: event.id,
relays: relays.length > 0 ? relays : undefined,
author: event.pubkey,
kind: event.kind,
});
return nevent;
} catch (error) {
console.error("Error encoding highlight naddr:", error);
// Fallback to just the event id
return event.id;
}
export function encodeHighlightNaddr(
event: NDKEvent,
relays: string[] = [],
): string {
try {
// For kind 9802 highlights, we need the event's unique identifier
// Since highlights don't have a d-tag, we'll use the event id as nevent instead
// But per NIP-19, naddr is for addressable events (with d-tag)
// For non-addressable events like kind 9802, we should use nevent
const nevent = nip19.neventEncode({
id: event.id,
relays: relays.length > 0 ? relays : undefined,
author: event.pubkey,
kind: event.kind,
});
return nevent;
} catch (error) {
console.error("Error encoding highlight naddr:", error);
// Fallback to just the event id
return event.id;
}
}
/**
@@ -86,22 +94,22 @@ export function encodeHighlightNaddr(event: NDKEvent, relays: string[] = []): st
* @returns Shortened npub like "npub1abc...xyz"
*/
export function shortenNpub(pubkey: string, length: number = 8): string {
try {
const npub = nip19.npubEncode(pubkey);
// npub format: "npub1" + bech32 encoded data
// Show first part and last part
if (npub.length <= length + 10) {
return npub;
}
const start = npub.slice(0, length + 5); // "npub1" + first chars
const end = npub.slice(-4); // last chars
return `${start}...${end}`;
} catch (error) {
console.error("Error creating shortened npub:", error);
// Fallback to shortened hex
return `${pubkey.slice(0, 8)}...${pubkey.slice(-4)}`;
}
try {
const npub = nip19.npubEncode(pubkey);
// npub format: "npub1" + bech32 encoded data
// Show first part and last part
if (npub.length <= length + 10) {
return npub;
}
const start = npub.slice(0, length + 5); // "npub1" + first chars
const end = npub.slice(-4); // last chars
return `${start}...${end}`;
} catch (error) {
console.error("Error creating shortened npub:", error);
// Fallback to shortened hex
return `${pubkey.slice(0, 8)}...${pubkey.slice(-4)}`;
}
}
/**
@@ -110,22 +118,22 @@ export function shortenNpub(pubkey: string, length: number = 8): string {
* @returns Array of relay URLs
*/
export function getRelaysFromHighlight(event: NDKEvent): string[] {
const relays: string[] = [];
// Check for relay hints in tags (e.g., ["a", "30041:pubkey:id", "relay-url"])
for (const tag of event.tags) {
if ((tag[0] === "a" || tag[0] === "e" || tag[0] === "p") && tag[2]) {
relays.push(tag[2]);
}
}
// Also include relay from the event if available
if (event.relay?.url) {
relays.push(event.relay.url);
}
// Deduplicate
return [...new Set(relays)];
const relays: string[] = [];
// Check for relay hints in tags (e.g., ["a", "30041:pubkey:id", "relay-url"])
for (const tag of event.tags) {
if ((tag[0] === "a" || tag[0] === "e" || tag[0] === "p") && tag[2]) {
relays.push(tag[2]);
}
}
// Also include relay from the event if available
if (event.relay?.url) {
relays.push(event.relay.url);
}
// Deduplicate
return [...new Set(relays)];
}
/**
@@ -134,11 +142,11 @@ export function getRelaysFromHighlight(event: NDKEvent): string[] {
* @returns Sorted array
*/
export function sortHighlightsByTime(highlights: NDKEvent[]): NDKEvent[] {
return [...highlights].sort((a, b) => {
const timeA = a.created_at || 0;
const timeB = b.created_at || 0;
return timeB - timeA; // Newest first
});
return [...highlights].sort((a, b) => {
const timeA = a.created_at || 0;
const timeB = b.created_at || 0;
return timeB - timeA; // Newest first
});
}
/**
@@ -146,11 +154,14 @@ export function sortHighlightsByTime(highlights: NDKEvent[]): NDKEvent[] {
* Priority: displayName > name > shortened npub
*/
export function getAuthorDisplayName(
profile: { name?: string; displayName?: string; display_name?: string } | null,
pubkey: string,
profile:
| { name?: string; displayName?: string; display_name?: string }
| null,
pubkey: string,
): string {
if (profile) {
return profile.displayName || profile.display_name || profile.name || shortenNpub(pubkey);
}
return shortenNpub(pubkey);
if (profile) {
return profile.displayName || profile.display_name || profile.name ||
shortenNpub(pubkey);
}
return shortenNpub(pubkey);
}

18  src/lib/utils/mockCommentData.ts

@@ -47,7 +47,7 @@ function createMockComment(
targetAddress: string,
createdAt: number,
replyToId?: string,
replyToAuthor?: string
replyToAuthor?: string,
): any {
const tags: string[][] = [
["A", targetAddress, "wss://relay.damus.io", pubkey],
@@ -85,7 +85,7 @@ function createMockComment(
export function generateMockComments(
sectionAddress: string,
numRootComments: number = 3,
numRepliesPerThread: number = 2
numRepliesPerThread: number = 2,
): any[] {
const comments: any[] = [];
const now = Math.floor(Date.now() / 1000);
@@ -103,7 +103,7 @@ export function generateMockComments(
rootContent,
rootPubkey,
sectionAddress,
rootCreatedAt
rootCreatedAt,
);
comments.push(rootComment);
@@ -112,7 +112,8 @@ export function generateMockComments(
for (let j = 0; j < numRepliesPerThread; j++) {
const replyId = `mock-reply-${i}-${j}-${Date.now()}`;
const replyPubkey = mockPubkeys[(i + j + 1) % mockPubkeys.length];
const replyContent = loremIpsumReplies[commentIndex % loremIpsumReplies.length];
const replyContent =
loremIpsumReplies[commentIndex % loremIpsumReplies.length];
const replyCreatedAt = rootCreatedAt + (j + 1) * 1800; // 30 min after each
const reply = createMockComment(
@@ -122,7 +123,7 @@ export function generateMockComments(
sectionAddress,
replyCreatedAt,
rootId,
rootPubkey
rootPubkey,
);
comments.push(reply);
@@ -131,7 +132,8 @@ export function generateMockComments(
if (j === 0 && i < 2) {
const nestedId = `mock-nested-${i}-${j}-${Date.now()}`;
const nestedPubkey = mockPubkeys[(i + j + 2) % mockPubkeys.length];
const nestedContent = loremIpsumReplies[(commentIndex + 1) % loremIpsumReplies.length];
const nestedContent =
loremIpsumReplies[(commentIndex + 1) % loremIpsumReplies.length];
const nestedCreatedAt = replyCreatedAt + 900; // 15 min after reply
const nested = createMockComment(
@ -141,7 +143,7 @@ export function generateMockComments( @@ -141,7 +143,7 @@ export function generateMockComments(
sectionAddress,
nestedCreatedAt,
replyId,
replyPubkey
replyPubkey,
);
comments.push(nested);
@@ -160,7 +162,7 @@ export function generateMockComments(
* @returns Array of all mock comments across all sections
*/
export function generateMockCommentsForSections(
sectionAddresses: string[]
sectionAddresses: string[],
): any[] {
const allComments: any[] = [];

113  src/lib/utils/mockHighlightData.ts

@@ -5,53 +5,53 @@
// Sample highlighted text snippets (things users might actually highlight)
const highlightedTexts = [
'Knowledge that tries to stay put inevitably becomes ossified',
'The attempt to hold knowledge still is like trying to photograph a river',
'Understanding emerges not from rigid frameworks but from fluid engagement',
'Traditional institutions struggle with the natural promiscuity of ideas',
'Thinking without permission means refusing predetermined categories',
'The most valuable insights often come from unexpected juxtapositions',
'Anarchistic knowledge rejects the notion of authorized interpreters',
'Every act of reading is an act of creative interpretation',
'Hierarchy in knowledge systems serves power, not understanding',
'The boundary between creator and consumer is an artificial construction',
"Knowledge that tries to stay put inevitably becomes ossified",
"The attempt to hold knowledge still is like trying to photograph a river",
"Understanding emerges not from rigid frameworks but from fluid engagement",
"Traditional institutions struggle with the natural promiscuity of ideas",
"Thinking without permission means refusing predetermined categories",
"The most valuable insights often come from unexpected juxtapositions",
"Anarchistic knowledge rejects the notion of authorized interpreters",
"Every act of reading is an act of creative interpretation",
"Hierarchy in knowledge systems serves power, not understanding",
"The boundary between creator and consumer is an artificial construction",
];
// Context strings (surrounding text to help locate the highlight)
const contexts = [
'This is the fundamental paradox of institutionalized knowledge. Knowledge that tries to stay put inevitably becomes ossified, a monument to itself rather than a living practice.',
'The attempt to hold knowledge still is like trying to photograph a river—you capture an image, but you lose the flow. What remains is a static representation, not the dynamic reality.',
'Understanding emerges not from rigid frameworks but from fluid engagement with ideas, people, and contexts. This fluidity is precisely what traditional systems attempt to eliminate.',
'Traditional institutions struggle with the natural promiscuity of ideas—the way concepts naturally migrate, mutate, and merge across boundaries that were meant to contain them.',
'Thinking without permission means refusing predetermined categories and challenging the gatekeepers who claim authority over legitimate thought.',
'The most valuable insights often come from unexpected juxtapositions, from bringing together ideas that were never meant to meet.',
'Anarchistic knowledge rejects the notion of authorized interpreters, asserting instead that meaning-making is a fundamentally distributed and democratic process.',
'Every act of reading is an act of creative interpretation, a collaboration between text and reader that produces something new each time.',
'Hierarchy in knowledge systems serves power, not understanding. It determines who gets to speak, who must listen, and what counts as legitimate knowledge.',
'The boundary between creator and consumer is an artificial construction, one that digital networks make increasingly untenable and obsolete.',
"This is the fundamental paradox of institutionalized knowledge. Knowledge that tries to stay put inevitably becomes ossified, a monument to itself rather than a living practice.",
"The attempt to hold knowledge still is like trying to photograph a river—you capture an image, but you lose the flow. What remains is a static representation, not the dynamic reality.",
"Understanding emerges not from rigid frameworks but from fluid engagement with ideas, people, and contexts. This fluidity is precisely what traditional systems attempt to eliminate.",
"Traditional institutions struggle with the natural promiscuity of ideas—the way concepts naturally migrate, mutate, and merge across boundaries that were meant to contain them.",
"Thinking without permission means refusing predetermined categories and challenging the gatekeepers who claim authority over legitimate thought.",
"The most valuable insights often come from unexpected juxtapositions, from bringing together ideas that were never meant to meet.",
"Anarchistic knowledge rejects the notion of authorized interpreters, asserting instead that meaning-making is a fundamentally distributed and democratic process.",
"Every act of reading is an act of creative interpretation, a collaboration between text and reader that produces something new each time.",
"Hierarchy in knowledge systems serves power, not understanding. It determines who gets to speak, who must listen, and what counts as legitimate knowledge.",
"The boundary between creator and consumer is an artificial construction, one that digital networks make increasingly untenable and obsolete.",
];
// Optional annotations (user comments on their highlights)
const annotations = [
'This perfectly captures the institutional problem',
'Key insight - worth revisiting',
'Reminds me of Deleuze on rhizomatic structures',
'Fundamental critique of academic gatekeeping',
'The core argument in one sentence',
"This perfectly captures the institutional problem",
"Key insight - worth revisiting",
"Reminds me of Deleuze on rhizomatic structures",
"Fundamental critique of academic gatekeeping",
"The core argument in one sentence",
null, // Some highlights have no annotation
'Important for understanding the broader thesis',
"Important for understanding the broader thesis",
null,
'Connects to earlier discussion on page 12',
"Connects to earlier discussion on page 12",
null,
];
// Mock pubkeys - MUST be exactly 64 hex characters
const mockPubkeys = [
'a1b2c3d4e5f67890123456789012345678901234567890123456789012345678',
'b2c3d4e5f67890123456789012345678901234567890123456789012345678ab',
'c3d4e5f67890123456789012345678901234567890123456789012345678abcd',
'd4e5f67890123456789012345678901234567890123456789012345678abcdef',
'e5f6789012345678901234567890123456789012345678901234567890abcdef',
"a1b2c3d4e5f67890123456789012345678901234567890123456789012345678",
"b2c3d4e5f67890123456789012345678901234567890123456789012345678ab",
"c3d4e5f67890123456789012345678901234567890123456789012345678abcd",
"d4e5f67890123456789012345678901234567890123456789012345678abcdef",
"e5f6789012345678901234567890123456789012345678901234567890abcdef",
];
/**
@@ -74,22 +74,22 @@ function createMockHighlight(
authorPubkey: string,
annotation?: string | null,
offsetStart?: number,
offsetEnd?: number
offsetEnd?: number,
): any {
const tags: string[][] = [
['a', targetAddress, 'wss://relay.damus.io'],
['context', context],
['p', authorPubkey, 'wss://relay.damus.io', 'author'],
["a", targetAddress, "wss://relay.damus.io"],
["context", context],
["p", authorPubkey, "wss://relay.damus.io", "author"],
];
// Add optional annotation
if (annotation) {
tags.push(['comment', annotation]);
tags.push(["comment", annotation]);
}
// Add optional offset for position-based highlighting
if (offsetStart !== undefined && offsetEnd !== undefined) {
tags.push(['offset', offsetStart.toString(), offsetEnd.toString()]);
tags.push(["offset", offsetStart.toString(), offsetEnd.toString()]);
}
return {
@@ -99,7 +99,7 @@ function createMockHighlight(
created_at: createdAt,
content: highlightedText, // The highlighted text itself
tags,
sig: 'mock-signature-' + id,
sig: "mock-signature-" + id,
};
}
@@ -113,7 +113,7 @@ function createMockHighlight(
export function generateMockHighlights(
sectionAddress: string,
authorPubkey: string,
numHighlights: number = Math.floor(Math.random() * 2) + 2 // 2-3 highlights
numHighlights: number = Math.floor(Math.random() * 2) + 2, // 2-3 highlights
): any[] {
const highlights: any[] = [];
const now = Math.floor(Date.now() / 1000);
@@ -123,7 +123,9 @@ export function generateMockHighlights(
// The offset tags will highlight the ACTUAL text at those positions in the section
for (let i = 0; i < numHighlights; i++) {
const id = `mock-highlight-${i}-${Date.now()}-${Math.random().toString(36).substring(7)}`;
const id = `mock-highlight-${i}-${Date.now()}-${
Math.random().toString(36).substring(7)
}`;
const highlighterPubkey = mockPubkeys[i % mockPubkeys.length];
const annotation = annotations[i % annotations.length];
const createdAt = now - (numHighlights - i) * 7200; // Stagger by 2 hours
@@ -136,7 +138,9 @@ export function generateMockHighlights(
// Use placeholder text - the actual highlighted text will be determined by the offsets
const placeholderText = `Test highlight ${i + 1}`;
const placeholderContext = `This is test highlight ${i + 1} at position ${offsetStart}-${offsetEnd}`;
const placeholderContext = `This is test highlight ${
i + 1
} at position ${offsetStart}-${offsetEnd}`;
const highlight = createMockHighlight(
id,
@@ -148,7 +152,7 @@ export function generateMockHighlights(
authorPubkey,
annotation,
offsetStart,
offsetEnd
offsetEnd,
);
highlights.push(highlight);
@@ -165,19 +169,32 @@ export function generateMockHighlights(
*/
export function generateMockHighlightsForSections(
sectionAddresses: string[],
authorPubkey: string = 'dc4cd086cd7ce5b1832adf4fdd1211289880d2c7e295bcb0e684c01acee77c06'
authorPubkey: string =
"dc4cd086cd7ce5b1832adf4fdd1211289880d2c7e295bcb0e684c01acee77c06",
): any[] {
const allHighlights: any[] = [];
sectionAddresses.forEach((address, index) => {
// Each section gets 2 highlights at the very beginning (positions 0-100 and 120-220)
const numHighlights = 2;
const sectionHighlights = generateMockHighlights(address, authorPubkey, numHighlights);
console.log(`[MockHighlightData] Generated ${numHighlights} highlights for section ${address.split(':')[2]?.substring(0, 20)}... at positions 0-100, 120-220`);
const sectionHighlights = generateMockHighlights(
address,
authorPubkey,
numHighlights,
);
console.log(
`[MockHighlightData] Generated ${numHighlights} highlights for section ${
address.split(":")[2]?.substring(0, 20)
}... at positions 0-100, 120-220`,
);
allHighlights.push(...sectionHighlights);
});
console.log(`[MockHighlightData] Total: ${allHighlights.length} highlights across ${sectionAddresses.length} sections`);
console.log(`[MockHighlightData] Each highlight is anchored to its section via "a" tag and uses offset tags for position`);
console.log(
`[MockHighlightData] Total: ${allHighlights.length} highlights across ${sectionAddresses.length} sections`,
);
console.log(
`[MockHighlightData] Each highlight is anchored to its section via "a" tag and uses offset tags for position`,
);
return allHighlights;
}

15  src/lib/utils/publication_tree_factory.ts

@@ -189,8 +189,8 @@ function detectContentType(
// Check if the "title" is actually just the first section title
// This happens when AsciiDoc starts with == instead of =
const titleMatchesFirstSection =
parsed.sections.length > 0 && parsed.title === parsed.sections[0].title;
const titleMatchesFirstSection = parsed.sections.length > 0 &&
parsed.title === parsed.sections[0].title;
if (hasDocTitle && hasSections && !titleMatchesFirstSection) {
return "article";
@@ -286,8 +286,9 @@ function inheritDocumentAttributes(
documentAttributes: Record<string, string>,
) {
// Inherit selected document attributes
if (documentAttributes.language)
if (documentAttributes.language) {
tags.push(["language", documentAttributes.language]);
}
if (documentAttributes.type) tags.push(["type", documentAttributes.type]);
}
@@ -368,9 +369,11 @@ function generateIndexContent(parsed: any): string {
${parsed.sections.length} sections available:
${parsed.sections
.map((section: any, i: number) => `${i + 1}. ${section.title}`)
.join("\n")}`;
${
parsed.sections
.map((section: any, i: number) => `${i + 1}. ${section.title}`)
.join("\n")
}`;
}
/**

74  src/lib/utils/publication_tree_processor.ts

@@ -127,7 +127,9 @@ export function registerPublicationTreeProcessor(
};
console.log(
`[TreeProcessor] Built tree with ${contentEvents.length} content events and ${indexEvent ? "1" : "0"} index events`,
`[TreeProcessor] Built tree with ${contentEvents.length} content events and ${
indexEvent ? "1" : "0"
} index events`,
);
} catch (error) {
console.error("[TreeProcessor] Error processing document:", error);
@@ -333,11 +335,11 @@ function parseSegmentContent(
// Extract content (everything after attributes, but stop at child sections)
const contentLines = sectionLines.slice(contentStartIdx);
// Find where to stop content extraction based on parse level
let contentEndIdx = contentLines.length;
const currentSectionLevel = sectionLines[0].match(/^(=+)/)?.[1].length || 2;
for (let i = 0; i < contentLines.length; i++) {
const line = contentLines[i];
const headerMatch = line.match(/^(=+)\s+/);
@@ -350,7 +352,7 @@ function parseSegmentContent(
}
}
}
const content = contentLines.slice(0, contentEndIdx).join("\n").trim();
// Debug logging for Level 3+ content extraction
@@ -363,7 +365,6 @@ function parseSegmentContent(
console.log(` extracted content:`, JSON.stringify(content));
}
return { attributes, content };
}
@@ -378,8 +379,8 @@ function detectContentType(
const hasSections = segments.length > 0;
// Check if the title matches the first section title
const titleMatchesFirstSection =
segments.length > 0 && title === segments[0].title;
const titleMatchesFirstSection = segments.length > 0 &&
title === segments[0].title;
if (hasDocTitle && hasSections && !titleMatchesFirstSection) {
return "article";
@@ -530,7 +531,11 @@ function buildLevel2Structure(
// Group segments by level 2 sections
const level2Groups = groupSegmentsByLevel2(segments);
console.log(`[TreeProcessor] Level 2 groups:`, level2Groups.length, level2Groups.map(g => g.title));
console.log(
`[TreeProcessor] Level 2 groups:`,
level2Groups.length,
level2Groups.map((g) => g.title),
);
// Generate publication abbreviation for namespacing
const pubAbbrev = generateTitleAbbreviation(title);
@@ -550,7 +555,7 @@ function buildLevel2Structure(
dTag: namespacedDTag,
children: [],
};
console.log(`[TreeProcessor] Adding child node:`, childNode.title);
eventStructure[0].children.push(childNode);
}
@@ -599,7 +604,7 @@ function buildHierarchicalStructure(
contentEvents,
ndk,
parseLevel,
title
title,
);
return { tree, indexEvent, contentEvents, eventStructure };
@@ -680,7 +685,10 @@ function createContentEvent(
if (wikiLinks.length > 0) {
const wikiTags = wikiLinksToTags(wikiLinks);
tags.push(...wikiTags);
console.log(`[TreeProcessor] Added ${wikiTags.length} wiki link tags:`, wikiTags);
console.log(
`[TreeProcessor] Added ${wikiTags.length} wiki link tags:`,
wikiTags,
);
}
event.tags = tags;
@@ -879,17 +887,18 @@ function groupSegmentsByLevel2(segments: ContentSegment[]): ContentSegment[] {
s.level > 2 &&
s.startLine > segment.startLine &&
(segments.find(
(next) => next.level <= 2 && next.startLine > segment.startLine,
)?.startLine || Infinity) > s.startLine,
(next) => next.level <= 2 && next.startLine > segment.startLine,
)?.startLine || Infinity) > s.startLine,
);
// Combine the level 2 content with all nested content
let combinedContent = segment.content;
for (const nested of nestedSegments) {
combinedContent += `\n\n${"=".repeat(nested.level)} ${nested.title}\n${nested.content}`;
combinedContent += `\n\n${
"=".repeat(nested.level)
} ${nested.title}\n${nested.content}`;
}
level2Groups.push({
...segment,
content: combinedContent,
@@ -906,22 +915,22 @@ function groupSegmentsByLevel2(segments: ContentSegment[]): ContentSegment[] {
*/
function buildHierarchicalGroups(
segments: ContentSegment[],
parseLevel: number
parseLevel: number,
): HierarchicalNode[] {
const groups: HierarchicalNode[] = [];
// Group segments by their parent-child relationships
const segmentsByLevel: Map<number, ContentSegment[]> = new Map();
for (let level = 2; level <= parseLevel; level++) {
segmentsByLevel.set(level, segments.filter(s => s.level === level));
segmentsByLevel.set(level, segments.filter((s) => s.level === level));
}
// Build the hierarchy from level 2 down to parseLevel
for (const segment of segmentsByLevel.get(2) || []) {
const node = buildNodeHierarchy(segment, segments, parseLevel);
groups.push(node);
}
return groups;
}
@@ -931,22 +940,23 @@ function buildHierarchicalGroups(
function buildNodeHierarchy(
segment: ContentSegment,
allSegments: ContentSegment[],
parseLevel: number
parseLevel: number,
): HierarchicalNode {
// Find direct children (one level deeper)
const directChildren = allSegments.filter(s => {
const directChildren = allSegments.filter((s) => {
if (s.level !== segment.level + 1) return false;
if (s.startLine <= segment.startLine) return false;
// Check if this segment is within our section's bounds
const nextSibling = allSegments.find(
next => next.level <= segment.level && next.startLine > segment.startLine
(next) =>
next.level <= segment.level && next.startLine > segment.startLine,
);
const endLine = nextSibling?.startLine || Infinity;
return s.startLine < endLine;
});
// Recursively build child nodes
const childNodes: HierarchicalNode[] = [];
for (const child of directChildren) {
@@ -958,15 +968,15 @@ function buildNodeHierarchy(
childNodes.push({
segment: child,
children: [],
hasChildren: false
hasChildren: false,
});
}
}
return {
segment,
children: childNodes,
hasChildren: childNodes.length > 0
hasChildren: childNodes.length > 0,
};
}
@@ -1119,7 +1129,6 @@ function createIndexEventForHierarchicalNode(
return event;
}
/**
* Build hierarchical segment structure for Level 3+ parsing
*/
@@ -1135,8 +1144,9 @@ function buildSegmentHierarchy(
s.level > 2 &&
s.startLine > level2Segment.startLine &&
(segments.find(
(next) => next.level <= 2 && next.startLine > level2Segment.startLine,
)?.startLine || Infinity) > s.startLine,
(next) =>
next.level <= 2 && next.startLine > level2Segment.startLine,
)?.startLine || Infinity) > s.startLine,
);
hierarchy.push({

41  src/lib/utils/wiki_links.ts

@@ -5,7 +5,7 @@
export interface WikiLink {
fullMatch: string;
type: 'w' | 'd' | 'auto'; // auto means [[term]] without explicit prefix
type: "w" | "d" | "auto"; // auto means [[term]] without explicit prefix
term: string;
displayText: string;
startIndex: number;
@@ -34,7 +34,7 @@ export function extractWikiLinks(content: string): WikiLink[] {
wikiLinks.push({
fullMatch: match[0],
type: prefix ? (prefix as 'w' | 'd') : 'auto',
type: prefix ? (prefix as "w" | "d") : "auto",
term,
displayText: customDisplay || term,
startIndex: match.index,
@@ -53,8 +53,8 @@ export function termToTag(term: string): string {
return term
.toLowerCase()
.trim()
.replace(/\s+/g, '-')
.replace(/[^a-z0-9-]/g, '');
.replace(/\s+/g, "-")
.replace(/[^a-z0-9-]/g, "");
}
/**
@@ -67,14 +67,14 @@ export function wikiLinksToTags(wikiLinks: WikiLink[]): string[][] {
for (const link of wikiLinks) {
const tagSlug = termToTag(link.term);
if (link.type === 'w' || link.type === 'auto') {
if (link.type === "w" || link.type === "auto") {
// Reference tag includes display text
tags.push(['w', tagSlug, link.displayText]);
tags.push(["w", tagSlug, link.displayText]);
}
if (link.type === 'd') {
if (link.type === "d") {
// Definition tag (no display text, it IS the thing)
tags.push(['d', tagSlug]);
tags.push(["d", tagSlug]);
}
}
@@ -91,13 +91,13 @@ export function renderWikiLinksToHtml(
linkClass?: string;
wLinkClass?: string;
dLinkClass?: string;
onClickHandler?: (type: 'w' | 'd' | 'auto', term: string) => string;
onClickHandler?: (type: "w" | "d" | "auto", term: string) => string;
} = {},
): string {
const {
linkClass = 'wiki-link',
wLinkClass = 'wiki-link-reference',
dLinkClass = 'wiki-link-definition',
linkClass = "wiki-link",
wLinkClass = "wiki-link-reference",
dLinkClass = "wiki-link-definition",
onClickHandler,
} = options;
@@ -105,13 +105,13 @@ export function renderWikiLinksToHtml(
/\[\[(?:(w|d):)?([^\]|]+)(?:\|([^\]]+))?\]\]/g,
(match, prefix, term, customDisplay) => {
const displayText = customDisplay?.trim() || term.trim();
const type = prefix ? prefix : 'auto';
const type = prefix ? prefix : "auto";
const tagSlug = termToTag(term);
// Determine CSS classes
let classes = linkClass;
if (type === 'w') classes += ` ${wLinkClass}`;
else if (type === 'd') classes += ` ${dLinkClass}`;
if (type === "w") classes += ` ${wLinkClass}`;
else if (type === "d") classes += ` ${dLinkClass}`;
// Generate href or onclick
const action = onClickHandler
@@ -119,12 +119,11 @@ export function renderWikiLinksToHtml(
: `href="#wiki/${type}/${encodeURIComponent(tagSlug)}"`;
// Add title attribute showing the type
const title =
type === 'w'
? 'Wiki reference (mentions this concept)'
: type === 'd'
? 'Wiki definition (defines this concept)'
: 'Wiki link (searches both references and definitions)';
const title = type === "w"
? "Wiki reference (mentions this concept)"
: type === "d"
? "Wiki definition (defines this concept)"
: "Wiki link (searches both references and definitions)";
return `<a class="${classes}" ${action} title="${title}" data-wiki-type="${type}" data-wiki-term="${tagSlug}">${displayText}</a>`;
},

445  tests/unit/highlightLayer.test.ts

@@ -1,62 +1,62 @@
import { describe, it, expect, vi, beforeEach, afterEach } from 'vitest';
import { pubkeyToHue } from '../../src/lib/utils/nostrUtils';
import { nip19 } from 'nostr-tools';
describe('pubkeyToHue', () => {
describe('Consistency', () => {
it('returns consistent hue for same pubkey', () => {
const pubkey = 'a'.repeat(64);
import { afterEach, beforeEach, describe, expect, it, vi } from "vitest";
import { pubkeyToHue } from "../../src/lib/utils/nostrUtils";
import { nip19 } from "nostr-tools";
describe("pubkeyToHue", () => {
describe("Consistency", () => {
it("returns consistent hue for same pubkey", () => {
const pubkey = "a".repeat(64);
const hue1 = pubkeyToHue(pubkey);
const hue2 = pubkeyToHue(pubkey);
expect(hue1).toBe(hue2);
});
it('returns same hue for same pubkey called multiple times', () => {
const pubkey = 'abc123def456'.repeat(5) + 'abcd';
it("returns same hue for same pubkey called multiple times", () => {
const pubkey = "abc123def456".repeat(5) + "abcd";
const hues = Array.from({ length: 10 }, () => pubkeyToHue(pubkey));
expect(new Set(hues).size).toBe(1); // All hues should be the same
});
});
describe('Range Validation', () => {
it('returns hue in valid range (0-360)', () => {
describe("Range Validation", () => {
it("returns hue in valid range (0-360)", () => {
const pubkeys = [
'a'.repeat(64),
'f'.repeat(64),
'0'.repeat(64),
'9'.repeat(64),
'abc123def456'.repeat(5) + 'abcd',
'123456789abc'.repeat(5) + 'def0',
"a".repeat(64),
"f".repeat(64),
"0".repeat(64),
"9".repeat(64),
"abc123def456".repeat(5) + "abcd",
"123456789abc".repeat(5) + "def0",
];
pubkeys.forEach(pubkey => {
pubkeys.forEach((pubkey) => {
const hue = pubkeyToHue(pubkey);
expect(hue).toBeGreaterThanOrEqual(0);
expect(hue).toBeLessThan(360);
});
});
it('returns integer hue value', () => {
const pubkey = 'a'.repeat(64);
it("returns integer hue value", () => {
const pubkey = "a".repeat(64);
const hue = pubkeyToHue(pubkey);
expect(Number.isInteger(hue)).toBe(true);
});
});
describe('Format Handling', () => {
it('handles hex format pubkeys', () => {
const hexPubkey = 'abcdef123456789'.repeat(4) + '0123';
describe("Format Handling", () => {
it("handles hex format pubkeys", () => {
const hexPubkey = "abcdef123456789".repeat(4) + "0123";
const hue = pubkeyToHue(hexPubkey);
expect(hue).toBeGreaterThanOrEqual(0);
expect(hue).toBeLessThan(360);
});
it('handles npub format pubkeys', () => {
const hexPubkey = 'a'.repeat(64);
it("handles npub format pubkeys", () => {
const hexPubkey = "a".repeat(64);
const npub = nip19.npubEncode(hexPubkey);
const hue = pubkeyToHue(npub);
@@ -64,8 +64,8 @@ describe('pubkeyToHue', () => {
expect(hue).toBeLessThan(360);
});
it('returns same hue for hex and npub format of same pubkey', () => {
const hexPubkey = 'abc123def456'.repeat(5) + 'abcd';
it("returns same hue for hex and npub format of same pubkey", () => {
const hexPubkey = "abc123def456".repeat(5) + "abcd";
const npub = nip19.npubEncode(hexPubkey);
const hueFromHex = pubkeyToHue(hexPubkey);
@@ -75,11 +75,11 @@ describe('pubkeyToHue', () => {
});
});
describe('Uniqueness', () => {
it('different pubkeys generate different hues', () => {
const pubkey1 = 'a'.repeat(64);
const pubkey2 = 'b'.repeat(64);
const pubkey3 = 'c'.repeat(64);
describe("Uniqueness", () => {
it("different pubkeys generate different hues", () => {
const pubkey1 = "a".repeat(64);
const pubkey2 = "b".repeat(64);
const pubkey3 = "c".repeat(64);
const hue1 = pubkeyToHue(pubkey1);
const hue2 = pubkeyToHue(pubkey2);
@@ -90,12 +90,13 @@ describe('pubkeyToHue', () => {
expect(hue1).not.toBe(hue3);
});
it('generates diverse hues for multiple pubkeys', () => {
const pubkeys = Array.from({ length: 10 }, (_, i) =>
String.fromCharCode(97 + i).repeat(64)
it("generates diverse hues for multiple pubkeys", () => {
const pubkeys = Array.from(
{ length: 10 },
(_, i) => String.fromCharCode(97 + i).repeat(64),
);
const hues = pubkeys.map(pk => pubkeyToHue(pk));
const hues = pubkeys.map((pk) => pubkeyToHue(pk));
const uniqueHues = new Set(hues);
// Most pubkeys should generate unique hues (allowing for some collisions)
@@ -103,16 +104,16 @@ describe('pubkeyToHue', () => {
});
});
describe('Edge Cases', () => {
it('handles empty string input', () => {
const hue = pubkeyToHue('');
describe("Edge Cases", () => {
it("handles empty string input", () => {
const hue = pubkeyToHue("");
expect(hue).toBeGreaterThanOrEqual(0);
expect(hue).toBeLessThan(360);
});
it('handles invalid npub format gracefully', () => {
const invalidNpub = 'npub1invalid';
it("handles invalid npub format gracefully", () => {
const invalidNpub = "npub1invalid";
const hue = pubkeyToHue(invalidNpub);
// Should still return a valid hue even if decode fails
@@ -120,16 +121,16 @@ describe('pubkeyToHue', () => {
expect(hue).toBeLessThan(360);
});
it('handles short input strings', () => {
const shortInput = 'abc';
it("handles short input strings", () => {
const shortInput = "abc";
const hue = pubkeyToHue(shortInput);
expect(hue).toBeGreaterThanOrEqual(0);
expect(hue).toBeLessThan(360);
});
it('handles special characters', () => {
const specialInput = '!@#$%^&*()';
it("handles special characters", () => {
const specialInput = "!@#$%^&*()";
const hue = pubkeyToHue(specialInput);
expect(hue).toBeGreaterThanOrEqual(0);
@@ -137,19 +138,20 @@ describe('pubkeyToHue', () => {
});
});
describe('Color Distribution', () => {
it('distributes colors across the spectrum', () => {
describe("Color Distribution", () => {
it("distributes colors across the spectrum", () => {
// Generate hues for many different pubkeys
const pubkeys = Array.from({ length: 50 }, (_, i) =>
i.toString().repeat(16)
const pubkeys = Array.from(
{ length: 50 },
(_, i) => i.toString().repeat(16),
);
const hues = pubkeys.map(pk => pubkeyToHue(pk));
const hues = pubkeys.map((pk) => pubkeyToHue(pk));
// Check that we have hues in different ranges of the spectrum
const hasLowHues = hues.some(h => h < 120);
const hasMidHues = hues.some(h => h >= 120 && h < 240);
const hasHighHues = hues.some(h => h >= 240);
const hasLowHues = hues.some((h) => h < 120);
const hasMidHues = hues.some((h) => h >= 120 && h < 240);
const hasHighHues = hues.some((h) => h >= 240);
expect(hasLowHues).toBe(true);
expect(hasMidHues).toBe(true);
@@ -158,7 +160,7 @@ describe('pubkeyToHue', () => {
});
});
describe('HighlightLayer Component', () => {
describe("HighlightLayer Component", () => {
let mockNdk: any;
let mockSubscription: any;
let eventHandlers: Map<string, Function>;
@@ -190,9 +192,9 @@ describe('HighlightLayer Component', () => {
textContent: text,
})),
createElement: vi.fn((tag: string) => ({
className: '',
className: "",
style: {},
textContent: '',
textContent: "",
})),
} as any;
});
@@ -201,60 +203,60 @@ describe('HighlightLayer Component', () => {
vi.clearAllMocks();
});
describe('NDK Subscription', () => {
it('fetches kind 9802 events with correct filter when eventId provided', () => {
const eventId = 'a'.repeat(64);
describe("NDK Subscription", () => {
it("fetches kind 9802 events with correct filter when eventId provided", () => {
const eventId = "a".repeat(64);
// Simulate calling fetchHighlights
mockNdk.subscribe({ kinds: [9802], '#e': [eventId], limit: 100 });
mockNdk.subscribe({ kinds: [9802], "#e": [eventId], limit: 100 });
expect(mockNdk.subscribe).toHaveBeenCalledWith(
expect.objectContaining({
kinds: [9802],
'#e': [eventId],
"#e": [eventId],
limit: 100,
})
}),
);
});
it('fetches kind 9802 events with correct filter when eventAddress provided', () => {
const eventAddress = '30040:' + 'a'.repeat(64) + ':chapter-1';
it("fetches kind 9802 events with correct filter when eventAddress provided", () => {
const eventAddress = "30040:" + "a".repeat(64) + ":chapter-1";
// Simulate calling fetchHighlights
mockNdk.subscribe({ kinds: [9802], '#a': [eventAddress], limit: 100 });
mockNdk.subscribe({ kinds: [9802], "#a": [eventAddress], limit: 100 });
expect(mockNdk.subscribe).toHaveBeenCalledWith(
expect.objectContaining({
kinds: [9802],
'#a': [eventAddress],
"#a": [eventAddress],
limit: 100,
})
}),
);
});
it('fetches with both eventId and eventAddress filters when both provided', () => {
const eventId = 'a'.repeat(64);
const eventAddress = '30040:' + 'b'.repeat(64) + ':chapter-1';
it("fetches with both eventId and eventAddress filters when both provided", () => {
const eventId = "a".repeat(64);
const eventAddress = "30040:" + "b".repeat(64) + ":chapter-1";
// Simulate calling fetchHighlights
mockNdk.subscribe({
kinds: [9802],
'#e': [eventId],
'#a': [eventAddress],
"#e": [eventId],
"#a": [eventAddress],
limit: 100,
});
expect(mockNdk.subscribe).toHaveBeenCalledWith(
expect.objectContaining({
kinds: [9802],
'#e': [eventId],
'#a': [eventAddress],
"#e": [eventId],
"#a": [eventAddress],
limit: 100,
})
}),
);
});
it('cleans up subscription on unmount', () => {
it("cleans up subscription on unmount", () => {
mockNdk.subscribe({ kinds: [9802], limit: 100 });
// Simulate unmount by calling stop
@@ -264,10 +266,10 @@ describe('HighlightLayer Component', () => {
});
});
describe('Color Mapping', () => {
it('maps highlights to colors correctly', () => {
const pubkey1 = 'a'.repeat(64);
const pubkey2 = 'b'.repeat(64);
describe("Color Mapping", () => {
it("maps highlights to colors correctly", () => {
const pubkey1 = "a".repeat(64);
const pubkey2 = "b".repeat(64);
const hue1 = pubkeyToHue(pubkey1);
const hue2 = pubkeyToHue(pubkey2);
@@ -280,8 +282,8 @@ describe('HighlightLayer Component', () => {
expect(expectedColor1).not.toBe(expectedColor2);
});
it('uses consistent color for same pubkey', () => {
const pubkey = 'abc123def456'.repeat(5) + 'abcd';
it("uses consistent color for same pubkey", () => {
const pubkey = "abc123def456".repeat(5) + "abcd";
const hue = pubkeyToHue(pubkey);
const color1 = `hsla(${hue}, 70%, 60%, 0.3)`;
@@ -290,16 +292,16 @@ describe('HighlightLayer Component', () => {
expect(color1).toBe(color2);
});
it('generates semi-transparent colors with 0.3 opacity', () => {
const pubkey = 'a'.repeat(64);
it("generates semi-transparent colors with 0.3 opacity", () => {
const pubkey = "a".repeat(64);
const hue = pubkeyToHue(pubkey);
const color = `hsla(${hue}, 70%, 60%, 0.3)`;
expect(color).toContain('0.3');
expect(color).toContain("0.3");
});
it('uses HSL color format with correct values', () => {
const pubkey = 'a'.repeat(64);
it("uses HSL color format with correct values", () => {
const pubkey = "a".repeat(64);
const hue = pubkeyToHue(pubkey);
const color = `hsla(${hue}, 70%, 60%, 0.3)`;
@@ -308,20 +310,20 @@ describe('HighlightLayer Component', () => {
});
});
describe('Highlight Events', () => {
it('handles no highlights gracefully', () => {
describe("Highlight Events", () => {
it("handles no highlights gracefully", () => {
const highlights: any[] = [];
expect(highlights.length).toBe(0);
// Component should render without errors
});
it('handles single highlight from one user', () => {
it("handles single highlight from one user", () => {
const mockHighlight = {
id: 'highlight1',
id: "highlight1",
kind: 9802,
pubkey: 'a'.repeat(64),
content: 'highlighted text',
pubkey: "a".repeat(64),
content: "highlighted text",
created_at: Date.now(),
tags: [],
};
@@ -329,25 +331,25 @@ describe('HighlightLayer Component', () => {
const highlights = [mockHighlight];
expect(highlights.length).toBe(1);
expect(highlights[0].pubkey).toBe('a'.repeat(64));
expect(highlights[0].pubkey).toBe("a".repeat(64));
});
it('handles multiple highlights from same user', () => {
const pubkey = 'a'.repeat(64);
it("handles multiple highlights from same user", () => {
const pubkey = "a".repeat(64);
const mockHighlights = [
{
id: 'highlight1',
id: "highlight1",
kind: 9802,
pubkey: pubkey,
content: 'first highlight',
content: "first highlight",
created_at: Date.now(),
tags: [],
},
{
id: 'highlight2',
id: "highlight2",
kind: 9802,
pubkey: pubkey,
content: 'second highlight',
content: "second highlight",
created_at: Date.now(),
tags: [],
},
@@ -363,33 +365,33 @@ describe('HighlightLayer Component', () => {
expect(color).toMatch(/^hsla\(\d+, 70%, 60%, 0\.3\)$/);
});
it('handles multiple highlights from different users', () => {
const pubkey1 = 'a'.repeat(64);
const pubkey2 = 'b'.repeat(64);
const pubkey3 = 'c'.repeat(64);
it("handles multiple highlights from different users", () => {
const pubkey1 = "a".repeat(64);
const pubkey2 = "b".repeat(64);
const pubkey3 = "c".repeat(64);
const mockHighlights = [
{
id: 'highlight1',
id: "highlight1",
kind: 9802,
pubkey: pubkey1,
content: 'highlight from user 1',
content: "highlight from user 1",
created_at: Date.now(),
tags: [],
},
{
id: 'highlight2',
id: "highlight2",
kind: 9802,
pubkey: pubkey2,
content: 'highlight from user 2',
content: "highlight from user 2",
created_at: Date.now(),
tags: [],
},
{
id: 'highlight3',
id: "highlight3",
kind: 9802,
pubkey: pubkey3,
content: 'highlight from user 3',
content: "highlight from user 3",
created_at: Date.now(),
tags: [],
},
@@ -407,12 +409,12 @@ describe('HighlightLayer Component', () => {
expect(hue1).not.toBe(hue3);
});
it('prevents duplicate highlights', () => {
it("prevents duplicate highlights", () => {
const mockHighlight = {
id: 'highlight1',
id: "highlight1",
kind: 9802,
pubkey: 'a'.repeat(64),
content: 'highlighted text',
pubkey: "a".repeat(64),
content: "highlighted text",
created_at: Date.now(),
tags: [],
};
@@ -420,32 +422,32 @@ describe('HighlightLayer Component', () => {
const highlights = [mockHighlight];
// Try to add duplicate
const isDuplicate = highlights.some(h => h.id === mockHighlight.id);
const isDuplicate = highlights.some((h) => h.id === mockHighlight.id);
expect(isDuplicate).toBe(true);
// Should not add duplicate
});
it('handles empty content gracefully', () => {
it("handles empty content gracefully", () => {
const mockHighlight = {
id: 'highlight1',
id: "highlight1",
kind: 9802,
pubkey: 'a'.repeat(64),
content: '',
pubkey: "a".repeat(64),
content: "",
created_at: Date.now(),
tags: [],
};
// Should not crash
expect(mockHighlight.content).toBe('');
expect(mockHighlight.content).toBe("");
});
it('handles whitespace-only content', () => {
it("handles whitespace-only content", () => {
const mockHighlight = {
id: 'highlight1',
id: "highlight1",
kind: 9802,
pubkey: 'a'.repeat(64),
content: ' \n\t ',
pubkey: "a".repeat(64),
content: " \n\t ",
created_at: Date.now(),
tags: [],
};
@@ -455,9 +457,9 @@ describe('HighlightLayer Component', () => {
});
});
describe('Highlighter Legend', () => {
it('displays legend with correct color for single highlighter', () => {
const pubkey = 'abc123def456'.repeat(5) + 'abcd';
describe("Highlighter Legend", () => {
it("displays legend with correct color for single highlighter", () => {
const pubkey = "abc123def456".repeat(5) + "abcd";
const hue = pubkeyToHue(pubkey);
const color = `hsla(${hue}, 70%, 60%, 0.3)`;
@@ -471,14 +473,14 @@ describe('HighlightLayer Component', () => {
expect(legend.shortPubkey).toBe(`${pubkey.slice(0, 8)}...`);
});
it('displays legend with colors for multiple highlighters', () => {
it("displays legend with colors for multiple highlighters", () => {
const pubkeys = [
'a'.repeat(64),
'b'.repeat(64),
'c'.repeat(64),
"a".repeat(64),
"b".repeat(64),
"c".repeat(64),
];
const legendEntries = pubkeys.map(pubkey => ({
const legendEntries = pubkeys.map((pubkey) => ({
pubkey,
color: `hsla(${pubkeyToHue(pubkey)}, 70%, 60%, 0.3)`,
shortPubkey: `${pubkey.slice(0, 8)}...`,
@@ -487,45 +489,45 @@ describe('HighlightLayer Component', () => {
expect(legendEntries.length).toBe(3);
// Each should have unique color
const colors = legendEntries.map(e => e.color);
const colors = legendEntries.map((e) => e.color);
const uniqueColors = new Set(colors);
expect(uniqueColors.size).toBe(3);
});
it('shows truncated pubkey in legend', () => {
const pubkey = 'abcdefghijklmnop'.repeat(4);
it("shows truncated pubkey in legend", () => {
const pubkey = "abcdefghijklmnop".repeat(4);
const shortPubkey = `${pubkey.slice(0, 8)}...`;
expect(shortPubkey).toBe('abcdefgh...');
expect(shortPubkey).toBe("abcdefgh...");
expect(shortPubkey.length).toBeLessThan(pubkey.length);
});
it('displays highlight count', () => {
it("displays highlight count", () => {
const highlights = [
{ id: '1', pubkey: 'a'.repeat(64), content: 'text1' },
{ id: '2', pubkey: 'b'.repeat(64), content: 'text2' },
{ id: '3', pubkey: 'a'.repeat(64), content: 'text3' },
{ id: "1", pubkey: "a".repeat(64), content: "text1" },
{ id: "2", pubkey: "b".repeat(64), content: "text2" },
{ id: "3", pubkey: "a".repeat(64), content: "text3" },
];
expect(highlights.length).toBe(3);
// Count unique highlighters
const uniqueHighlighters = new Set(highlights.map(h => h.pubkey));
const uniqueHighlighters = new Set(highlights.map((h) => h.pubkey));
expect(uniqueHighlighters.size).toBe(2);
});
});
describe('Text Matching', () => {
it('matches text case-insensitively', () => {
const searchText = 'Hello World';
const contentText = 'hello world';
describe("Text Matching", () => {
it("matches text case-insensitively", () => {
const searchText = "Hello World";
const contentText = "hello world";
const index = contentText.toLowerCase().indexOf(searchText.toLowerCase());
expect(index).toBeGreaterThanOrEqual(0);
});
it('handles special characters in search text', () => {
it("handles special characters in search text", () => {
const searchText = 'text with "quotes" and symbols!';
const contentText = 'This is text with "quotes" and symbols! in it.';
@@ -534,67 +536,75 @@ describe('HighlightLayer Component', () => {
expect(index).toBeGreaterThanOrEqual(0);
});
it('handles Unicode characters', () => {
const searchText = 'café résumé';
const contentText = 'The café résumé was excellent.';
it("handles Unicode characters", () => {
const searchText = "café résumé";
const contentText = "The café résumé was excellent.";
const index = contentText.toLowerCase().indexOf(searchText.toLowerCase());
expect(index).toBeGreaterThanOrEqual(0);
});
it('handles multi-line text', () => {
const searchText = 'line one\nline two';
const contentText = 'This is line one\nline two in the document.';
it("handles multi-line text", () => {
const searchText = "line one\nline two";
const contentText = "This is line one\nline two in the document.";
const index = contentText.indexOf(searchText);
expect(index).toBeGreaterThanOrEqual(0);
});
it('does not match partial words when searching for whole words', () => {
const searchText = 'cat';
const contentText = 'The category is important.';
it("does not match partial words when searching for whole words", () => {
const searchText = "cat";
const contentText = "The category is important.";
// Simple word boundary check
const wordBoundaryMatch = new RegExp(`\\b${searchText}\\b`, 'i').test(contentText);
const wordBoundaryMatch = new RegExp(`\\b${searchText}\\b`, "i").test(
contentText,
);
expect(wordBoundaryMatch).toBe(false);
});
});
describe('Subscription Lifecycle', () => {
it('registers EOSE event handler', () => {
describe("Subscription Lifecycle", () => {
it("registers EOSE event handler", () => {
const subscription = mockNdk.subscribe({ kinds: [9802], limit: 100 });
// Verify that 'on' method is available for registering handlers
expect(subscription.on).toBeDefined();
// Register EOSE handler
subscription.on('eose', () => {
subscription.on("eose", () => {
subscription.stop();
});
// Verify on was called
expect(subscription.on).toHaveBeenCalledWith('eose', expect.any(Function));
expect(subscription.on).toHaveBeenCalledWith(
"eose",
expect.any(Function),
);
});
it('registers error event handler', () => {
it("registers error event handler", () => {
const subscription = mockNdk.subscribe({ kinds: [9802], limit: 100 });
// Verify that 'on' method is available for registering handlers
expect(subscription.on).toBeDefined();
// Register error handler
subscription.on('error', () => {
subscription.on("error", () => {
subscription.stop();
});
// Verify on was called
expect(subscription.on).toHaveBeenCalledWith('error', expect.any(Function));
expect(subscription.on).toHaveBeenCalledWith(
"error",
expect.any(Function),
);
});
it('stops subscription on timeout', async () => {
it("stops subscription on timeout", async () => {
vi.useFakeTimers();
mockNdk.subscribe({ kinds: [9802], limit: 100 });
@@ -608,7 +618,7 @@ describe('HighlightLayer Component', () => {
vi.useRealTimers();
});
it('handles multiple subscription cleanup calls safely', () => {
it("handles multiple subscription cleanup calls safely", () => {
mockNdk.subscribe({ kinds: [9802], limit: 100 });
// Call stop multiple times
@@ -621,8 +631,8 @@ describe('HighlightLayer Component', () => {
});
});
describe('Performance', () => {
it('handles large number of highlights efficiently', () => {
describe("Performance", () => {
it("handles large number of highlights efficiently", () => {
const startTime = Date.now();
const highlights = Array.from({ length: 1000 }, (_, i) => ({
@@ -636,7 +646,7 @@ describe('HighlightLayer Component', () => {
// Generate colors for all highlights
const colorMap = new Map<string, string>();
highlights.forEach(h => {
highlights.forEach((h) => {
if (!colorMap.has(h.pubkey)) {
const hue = pubkeyToHue(h.pubkey);
colorMap.set(h.pubkey, `hsla(${hue}, 70%, 60%, 0.3)`);
@@ -653,9 +663,9 @@ describe('HighlightLayer Component', () => {
});
});
describe('Integration Tests', () => {
describe('Toggle Functionality', () => {
it('toggle button shows highlights when clicked', () => {
describe("Integration Tests", () => {
describe("Toggle Functionality", () => {
it("toggle button shows highlights when clicked", () => {
let highlightsVisible = false;
// Simulate toggle
@@ -664,7 +674,7 @@ describe('Integration Tests', () => {
expect(highlightsVisible).toBe(true);
});
it('toggle button hides highlights when clicked again', () => {
it("toggle button hides highlights when clicked again", () => {
let highlightsVisible = true;
// Simulate toggle
@@ -673,7 +683,7 @@ describe('Integration Tests', () => {
expect(highlightsVisible).toBe(false);
});
it('toggle state persists between interactions', () => {
it("toggle state persists between interactions", () => {
let highlightsVisible = false;
highlightsVisible = !highlightsVisible;
@@ -687,37 +697,38 @@ describe('Integration Tests', () => {
});
});
describe('Color Format Validation', () => {
it('generates semi-transparent colors with 0.3 opacity', () => {
describe("Color Format Validation", () => {
it("generates semi-transparent colors with 0.3 opacity", () => {
const pubkeys = [
'a'.repeat(64),
'b'.repeat(64),
'c'.repeat(64),
"a".repeat(64),
"b".repeat(64),
"c".repeat(64),
];
pubkeys.forEach(pubkey => {
pubkeys.forEach((pubkey) => {
const hue = pubkeyToHue(pubkey);
const color = `hsla(${hue}, 70%, 60%, 0.3)`;
expect(color).toContain('0.3');
expect(color).toContain("0.3");
});
});
it('uses HSL color format with correct saturation and lightness', () => {
const pubkey = 'a'.repeat(64);
it("uses HSL color format with correct saturation and lightness", () => {
const pubkey = "a".repeat(64);
const hue = pubkeyToHue(pubkey);
const color = `hsla(${hue}, 70%, 60%, 0.3)`;
expect(color).toContain('70%');
expect(color).toContain('60%');
expect(color).toContain("70%");
expect(color).toContain("60%");
});
it('generates valid CSS color strings', () => {
const pubkeys = Array.from({ length: 20 }, (_, i) =>
String.fromCharCode(97 + i).repeat(64)
it("generates valid CSS color strings", () => {
const pubkeys = Array.from(
{ length: 20 },
(_, i) => String.fromCharCode(97 + i).repeat(64),
);
pubkeys.forEach(pubkey => {
pubkeys.forEach((pubkey) => {
const hue = pubkeyToHue(pubkey);
const color = `hsla(${hue}, 70%, 60%, 0.3)`;
@@ -727,8 +738,8 @@ describe('Integration Tests', () => {
});
});
describe('End-to-End Flow', () => {
it('complete highlight workflow', () => {
describe("End-to-End Flow", () => {
it("complete highlight workflow", () => {
// 1. Start with no highlights visible
let highlightsVisible = false;
let highlights: any[] = [];
@@ -739,18 +750,18 @@ describe('Integration Tests', () => {
// 2. Fetch highlights
const mockHighlights = [
{
id: 'h1',
id: "h1",
kind: 9802,
pubkey: 'a'.repeat(64),
content: 'first highlight',
pubkey: "a".repeat(64),
content: "first highlight",
created_at: Date.now(),
tags: [],
},
{
id: 'h2',
id: "h2",
kind: 9802,
pubkey: 'b'.repeat(64),
content: 'second highlight',
pubkey: "b".repeat(64),
content: "second highlight",
created_at: Date.now(),
tags: [],
},
@@ -761,7 +772,7 @@ describe('Integration Tests', () => {
// 3. Generate color map
const colorMap = new Map<string, string>();
highlights.forEach(h => {
highlights.forEach((h) => {
if (!colorMap.has(h.pubkey)) {
const hue = pubkeyToHue(h.pubkey);
colorMap.set(h.pubkey, `hsla(${hue}, 70%, 60%, 0.3)`);
@@ -783,17 +794,17 @@ describe('Integration Tests', () => {
expect(highlightsVisible).toBe(false);
});
it('handles event updates correctly', () => {
let eventId = 'event1';
it("handles event updates correctly", () => {
let eventId = "event1";
let highlights: any[] = [];
// Initial load
highlights = [
{
id: 'h1',
id: "h1",
kind: 9802,
pubkey: 'a'.repeat(64),
content: 'highlight 1',
pubkey: "a".repeat(64),
content: "highlight 1",
created_at: Date.now(),
tags: [],
},
@@ -802,7 +813,7 @@ describe('Integration Tests', () => {
expect(highlights.length).toBe(1);
// Event changes
eventId = 'event2';
eventId = "event2";
highlights = [];
expect(highlights.length).toBe(0);
@@ -810,22 +821,22 @@ describe('Integration Tests', () => {
// New highlights loaded
highlights = [
{
id: 'h2',
id: "h2",
kind: 9802,
pubkey: 'b'.repeat(64),
content: 'highlight 2',
pubkey: "b".repeat(64),
content: "highlight 2",
created_at: Date.now(),
tags: [],
},
];
expect(highlights.length).toBe(1);
expect(highlights[0].id).toBe('h2');
expect(highlights[0].id).toBe("h2");
});
});
describe('Error Handling', () => {
it('handles missing event ID and address gracefully', () => {
describe("Error Handling", () => {
it("handles missing event ID and address gracefully", () => {
const eventId = undefined;
const eventAddress = undefined;
@@ -834,25 +845,25 @@ describe('Integration Tests', () => {
expect(eventAddress).toBeUndefined();
});
it('handles subscription errors gracefully', () => {
const error = new Error('Subscription failed');
it("handles subscription errors gracefully", () => {
const error = new Error("Subscription failed");
// Should log error but not crash
expect(error.message).toBe('Subscription failed');
expect(error.message).toBe("Subscription failed");
});
it('handles malformed highlight events', () => {
it("handles malformed highlight events", () => {
const malformedHighlight = {
id: 'h1',
id: "h1",
kind: 9802,
pubkey: '', // Empty pubkey
pubkey: "", // Empty pubkey
content: undefined, // Missing content
created_at: Date.now(),
tags: [],
};
// Should handle gracefully
expect(malformedHighlight.pubkey).toBe('');
expect(malformedHighlight.pubkey).toBe("");
expect(malformedHighlight.content).toBeUndefined();
});
});
121
tests/unit/publication_tree_processor.test.ts

@@ -1,19 +1,23 @@
/**
* TDD Tests for NKBIP-01 Publication Tree Processor
*
*
* Tests the iterative parsing function at different hierarchy levels
* using deep_hierarchy_test.adoc to verify NKBIP-01 compliance.
*/
import { describe, it, expect, beforeAll } from 'vitest';
import { readFileSync } from 'fs';
import { parseAsciiDocWithTree, validateParseLevel, getSupportedParseLevels } from '../../src/lib/utils/asciidoc_publication_parser.js';
import { beforeAll, describe, expect, it } from "vitest";
import { readFileSync } from "fs";
import {
getSupportedParseLevels,
parseAsciiDocWithTree,
validateParseLevel,
} from "../../src/lib/utils/asciidoc_publication_parser.js";
// Mock NDK for testing
const mockNDK = {
activeUser: {
pubkey: "test-pubkey-12345"
}
pubkey: "test-pubkey-12345",
},
} as any;
// Read the test document
@@ -21,7 +25,7 @@ const testDocumentPath = "./test_data/AsciidocFiles/deep_hierarchy_test.adoc";
let testContent: string;
try {
testContent = readFileSync(testDocumentPath, 'utf-8');
testContent = readFileSync(testDocumentPath, "utf-8");
} catch (error) {
console.error("Failed to read test document:", error);
testContent = `= Deep Hierarchical Document Test
@@ -65,20 +69,19 @@ A second main section to ensure we have balanced content at the top level.`;
}
describe("NKBIP-01 Publication Tree Processor", () => {
it("should validate parse levels correctly", () => {
// Test valid parse levels
expect(validateParseLevel(2)).toBe(true);
expect(validateParseLevel(3)).toBe(true);
expect(validateParseLevel(5)).toBe(true);
// Test invalid parse levels
expect(validateParseLevel(1)).toBe(false);
expect(validateParseLevel(6)).toBe(false);
expect(validateParseLevel(7)).toBe(false);
expect(validateParseLevel(2.5)).toBe(false);
expect(validateParseLevel(-1)).toBe(false);
// Test supported levels array
const supportedLevels = getSupportedParseLevels();
expect(supportedLevels).toEqual([2, 3, 4, 5]);
@@ -86,63 +89,66 @@ describe("NKBIP-01 Publication Tree Processor", () => {
it("should parse Level 2 with NKBIP-01 minimal structure", async () => {
const result = await parseAsciiDocWithTree(testContent, mockNDK, 2);
// Should be detected as article (has title and sections)
expect(result.metadata.contentType).toBe("article");
expect(result.metadata.parseLevel).toBe(2);
expect(result.metadata.title).toBe("Deep Hierarchical Document Test");
// Should have 1 index event (30040) + 2 content events (30041) for level 2 sections
expect(result.indexEvent).toBeDefined();
expect(result.indexEvent?.kind).toBe(30040);
expect(result.contentEvents.length).toBe(2);
// All content events should be kind 30041
result.contentEvents.forEach(event => {
result.contentEvents.forEach((event) => {
expect(event.kind).toBe(30041);
});
// Check titles of level 2 sections
const contentTitles = result.contentEvents.map(e =>
const contentTitles = result.contentEvents.map((e) =>
e.tags.find((t: string[]) => t[0] === "title")?.[1]
);
expect(contentTitles).toContain("Level 2: Main Sections");
expect(contentTitles).toContain("Level 2: Second Main Section");
// Content should include all nested subsections as AsciiDoc
const firstSectionContent = result.contentEvents[0].content;
expect(firstSectionContent).toBeDefined();
// Should contain level 3, 4, 5 content as nested AsciiDoc markup
expect(firstSectionContent.includes("=== Level 3: Subsections")).toBe(true);
expect(firstSectionContent.includes("==== Level 4: Sub-subsections")).toBe(true);
expect(firstSectionContent.includes("===== Level 5: Deep Subsections")).toBe(true);
expect(firstSectionContent.includes("==== Level 4: Sub-subsections")).toBe(
true,
);
expect(firstSectionContent.includes("===== Level 5: Deep Subsections"))
.toBe(true);
});
it("should parse Level 3 with NKBIP-01 intermediate structure", async () => {
const result = await parseAsciiDocWithTree(testContent, mockNDK, 3);
expect(result.metadata.contentType).toBe("article");
expect(result.metadata.parseLevel).toBe(3);
// Should have hierarchical structure
expect(result.indexEvent).toBeDefined();
expect(result.indexEvent?.kind).toBe(30040);
// Should have mix of 30040 (for level 2 sections with children) and 30041 (for content)
const kinds = result.contentEvents.map(e => e.kind);
const kinds = result.contentEvents.map((e) => e.kind);
expect(kinds).toContain(30040); // Level 2 sections with children
expect(kinds).toContain(30041); // Level 3 content sections
// Level 2 sections with children should be 30040 index events
const level2WithChildrenEvents = result.contentEvents.filter(e =>
e.kind === 30040 &&
const level2WithChildrenEvents = result.contentEvents.filter((e) =>
e.kind === 30040 &&
e.tags.find((t: string[]) => t[0] === "title")?.[1]?.includes("Level 2:")
);
expect(level2WithChildrenEvents.length).toBe(2); // Both level 2 sections have children
// Should have 30041 events for level 3 content
const level3ContentEvents = result.contentEvents.filter(e =>
e.kind === 30041 &&
const level3ContentEvents = result.contentEvents.filter((e) =>
e.kind === 30041 &&
e.tags.find((t: string[]) => t[0] === "title")?.[1]?.includes("Level 3:")
);
expect(level3ContentEvents.length).toBeGreaterThan(0);
@@ -150,20 +156,20 @@ describe("NKBIP-01 Publication Tree Processor", () => {
it("should parse Level 4 with NKBIP-01 detailed structure", async () => {
const result = await parseAsciiDocWithTree(testContent, mockNDK, 4);
expect(result.metadata.contentType).toBe("article");
expect(result.metadata.parseLevel).toBe(4);
// Should have hierarchical structure with mix of 30040 and 30041 events
expect(result.indexEvent).toBeDefined();
expect(result.indexEvent?.kind).toBe(30040);
const kinds = result.contentEvents.map(e => e.kind);
const kinds = result.contentEvents.map((e) => e.kind);
expect(kinds).toContain(30040); // Level 2 sections with children
expect(kinds).toContain(30041); // Content sections
// Check that we have level 4 content sections
const contentTitles = result.contentEvents.map(e =>
const contentTitles = result.contentEvents.map((e) =>
e.tags.find((t: string[]) => t[0] === "title")?.[1]
);
expect(contentTitles).toContain("Level 4: Sub-subsections");
@@ -171,16 +177,16 @@ describe("NKBIP-01 Publication Tree Processor", () => {
it("should parse Level 5 with NKBIP-01 maximum depth", async () => {
const result = await parseAsciiDocWithTree(testContent, mockNDK, 5);
expect(result.metadata.contentType).toBe("article");
expect(result.metadata.parseLevel).toBe(5);
// Should have hierarchical structure
expect(result.indexEvent).toBeDefined();
expect(result.indexEvent?.kind).toBe(30040);
// Should include level 5 sections as content events
const contentTitles = result.contentEvents.map(e =>
const contentTitles = result.contentEvents.map((e) =>
e.tags.find((t: string[]) => t[0] === "title")?.[1]
);
expect(contentTitles).toContain("Level 5: Deep Subsections");
@@ -188,27 +194,27 @@ describe("NKBIP-01 Publication Tree Processor", () => {
it("should validate event structure correctly", async () => {
const result = await parseAsciiDocWithTree(testContent, mockNDK, 3);
// Test index event structure
expect(result.indexEvent).toBeDefined();
expect(result.indexEvent?.kind).toBe(30040);
expect(result.indexEvent?.tags).toBeDefined();
// Check required tags
const indexTags = result.indexEvent!.tags;
const dTag = indexTags.find((t: string[]) => t[0] === "d");
const titleTag = indexTags.find((t: string[]) => t[0] === "title");
expect(dTag).toBeDefined();
expect(titleTag).toBeDefined();
expect(titleTag![1]).toBe("Deep Hierarchical Document Test");
// Test content events structure - mix of 30040 and 30041
result.contentEvents.forEach(event => {
result.contentEvents.forEach((event) => {
expect([30040, 30041]).toContain(event.kind);
expect(event.tags).toBeDefined();
expect(event.content).toBeDefined();
const eventTitleTag = event.tags.find((t: string[]) => t[0] === "title");
expect(eventTitleTag).toBeDefined();
});
@@ -216,11 +222,11 @@ describe("NKBIP-01 Publication Tree Processor", () => {
it("should preserve content as AsciiDoc", async () => {
const result = await parseAsciiDocWithTree(testContent, mockNDK, 2);
// Content should be preserved as original AsciiDoc, not converted to HTML
const firstEvent = result.contentEvents[0];
expect(firstEvent.content).toBeDefined();
// Should contain AsciiDoc markup, not HTML
expect(firstEvent.content.includes("<")).toBe(false);
expect(firstEvent.content.includes("===")).toBe(true);
@@ -228,16 +234,16 @@ describe("NKBIP-01 Publication Tree Processor", () => {
it("should handle attributes correctly", async () => {
const result = await parseAsciiDocWithTree(testContent, mockNDK, 2);
// Document-level attributes should be in index event
expect(result.indexEvent).toBeDefined();
const indexTags = result.indexEvent!.tags;
// Check for document attributes
const authorTag = indexTags.find((t: string[]) => t[0] === "author");
const typeTag = indexTags.find((t: string[]) => t[0] === "type");
const tagsTag = indexTags.find((t: string[]) => t[0] === "t");
expect(authorTag?.[1]).toBe("Test Author");
expect(typeTag?.[1]).toBe("technical");
expect(tagsTag).toBeDefined(); // Should have at least one t-tag
@@ -256,29 +262,28 @@ Content of first note.
Content of second note.`;
const result = await parseAsciiDocWithTree(scatteredContent, mockNDK, 2);
expect(result.metadata.contentType).toBe("scattered-notes");
expect(result.indexEvent).toBeNull(); // No index event for scattered notes
expect(result.contentEvents.length).toBe(2);
// All events should be 30041 content events
result.contentEvents.forEach(event => {
result.contentEvents.forEach((event) => {
expect(event.kind).toBe(30041);
});
});
it("should integrate with PublicationTree structure", async () => {
const result = await parseAsciiDocWithTree(testContent, mockNDK, 2);
// Should have a PublicationTree instance
expect(result.tree).toBeDefined();
// Tree should have methods for event management
expect(typeof result.tree.addEvent).toBe("function");
// Event structure should be populated
expect(result.metadata.eventStructure).toBeDefined();
expect(Array.isArray(result.metadata.eventStructure)).toBe(true);
});
});
});

534
tests/zettel-publisher-tdd.test.ts

@@ -3,7 +3,7 @@
/**
* Test-Driven Development for ZettelPublisher Enhancement
* Based on understanding_knowledge.adoc, desire.adoc, and docreference.md
*
*
* Key Requirements Discovered:
* 1. ITERATIVE parsing (not recursive): sections at target level become events
* 2. Level 2: == sections become 30041 events containing ALL subsections (===, ====, etc.)
@@ -14,8 +14,8 @@
* 7. Custom attributes: all :key: value pairs preserved as event tags
*/
import fs from 'fs';
import path from 'path';
import fs from "fs";
import path from "path";
// Test framework
interface TestCase {
@@ -40,7 +40,9 @@ class TestFramework {
},
toEqual: (expected: any) => {
if (JSON.stringify(actual) === JSON.stringify(expected)) return true;
throw new Error(`Expected ${JSON.stringify(expected)}, got ${JSON.stringify(actual)}`);
throw new Error(
`Expected ${JSON.stringify(expected)}, got ${JSON.stringify(actual)}`,
);
},
toContain: (expected: any) => {
if (actual && actual.includes && actual.includes(expected)) return true;
@@ -48,9 +50,11 @@ class TestFramework {
},
not: {
toContain: (expected: any) => {
if (actual && actual.includes && !actual.includes(expected)) return true;
if (actual && actual.includes && !actual.includes(expected)) {
return true;
}
throw new Error(`Expected "${actual}" NOT to contain "${expected}"`);
}
},
},
toBeTruthy: () => {
if (actual) return true;
@@ -58,14 +62,18 @@ class TestFramework {
},
toHaveLength: (expected: number) => {
if (actual && actual.length === expected) return true;
throw new Error(`Expected length ${expected}, got ${actual ? actual.length : 'undefined'}`);
}
throw new Error(
`Expected length ${expected}, got ${
actual ? actual.length : "undefined"
}`,
);
},
};
}
async run() {
console.log(`🧪 Running ${this.tests.length} tests...\n`);
for (const { name, fn } of this.tests) {
try {
await fn();
@@ -87,57 +95,68 @@ class TestFramework {
const test = new TestFramework();
// Load test data files
const testDataPath = path.join(process.cwd(), 'test_data', 'AsciidocFiles');
const understandingKnowledge = fs.readFileSync(path.join(testDataPath, 'understanding_knowledge.adoc'), 'utf-8');
const desire = fs.readFileSync(path.join(testDataPath, 'desire.adoc'), 'utf-8');
const testDataPath = path.join(process.cwd(), "test_data", "AsciidocFiles");
const understandingKnowledge = fs.readFileSync(
path.join(testDataPath, "understanding_knowledge.adoc"),
"utf-8",
);
const desire = fs.readFileSync(path.join(testDataPath, "desire.adoc"), "utf-8");
// =============================================================================
// PHASE 1: Core Data Structure Tests (Based on Real Test Data)
// =============================================================================
test.test('Understanding Knowledge: Document metadata should be extracted from = level', () => {
test.test("Understanding Knowledge: Document metadata should be extracted from = level", () => {
// Expected 30040 metadata from understanding_knowledge.adoc
const expectedDocMetadata = {
title: 'Understanding Knowledge',
image: 'https://i.nostr.build/IUs0xNyUEf5hXTFL.jpg',
published: '2025-04-21',
language: 'en, ISO-639-1',
tags: ['knowledge', 'philosophy', 'education'],
type: 'text'
title: "Understanding Knowledge",
image: "https://i.nostr.build/IUs0xNyUEf5hXTFL.jpg",
published: "2025-04-21",
language: "en, ISO-639-1",
tags: ["knowledge", "philosophy", "education"],
type: "text",
};
// Test will pass when document parsing extracts these correctly
test.expect(expectedDocMetadata.title).toBe('Understanding Knowledge');
test.expect(expectedDocMetadata.title).toBe("Understanding Knowledge");
test.expect(expectedDocMetadata.tags).toHaveLength(3);
test.expect(expectedDocMetadata.type).toBe('text');
test.expect(expectedDocMetadata.type).toBe("text");
});
test.test('Desire: Document metadata should include all custom attributes', () => {
test.test("Desire: Document metadata should include all custom attributes", () => {
// Expected 30040 metadata from desire.adoc
const expectedDocMetadata = {
title: 'Desire Part 1: Mimesis',
image: 'https://i.nostr.build/hGzyi4c3YhTwoCCe.png',
published: '2025-07-02',
language: 'en, ISO-639-1',
tags: ['memetics', 'philosophy', 'desire'],
type: 'podcastArticle'
title: "Desire Part 1: Mimesis",
image: "https://i.nostr.build/hGzyi4c3YhTwoCCe.png",
published: "2025-07-02",
language: "en, ISO-639-1",
tags: ["memetics", "philosophy", "desire"],
type: "podcastArticle",
};
test.expect(expectedDocMetadata.type).toBe('podcastArticle');
test.expect(expectedDocMetadata.tags).toContain('memetics');
test.expect(expectedDocMetadata.type).toBe("podcastArticle");
test.expect(expectedDocMetadata.tags).toContain("memetics");
});
test.test('Iterative ParsedAsciiDoc interface should support level-based parsing', () => {
test.test("Iterative ParsedAsciiDoc interface should support level-based parsing", () => {
// Test the ITERATIVE interface structure (not recursive)
// Based on docreference.md - Level 2 parsing example
const mockLevel2Structure = {
metadata: { title: 'Programming Fundamentals Guide', tags: ['programming', 'fundamentals'] },
content: 'This is the main introduction to the programming guide.',
title: 'Programming Fundamentals Guide',
metadata: {
title: "Programming Fundamentals Guide",
tags: ["programming", "fundamentals"],
},
content: "This is the main introduction to the programming guide.",
title: "Programming Fundamentals Guide",
sections: [
{
metadata: { title: 'Data Structures', tags: ['arrays', 'lists', 'trees'], difficulty: 'intermediate' },
content: `Understanding fundamental data structures is crucial for effective programming.
metadata: {
title: "Data Structures",
tags: ["arrays", "lists", "trees"],
difficulty: "intermediate",
},
content:
`Understanding fundamental data structures is crucial for effective programming.
=== Arrays and Lists
@@ -155,11 +174,16 @@ Linked lists use pointers to connect elements.
=== Trees and Graphs
Tree and graph structures enable hierarchical and networked data representation.`,
title: 'Data Structures'
title: "Data Structures",
},
{
metadata: { title: 'Algorithms', tags: ['sorting', 'searching', 'optimization'], difficulty: 'advanced' },
content: `Algorithmic thinking forms the foundation of efficient problem-solving.
metadata: {
title: "Algorithms",
tags: ["sorting", "searching", "optimization"],
difficulty: "advanced",
},
content:
`Algorithmic thinking forms the foundation of efficient problem-solving.
=== Sorting Algorithms
@@ -172,54 +196,64 @@ Bubble sort repeatedly steps through the list, compares adjacent elements.
==== Quick Sort
Quick sort uses divide-and-conquer approach with pivot selection.`,
title: 'Algorithms'
}
]
title: "Algorithms",
},
],
};
// Verify ITERATIVE structure: only level 2 sections, containing ALL subsections
test.expect(mockLevel2Structure.sections).toHaveLength(2);
test.expect(mockLevel2Structure.sections[0].title).toBe('Data Structures');
test.expect(mockLevel2Structure.sections[0].content).toContain('=== Arrays and Lists');
test.expect(mockLevel2Structure.sections[0].content).toContain('==== Dynamic Arrays');
test.expect(mockLevel2Structure.sections[1].content).toContain('==== Quick Sort');
test.expect(mockLevel2Structure.sections[0].title).toBe("Data Structures");
test.expect(mockLevel2Structure.sections[0].content).toContain(
"=== Arrays and Lists",
);
test.expect(mockLevel2Structure.sections[0].content).toContain(
"==== Dynamic Arrays",
);
test.expect(mockLevel2Structure.sections[1].content).toContain(
"==== Quick Sort",
);
});
// =============================================================================
// PHASE 2: Content Processing Tests (Header Separation)
// =============================================================================
test.test('Section content should NOT contain its own header', () => {
test.test("Section content should NOT contain its own header", () => {
// From understanding_knowledge.adoc: "== Preface" section
const expectedPrefaceContent = `[NOTE]
This essay was written to outline and elaborate on the purpose of the Nostr client Alexandria. No formal academic citations are included as this serves primarily as a conceptual foundation, inviting readers to experience related ideas connecting and forming as more content becomes uploaded. Traces of AI edits and guidance are left, but the essay style is still my own. Over time this essay may change its wording, structure and content.
-- liminal`;
// Should NOT contain "== Preface"
test.expect(expectedPrefaceContent).not.toContain('== Preface');
test.expect(expectedPrefaceContent).toContain('[NOTE]');
test.expect(expectedPrefaceContent).not.toContain("== Preface");
test.expect(expectedPrefaceContent).toContain("[NOTE]");
});
test.test('Introduction section should separate from its subsections', () => {
test.test("Introduction section should separate from its subsections", () => {
// From understanding_knowledge.adoc
const expectedIntroContent = `image:https://i.nostr.build/IUs0xNyUEf5hXTFL.jpg[library]`;
const expectedIntroContent =
`image:https://i.nostr.build/IUs0xNyUEf5hXTFL.jpg[library]`;
// Should NOT contain subsection content or headers
test.expect(expectedIntroContent).not.toContain('=== Why Investigate');
test.expect(expectedIntroContent).not.toContain('Understanding the nature of knowledge');
test.expect(expectedIntroContent).toContain('image:https://i.nostr.build');
test.expect(expectedIntroContent).not.toContain("=== Why Investigate");
test.expect(expectedIntroContent).not.toContain(
"Understanding the nature of knowledge",
);
test.expect(expectedIntroContent).toContain("image:https://i.nostr.build");
});
test.test('Subsection content should be cleanly separated', () => {
test.test("Subsection content should be cleanly separated", () => {
// "=== Why Investigate the Nature of Knowledge?" subsection
const expectedSubsectionContent = `Understanding the nature of knowledge itself is fundamental, distinct from simply studying how we learn or communicate. Knowledge exests first as representations within individuals, separate from how we interact with it...`;
const expectedSubsectionContent =
`Understanding the nature of knowledge itself is fundamental, distinct from simply studying how we learn or communicate. Knowledge exests first as representations within individuals, separate from how we interact with it...`;
// Should NOT contain its own header
test.expect(expectedSubsectionContent).not.toContain('=== Why Investigate');
test.expect(expectedSubsectionContent).toContain('Understanding the nature');
test.expect(expectedSubsectionContent).not.toContain("=== Why Investigate");
test.expect(expectedSubsectionContent).toContain("Understanding the nature");
});
test.test('Deep headers (====) should have proper newlines', () => {
test.test("Deep headers (====) should have proper newlines", () => {
// From "=== The Four Perspectives" section with ==== subsections
const expectedFormatted = `
==== 1. The Building Blocks (Material Cause)
@@ -230,188 +264,226 @@ Just as living organisms are made up of cells, knowledge systems are built from
If you've ever seen how mushrooms connect through underground networks...`;
test.expect(expectedFormatted).toContain('\n==== 1. The Building Blocks (Material Cause)\n');
test.expect(expectedFormatted).toContain('\n==== 2. The Pattern of Organization (Formal Cause)\n');
test.expect(expectedFormatted).toContain(
"\n==== 1. The Building Blocks (Material Cause)\n",
);
test.expect(expectedFormatted).toContain(
"\n==== 2. The Pattern of Organization (Formal Cause)\n",
);
});
// =============================================================================
// PHASE 3: Publishing Logic Tests (30040/30041 Structure)
// =============================================================================
test.test('Understanding Knowledge should create proper 30040 index event', () => {
test.test("Understanding Knowledge should create proper 30040 index event", () => {
// Expected 30040 index event structure
const expectedIndexEvent = {
kind: 30040,
content: '', // Index events have empty content
content: "", // Index events have empty content
tags: [
['d', 'understanding-knowledge'],
['title', 'Understanding Knowledge'],
['image', 'https://i.nostr.build/IUs0xNyUEf5hXTFL.jpg'],
['published', '2025-04-21'],
['language', 'en, ISO-639-1'],
['t', 'knowledge'],
['t', 'philosophy'],
['t', 'education'],
['type', 'text'],
["d", "understanding-knowledge"],
["title", "Understanding Knowledge"],
["image", "https://i.nostr.build/IUs0xNyUEf5hXTFL.jpg"],
["published", "2025-04-21"],
["language", "en, ISO-639-1"],
["t", "knowledge"],
["t", "philosophy"],
["t", "education"],
["type", "text"],
// a-tags referencing sections
['a', '30041:pubkey:understanding-knowledge-preface'],
['a', '30041:pubkey:understanding-knowledge-introduction-knowledge-as-a-living-ecosystem'],
['a', '30041:pubkey:understanding-knowledge-i-material-cause-the-substance-of-knowledge'],
["a", "30041:pubkey:understanding-knowledge-preface"],
[
"a",
"30041:pubkey:understanding-knowledge-introduction-knowledge-as-a-living-ecosystem",
],
[
"a",
"30041:pubkey:understanding-knowledge-i-material-cause-the-substance-of-knowledge",
],
// ... more a-tags for each section
]
],
};
test.expect(expectedIndexEvent.kind).toBe(30040);
test.expect(expectedIndexEvent.content).toBe('');
test.expect(expectedIndexEvent.tags.filter(([k]) => k === 't')).toHaveLength(3);
test.expect(expectedIndexEvent.tags.find(([k, v]) => k === 'type' && v === 'text')).toBeTruthy();
test.expect(expectedIndexEvent.content).toBe("");
test.expect(expectedIndexEvent.tags.filter(([k]) => k === "t")).toHaveLength(
3,
);
test.expect(
expectedIndexEvent.tags.find(([k, v]) => k === "type" && v === "text"),
).toBeTruthy();
});
test.test('Understanding Knowledge sections should create proper 30041 events', () => {
test.test("Understanding Knowledge sections should create proper 30041 events", () => {
// Expected 30041 events for main sections
const expectedSectionEvents = [
{
kind: 30041,
content: `[NOTE]\nThis essay was written to outline and elaborate on the purpose of the Nostr client Alexandria...`,
content:
`[NOTE]\nThis essay was written to outline and elaborate on the purpose of the Nostr client Alexandria...`,
tags: [
['d', 'understanding-knowledge-preface'],
['title', 'Preface']
]
["d", "understanding-knowledge-preface"],
["title", "Preface"],
],
},
{
kind: 30041,
kind: 30041,
content: `image:https://i.nostr.build/IUs0xNyUEf5hXTFL.jpg[library]`,
tags: [
['d', 'understanding-knowledge-introduction-knowledge-as-a-living-ecosystem'],
['title', 'Introduction: Knowledge as a Living Ecosystem']
]
}
[
"d",
"understanding-knowledge-introduction-knowledge-as-a-living-ecosystem",
],
["title", "Introduction: Knowledge as a Living Ecosystem"],
],
},
];
expectedSectionEvents.forEach(event => {
expectedSectionEvents.forEach((event) => {
test.expect(event.kind).toBe(30041);
test.expect(event.content).toBeTruthy();
test.expect(event.tags.find(([k]) => k === 'd')).toBeTruthy();
test.expect(event.tags.find(([k]) => k === 'title')).toBeTruthy();
test.expect(event.tags.find(([k]) => k === "d")).toBeTruthy();
test.expect(event.tags.find(([k]) => k === "title")).toBeTruthy();
});
});
test.test('Level-based parsing should create correct 30040/30041 structure', () => {
test.test("Level-based parsing should create correct 30040/30041 structure", () => {
// Based on docreference.md examples
// Level 2 parsing: only == sections become events, containing all subsections
const expectedLevel2Events = {
mainIndex: {
kind: 30040,
content: '',
content: "",
tags: [
['d', 'programming-fundamentals-guide'],
['title', 'Programming Fundamentals Guide'],
['a', '30041:author_pubkey:data-structures'],
['a', '30041:author_pubkey:algorithms']
]
["d", "programming-fundamentals-guide"],
["title", "Programming Fundamentals Guide"],
["a", "30041:author_pubkey:data-structures"],
["a", "30041:author_pubkey:algorithms"],
],
},
dataStructuresSection: {
kind: 30041,
content: 'Understanding fundamental data structures...\n\n=== Arrays and Lists\n\n...==== Dynamic Arrays\n\n...==== Linked Lists\n\n...',
content:
"Understanding fundamental data structures...\n\n=== Arrays and Lists\n\n...==== Dynamic Arrays\n\n...==== Linked Lists\n\n...",
tags: [
['d', 'data-structures'],
['title', 'Data Structures'],
['difficulty', 'intermediate']
]
}
["d", "data-structures"],
["title", "Data Structures"],
["difficulty", "intermediate"],
],
},
};
// Level 3 parsing: == sections become 30040 indices, === sections become 30041 events
const expectedLevel3Events = {
mainIndex: {
kind: 30040,
content: '',
content: "",
tags: [
['d', 'programming-fundamentals-guide'],
['title', 'Programming Fundamentals Guide'],
['a', '30040:author_pubkey:data-structures'], // Now references sub-index
['a', '30040:author_pubkey:algorithms']
]
["d", "programming-fundamentals-guide"],
["title", "Programming Fundamentals Guide"],
["a", "30040:author_pubkey:data-structures"], // Now references sub-index
["a", "30040:author_pubkey:algorithms"],
],
},
dataStructuresIndex: {
kind: 30040,
content: '',
content: "",
tags: [
['d', 'data-structures'],
['title', 'Data Structures'],
['a', '30041:author_pubkey:data-structures-content'],
['a', '30041:author_pubkey:arrays-and-lists'],
['a', '30041:author_pubkey:trees-and-graphs']
]
["d", "data-structures"],
["title", "Data Structures"],
["a", "30041:author_pubkey:data-structures-content"],
["a", "30041:author_pubkey:arrays-and-lists"],
["a", "30041:author_pubkey:trees-and-graphs"],
],
},
arraysAndListsSection: {
kind: 30041,
content: 'Arrays are contiguous...\n\n==== Dynamic Arrays\n\n...==== Linked Lists\n\n...',
content:
"Arrays are contiguous...\n\n==== Dynamic Arrays\n\n...==== Linked Lists\n\n...",
tags: [
['d', 'arrays-and-lists'],
['title', 'Arrays and Lists']
]
}
["d", "arrays-and-lists"],
["title", "Arrays and Lists"],
],
},
};
test.expect(expectedLevel2Events.mainIndex.kind).toBe(30040);
test.expect(expectedLevel2Events.dataStructuresSection.kind).toBe(30041);
test.expect(expectedLevel2Events.dataStructuresSection.content).toContain('=== Arrays and Lists');
test.expect(expectedLevel2Events.dataStructuresSection.content).toContain(
"=== Arrays and Lists",
);
test.expect(expectedLevel3Events.dataStructuresIndex.kind).toBe(30040);
test.expect(expectedLevel3Events.arraysAndListsSection.content).toContain('==== Dynamic Arrays');
test.expect(expectedLevel3Events.arraysAndListsSection.content).toContain(
"==== Dynamic Arrays",
);
});
// =============================================================================
// PHASE 4: Smart Publishing System Tests
// =============================================================================
test.test('Content type detection should work for both test files', () => {
test.test("Content type detection should work for both test files", () => {
const testCases = [
{
name: 'Understanding Knowledge (article)',
name: "Understanding Knowledge (article)",
content: understandingKnowledge,
expected: 'article'
expected: "article",
},
{
name: 'Desire (article)',
content: desire,
expected: 'article'
name: "Desire (article)",
content: desire,
expected: "article",
},
{
name: 'Scattered notes format',
content: '== Note 1\nContent\n\n== Note 2\nMore content',
expected: 'scattered-notes'
}
name: "Scattered notes format",
content: "== Note 1\nContent\n\n== Note 2\nMore content",
expected: "scattered-notes",
},
];
testCases.forEach(({ name, content, expected }) => {
const hasDocTitle = content.trim().startsWith('=') && !content.trim().startsWith('==');
const hasSections = content.includes('==');
const hasDocTitle = content.trim().startsWith("=") &&
!content.trim().startsWith("==");
const hasSections = content.includes("==");
let detected;
if (hasDocTitle) {
detected = 'article';
detected = "article";
} else if (hasSections) {
detected = 'scattered-notes';
detected = "scattered-notes";
} else {
detected = 'none';
detected = "none";
}
console.log(` ${name}: detected ${detected}`);
test.expect(detected).toBe(expected);
});
});
test.test('Parse level should affect event structure correctly', () => {
test.test("Parse level should affect event structure correctly", () => {
// Understanding Knowledge has structure: = > == (6 sections) > === (many subsections) > ====
// Based on actual content analysis
const levelEventCounts = [
{ level: 1, description: 'Only document index', events: 1 },
{ level: 2, description: 'Document index + level 2 sections (==)', events: 7 }, // 1 index + 6 sections
{ level: 3, description: 'Document index + section indices + level 3 subsections (===)', events: 20 }, // More complex
{ level: 4, description: 'Full hierarchy including level 4 (====)', events: 35 }
{ level: 1, description: "Only document index", events: 1 },
{
level: 2,
description: "Document index + level 2 sections (==)",
events: 7,
}, // 1 index + 6 sections
{
level: 3,
description:
"Document index + section indices + level 3 subsections (===)",
events: 20,
}, // More complex
{
level: 4,
description: "Full hierarchy including level 4 (====)",
events: 35,
},
];
levelEventCounts.forEach(({ level, description, events }) => {
@@ -424,27 +496,27 @@ test.test('Parse level should affect event structure correctly', () => {
// PHASE 5: Integration Tests (End-to-End Workflow)
// =============================================================================
test.test('Full Understanding Knowledge publishing workflow (Level 2)', async () => {
test.test("Full Understanding Knowledge publishing workflow (Level 2)", async () => {
// Mock the complete ITERATIVE workflow
const mockWorkflow = {
parseLevel2: (content: string) => ({
metadata: {
title: 'Understanding Knowledge',
image: 'https://i.nostr.build/IUs0xNyUEf5hXTFL.jpg',
published: '2025-04-21',
tags: ['knowledge', 'philosophy', 'education'],
type: 'text'
title: "Understanding Knowledge",
image: "https://i.nostr.build/IUs0xNyUEf5hXTFL.jpg",
published: "2025-04-21",
tags: ["knowledge", "philosophy", "education"],
type: "text",
},
title: 'Understanding Knowledge',
content: 'Introduction content before any sections',
title: "Understanding Knowledge",
content: "Introduction content before any sections",
sections: [
{
title: 'Preface',
content: '[NOTE]\nThis essay was written to outline...',
metadata: { title: 'Preface' }
{
title: "Preface",
content: "[NOTE]\nThis essay was written to outline...",
metadata: { title: "Preface" },
},
{
title: 'Introduction: Knowledge as a Living Ecosystem',
{
title: "Introduction: Knowledge as a Living Ecosystem",
// Contains ALL subsections (===, ====) in content
content: `image:https://i.nostr.build/IUs0xNyUEf5hXTFL.jpg[library]
@@ -461,41 +533,46 @@ Traditionally, knowledge has been perceived as a static repository...
===== 1. The Building Blocks (Material Cause)
Just as living organisms are made up of cells...`,
metadata: { title: 'Introduction: Knowledge as a Living Ecosystem' }
}
metadata: { title: "Introduction: Knowledge as a Living Ecosystem" },
},
// ... 4 more sections (Material Cause, Formal Cause, Efficient Cause, Final Cause)
]
],
}),
buildLevel2Events: (parsed: any) => ({
indexEvent: {
kind: 30040,
content: '',
indexEvent: {
kind: 30040,
content: "",
tags: [
['d', 'understanding-knowledge'],
['title', parsed.title],
['image', parsed.metadata.image],
['t', 'knowledge'], ['t', 'philosophy'], ['t', 'education'],
['type', 'text'],
['a', '30041:pubkey:preface'],
['a', '30041:pubkey:introduction-knowledge-as-a-living-ecosystem']
]
["d", "understanding-knowledge"],
["title", parsed.title],
["image", parsed.metadata.image],
["t", "knowledge"],
["t", "philosophy"],
["t", "education"],
["type", "text"],
["a", "30041:pubkey:preface"],
["a", "30041:pubkey:introduction-knowledge-as-a-living-ecosystem"],
],
},
sectionEvents: parsed.sections.map((s: any) => ({
kind: 30041,
content: s.content,
tags: [
['d', s.title.toLowerCase().replace(/[^a-z0-9]+/g, '-')],
['title', s.title]
]
}))
["d", s.title.toLowerCase().replace(/[^a-z0-9]+/g, "-")],
["title", s.title],
],
})),
}),
publish: (events: any) => ({
success: true,
published: events.sectionEvents.length + 1,
eventIds: ['main-index', ...events.sectionEvents.map((_: any, i: number) => `section-${i}`)]
})
eventIds: [
"main-index",
...events.sectionEvents.map((_: any, i: number) => `section-${i}`),
],
}),
};
// Test the full Level 2 workflow
@@ -503,29 +580,38 @@ Just as living organisms are made up of cells...`,
const events = mockWorkflow.buildLevel2Events(parsed);
const result = mockWorkflow.publish(events);
test.expect(parsed.metadata.title).toBe('Understanding Knowledge');
test.expect(parsed.metadata.title).toBe("Understanding Knowledge");
test.expect(parsed.sections).toHaveLength(2);
test.expect(events.indexEvent.kind).toBe(30040);
test.expect(events.sectionEvents).toHaveLength(2);
test.expect(events.sectionEvents[1].content).toContain('=== Why Investigate'); // Contains subsections
test.expect(events.sectionEvents[1].content).toContain('===== 1. The Building Blocks'); // Contains deeper levels
test.expect(events.sectionEvents[1].content).toContain("=== Why Investigate"); // Contains subsections
test.expect(events.sectionEvents[1].content).toContain(
"===== 1. The Building Blocks",
); // Contains deeper levels
test.expect(result.success).toBeTruthy();
test.expect(result.published).toBe(3); // 1 index + 2 sections
});
test.test('Error handling for malformed content', () => {
test.test("Error handling for malformed content", () => {
const invalidCases = [
{ content: '== Section\n=== Subsection\n==== Missing content', error: 'Empty content sections' },
{ content: '= Title\n\n== Section\n==== Skipped level', error: 'Invalid header nesting' },
{ content: '', error: 'Empty document' }
{
content: "== Section\n=== Subsection\n==== Missing content",
error: "Empty content sections",
},
{
content: "= Title\n\n== Section\n==== Skipped level",
error: "Invalid header nesting",
},
{ content: "", error: "Empty document" },
];
invalidCases.forEach(({ content, error }) => {
// Mock error detection
const hasEmptySections = content.includes('Missing content');
const hasSkippedLevels = content.includes('====') && !content.includes('===');
const isEmpty = content.trim() === '';
const hasEmptySections = content.includes("Missing content");
const hasSkippedLevels = content.includes("====") &&
!content.includes("===");
const isEmpty = content.trim() === "";
const shouldError = hasEmptySections || hasSkippedLevels || isEmpty;
test.expect(shouldError).toBeTruthy();
});
@@ -535,26 +621,40 @@ test.test('Error handling for malformed content', () => {
// Test Execution
// =============================================================================
console.log('🎯 ZettelPublisher Test-Driven Development (ITERATIVE)\n');
console.log('📋 Test Data Analysis:');
console.log(`- Understanding Knowledge: ${understandingKnowledge.split('\n').length} lines`);
console.log(`- Desire: ${desire.split('\n').length} lines`);
console.log('- Both files use = document title with metadata directly underneath');
console.log('- Sections use == with deep nesting (===, ====, =====)');
console.log('- Custom attributes like :type: podcastArticle need preservation');
console.log('- CRITICAL: Structure is ITERATIVE not recursive (per docreference.md)\n');
test.run().then(success => {
console.log("🎯 ZettelPublisher Test-Driven Development (ITERATIVE)\n");
console.log("📋 Test Data Analysis:");
console.log(
`- Understanding Knowledge: ${
understandingKnowledge.split("\n").length
} lines`,
);
console.log(`- Desire: ${desire.split("\n").length} lines`);
console.log(
"- Both files use = document title with metadata directly underneath",
);
console.log("- Sections use == with deep nesting (===, ====, =====)");
console.log("- Custom attributes like :type: podcastArticle need preservation");
console.log(
"- CRITICAL: Structure is ITERATIVE not recursive (per docreference.md)\n",
);
test.run().then((success) => {
if (success) {
console.log('\n🎉 All tests defined! Ready for ITERATIVE implementation.');
console.log('\n📋 Implementation Plan:');
console.log('1. ✅ Update ParsedAsciiDoc interface for ITERATIVE parsing');
console.log('2. ✅ Fix content processing (header separation, custom attributes)');
console.log('3. ✅ Implement level-based publishing logic (30040/30041 structure)');
console.log('4. ✅ Add parse-level controlled event generation');
console.log('5. ✅ Create context-aware UI with level selector');
console.log('\n🔄 Each level can be developed and tested independently!');
console.log("\n🎉 All tests defined! Ready for ITERATIVE implementation.");
console.log("\n📋 Implementation Plan:");
console.log("1. ✅ Update ParsedAsciiDoc interface for ITERATIVE parsing");
console.log(
"2. ✅ Fix content processing (header separation, custom attributes)",
);
console.log(
"3. ✅ Implement level-based publishing logic (30040/30041 structure)",
);
console.log("4. ✅ Add parse-level controlled event generation");
console.log("5. ✅ Create context-aware UI with level selector");
console.log("\n🔄 Each level can be developed and tested independently!");
} else {
console.log('\n❌ Tests ready - implement ITERATIVE features to make them pass!');
console.log(
"\n❌ Tests ready - implement ITERATIVE features to make them pass!",
);
}
}).catch(console.error);
}).catch(console.error);
