betty.land

bettyd

Introduction

bettyd: The Local-First Engine of the BeTTY Network.

bettyd is the BeTTY UN*X daemon: the local process that serves, stores, and maintains your data in bettyland. bettyd encrypts, indexes, and federates data for the BeTTY ecosystem. It serves as both the heart of a local BeTTY node and the substrate for interoperable communication across the network.

Running quietly in the background, bettyd handles all data persistence, access control, and peer synchronization. In essence, bettyd turns your local machine into an intelligent, privacy-preserving data node capable of hosting, querying, and sharing BettyDocs across trusted peers. It's the lightweight, modular, foundational layer that brings the open Semantic Internet to life.

Purpose

The goal of bettyd is to provide a trusted foundation for the semantic internet, allowing both humans and machines to store and exchange meaningful information without centralization or lock-in. bettyd runs as a lightweight service that handles BeTTY API calls for all local applications: from the command line via btty, from web clients through httbd, and from the rest of bettyland via the LUMA API.

Every bettyd instance operates independently yet speaks a shared protocol, allowing for seamless, secure federation across devices, teams, and institutions. Your node is both sovereign and connected—local by design, global by intent.

Architecture

bettyd integrates a modular storage backend, semantic graph engine, and REST-like API, allowing developers to plug in new data systems or query layers with minimal effort.
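The document does not publish the backend interface itself, so the following is a minimal sketch of what a pluggable storage backend might look like; the class and method names (StorageBackend, get, put) are illustrative assumptions, not bettyd's actual API.

```python
from abc import ABC, abstractmethod

# Hypothetical sketch: bettyd is described as having a "modular storage
# backend", so we model it as an abstract interface that alternative
# data systems could implement. All names here are illustrative.
class StorageBackend(ABC):
    @abstractmethod
    def get(self, topic_id: str) -> dict:
        """Return the full document stored at topic_id."""

    @abstractmethod
    def put(self, topic_id: str, doc: dict) -> None:
        """Store a complete document at topic_id."""

class InMemoryBackend(StorageBackend):
    """A trivial backend, standing in for nbson or any plugged-in store."""
    def __init__(self):
        self._docs = {}

    def get(self, topic_id):
        return self._docs[topic_id]

    def put(self, topic_id, doc):
        self._docs[topic_id] = doc

backend = InMemoryBackend()
backend.put("doc-1", {"meta": {"title": "hello"}})
print(backend.get("doc-1")["meta"]["title"])  # hello
```

A new query layer or data system would subclass StorageBackend rather than touch the daemon's core, which is the "minimal effort" plug-in point the architecture aims for.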

Data Lakes, Warehouses, and Lakehouses

The BeTTY network (a.k.a. "bettyland") functions as a federated data lake, composed of all bettyd nodes (even when they're offline; it's kind of a quantum network in that way). Each node maintains its own local data warehouse that synchronizes, indexes, and exchanges metadata with other trusted peers.

This design follows the emergent data lakehouse model, which combines the big-data concepts of the data lake (a large, undifferentiated repository of raw data) and the data warehouse (a well-maintained store of schema-conformant data) into a single architecture.

The global BeTTY network forms the "lake," a shared semantic fabric of discoverable knowledge. Your local bettyd instance forms the "house," a curated, high-integrity subset of the data, maintained for your needs and permissions. The result is a scalable, searchable, and decentralized system that never sacrifices user control or data locality.

The system inherits the simplicity of UNIX and the semantics of the Web, combining line-based text storage (as in ndjson) with JSON-LD for linked meaning. Through nbson, all data is stored compressed and encrypted at rest.
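To make the "line-based text plus linked meaning" idea concrete, here is a small sketch of JSON-LD documents serialized one per line (ndjson-style) and compressed for storage. zlib stands in for nbson's actual compression, and encryption at rest is omitted; the document contents are invented for illustration.

```python
import json
import zlib

# Illustrative sketch: one JSON-LD document per line (ndjson), then
# compressed, as a stand-in for nbson's compressed at-rest format.
# (Real nbson also encrypts; that step is omitted here.)
docs = [
    {"@context": "https://schema.org", "@type": "Note", "text": "hello bettyland"},
    {"@context": "https://schema.org", "@type": "Note", "text": "second line"},
]
ndjson = "\n".join(json.dumps(d, separators=(",", ":")) for d in docs)
blob = zlib.compress(ndjson.encode("utf-8"))

# Reading back: decompress, then parse one JSON-LD document per line.
restored = [
    json.loads(line)
    for line in zlib.decompress(blob).decode("utf-8").splitlines()
]
print(restored[0]["@type"])  # Note
```

Each line stays independently parseable (the UNIX half), while the @context and @type keys carry linked semantics (the Web half).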

Key Features

The LUMA API

How LUMA Works Internally: The Data Lakehouse in Action

Under the hood, bettyd translates every LUMA call into coordinated actions across its internal nbson data lakes. Each lake is a namespace within the node's local nbson store, and together they form the data lakehouse providing both fluid document mobility and consistent, queryable state. By default, each node maintains three logical lakes:

Lake      Purpose
main      The primary user data store.
archive   A temporary workspace used for backups, drafts, and prepublication artifacts.
system    Contains indexes, system metadata, public keys, and other local configuration documents.
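A minimal sketch of the three default lakes as namespaces within one local store; the real nbson layout is not documented here, so the dict-of-dicts structure and the read/write helpers are purely illustrative.

```python
# Hypothetical sketch: the three default lakes modeled as namespaces
# within a single local store. Names follow the table above; everything
# else is an illustrative assumption.
LAKES: dict[str, dict] = {"main": {}, "archive": {}, "system": {}}

def write(lake: str, topic_id: str, doc: dict) -> None:
    """Store a complete document under a topicID in the given lake."""
    LAKES[lake][topic_id] = doc

def read(lake: str, topic_id: str) -> dict:
    """Fetch the full document stored at topicID in the given lake."""
    return LAKES[lake][topic_id]

write("main", "topic-123", {"meta": {"title": "draft"}})
print(read("main", "topic-123")["meta"]["title"])  # draft
```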

Example: Updating a BettyDoc

When a document is updated via LUMA's upsert command, bettyd coordinates a multi-step process to preserve data integrity, maintain provenance, and update relevant indexes. This workflow provides strong guarantees for:

Integrity          Only complete documents are written; no partial updates or broken schemas.
Provenance         Every change is versioned and archived, preserving a transparent audit trail.
Autonomy           Nodes may enforce local access control policies while maintaining federation.
Index Consistency  Tags and metadata drive index maintenance dynamically, without schema migrations.
Resilience         The archive lake doubles as a recovery mechanism and audit ledger.

Here's what happens behind the scenes.

  1. Retrieve the existing document:
    The client requests the full document by its topic ID: GET /list?topicID=<uuid>
    The node returns the complete BettyDoc from the main data lake.
  2. Edit and resubmit via upsert:
    After local edits, the updated document is resubmitted via an upsert API request to the appropriate topicID. BeTTY requires full-document upserts to ensure structural consistency and simplify federation.
  3. Backup the old version:
    If a document already exists in the main lake at the requested topicID, it is copied in full to the archive lake. A log message is generated containing the complete meta key of the old document, then dispatched via message to the archive log topicID in the system lake. This guarantees immutable, auditable change tracking for every revision.
  4. Reindex and validate metadata:
    The new document's meta section is scanned for index fields and tags. Tags may impose index field requirements, so if the node has the full decryption key (i.e., can read the document contents), it will:
    1. Rebuild the meta.index keys
    2. Generate a list of all index files that must be updated
    3. Compute the corresponding key-value pairs for those indices
    This ensures that every tag and topic remains queryable and federated across compatible nodes.
  5. Propagate index updates:
    The list of required index updates is sent as individual message requests to the appropriate index topicIDs in the system lake. Upon receipt, each directory file updates the relevant entry, replacing or appending the new values as needed.
  6. Finalize the update:
    The new document is inserted into the main lake at the requested topicID. Confirmation messages are sent to both the original requester and the system logs via message, ensuring traceable acknowledgment of the completed transaction.
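The steps above can be condensed into a short sketch. The lake layout, meta/tags shape, and log format are illustrative assumptions (bettyd's internals are not specified here), and the HTTP transport and key-handling steps are elided; only the archive-then-index-then-insert flow is modeled.

```python
import copy
import uuid

# Condensed sketch of the upsert workflow described above. The lake
# structure, meta layout, and log format are illustrative only.
lakes: dict[str, dict] = {"main": {}, "archive": {}, "system": {"archive-log": []}}

def upsert(topic_id: str, new_doc: dict) -> dict:
    # Step 3: back up any existing version to the archive lake and
    # log its meta key to the archive log in the system lake.
    old = lakes["main"].get(topic_id)
    if old is not None:
        lakes["archive"][topic_id] = copy.deepcopy(old)
        lakes["system"]["archive-log"].append(
            {"topicID": topic_id, "meta": old.get("meta", {})}
        )
    # Steps 4-5: scan the new meta section for tags and propagate the
    # corresponding index updates into the system lake.
    for tag in new_doc.get("meta", {}).get("tags", []):
        lakes["system"].setdefault(f"index:{tag}", set()).add(topic_id)
    # Step 6: insert the full document into the main lake and acknowledge.
    lakes["main"][topic_id] = new_doc
    return {"ok": True, "topicID": topic_id}

tid = str(uuid.uuid4())
upsert(tid, {"meta": {"tags": ["notes"]}, "body": "v1"})
ack = upsert(tid, {"meta": {"tags": ["notes", "draft"]}, "body": "v2"})
print(ack["ok"], lakes["archive"][tid]["body"])  # True v1
```

After the second upsert, the main lake holds v2, the archive lake holds v1, and both tags are queryable through their index entries, mirroring the integrity, provenance, and index-consistency guarantees listed above.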

Integration and Extensibility

Integrating with the BeTTY Stack

bettyd forms the base of the BeTTY stack, serving data to btty at the command line, to web clients through httbd, and to peer nodes across bettyland via the LUMA API.

By combining these layers, bettyd helps build a local-first, semantic, and private alternative to today's cloud-centric web.

Plugins in the BeTTY Ecosystem

BeTTY separates concerns between data and presentation by distinguishing between plugins and applications. Plugins (registered via btty) bind LUMA verbs to code written in any language, running on any machine. This makes plugins an incredibly easy way to extend the functionality of your BeTTY stack.
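Since the btty registration API itself is not shown in this document, here is a hypothetical sketch of the idea: a dispatch table binding a LUMA verb to a handler, the way a registered plugin might hook an upsert. The decorator, verb name, and plugin logic are all invented for illustration.

```python
from typing import Callable

# Hypothetical sketch of plugin registration: a dispatch table binding
# LUMA verbs to handlers, as btty registration might. All names are
# illustrative assumptions, not the real btty API.
PLUGINS: dict[str, Callable[[dict], dict]] = {}

def register(verb: str):
    """Bind a handler function to a LUMA verb."""
    def wrap(fn):
        PLUGINS[verb] = fn
        return fn
    return wrap

@register("upsert")
def word_count_plugin(doc: dict) -> dict:
    # Example plugin logic: annotate each upserted document's meta
    # section with a word count of its body.
    doc.setdefault("meta", {})["words"] = len(doc.get("body", "").split())
    return doc

result = PLUGINS["upsert"]({"body": "hello semantic internet"})
print(result["meta"]["words"])  # 3
```

Because the handler only sees and returns documents, the same plugin logic could just as well live in another language or on another machine, which is the portability the verb-binding design is after.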

This leads to the question of what distinguishes a BeTTY plugin from a BeTTY application. In brief, bettyd plugins extend what BeTTY is capable of doing; applications built on the BeTTY stack (e.g., Memeograph, built atop httbd and marXDown) extend how humans experience BeTTY.

Feature              bettyd plugin            httbd app
Primary Role         Backend logic & storage  Frontend interface & interaction
Runs On              bettyd node              httbd gateway
Encryption Handling  Native                   Inherited (through httbd)
Data Access          Direct API (LUMA)        Translated JSON-LD
Example              nbson, marXDown          memeograph

Why It Matters

bettyd provides the quiet infrastructure beneath the thinking, writing, and collaboration that drive digital human interaction. Every node contributes to a network of meaning rather than a market of attention.

Built for openness, it invites both developers and end users to regain control of their information and take part in a new Internet serving knowledge, not algorithms.