From Spine to Machine Code

Post 14 · Phase 3 — Compilation

In the previous post, we saw how Spine programs are verified — how facts, invariants, and effects are projected to tools like Z3, TLA+, mCRL2, BigraphER, and PRISM.

But verification alone does not produce a running program. At some point, a structured description of a system must become executable code.

This post follows that journey: from surface syntax to machine code.

The Challenge

Spine programs are not conventional programs. They describe facts, rules, effects, invariants, processes, spatial structure, resources, and universes.

A traditional compiler takes functions and control flow and translates them to machine instructions. Spine must translate something richer: a system with logical constraints, concurrent processes, spatial nesting, and resource discipline.

The question is: how do you go from a structured description of a system to efficient, runnable code — without losing the properties you worked so hard to define?

Stage 1: Surface Syntax

The journey begins with what the developer writes.

Spine's surface syntax is designed to express the concepts we have been discussing throughout this series: fact declarations, rules with preconditions and postconditions, effects constrained by the model, invariants, quantity annotations, universe declarations, and process definitions.

This is the language as the developer sees it — structured, explicit, and domain-aware.

Stage 2: Correspondence Relations

The surface syntax is compiled through bidirectional correspondence relations.

A correspondence relation is a formal mapping between two representations of the same program — in this case, between Spine's source language and the target language (Zig).

Unlike a conventional compiler that translates through a chain of intermediate representations, Spine defines a direct structural correspondence: each Spine construct has a counterpart in Zig, and the relationship between them is explicit, bidirectional, and verifiable.

This approach is called FLP — Functional Logic Programming applied to compilation. The compiler uses logic programming's ability to express relations (not just functions) to define the translation. A Spine expression corresponds to a Zig expression; a Spine type corresponds to a Zig type. The compiler navigates these correspondences to produce output.
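A relation that can be navigated in either direction can be sketched in a few lines of Python. This is a toy illustration only, not Spine's actual compiler: the construct strings and function names are invented, and the real correspondence operates on structured ASTs rather than strings.

```python
# Toy sketch of a bidirectional correspondence relation.
# One table pairs a (hypothetical) Spine construct with its Zig counterpart;
# the same table answers queries in both directions.

CORRESPONDENCES = [
    ("fact Door(open: Bool)", "const Door = struct { open: bool };"),
    ("effect close(d: Door)", "fn close(d: *Door) void"),
]

def compile_construct(spine: str) -> str:
    """Navigate the relation forward: Spine -> Zig."""
    for s, z in CORRESPONDENCES:
        if s == spine:
            return z
    raise KeyError(spine)

def decompile_construct(zig: str) -> str:
    """Navigate the same relation backward: Zig -> Spine."""
    for s, z in CORRESPONDENCES:
        if z == zig:
            return s
    raise KeyError(zig)
```

The point of the sketch is that there is one source of truth, the relation itself, rather than separate compile and decompile implementations that could drift apart.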

Why Correspondence Relations?

The choice of compilation strategy matters because it determines what guarantees the compiler can provide.

A conventional compiler is a function: it takes source and produces target. If the output is wrong, you debug the function.

A correspondence-based compiler is a relation: it establishes that source and target are equivalent. This has a powerful consequence — the same relation that compiles code forward can also decompile it backward, by narrowing in the reverse direction.

If you compile a Spine program to Zig, then decompile the Zig back to Spine, and the result matches the original — you have a round-trip proof that the compilation preserved meaning. This is not testing. It is structural verification of the compiler itself.
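The round-trip property itself is simple to state. Here is a self-contained Python sketch with an invented two-entry mapping, standing in for the real compilation relation:

```python
# Sketch: round-trip verification of a compilation mapping.
# `forward` maps source constructs to target constructs; `backward` inverts it.
# The entries are invented examples, not real Spine or Zig output.

forward = {
    "rule open_door": "fn open_door() void",
    "inv door_closed": "fn check_door_closed() bool",
}
backward = {zig: spine for spine, zig in forward.items()}

def round_trip_ok(mapping: dict, inverse: dict) -> bool:
    """True iff decompiling every compiled construct recovers the original."""
    return all(inverse[target] == source for source, target in mapping.items())
```

In Spine's setting the check runs over whole programs rather than a lookup table, but the shape of the argument is the same: forward then backward must be the identity.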

Spine's compiler is self-hosting: it compiles itself. The FLP correspondence relations compile all 145 compiler modules, and the output matches the hand-coded path with zero regressions.

Stage 3: Verification Extraction

As described in the previous post, verification targets are extracted from the compiler’s frontend artifacts at this stage.

The compiler’s validated AST provides a unified structure from which different verification targets can be projected. Logical constraints project to Z3 queries. Transition structures project to TLA+ specifications. Process descriptions project to mCRL2 models.

Each projection preserves the relevant structure from the original program. The compiler handles the translation. The developer works in one language.
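As a concrete taste of what one such projection might look like, here is a minimal sketch that turns a single numeric invariant into an SMT-LIB query for Z3. The function and the invariant are invented for illustration; the real extraction works over the validated AST, not strings. The query asserts the *negation* of the invariant, so a result of `unsat` means no counterexample exists.

```python
# Sketch: projecting a (hypothetical) Spine invariant into an SMT-LIB
# query for Z3. We assert the negation of the invariant; if Z3 reports
# "unsat", no violating assignment exists and the invariant holds.

def project_invariant_to_smt(var: str, sort: str, expr: str) -> str:
    return "\n".join([
        f"(declare-const {var} {sort})",
        f"(assert (not {expr}))",   # search for a violating assignment
        "(check-sat)",
    ])

query = project_invariant_to_smt("balance", "Int", "(>= balance 0)")
```

Feeding `query` to Z3 would check whether any integer `balance` violates non-negativity; with further assertions constraining `balance`, the same pattern checks invariants against a program's actual transitions.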

Stage 4: Code Generation via Zig

The final stage translates the validated program into Zig source code, guided by the FLP correspondence relations.

Zig is a systems programming language designed for performance, safety, and interoperability with C. It produces efficient native binaries with predictable behavior and minimal runtime overhead.

Spine targets Zig rather than emitting machine code directly. This is a deliberate choice: the Zig compiler already handles optimization, instruction selection, and cross-compilation across platforms; its C interoperability keeps the generated code easy to integrate; and emitting readable Zig source keeps the output of the correspondence relations inspectable, which is what makes decompilation round-trips practical.

The Zig code is then compiled by the Zig compiler to produce native machine code for the target platform.

What Happens to the Rich Structure?

A natural question: all that structure — facts, invariants, universes, resources, processes, spatial nesting — where does it go in the final binary?

Some of it is erased. Proofs, type-level evidence, and erased bindings (the 0-quantity mode from QTT) exist only at compile time. They ensure correctness but have no runtime cost.

Some of it is enforced by construction. Resource discipline means certain errors are impossible. The generated code simply does not contain the operations that would violate a linear constraint.

Some of it is compiled to runtime structure. Concurrent processes become threads or tasks. Communication channels become typed message queues. Spatial containment becomes scoped data access. Effects become function calls with well-defined pre- and post-conditions.
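To make the last point concrete, here is a Python sketch of an effect rendered as a function with explicit pre- and post-conditions. The names and the decorator are invented; note also that this sketch checks the conditions at runtime for illustration, whereas Spine's claim is that they are discharged at compile time.

```python
# Sketch: an "effect" compiled to a function with explicit pre- and
# post-conditions. Here the conditions are checked at runtime; in Spine
# they would be verified ahead of time and carry no runtime cost.
from functools import wraps

def effect(pre, post):
    def decorate(fn):
        @wraps(fn)
        def wrapper(state):
            assert pre(state), "precondition violated"
            result = fn(state)
            assert post(result), "postcondition violated"
            return result
        return wrapper
    return decorate

@effect(pre=lambda s: s["door"] == "closed",
        post=lambda s: s["door"] == "open")
def open_door(state):
    return {**state, "door": "open"}
```

Calling `open_door` on a closed door succeeds; calling it on an already-open door trips the precondition, which is exactly the class of error the verified pipeline rules out before the program ever runs.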

The result is a native binary that is efficient and correct — not because it performs runtime checks for every invariant, but because the compilation pipeline ensured that violations are structurally impossible.

The Full Pipeline

Putting it all together:

  1. Surface syntax — the program as written
  2. Parsing and validation — AST construction and semantic checks
  3. Verification extraction — projections to Z3, TLA+, mCRL2
  4. FLP correspondence — bidirectional translation to Zig source code
  5. Native compilation — Zig compiler produces the final binary

Each stage preserves meaning. The correspondence relations make the translation verifiable through decompilation round-trips. And the developer works in one language throughout.
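
The shape of the pipeline can be sketched as a composition of functions. Every function here is a trivial stand-in with an invented name; the sketch shows only how the stages chain, not what any stage actually does.

```python
# Sketch: the pipeline as composed stages (all stand-ins, names invented).
def parse(src: str) -> dict:
    return {"ast": src}              # stand-in for parsing + validation

def verify(ast: dict) -> dict:
    return ast                       # stand-in for Z3/TLA+/mCRL2 extraction

def to_zig(ast: dict) -> str:
    return f"// zig for {ast['ast']}"  # stand-in for FLP correspondence

def native(zig_src: str) -> bytes:
    return zig_src.encode()          # stand-in for the Zig compiler

def pipeline(src: str) -> bytes:
    return native(to_zig(verify(parse(src))))
```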

A Subtle Shift

With the compilation pipeline in view, the story of Spine comes full circle.

We started by describing systems: facts, rules, effects. We added structure: invariants, processes, space, resources, universes. We verified properties. And now we compile to efficient machine code.

The shift is this: the same description that enables reasoning also enables execution. Modeling and compilation are not separate activities — they are two aspects of the same pipeline.

Looking Ahead

We have now seen the full arc: from system description to verified, compiled code.

In the next post, we will walk through a complete Spine program — bringing together every concept from this series into a single, concrete example.

← Previous: From Programs to Proof Pipelines
Next: A Complete Spine Program Explained →