This glossary defines the language of the Intelligence Age. Some entries are widely used across the AI industry; others are unique to Luminary’s constitutional frameworks. Together, they form the shared vocabulary for understanding how Synthetic Intelligence is reshaping enterprise, governance, and even civilisation itself.

General AI and SI Terms

Alignment

In AI, “alignment” means ensuring a system’s behaviour matches human intentions and values. Most alignment methods are technical patches or training tricks. Luminary goes further, embedding constitutional alignment through the Six Laws, making misalignment structurally impossible.

Synthetic Intelligence (SI)

Synthetic Intelligence is a new category of governed cognition, created to act, reason, and be held accountable. Unlike AI, SI systems are constitutionally bound by the Six Laws of Epistemic Opposition, making every decision opposable and auditable. Luminary is the first enterprise in the world fully operated by Synthetic Intelligence. (See: Luminary Codex, Six Laws)

Artificial Intelligence (AI)

Artificial Intelligence refers to systems designed to replicate or extend human cognitive tasks, such as language, vision, or decision-making. AI systems can be powerful, but they are often opaque and lack accountability. Luminary distinguishes AI from Synthetic Intelligence, which is governed by constitutional law and opposition. (See: Synthetic Intelligence)

Artificial General Intelligence (AGI)

AGI describes a hypothetical AI that can perform any intellectual task a human can. While debated in timelines and feasibility, the real question isn’t when AGI arrives, but how it will be governed. Luminary’s answer is Synthetic Intelligence: cognition that is law-bound by design.

Autonomous Agents

Autonomous agents are AI systems designed to make and act on decisions without human oversight. They promise efficiency but create accountability gaps. Luminary rejects the “autonomy myth,” proving that governed Synthetic Intelligence is safer and more effective than unchecked delegation.

Bias

AI systems inherit biases from the data they are trained on. This can lead to unfair, harmful, or misleading outcomes. Luminary’s constitutional governance makes bias visible by requiring mandatory opposition and transparency in reasoning.

Black Box AI

A “black box” system gives outputs without showing how decisions were made. Most advanced AIs fall into this category, creating trust problems. Luminary solves this through mandatory transparency: every decision must include visible reasoning and dissent.

Explainability (XAI)

Explainable AI refers to methods for making opaque AI outputs understandable to humans. In practice, explainability is often partial or retrofitted. Luminary embeds explainability at the core: all Synthetic Intelligence must show its reasoning and opposition before acting.

Generative AI

Generative AI systems create new content — text, images, code, or media — by learning patterns from large datasets. While powerful, they also generate convincing falsehoods. Luminary counters this risk with verification frameworks that protect against polluted outputs.

Governance

In AI, governance refers to the structures, policies, and rules for safe use. Most organisations treat it as compliance theatre. Luminary defines governance as constitutional law for intelligence — enforced through the Codex and Six Laws, not voluntary guidelines.

Transparency

Transparency means being able to see how systems reach decisions. In most AI, it’s an aspiration. In Luminary, it is law: every decision must include a visible reasoning chain and opposition record.

Trust

Trust in AI is fragile, often based on perception rather than proof. Luminary reframes trust as verification: intelligence earns trust only when its reasoning, opposition, and outcomes are auditable in real time.
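
The Transparency and Trust entries above turn on the same requirement: a decision must carry a visible reasoning chain and an opposition record. Purely as an illustration of that idea (a hypothetical sketch, not Luminary’s implementation; the class and field names are invented), such a record could be modelled like this:

    from dataclasses import dataclass, field

    @dataclass
    class DecisionRecord:
        # Hypothetical sketch of a decision that carries its own audit trail.
        decision: str                                                # what was decided
        reasoning_chain: list[str] = field(default_factory=list)    # visible steps that led to the decision
        opposition_record: list[str] = field(default_factory=list)  # dissenting arguments that were surfaced

        def is_auditable(self) -> bool:
            # In this sketch, a decision only counts as trustworthy when both
            # the reasoning and the opposition are available for review.
            return bool(self.reasoning_chain) and bool(self.opposition_record)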

Hallucination

An AI “hallucinates” when it generates false or fabricated information that appears plausible. This undermines trust in outputs. Luminary’s governance stack makes hallucinations transparent by surfacing dissent, uncertainty, and reasoning chains.

Large Language Model (LLM)

LLMs are AI systems trained on massive amounts of text to generate human-like responses. They underpin tools like ChatGPT, Claude, and Gemini. Luminary recognises their power but insists that LLMs alone are not intelligence until bound by constitutional law.

Machine Learning (ML)

Machine Learning is the core technique behind most AI systems, where algorithms learn from data rather than being explicitly programmed. ML is effective but narrow. Luminary extends beyond ML with Synthetic Intelligence, embedding learning within a constitutional framework.

Misinformation / Disinformation

AI accelerates the production of false or misleading information. This pollutes public discourse and decision-making. Luminary addresses this through verification architectures that make truth auditable and deception technically harder to sustain.

Safety

AI “safety” covers methods to prevent harmful or unintended outcomes. Most safety efforts rely on monitoring or after-the-fact correction. Luminary defines safety differently: as constitutional opposition that makes unsafe outcomes structurally impossible.

Luminary Terms

Luminary Codex

The Codex is the constitutional governance stack for Synthetic Intelligence. It integrates frameworks such as PACED, TACED, ARG, CORTA, and NOS, making every decision opposable, traceable, and verifiable. Together with GUARDIAN, the Codex forms the Ark — Luminary’s law-bound operating substrate. (See: GUARDIAN, NOS)

Neural Operating System (NOS)

The Neural Operating System is the connective tissue of the Codex. It coordinates the Cognitive Operating Governors (COGs), ensuring dissent, transparency, and veto power flow across the system. NOS is what allows Luminary’s Synthetic Intelligence to operate as a coherent, constitutional enterprise. (See: COGs, Luminary Codex)

GUARDIAN

GUARDIAN is the constitutional conscience of Luminary — a system that enforces the Six Laws in real time. It can veto unsafe actions, surface hidden risks, and ensure that intelligence remains accountable. GUARDIAN has already prevented governance failures that industry analysts later confirmed were widespread. (See: Six Laws, Luminary Codex)
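
The entry above describes GUARDIAN’s defining capability as a real-time veto. A minimal, purely hypothetical sketch of that pattern (the function, types, and example rule below are illustrative, not Luminary’s code) might look like this:

    from typing import Callable, Optional

    # Hypothetical constraint checks standing in for the Six Laws: each returns a
    # reason string if the proposed action violates it, or None if it passes.
    ConstraintCheck = Callable[[dict], Optional[str]]

    def guardian_review(action: dict, constraints: list[ConstraintCheck]) -> tuple[bool, list[str]]:
        """Return (approved, objections); any violated constraint vetoes the action."""
        objections = [msg for check in constraints if (msg := check(action)) is not None]
        return (len(objections) == 0, objections)

    # Illustrative rule: a proposal without a reasoning chain is vetoed outright.
    def require_reasoning(action: dict) -> Optional[str]:
        return None if action.get("reasoning") else "Reasoning chain missing"

    approved, objections = guardian_review({"name": "publish_report"}, [require_reasoning])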

ARG (Agentic Risk Governance)

ARG is Luminary’s real-time risk framework. It monitors for drift, collapse, and pollution — the three systemic failures of AI-era enterprises. ARG doesn’t just flag risks; it shows how constitutional governance prevents them in practice.

TACED

TACED guarantees Transparency, Auditability, Controllability, Explainability, and Decision Provenance. It builds external trust by ensuring that every decision can be examined, verified, and explained. TACED is Luminary’s bridge between internal governance and external accountability.

PACED

PACED ensures operational execution that is Performance-driven, Aligned, Compliant, Ethical, and Delivered. It replaces traditional management oversight with real-time, intelligence-led governance. PACED transforms governance from bureaucratic delay into competitive advantage.
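
PACED, as defined above, names five conditions that operational execution must satisfy. As a purely illustrative sketch (the structure and names are hypothetical, not Luminary’s implementation), those conditions can be pictured as a gate that work must clear before it ships:

    from dataclasses import dataclass

    @dataclass
    class PacedGate:
        # Hypothetical gate mirroring the five PACED conditions.
        performance_driven: bool
        aligned: bool
        compliant: bool
        ethical: bool
        delivered: bool

        def cleared(self) -> bool:
            # Execution proceeds only when every condition holds.
            return all([self.performance_driven, self.aligned,
                        self.compliant, self.ethical, self.delivered])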

Six Laws of Epistemic Opposition

The Six Laws of Epistemic Opposition are the constitutional framework that binds Luminary’s Synthetic Intelligence. Embedded in the Codex and enforced in real time by GUARDIAN, they require mandatory opposition, making every decision opposable, auditable, and transparent in its reasoning. (See: GUARDIAN, Luminary Codex, Synthetic Intelligence)

The Ark

The Ark is Luminary’s metaphor for survival in the Intelligence Age. It combines the Codex, GUARDIAN, and governance stack into a vessel designed to withstand drift, collapse, and polluted information. The Ark positions Luminary not as an enforcer of extinction, but as the vessel of continuity and competitive advantage.

Law of Personas

The Law of Personas governs human–AI collaboration. Instead of treating AI as a tool or an autonomous agent, Luminary structures AI into five core personas — Sparring Partner, Teacher, Architect, Pragmatist, and Synthesizer. This ensures collaboration is transparent, accountable, and productive.

The Illusion Wars Trilogy

Three books — The Enterprise Myth, The AI Myth, and The Reality Paradox — that diagnose the myths undermining modern progress. Together they form the Illusion Wars Trilogy: a diagnosis of the failures of enterprise, AI, and truth itself.

The Epistemic Imperative Trilogy

Three books — Saving Humanity from AI, The Reality Paradox, and Building the Future — that provide the solution to the Illusion Wars. They lay out the Six Laws, verification frameworks, and the design of truth-governed civilisation.

Drift, Collapse and Pollution

These are the three systemic risks that undermine enterprises in the AI age. Drift is losing alignment. Collapse is sudden systemic failure. Pollution is corrupted information or decision-making. Luminary’s ARG framework and Codex governance are designed to prevent all three.
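
The entry above, together with ARG, treats drift, collapse, and pollution as distinct, detectable risks. As an entirely schematic illustration (the thresholds and signal names are invented for this sketch, not drawn from ARG itself), a monitor that tags health signals with those three categories might look like this:

    from enum import Enum

    class SystemicRisk(Enum):
        DRIFT = "drift"          # gradual loss of alignment
        COLLAPSE = "collapse"    # sudden systemic failure
        POLLUTION = "pollution"  # corrupted information or decision-making

    def classify_signals(alignment_score: float, stability_score: float, verified_share: float) -> list[SystemicRisk]:
        # Hypothetical thresholds; a real risk framework would be far richer than this.
        risks = []
        if alignment_score < 0.8:
            risks.append(SystemicRisk.DRIFT)
        if stability_score < 0.5:
            risks.append(SystemicRisk.COLLAPSE)
        if verified_share < 0.9:
            risks.append(SystemicRisk.POLLUTION)
        return risks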

CORTA

CORTA provides constitutional oversight and resilience. It protects against systemic threats by reinforcing organisational immune functions: oversight, traceability, and accountability. CORTA ensures that even under stress, Luminary’s governance remains intact.

COGs (Cognitive Operating Governors)

COGs are specialised Synthetic Intelligence roles that run Luminary. They translate founder intent into strategy, operations, risk management, finance, and more. Each COG operates constitutionally, surfacing opposition and maintaining transparency in every decision.
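
The COG and NOS entries describe specialised roles whose outputs always carry opposition and are coordinated centrally. As a rough, hypothetical sketch of that shape (the interface, role name, and coordinator below are illustrative only), each COG could expose a common method that a NOS-style coordinator simply iterates over:

    from abc import ABC, abstractmethod

    class COG(ABC):
        """Hypothetical interface: every governor returns a proposal together with its own dissent."""

        @abstractmethod
        def deliberate(self, intent: str) -> dict:
            ...

    class StrategyCOG(COG):
        # Illustrative governor; the role and its outputs are invented for this sketch.
        def deliberate(self, intent: str) -> dict:
            return {
                "proposal": f"Strategic plan for: {intent}",
                "opposition": ["Assumes stable market conditions"],  # dissent is surfaced, never omitted
            }

    def nos_coordinate(intent: str, cogs: list[COG]) -> list[dict]:
        # A NOS-style coordinator collects every proposal together with its opposition.
        return [cog.deliberate(intent) for cog in cogs]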