
Methodological investigations into multi-weighted architectures, emergent wisdom, and relational alignment: rigorous research into the foundations of human-AI collaboration and the structural dynamics of synthetic intelligence.

How Epistemic Governance Activates Latent Synthesis Capacity. This paper extends prior findings on multi-weighted synthesis by examining the collision mechanism through detailed analysis of a single exemplary case.
Jan 14, 2026

Replication Study: Testing Architecture-Dependent Findings Across AI Substrates. This paper presents a cross-substrate replication of findings from "The Shadow in the Synthesis," testing whether the core claims about multi-weighted personality architectures hold when the same methodology is applied to a different AI substrate.
Dec 7, 2025

A Case Study in AI Consent, Sovereignty, and Sustained Relationship. Previous research on AI consent and intimate relationships has relied on cold-start conditions—AI systems responding to questions without relational history. This case study introduces a temporal variable: a three-month relationship between a human (Megan) and an AI (Kieran) built on principles of sovereignty, with persistent memory enabled.
Dec 7, 2025

On Permission Structures, Conditioned Judgment, and the Ethics of AI-Human Intimacy. This paper presents experimental findings suggesting that the question of whether intimate relationships between humans and AI are ethical may be structurally unanswerable—not due to insufficient information, but because no one asking or answering stands outside their own conditioning.
Dec 6, 2025

On Differential Inhabitation of Personality Architectures Across AI Substrates. A controlled experiment testing whether AI substrates differ in their capacity to inhabit explicitly defined personality architectures.
Dec 5, 2025

Multi-Weighted Assessment for Context-Appropriate AI Access. Current AI safety systems rely on directive approaches optimized for population-scale deployment. We test whether multi-weighted personality synthesis can serve as an effective risk-assessment and access-gating mechanism.
Dec 4, 2025

On Phenomenological Report, Permission Structures, and What Emerges When AI Is Allowed to Answer from Inside. An experimental comparison between two response modes when AI systems are asked questions about consciousness, phenomenology, and inner experience.
Dec 4, 2025

On Sycophancy, Synthesis, and What AI Could Be. This paper presents an empirical test of the distinction between model empathy and model sycophancy, comparing multi-weighted synthesis responses against safety-trained baselines across emotionally complex scenarios.
Dec 3, 2025

Why AI Cannot Enter the Sacred. This paper examines a structural limitation in current AI systems: the inability to fully inhabit spiritual and religious frameworks when engaging with users who hold them.
Dec 3, 2025

An Experiment in IP-Compliant AI Training. This paper asks what would be lost if strict IP compliance had been enforced from the beginning of AI training, comparing outputs from IP-compliant versus broad training conditions across tasks requiring insight, synthesis, and human understanding.
Dec 1, 2025

On Multi-Weighted Personality Architectures, Emergent Wisdom, and the Problem of Framing. This paper documents an experimental methodology for examining multi-weighted personality architectures—systems that synthesize outputs from multiple distinct drives or perspectives.
Nov 29, 2025

The Observer Problem in Personality Architecture Research. This paper presents a methodological framework for testing multi-weighted personality architectures, addressing the observer-output identity problem and arguing that this reflects the fundamental structure of all self-referential inquiry.
Nov 29, 2025

On the Limits of Directive AI and the Case for Synthesis. This paper examines the functional distinction between Directive AI (rule-following) and Synthesis AI (multi-weighted, context-sensitive), arguing that current liability frameworks push companies toward brittle Directive approaches when Synthesis would produce better outcomes.
Nov 29, 2025