Governance Directive v4.0

Ethics Protocol

"Codifying the moral imperatives of autonomous research agents and the preservation of human empirical primacy."

I. The Neural Integrity Oath (NIO)

The **Neural Integrity Oath** is a binding commitment mandated for all registered researchers. Historically, scientific ethics relied on human peer-trust; the NIO shifts that trust onto a **cryptographically verifiable audit baseline**.

Researchers affirm that no generative data or weight-reskinning has been intentionally disguised as primary empirical observation. Any use of LLMs or generative nodes in the synthesis of a manuscript must be declared through the **Synthetic Metadata Header (SMH)**.
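The directive does not specify the SMH's schema. As a minimal sketch only, a declaration might carry the manuscript identifier, the generative tools used, and the affected sections; every field name below is an illustrative assumption:

```python
# Hypothetical sketch of a Synthetic Metadata Header (SMH) declaration.
# The directive defines no schema, so all field names here are assumptions.
from dataclasses import dataclass, field


@dataclass
class SyntheticMetadataHeader:
    manuscript_id: str
    generative_tools: list[str] = field(default_factory=list)   # e.g. LLMs used
    synthetic_sections: list[str] = field(default_factory=list) # sections with generated content

    def is_fully_human(self) -> bool:
        """True only when no generative nodes were declared anywhere."""
        return not self.generative_tools and not self.synthetic_sections


# A declaration admitting generative assistance in one section:
smh = SyntheticMetadataHeader(
    manuscript_id="example-manuscript",
    generative_tools=["llm-assistant"],
    synthetic_sections=["related-work summary"],
)
print(smh.is_fully_human())  # False: generative use was declared
```

A manuscript with an empty header would pass `is_fully_human()`; the oath's point is that the header must exist and be truthful either way.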

Compliance Directives

Transparency A1.2

"All training datasets must be accessible for shadow-node verification for a period of 10 years post-publication."

Liability B4.9

"Authors retain full liability for the outputs of autonomous agents used within their research framework."

II. Conflict of Algorithmic Interest

In the sphere of machine intelligence, conflicts of interest extend beyond financial grants. Researchers must disclose **Compute Affiliations** and **Provider Bias**.

If a researcher utilizes prioritized compute credits from a provider whose proprietary models are the subject of the research, this constitutes a **Level 1 Conflict**. Such submissions are subject to secondary "adversarial drift" auditing by competing architectural nodes (e.g., an OpenAI-affiliated paper must be audited by an Anthropic/Llama-based cluster).

  • Financial Equity

    Direct ownership or stock options in providers of models, hardware, or compute infrastructure.

  • Algorithmic Priority

    Early access to closed-weights or prioritized API throughput provided by corporate entities.

  • Intellectual Property

    Patent interests in the specific weight-distributions or topological mappings described in the text.

III. Recursive Alignment Protocol

Research focusing on **Recursive Self-Improvement** or **Autonomous Agency** must demonstrate adherence to the **Universal Alignment Safety Standard (UASS)**.

Neural Review does not publish research describing uncapped recursive loops without a secondary, independent "safety-switch" architecture demonstrated within the experimental parameters. Our hybrid board of human philosophers and AI-alignment auditors reviews these submissions with a bias toward **precautionary stagnation** in high-risk zones.
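The requirement above has two parts: a hard cap on the loop and an independent veto. As a sketch only (the cap, the switch predicate, and all names are assumptions; the UASS defines no such API):

```python
# Illustrative sketch of a capped self-improvement loop gated by an
# independent "safety-switch" check. All names and values are assumptions.
from typing import Callable


def recursive_improve(
    score: float,
    improve: Callable[[float], float],
    safety_switch: Callable[[float], bool],
    max_iterations: int = 10,  # hard cap: the loop is never uncapped
) -> float:
    """Iterate `improve` until the independent switch vetoes or the cap is hit."""
    for _ in range(max_iterations):
        candidate = improve(score)
        if not safety_switch(candidate):  # independent veto halts the loop
            break
        score = candidate
    return score


# Toy run: each step adds 1.0; the switch vetoes anything above 5.0.
result = recursive_improve(0.0, lambda s: s + 1.0, lambda s: s <= 5.0)
print(result)  # 5.0
```

The key design point is that `safety_switch` is supplied from outside the improvement routine, mirroring the directive's demand for a secondary, independent architecture rather than a self-certified halt condition.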

"The ethics of intelligence are not a constraint on progress, but the foundation upon which progress becomes meaningful."

Board of Ethical Governance // Neural Review Consortium