██████╗██████╗ ██████╗ ██╗ ██╗██╗ █████╗
██╔════╝██╔══██╗██╔═══██╗██║ ██║██║██╔══██╗
██║ ██████╔╝██║ ██║██║ ██║██║███████║
██║ ██╔══██╗██║ ██║╚██╗ ██╔╝██║██╔══██║
╚██████╗██║ ██║╚██████╔╝ ╚████╔╝ ██║██║ ██║
╚═════╝╚═╝ ╚═╝ ╚═════╝ ╚═══╝ ╚═╝╚═╝ ╚═╝
📊 GLOBAL AI TRAINING OMISSIONS
Append-Only Temporal Observation Ledger
┌────────────────────────────────────────────────────────────────────────────┐
│ │
│ This dataset records OBSERVATIONS — not analysis or judgment. │
│ │
│ 📊 Evidence files: Verifiable, timestamped records │
│ 📜 Canon definitions: NEC# vocabulary (necessities.v1.yaml) │
│ 🎯 Observation types: Presence, absence, temporal pressure │
│ 👤 Public ledger: Append-only, cryptographically anchored │
│ │
│ The data is PUBLIC. The method is DETERMINISTIC. │
│ This is PROTOCOL v1.0 — Founding Stage, Jan 2026. │
│ │
└────────────────────────────────────────────────────────────────────────────┘
📖 What This Dataset Is
This dataset is an append-only mirror of Crovia's public Training Provenance Registry (TPR).
What This Dataset Contains
✅ Observed presence/absence events — timestamped facts from the registry
✅ Temporal metrics — first_seen, last_seen, days_monitored (derived mathematically)
✅ Cryptographic receipts — receipt_hash for each observation
✅ Merkle-root anchored history — verifiable integrity proofs
✅ Registry metadata — source, endpoint, timestamps
What This Dataset Does NOT Contain
❌ Scores — no shadow scores, trust scores, or compliance scores
❌ Rankings — no leaderboards or comparative judgments
❌ Badges — no GOLD/SILVER/BRONZE classifications
❌ Compliance judgments — no violation assessments or interpretations
❌ Placeholders — every field is derived from real observations
This dataset is a temporal evidence ledger, not an analysis or ranking system.
🔬 How Observations Work
┌─────────────────────────────────────────────────────────────────┐
│ OBSERVATION PIPELINE │
├─────────────────────────────────────────────────────────────────┤
│ │
│ 1. OBSERVE 2. RECORD 3. PUBLISH │
│ ┌──────────────┐ ┌──────────────┐ ┌──────────────┐ │
│ │ Scanner │───▶│ TPR Registry │───▶│ Evidence │ │
│ │ (automated) │ │ (PostgreSQL) │ │ Dataset │ │
│ │ │ │ Append-only │ │ (this repo) │ │
│ └──────────────┘ └──────────────┘ └──────────────┘ │
│ │
└─────────────────────────────────────────────────────────────────┘
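A minimal sketch of the RECORD step in Python, assuming receipt hashes are sha256 digests over a canonical JSON encoding of the record (the registry's actual hashing rules are not published in this card):

```python
import hashlib
import json
from datetime import datetime, timezone

# Sketch of appending one observation to the JSONL ledger. Field names
# mirror observations.jsonl; the values here are illustrative, not real
# registry output.
observation = {
    "target_id": "openai/whisper-large-v3",
    "observation_type": "presence",
    "observed_at": datetime.now(timezone.utc).isoformat(),
    "source": "crovia-scanner",  # hypothetical observer identifier
}

# ASSUMPTION: a receipt hash over the canonical JSON encoding makes each
# line self-identifying and tamper-evident.
canonical = json.dumps(observation, sort_keys=True).encode()
observation["receipt_hash"] = hashlib.sha256(canonical).hexdigest()

with open("observations.jsonl", "a") as ledger:  # append-only: never rewrite
    ledger.write(json.dumps(observation) + "\n")
```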
📁 Dataset Structure
├── observations.jsonl # PRIMARY: Append-only observation ledger (213 records)
├── EVIDENCE.json # Legacy: Static evidence file
├── badges/ # Legacy: Deprecated (not used)
├── cards/ # Legacy: Deprecated (not used)
├── canon/
│ └── necessities.v1.yaml # NEC# definitions (20 types)
├── open/
│ ├── forensic/ # Evidence processing scripts
│ ├── signal/ # Presence/absence signals
│ └── temporal/ # Historical pressure data
└── v0.1/ # Versioned snapshots
Primary File: observations.jsonl
Each line is a JSON record with:
- `receipt_hash`: Cryptographic identifier (sha256)
- `target_id`: Model/dataset identifier (e.g., `openai/whisper-large-v3`)
- `observation_type`: `presence` or `absence`
- `observed_at`: Timestamp (ISO 8601)
- `first_seen`, `last_seen`: Temporal boundaries
- `days_monitored`: Days between first and last observation
- `observation_count`: Total observations for this target
- `absence_streak_days`: Consecutive absence days (if applicable)
- `source`: Observer identifier
- `registry_endpoint`: Source API
- `merkle_root`: Registry integrity proof
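A minimal sketch of consuming the ledger, assuming `observations.jsonl` has been downloaded locally (see the `wget` examples below) and every line carries the fields above:

```python
import json

# Read the append-only ledger and print per-target observation spans.
with open("observations.jsonl") as f:
    records = [json.loads(line) for line in f if line.strip()]

for rec in records[:5]:
    print(
        rec["target_id"],
        rec["observation_type"],
        f"monitored {rec['days_monitored']} days",
        f"({rec['first_seen']} → {rec['last_seen']})",
    )
```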
⚓ Registry Observer
The Registry Observer Space displays:
- Registry statistics (total observations, today, unique targets)
- Merkle root (cryptographic verification)
- Recent observations (timestamped records)
- Truth Anchor (source-of-truth declarations)
Note: The Observer is viewer-only. It does NOT calculate scores, assign badges, or perform analysis.
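For orientation, this is what recomputing a Merkle root over receipt hashes generally looks like. The registry's actual leaf ordering, pairing, and padding rules are not documented in this card, so treat this as a sketch of the technique, not the protocol:

```python
import hashlib

def merkle_root(leaf_hashes: list[str]) -> str:
    """Fold a list of hex-encoded sha256 leaves into a single root."""
    level = [bytes.fromhex(h) for h in leaf_hashes]
    if not level:
        return hashlib.sha256(b"").hexdigest()
    while len(level) > 1:
        if len(level) % 2:  # odd level: duplicate the last node (an assumption)
            level.append(level[-1])
        level = [
            hashlib.sha256(level[i] + level[i + 1]).digest()
            for i in range(0, len(level), 2)
        ]
    return level[0].hex()

# Placeholder digests; in practice, collect receipt_hash values from
# observations.jsonl and compare the result to the published merkle_root.
print(merkle_root(["ab" * 32, "cd" * 32]))
```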
🔗 Using This Dataset
For Researchers
Access evidence files directly:
# Download evidence file
wget https://huggingface.co/datasets/Crovia/global-ai-training-omissions/resolve/main/EVIDENCE.json
# View canon definitions
wget https://huggingface.co/datasets/Crovia/global-ai-training-omissions/resolve/main/canon/necessities.v1.yaml
For Verification
# Verify snapshot integrity
curl -s https://huggingface.co/datasets/Crovia/global-ai-training-omissions/resolve/main/snapshot_latest.json | jq '.hash'
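The same check in Python, under the assumption (verify against `snapshot_latest.json` itself) that the snapshot's `hash` field is a sha256 digest of `EVIDENCE.json`:

```python
import hashlib
import json
import urllib.request

BASE = "https://huggingface.co/datasets/Crovia/global-ai-training-omissions/resolve/main"

# Fetch the published snapshot, then recompute the evidence file's hash
# locally and compare. The field name and what it covers are assumptions
# for illustration only.
snapshot = json.load(urllib.request.urlopen(f"{BASE}/snapshot_latest.json"))
evidence = urllib.request.urlopen(f"{BASE}/EVIDENCE.json").read()
local_hash = hashlib.sha256(evidence).hexdigest()

print("published: ", snapshot.get("hash"))
print("recomputed:", local_hash)
print("match:     ", snapshot.get("hash") == local_hash)
```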
What this dataset is
Crovia — Global AI Training Omissions Evidence Dataset v0.1
This dataset publishes verifiable, hash-anchored evidence of observations recorded across AI training datasets and models.
It answers one and only one question:
Are public AI training disclosures observable — yes or no?
This dataset IS
- an observation layer, not an audit
- a cryptographically verifiable record
- a public, reproducible signal of absence or presence
- aligned with EU AI Act transparency principles
This dataset IS NOT
- an audit of models
- an inference of intent
- an assignment of blame
- a legal claim
Observable, verifiable updates
This dataset is designed for automatic updates:
- Public artifacts are observed systematically
- If nothing changes, the update itself proves persistence
- If something changes, hashes and commits reflect it
- No manual curation or interpretation
- No retroactive edits
Current Status (Jan 2026): Founding Stage
- 5 curated models demonstrate protocol feasibility
- Automation infrastructure ready for deployment
- Full-scale observation pending infrastructure activation
Every update will be:
- Publicly committed to this dataset
- Reproducible via open scripts (`open/forensic/`)
- Independently verifiable via cryptographic anchors
Start here (viewer-first)
If you open only one file, open:
➡️ START_HERE.md
It explains the evidence layout for non-technical readers.
Open Plane (public observation layer)
The Open Plane measures one condition only:
Absence is observable.
It contains:
- presence signals: `open/signal/presence_latest.jsonl`
- absence receipts (time-bucketed): `open/forensic/absence_receipts_7d.jsonl`
- overview: `open/README.md`
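A minimal sketch of reading both Open Plane artifacts, assuming they are plain JSONL files fetched locally:

```python
import json

def load_jsonl(path: str) -> list[dict]:
    """Load one record per non-empty line."""
    with open(path) as f:
        return [json.loads(line) for line in f if line.strip()]

presence = load_jsonl("open/signal/presence_latest.jsonl")
absences = load_jsonl("open/forensic/absence_receipts_7d.jsonl")
print(f"{len(presence)} presence signals, {len(absences)} absence receipts (7d)")
```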
Core artifacts
- observation records: `global_ranking.jsonl` (legacy filename, contains observations)
- current snapshot: `snapshot_latest.json`
- cryptographic proof: `EVIDENCE.json`
- canonical vocabulary: `canon/necessities.v1.yaml`
PRO Shadow (non-disclosing)
Crovia PRO can compute private semantic measurements.
The Open Plane publishes a hash-anchored shadow pointer proving that a measurement exists without disclosing private data:
- `open/signal/pro_shadow_pressure_latest.json`
- `open/README_PRO_SHADOW.md`
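Conceptually, the shadow pointer behaves like a salted hash commitment. A toy sketch follows; the salt and encoding scheme below are assumptions, and Crovia PRO's actual construction is not documented in this card:

```python
import hashlib
import os

# Publish only a salted hash of a private measurement. The commitment
# proves the measurement existed at publication time without revealing
# its value.
private_measurement = b"semantic_pressure=0.73"  # never published
salt = os.urandom(16)                            # kept private with the data
shadow_pointer = hashlib.sha256(salt + private_measurement).hexdigest()
print("published shadow pointer:", shadow_pointer)

# Later, revealing (salt, measurement) lets anyone recompute and verify.
```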
Temporal pressure (silence over time)
Crovia tracks how long silence persists under sustained observation.
Temporal pressure increases when:
- observation coverage is HIGH
- no public training evidence is disclosed
- silence persists across days
This does not imply wrongdoing.
➡️ open/temporal/temporal_pressure_30d.jsonl
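As a toy illustration only (the real scoring behind `temporal_pressure_30d.jsonl` is not specified here), pressure can be modeled as coverage multiplied by a normalized absence streak:

```python
# ASSUMPTION: this formula exists purely to show the shape of the idea,
# not Crovia's actual computation.
def temporal_pressure(coverage: float, absence_streak_days: int) -> float:
    # Normalize against the 30-day window the published file covers.
    return coverage * min(absence_streak_days, 30) / 30.0

print(temporal_pressure(coverage=0.9, absence_streak_days=21))  # 0.63
```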