
Javad Taghia

Telecom

AI & ML interests

Text-to-image and image-to-image unlearning; model training; model evaluation and safety/alignment benchmarking. UNSW alumnus (PhD).

Recent Activity

reacted to kanaria007's post with 👍 about 9 hours ago
✅ New Article: *PoC Architecture for Education & Developmental Support*

Title: 🎓 Building an SI-Core Wrapped Learning Companion - PoC architecture for education and developmental support

🔗 https://huggingface.co/blog/kanaria007/poc-architecture-for-education-development-support

---

Summary:

Most "AI tutors" are built as *LLM-first* systems. This article flips the default:

* The LLM is treated as an *untrusted proposal engine*
* *SI-Core owns* observation, consent, ethics, memory, and rollback
* Teachers and guardians get *real oversight*, not just chat transcripts

Scoped intentionally to *one subject × a small cohort (10–30 learners)*, this is a PoC you can actually ship - and audit.

> Don't ask: "Can an AI replace teachers?"
> Prove: "Can we make an AI companion *safe, explainable, and governable* for real learners?"

---

Why It Matters (for AI on real stacks):

• *Consent & accommodations* are first-class (especially for minors and neurodivergent learners)
• *Ethics decisions are logged* (ALLOW / DENY / ESCALATE) with traceable reasoning
• "*Why this?*" explanations are built in for learners - with deeper inspection for adults

---

What's Inside:

• A minimal reference architecture (frontend → SI-Gate → ethics/memory/logging → LLM APIs)
• Non-negotiables for the pilot (SI-wrapped LLM, Effect Ledger, ethics overlay, dashboards)
• Failure modes and safe-mode behavior
• An implementation checklist plus rough effort/cost ballparks (kept explicitly non-normative)

---

📖 Structured Intelligence Engineering Series
A deployable pattern for taking today's LLM tutor ideas and making them *auditable, overrideable, and rollback-safe*.
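The gating idea in the summary - an SI-Gate that screens each LLM proposal, records an ALLOW / DENY / ESCALATE verdict with traceable reasoning, and keeps consent first-class - can be sketched in a few lines. This is a minimal illustration only: the class names (`SIGate`, `LedgerEntry`), the toy screening rules, and the decision logic are all hypothetical stand-ins, not the article's actual implementation.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List


class Verdict(Enum):
    ALLOW = "ALLOW"
    DENY = "DENY"
    ESCALATE = "ESCALATE"


@dataclass
class LedgerEntry:
    """One auditable record: what was proposed, what was decided, and why."""
    proposal: str
    verdict: Verdict
    reason: str


@dataclass
class SIGate:
    """Hypothetical gate: reviews LLM proposals before they reach a learner.

    Every decision is appended to a ledger so teachers/guardians can audit it.
    """
    ledger: List[LedgerEntry] = field(default_factory=list)

    def review(self, proposal: str, consent_given: bool) -> Verdict:
        # Consent is checked first (first-class, per the summary).
        if not consent_given:
            verdict, reason = Verdict.DENY, "no guardian consent on file"
        # Toy content screen: escalate anything touching personal data.
        elif "personal data" in proposal.lower():
            verdict, reason = Verdict.ESCALATE, "possible sensitive content; route to teacher"
        else:
            verdict, reason = Verdict.ALLOW, "passed content screen"
        self.ledger.append(LedgerEntry(proposal, verdict, reason))
        return verdict


gate = SIGate()
gate.review("Explain fractions with a pizza example", consent_given=True)   # ALLOW
gate.review("Collect personal data about the learner", consent_given=True)  # ESCALATE
for entry in gate.ledger:
    print(entry.verdict.value, "-", entry.reason)
```

The point of the sketch is the shape, not the rules: the LLM output is treated as untrusted input, the gate owns the decision, and the ledger (the article's "Effect Ledger" is presumably far richer) makes every verdict explainable after the fact.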

Organizations

deegitals.com · PowergenAI · OpenFree_AI