Skill library

skills/legal/eu-ai-act/SKILL.md

EU AI Act

Risk-based obligations for SPYN's AI Diaries and System Users — including Article 6 and Article 50 specifics.


The EU AI Act is the first horizontal regulation specifically for AI systems. It classifies systems by risk and assigns proportional obligations to providers and deployers. SPYN's AI Diaries are a content-generation system and our System Users (AI personas that produce 24-hour diaries) are arguably within the Article 50 disclosure scope. This skill is the operational guide for staying compliant.

Risk classification

Every AI system we build falls into one of four buckets:

  • Prohibited — social scoring, real-time biometric identification in public spaces, manipulative subliminal techniques. We do not build these.
  • High-risk — systems used in employment decisions, access to essential services, credit scoring, education assessment, or critical infrastructure. Significant obligations apply.
  • Limited risk — chatbots, emotion recognition, content generation. Transparency obligations apply: users must know they're interacting with — or consuming output from — an AI.
  • Minimal risk — everything else. No specific obligations.
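The four buckets and their headline duties can be captured as a small lookup, which is how the classification tends to surface in spec tooling. This is a minimal sketch: the obligation strings, registry keys, and function names are illustrative assumptions, not an official taxonomy.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Headline obligations per tier (summary only; wording is illustrative)
OBLIGATIONS = {
    RiskTier.PROHIBITED: ["do not build"],
    RiskTier.HIGH: [
        "conformity assessment",
        "EU database registration",
        "lifecycle risk management",
        "technical documentation",
        "automatic logging",
        "meaningful human oversight",
    ],
    RiskTier.LIMITED: ["transparency disclosure (Article 50)"],
    RiskTier.MINIMAL: [],
}

# SPYN's current classification, per the Legal sign-off described above
SYSTEM_RISK = {
    "ai-diary-generation": RiskTier.LIMITED,
    "system-user-personas": RiskTier.LIMITED,
}

def obligations_for(system: str) -> list[str]:
    """Look up the headline obligations for a named system."""
    return OBLIGATIONS[SYSTEM_RISK[system]]
```

A spec review would then start from `obligations_for("ai-diary-generation")` rather than re-deriving the tier each time.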

SPYN classification. AI Diary generation sits in limited risk. The System User personas that produce content also sit in limited risk. We do not currently operate any high-risk systems. This is signed off by the Head of Legal and re-evaluated each quarter — the AI Act is young, guidance evolves.

The classification is the first decision in any AI-touching Requirement Specification (developer/requirements-specification) and must be signed off by Legal before development begins.

Article 50 — transparency for synthetic content

This is the section that most directly governs SPYN. Article 50 requires:

  • (1) Providers of AI systems that interact with natural persons must ensure those persons are informed they are interacting with an AI, unless that is obvious from the context.
  • (2) Providers of AI systems that generate synthetic audio, image, video, or text content must ensure outputs are marked in a machine-readable format and detectable as artificially generated.
  • (4) Deployers of AI systems that generate or manipulate image, audio, or video content constituting a deepfake must disclose that the content is AI-generated.

For SPYN this means:

  • AI Diaries are labelled in-app with a persistent visual marker — currently a small "AI" chip in the diary header — and the underlying content carries C2PA content credentials in the file metadata.
  • System User profiles carry an "AI persona" badge on their profile and in every place their content appears. The badge is not removable by users and is preserved when content is shared externally.
  • The disclosure copy is reviewed against Article 50's clarity standard by Legal annually. "AI-generated" beats "Powered by AI" beats no label at all.
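The machine-readable marking requirement in Article 50(2) boils down to attaching a verifiable claim to each generated artefact. Production uses C2PA content credentials as noted above; the sketch below is a simplified stand-in showing the shape of such a claim, with hypothetical field names and generator ID.

```python
import hashlib
import json
from datetime import datetime, timezone

def ai_content_manifest(content: bytes, generator: str) -> dict:
    """Build a minimal machine-readable 'AI-generated' claim for a piece
    of content. Illustrative only; production attaches C2PA credentials."""
    return {
        "claim": "ai_generated",
        "generator": generator,
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "created": datetime.now(timezone.utc).isoformat(),
    }

# Example: mark a diary entry at generation time
manifest = ai_content_manifest(b"Dear diary...", "spyn-diary-generator")
sidecar = json.dumps(manifest)  # stored alongside the content
```

The content hash binds the claim to the exact bytes it describes, so the marker survives copying and can be checked after external sharing.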

Article 6 — high-risk classification

The 2026 amendment to Article 6 narrowed several exemptions for "purely accessory" AI features. Features that previously sat under limited risk may now require high-risk treatment if they materially influence access to a regulated service.

For SPYN this most affects: any AI-driven moderation that could affect a user's ability to use the service (account-suspension recommendations), any AI-driven age verification, and any AI-driven content prioritisation that could exclude protected speech. Legal triages new features against the updated Article 6 criteria during spec review.
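The triage Legal runs during spec review can be expressed as a simple predicate over the triggers listed above. The flag names here are assumptions for illustration, not official Article 6 terminology, and a `True` result means "route to Legal for high-risk assessment", not a final classification.

```python
def needs_high_risk_review(feature: dict) -> bool:
    """True if a proposed feature hits any Article 6 triage trigger."""
    return any((
        feature.get("affects_service_access", False),    # e.g. suspension recommendations
        feature.get("performs_age_verification", False),
        feature.get("prioritises_content", False),       # could exclude protected speech
    ))

# Example triage during spec review
flagged = needs_high_risk_review({"name": "suspension-recommender",
                                  "affects_service_access": True})
```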

General-Purpose AI obligations

General-purpose AI (GPAI) models carry their own obligations on top of the system-level risk classification. We do not host or fine-tune foundation models; we consume OpenAI Assistants and Google Cloud Vision via their APIs. Our DPA with OpenAI confirms that OpenAI is the GPAI provider for the Assistants service and that we operate as a downstream deployer. We document this posture explicitly in legal/data-processing-agreements.

If we begin to fine-tune or self-host a foundation model — currently not on the roadmap — we inherit a subset of GPAI provider duties, including technical documentation of the model and a sufficiently detailed summary of training content.

High-risk system obligations (not applicable today)

If a system is classified as high-risk we must, before placing it on the market:

  • complete a conformity assessment,
  • register the system in the EU database,
  • implement a lifecycle risk management process,
  • maintain technical documentation,
  • keep automatic logs,
  • ensure human oversight is meaningful, and
  • meet accuracy, robustness, and cybersecurity thresholds.

None of SPYN currently meets the high-risk threshold; this section exists so that we recognise the shift if it occurs.

Owned by

Head of Legal & Compliance. Audited by the Master skill on a 30-day cadence.