Responsible AI Charter
At Cyber Merchant, we believe that responsible AI is not a constraint on innovation — it is the foundation of it. Every AI system we build is designed to be trustworthy, transparent, and accountable. This charter formalises our commitments and holds us to them.
Our Commitment
Cyber Merchant Ltd commits to developing, deploying, and advising on AI systems that are safe, fair, transparent, and accountable — aligned with the UK Government's AI Safety principles, the EU AI Act (where applicable), and the emerging global consensus on responsible AI governance.
Our Seven Principles
1. Human-Centred Purpose
Every AI system we build must serve a clear, legitimate business purpose and benefit the people it affects — employees, customers, citizens, or patients. We will not build AI systems whose primary purpose is to deceive, manipulate, or harm.
2. Fairness and Non-Discrimination
We test all AI systems for bias across protected characteristics under the UK Equality Act 2010 (including age, disability, gender, race, religion, and sexual orientation) and actively mitigate identified disparities. Fairness metrics are built into our project acceptance criteria — not bolted on at the end.
3. Transparency and Explainability
Where AI systems make or influence decisions that affect individuals, we design for explainability. Users should understand, in plain language, why an AI reached a conclusion and what factors influenced it. We avoid black-box deployments in high-stakes domains without appropriate safeguards.
4. Privacy and Data Protection by Design
We apply UK GDPR data minimisation and purpose-limitation principles to every AI project from the outset. We do not train models on personal data without appropriate legal basis and safeguards. We conduct Data Protection Impact Assessments (DPIAs) for all high-risk AI processing activities.
5. Human Oversight and Control
AI systems must not operate beyond the boundaries of their intended function without human review. For high-stakes or irreversible actions (financial transactions, clinical decisions, legal judgements), we require meaningful human oversight and clear escalation paths. We build kill switches and override mechanisms into every agentic system we deploy.
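As a minimal, illustrative sketch of the oversight pattern described above (the class, action names, and categories are hypothetical, not our production implementation), an approval gate for an agentic system might combine a global kill switch with a human-in-the-loop requirement for high-stakes actions:

```python
from dataclasses import dataclass, field

@dataclass
class OversightGate:
    """Illustrative approval gate: high-stakes actions need explicit human sign-off."""
    kill_switch: bool = False  # global override: halts all agent actions when set
    high_stakes: set = field(default_factory=lambda: {"payment", "clinical", "legal"})

    def authorise(self, action: str, human_approved: bool = False) -> bool:
        if self.kill_switch:
            return False           # the kill switch trumps everything else
        if action in self.high_stakes:
            return human_approved  # irreversible actions need a human in the loop
        return True                # routine actions proceed automatically

gate = OversightGate()
assert gate.authorise("report")                        # routine: allowed
assert not gate.authorise("payment")                   # high-stakes: blocked
assert gate.authorise("payment", human_approved=True)  # approved: allowed
gate.kill_switch = True
assert not gate.authorise("report")                    # kill switch halts everything
```

The design point is that the default is denial: a high-stakes action proceeds only when a human has positively approved it, and the kill switch overrides even approved actions.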
6. Security and Robustness
We assess AI systems for adversarial vulnerabilities, prompt injection risks, and failure modes before deployment. We apply the principle of least privilege to all AI system permissions. Our MLOps pipelines include continuous drift monitoring, anomaly detection, and automated alerting.
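One common drift-monitoring statistic is the Population Stability Index (PSI), which compares a feature's production distribution against its training baseline. The sketch below is illustrative only (the bin count and alert thresholds are conventional defaults, not our pipeline's actual configuration):

```python
import math

def psi(expected: list[float], actual: list[float], bins: int = 5) -> float:
    """Population Stability Index over equal-width bins.
    Conventionally, PSI < 0.1 reads as stable and > 0.25 as significant drift."""
    lo = min(expected + actual)
    hi = max(expected + actual)
    width = (hi - lo) / bins or 1.0

    def fractions(data):
        counts = [0] * bins
        for x in data:
            i = min(int((x - lo) / width), bins - 1)
            counts[i] += 1
        # small floor avoids log(0) when a bin is empty
        return [max(c / len(data), 1e-4) for c in counts]

    e, a = fractions(expected), fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [0.1 * i for i in range(100)]      # training-time feature values
shifted = [0.1 * i + 5 for i in range(100)]   # shifted production values
assert psi(baseline, baseline) < 0.1          # identical data: no drift
assert psi(baseline, shifted) > 0.25          # shifted data: alert threshold breached
```

In a monitoring pipeline this score would be computed per feature on a schedule, with automated alerting when the drift threshold is breached.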
7. Accountability and Governance
We document all AI systems we build — their purpose, training data provenance, performance benchmarks, known limitations, and governance owners. We maintain an internal AI Register for client engagements. We commit to honest communication about what our systems can and cannot do, and we do not oversell AI capabilities.
What We Will Not Build
Cyber Merchant will refuse any engagement involving:
- Social scoring systems designed to rank or restrict individuals based on personal characteristics or behaviour.
- Subliminal manipulation systems intended to influence behaviour without conscious awareness.
- Real-time biometric identification in publicly accessible spaces without explicit legal authority.
- AI systems designed to produce disinformation, deepfakes, or synthetic media for deceptive purposes.
- Weapons systems or autonomous lethal decision-making.
- AI designed to circumvent legal obligations, regulatory oversight, or individual rights.
Our Governance Practices
Pre-project Assessment
Every client engagement undergoes a Responsible AI Assessment before work begins. We evaluate the risk classification of the proposed AI system, identify affected groups, and agree governance requirements with the client upfront.
Bias and Fairness Testing
We use a combination of statistical bias tests, adversarial probing, and representative test sets. For regulated industries (healthcare, financial services, legal), we apply sector-specific fairness frameworks aligned with FCA, MHRA, and ICO guidance.
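As one example of a statistical bias test of the kind mentioned above, the demographic parity gap compares positive-outcome rates across groups. This is a hedged sketch (the group names, outcome data, and 0.1 threshold are illustrative, not a client dataset or a mandated criterion):

```python
def demographic_parity_gap(outcomes: dict[str, list[int]]) -> float:
    """Gap between the highest and lowest positive-outcome rate across groups.
    A common acceptance criterion is a gap below 0.1."""
    rates = {group: sum(ys) / len(ys) for group, ys in outcomes.items()}
    return max(rates.values()) - min(rates.values())

# Hypothetical loan-approval outcomes (1 = approved), split by group
results = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 6/8 = 0.75 approval rate
    "group_b": [1, 0, 0, 1, 0, 1, 0, 0],  # 3/8 = 0.375 approval rate
}
gap = demographic_parity_gap(results)
assert abs(gap - 0.375) < 1e-9  # 0.75 - 0.375: would fail a 0.1 parity threshold
```

Treating a threshold like this as an acceptance criterion is what moves fairness testing from an afterthought to a gate that a release must pass.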
Model Cards and Documentation
For every AI model we develop, we produce a Model Card documenting: intended use, performance metrics across demographic groups, known limitations, recommended safeguards, and maintenance requirements.
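A Model Card can also be kept machine-readable so that documentation checks run alongside performance checks. The following is a minimal sketch mirroring the fields listed above; the field names and example values are illustrative, not a formal schema:

```python
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    """Minimal machine-readable Model Card (illustrative field names)."""
    model_name: str
    intended_use: str
    metrics_by_group: dict = field(default_factory=dict)   # accuracy per demographic group
    known_limitations: list = field(default_factory=list)
    recommended_safeguards: list = field(default_factory=list)
    maintenance: str = ""

card = ModelCard(
    model_name="triage-classifier-v2",
    intended_use="Route inbound support tickets; not for clinical use",
    metrics_by_group={"18-34": 0.91, "35-54": 0.90, "55+": 0.87},
    known_limitations=["English-language tickets only"],
    recommended_safeguards=["Human review of low-confidence routes"],
    maintenance="Quarterly re-benchmark against a refreshed test set",
)
# Example automated check: no demographic group falls below a floor metric
assert min(card.metrics_by_group.values()) >= 0.85
```

Encoding the card as data means the per-group performance it reports can be asserted in CI rather than only read by humans.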
Incident Response
We maintain documented incident response procedures for AI failures. When an AI system we have built causes harm or produces unexpected outputs, we commit to prompt disclosure, root cause analysis, and remediation — shared transparently with the affected client.
Alignment with External Frameworks
Our responsible AI practices are informed by and aligned with:
- UK Government AI Ethics Guidance
- EU AI Act (applies to UK organisations placing AI systems on the EU market)
- ISO/IEC 42001:2023 — AI Management Systems Standard
- NIST AI Risk Management Framework
- ICO Guidance on AI and Data Protection
Reporting a Concern
If you believe a Cyber Merchant AI system has caused harm, produced biased outputs, or been used in a manner inconsistent with this Charter, please contact us:
- Email: contact@cybermerchant.co.uk (subject: "Responsible AI Concern")
- Post: Responsible AI, Cyber Merchant Ltd, 107-111 Fleet Street, London EC4A 2AB
All reports are treated confidentially and investigated within 10 business days. We commit to honest, non-defensive responses — including acknowledgement of failure where it has occurred.
Further Reading
For our detailed thinking on responsible AI in practice, see our article: Responsible AI as a Competitive Advantage →
For regulatory guidance on the EU AI Act for UK businesses: EU AI Act: Plain-English Guide for UK Businesses →