1. Purpose of AI Within Cyse
Cyse integrates AI Systems to enhance efficiency, improve matching between organisations and accredited professionals, assist in vulnerability triage, support automation workflows, and provide analytical insights.
AI Systems are designed to support (not replace) accountable human decision-making.
2. Human Oversight
Cyse maintains human oversight over AI-enabled processes where outputs may influence security, compliance, contractual, or operational outcomes.
Users remain responsible for reviewing, validating, and approving AI-generated outputs before acting upon them.
3. Limitations of AI Systems
AI Systems may:
- Generate incomplete or inaccurate outputs
- Produce probabilistic rather than definitive results
- Reflect bias present in training data
- Require contextual validation
AI-generated outputs should not be relied upon as a sole or authoritative source of advice.
4. Prohibited AI Usage
Users must not use Cyse AI Systems to:
- Generate unlawful, harmful, or deceptive content
- Conduct unauthorised offensive security activities
- Attempt to exploit vulnerabilities outside agreed contractual scope
- Reverse engineer or bypass AI safeguards
- Generate impersonation, fraud, or social engineering content
- Train external models using Cyse proprietary outputs without authorisation
5. Data Processing & AI
Where AI Systems process personal data, that processing is carried out in accordance with:
- UK GDPR and Data Protection Act 2018
- EU GDPR (where applicable)
- US state privacy laws (including the California CPRA, where applicable)
- Australian Privacy Act 1988 (Cth), including the Australian Privacy Principles (APPs)
Cyse does not use client confidential information to train publicly available AI models unless explicitly agreed in writing.
6. Automated Decision-Making
Cyse does not rely solely on automated decision-making that produces legal or similarly significant effects for individuals without appropriate safeguards.
Where automated profiling or matching occurs, human review mechanisms are available.
7. AI Transparency
Where AI-generated content is presented within the platform, Cyse aims to clearly indicate that outputs are AI-assisted.
Users may request clarification regarding AI-assisted processes that affect them.
8. Security of AI Systems
Cyse implements technical and organisational measures to:
- Protect AI systems from misuse and adversarial manipulation
- Detect and prevent prompt injection and data exfiltration attempts
- Monitor for anomalous usage patterns
- Restrict access to sensitive AI capabilities
9. Intellectual Property
Ownership of AI-generated outputs will be governed by the applicable Terms of Use or engagement agreement.
Cyse retains ownership of proprietary AI models, system architecture, and platform algorithms.
10. Responsible Innovation
Cyse is committed to:
- Ethical AI deployment
- Security-first design
- Accountability and transparency
- Compliance with emerging AI governance frameworks
11. Enforcement
Cyse reserves the right to suspend or restrict access to AI Systems where misuse, abuse, or policy violations are detected.