Governance Steps

Every stage. Every check.

A reference guide to all 9 governance stages applied to AI-generated code — what we examine, what we test, and what we deliver at each step. For the narrative journey, see Our Process.

JUMP TO: 01 Code Review 02 Security 03 Compliance 04 Pen Test 05 Performance 06 Documentation 07 Accessibility 08 AI Validation 09 Legal ✦ Certification
01
Code quality & maintainability

AI Code Review & Refactoring

We review and refactor your AI-generated codebase for readability, modularity, and maintainability — applying industry coding standards, naming conventions, dependency auditing, and performance optimisation before any other governance work begins.

Full codebase review against industry standards
Modularisation and separation of concerns
Naming conventions and dead code removal
Dependency audit and version pinning
Performance optimisation
Technical debt register
02
Static analysis & security posture

Security Audit & Vulnerability Scanning

Static analysis for OWASP Top 10 vulnerabilities, authentication review, input validation, secrets management, and API endpoint exposure. AI-generated auth logic and permission structures are a common source of exploitable flaws.

OWASP Top 10 static analysis
Authentication and session management review
Input validation and injection vulnerability scan
Hardcoded secrets and API key detection
Third-party dependency CVE check
API endpoint access control review
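To make the "hardcoded secrets detection" check concrete, here is a minimal, illustrative sketch of a pattern-based scan — the names and regexes are simplified examples, not our actual tooling, which also uses entropy analysis and provider-specific key formats:

```python
import re

# Simplified patterns: generic credential assignments and the AWS
# access-key-ID format. Real scanners cover many more providers.
SECRET_PATTERNS = [
    re.compile(r"""(?i)(api[_-]?key|secret|token|password)\s*[:=]\s*['"][^'"]{8,}['"]"""),
    re.compile(r"AKIA[0-9A-Z]{16}"),  # AWS access key ID shape
]

def scan_source(text: str) -> list[str]:
    """Return the lines that look like they embed a credential."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), 1):
        if any(p.search(line) for p in SECRET_PATTERNS):
            hits.append(f"line {lineno}: {line.strip()}")
    return hits
```

A hit on `API_KEY = "sk-live-..."` is exactly the kind of finding this stage flags before it reaches a public repository.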
03
GDPR, ISO 27001 & regulatory alignment

Compliance Checks

GDPR compliance review, data residency documentation, privacy-by-design audit, error handling and logging checks, and ISO 27001 alignment where required. Compliance gaps are invisible until they're regulatory findings.

GDPR consent and data minimisation review
Right to erasure and data portability check
Identification of PII in logs
Data residency documentation
Cookie and tracking compliance
Data processing agreement review
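As an illustration of the PII-in-logs check: the usual remediation is to redact identifiers before log lines are persisted. This sketch uses deliberately simplified patterns for emails and UK phone numbers — a real audit covers far more identifier types:

```python
import re

# Illustrative-only patterns; production redaction handles names,
# addresses, national identifiers, and structured payloads too.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\b(?:\+44|0)\d{9,10}\b")

def redact(line: str) -> str:
    """Replace obvious PII in a log line with placeholder tokens."""
    line = EMAIL.sub("[EMAIL]", line)
    return PHONE.sub("[PHONE]", line)
```

Logging `redact(message)` instead of the raw message turns a GDPR finding into a non-issue for these identifier classes.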
04
Active exploitation testing

Penetration Testing

Active OWASP-mapped penetration tests against your staging environment — authentication bypass, IDOR, privilege escalation, API abuse, business logic vulnerabilities, and data exposure. Every finding gets a severity rating and remediation plan.

OWASP-mapped penetration test
Authentication bypass and privilege escalation
IDOR vulnerability testing
API abuse and rate limit testing
Business logic vulnerability testing
Data exposure and over-fetching tests
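For readers unfamiliar with IDOR (insecure direct object reference): the flaw is an endpoint that fetches a record by id without checking that the caller owns it. A minimal sketch — the names (`get_invoice`, `INVOICES`) are hypothetical, not from any client codebase:

```python
INVOICES = {
    1: {"owner": "alice", "total": 120},
    2: {"owner": "bob", "total": 75},
}

def get_invoice_vulnerable(invoice_id: int, caller: str) -> dict:
    # IDOR: any authenticated caller can read any invoice by guessing ids.
    return INVOICES[invoice_id]

def get_invoice_fixed(invoice_id: int, caller: str) -> dict:
    # Remediation: enforce ownership before returning the record.
    invoice = INVOICES[invoice_id]
    if invoice["owner"] != caller:
        raise PermissionError("caller does not own this resource")  # maps to HTTP 403
    return invoice
```

AI code generators routinely emit the first version; our pen tests probe every id-parameterised endpoint for exactly this gap.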
05
Load testing & growth ceiling analysis

Performance & Scalability Testing

Load simulation, stress testing, database query performance under concurrent load, CDN and asset review, and API rate limit mapping. We give you a specific user growth ceiling — with the architectural changes needed to extend it.

Load simulation at expected user volumes
Stress testing to failure thresholds
Database query performance under load
CDN and asset delivery review
API rate limit mapping at peak
Scalability ceiling and growth trigger report
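The core of a load simulation is measuring latency percentiles under concurrency. This sketch stands in for the real thing — `handle_request` here simulates 10 ms of service work, where a real run would call a staging endpoint over HTTP:

```python
import concurrent.futures
import statistics
import time

def handle_request() -> float:
    """Time one simulated request; a real harness would issue an HTTP call."""
    start = time.perf_counter()
    time.sleep(0.01)  # stand-in for 10 ms of service work
    return time.perf_counter() - start

def run_load(concurrency: int, requests: int) -> dict:
    """Fire `requests` calls across `concurrency` workers; report percentiles."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=concurrency) as pool:
        latencies = sorted(pool.map(lambda _: handle_request(), range(requests)))
    return {
        "p50": statistics.median(latencies),
        "p95": latencies[int(len(latencies) * 0.95) - 1],
        "max": latencies[-1],
    }
```

Repeating `run_load` at rising concurrency until p95 breaches your SLO is how a growth ceiling becomes a number rather than a guess.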
06
Handover-ready codebase

Documentation & Maintainability

AI tools don't document. We do. Full inline and external codebase documentation, architecture decision records, API reference, developer onboarding guide, maintenance runbook, and a technical debt register — so any engineer can pick it up.

Full codebase documentation
Architecture decision record (ADR)
API documentation and endpoint reference
Developer onboarding guide
Maintenance runbook
Support escalation structure
07
WCAG compliance & usability

UX & Accessibility Review

WCAG 2.1 AA compliance audit, keyboard navigation and screen reader testing, colour contrast review, form and focus management, mobile responsiveness, and usability review against core user journeys. Accessibility is increasingly a legal requirement.

WCAG 2.1 AA compliance audit
Keyboard navigation and screen reader testing
Colour contrast and visual accessibility
Form label and focus management
Mobile responsiveness review
Usability review against core journeys
08
Fairness, ethics & reliability

AI Model Validation & Bias Checks

If your product includes AI components — recommendations, scoring, generation, classification — those outputs need validation. We test for consistency, bias across protected characteristics, hallucination risk, and edge case behaviour.

AI output consistency testing
Bias assessment across protected characteristics
Edge case and adversarial input testing
Hallucination and confidence calibration
Human oversight point documentation
Ongoing monitoring recommendations
09
IP, OSS licensing & legal risk

Legal & Licensing Compliance

AI tools may reproduce GPL or similarly licensed code. We audit your full dependency tree and generated code for IP risk, open-source licence conflicts, and legal exposure — before these become costly findings in an investor's legal review.

Open-source licence compliance audit
GPL / AGPL incompatibility detection
AI-generated code IP risk assessment
Third-party service terms compliance
Data processing agreements review
Legal exposure summary for founders and investors
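In spirit, the licence-conflict check reduces to comparing each dependency's declared licence against a disallow list. A hedged sketch — the SPDX ids and the `licence_conflicts` helper are illustrative, and real audits also walk transitive dependencies and scan for copied source:

```python
# Copyleft licences that typically conflict with closed-source distribution.
COPYLEFT = {"GPL-2.0-only", "GPL-3.0-only", "AGPL-3.0-only"}

def licence_conflicts(deps: dict[str, str]) -> list[str]:
    """deps maps package name -> declared SPDX licence id."""
    return sorted(name for name, lic in deps.items() if lic in COPYLEFT)
```

One AGPL package buried in a dependency tree is precisely the kind of finding that surfaces late in an investor's legal review.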
Governance certificate & deployment guidance

Final Certification & Launch Readiness

When all stages are complete and findings are remediated, we issue a final governance certification — a verifiable record of the governance process, delivered by Logic Software Ltd (CREST Approved and Cyber Essentials certified). Then we help you launch — or set up a full CI/CD pipeline.

Final governance certification document
Executive summary for investor data rooms
Critical and high findings verified closed
Launch readiness checklist sign-off
Deployment guidance
Optional: CI/CD pipeline setup (DevOps add-on)
Ready?

Tell us what your product needs.

Not every product needs all 9 stages. We scope each engagement to your codebase, your timeline, and the requirements you need to satisfy. Talk to us.

Get in touch → Service tiers