Responsible AI Policy

Our AI Philosophy & Commitment 

We are committed to developing and deploying artificial intelligence systems responsibly, ethically, and transparently. Our AI technologies are designed to augment human decision-making in HR and compensation management while maintaining the highest standards of fairness, accountability, and user trust.  

Core Principles:  

  • Human-centred design – AI supports and enhances human expertise rather than replacing it 
  • Transparency – we clearly explain how our AI works and its limitations 
  • Fairness – we actively work to prevent bias and ensure equitable outcomes 
  • Accountability – we take responsibility for AI-driven decisions 
  • Privacy by design – we protect user data from the ground up 

AI Use Cases & Applications

Natural Language Processing (NLP): 

  • Analysing job documents (profiles, descriptions, adverts) 
  • Evaluating and improving content clarity and completeness 

 

Content generation: 

  • Our AI tools support users in creating job documentation. All outputs are reviewed and approved by humans before being shared.

 

Job architecture and evaluation: 

  • Job comparisons and benchmarking 
  • Job levelling (in development) 
  • Employee-to-job mapping 

Technology Stack

  • We use OpenAI (via Microsoft Azure), Claude (Anthropic), and in-house models. All data processing is hosted securely within UK-based AWS and Azure infrastructure. 
  • We carry out due diligence on all third-party AI providers and maintain contracts to ensure compliance with UK data protection standards. 
Data Governance & Privacy 

What we process: 

  • Job profiles, descriptions, adverts, and job architecture data 
  • Minimal employee details: first name, last name, email address 
  • Salary range data if supplied 

We explicitly exclude any processing of protected characteristics such as age, gender, ethnicity, or disability. 

Lawful Basis for Processing

We rely on a mix of contractual necessity, legitimate interests, and consent, depending on the context, in line with Article 6 of the UK GDPR. 

How we protect data: 

  • All data processed within UK regions 
  • Data anonymised before model training 
  • Clear deletion procedures 
  • Data retained for no more than 12 months after contract end 

 

Privacy controls: 

  • Data minimisation and purpose limitation 
  • Consent where appropriate 
  • Regular privacy impact assessments 
  • No automated decisions with legal or significant effects are made without meaningful human involvement (Article 22 UK GDPR) 

Bias Prevention & Fairness

Our approach includes: 

  • Excluding protected characteristics from processing 
  • Pre-deployment fairness testing 
  • Diverse training data to prevent systemic bias 
  • Monitoring for fairness and accuracy 

We also keep detailed records of all testing, take corrective action where needed, and publish fairness reports. 

Security & Safety 

Infrastructure security:  

  • UK-based AWS and Azure environments 
  • End-to-end encryption 
  • Regular security testing 

 

Model security: 

  • Version control and change management 
  • Access controls 
  • Secure deployment practices 

 

Data security: 

  • Role-based access 
  • Full audit logging 
  • Anonymisation for training 
  • Tested disaster recovery 

 

Human Oversight & Control 

 

Humans are always in the loop: 

  • All AI-generated content is reviewed and approved by people 
  • Clear escalation paths for concerns 
  • Users can override or reject AI suggestions 

 

Responsibility and review: 

  • All AI governance and oversight are led by Operations. 
  • Technical and Customer Success teams monitor AI behaviour and user feedback. 
  • Regular cross-functional reviews of performance and risks are held to support continuous improvement. 

Transparency & Explainability 

We clearly label AI-generated content, provide simple explanations of how features work, and share updates about improvements. 

 

We also provide: 

  • Decision explanations and confidence scores 
  • Documentation of decision logic 
  • Clear guidance on when and how to use AI features responsibly 

Compliance & Standards 

We comply with: 

  • UK GDPR 
  • Relevant employment law 
  • ICO guidance and emerging UK regulation 

 

We align with: 

  • The UK Government’s transparency and accountability framework for AI 
  • Industry best practices and customer-specific compliance requirements 

 

We maintain full documentation and audit trails and regularly review our approach. 

Continuous Improvement 

We:  

  • Test all features in development and staging 
  • Check for bias and accuracy before release 
  • Monitor live performance 
  • Update models and retrain regularly 

 

We collect feedback from users and stakeholders and review this policy annually to reflect new risks and regulations. 

Incident Response & Risk Management 

If something goes wrong: 

  • Escalation paths are in place 
  • A rapid-response team handles incidents 
  • Customers are informed 
  • Root cause analysis is done for every issue 

 

We assess and monitor risks regularly and use alerts and logs to spot and resolve potential issues early. 

Governance & Accountability 

Structure and ownership:

  • Governance of AI at RoleMapper is led by Operations. 
  • Technical teams operate and monitor systems. 
  • Regular reviews take place across functions to ensure we meet our commitments. 

 

Policy enforcement: 

  • All staff receive training 
  • Breaches are dealt with appropriately 
  • Compliance is audited regularly 

 

Stakeholder engagement: 

  • Customers are kept informed 
  • We’re open about any issues 
  • We work with regulators and industry bodies 
  • Feedback channels are available and encouraged 

Role Mapper Technologies Ltd
Kings Wharf, Exeter
United Kingdom

© 2025 RoleMapper. All rights reserved.