xWave Research — AI Policy
Last updated: February 2026
xWave Research uses artificial intelligence (AI) to enhance the speed, breadth and clarity of our work — never to replace human expertise. This page explains where AI is used, where it is not, and the safeguards we apply to protect accuracy, client trust, and intellectual integrity.
Summary (for skim-readers)
- ✔ Human-led research
- ✔ AI-supported thinking
- ✔ Manual verification for all classifications
- ✔ AI as editor, not author
- ✔ No client-confidential data goes into public AI tools
- ✔ Accountability remains with xWave
1. Scope of this Policy
This policy covers xWave’s use of AI across research workflows, dataset development, editorial processes, policy/operational copy, and internal productivity. It applies to all content published by xWave Research and to client deliverables produced by xWave.
(“AI tools” refers to large language models (LLMs) and related systems used for text generation, reasoning, drafting, and editing.)
2. Principles Guiding AI Use
- Human-Led: All primary research, interpretation, and conclusions are produced by a human analyst.
- Accuracy First: AI output is never accepted without manual verification where it affects data, categorisation, or conclusions.
- Transparency: We disclose where AI has meaningfully assisted content (e.g., “AI-assisted editorial review”).
- Accountability: All opinions, interpretations, and errors are xWave’s alone.
- Ethical & Secure: We protect client confidentiality and data integrity; we do not input client-confidential material into public AI tools.
- Client-First: AI is used to improve quality and clarity — not to generate volume.
- Continuous Improvement: We evaluate new capabilities carefully; AI supports xWave’s work, it does not direct it.
3. Where We Use AI
A. Ideation & Thought Development
- Brainstorming analytical angles
- Stress-testing assumptions
- Refining research questions and narratives
Human control: Final research direction and arguments are set by the analyst.
B. Taxonomy & Framework Refinement
- Comparing candidate classification structures
- Surfacing overlaps/gaps
- Tightening category definitions
Human control: The final taxonomy and definitions are authored and approved by xWave.
C. Assisted Categorisation & Entity Grouping
- Draft suggestions for classifying initiatives, releases or programme types
- Clustering similar items for review
Human control: Every AI-assisted classification is reviewed and validated before inclusion.
D. Editorial Support (AI as Editor, Not Author)
- Clarifying wording, tightening sentences, improving flow
- Harmonising tone with xWave’s voice
Human control: Substantive content, arguments and conclusions are human-authored; edits are accepted or rejected by a human.
E. Functional / Operational Copy (with Human & Legal Review)
- First-pass drafts of policy summaries, FAQs, cookie notices and high-level summaries of T&Cs
- Normalising terminology across policy pages
- Cross-checking for inconsistencies between pages
Guardrails: All such text is human-edited; legal or compliance-relevant language is validated by a qualified professional. AI is not a source of legal advice.
F. Internal Productivity (Non-Confidential Inputs)
- Drafting project checklists, internal guidelines, or reformatting non-confidential notes
- Generating alternative headings/sub-headings for faster iteration
Guardrails: We do not paste client-confidential material into public AI tools.
G. Analytical Augmentation (Human-Interpreted)
- AI may assist in highlighting potential trends, clusters, anomalies or relationships within manually curated datasets, though it is not used in every analysis.
- These suggestions can support early-stage analytical exploration by accelerating the identification of possible angles or questions worth investigating.
Human control: All interpretation, significance and conclusions are made by a human analyst. AI-generated analytical signals are never accepted without evidence, domain expertise and judgement.
4. Where We Do Not Use AI
- Primary Research & Discovery: Website sweeps, press-release discovery and source collection are done manually to ensure completeness and accuracy.
- Autonomous Content Generation: AI does not write research notes, draw conclusions or author opinion pieces.
- Decision-Making: Recommendations and interpretations are not accepted from AI without human evaluation.
- Legal Advice or Binding Legal Language: AI may help outline structure, but final legal/compliance text is reviewed by a qualified professional.
- Client-Confidential or Sensitive Data: We do not input client-confidential materials into public AI tools.
- Sensitive Personal Data: xWave does not process sensitive personal data through AI tools.
5. Safeguards & Quality Controls
- Human-in-the-Loop: All AI-assisted outputs that affect data, classification or meaning are manually checked.
- Source-First Method: We retain links and citations for findings; datasets and analysis maintain an audit trail.
- Versioning & Change Control: Material updates to datasets or policy language are versioned and recorded (see Change Log).
- Bias Awareness: We cross-check AI-assisted suggestions against domain knowledge and diverse sources; biased or generic outputs are discarded.
- Reproducibility: Important prompts and decisions are documented internally to reproduce outputs where relevant.
- Selective Adoption: New AI features are trialled in sandboxed workflows before inclusion in production processes.
6. Data Privacy & Security
- No upload of client-confidential data to public AI tools.
- Access control: Work accounts, MFA and least-privilege access are enforced for systems handling research content.
- Data retention: We retain only what we need for research integrity, project delivery and legal obligations; we minimise AI-tool retention where configurable.
- Third-party tools: We prefer enterprise-grade tools with contractual data-use protections. Sub-processors may be disclosed on request.
- Jurisdictional considerations: We aim to align practices with applicable data-protection expectations for our client base (e.g., UK/EU), without treating this policy as legal advice.
7. Intellectual Property & Attribution
- Human authorship: All original analysis, conclusions and narratives are authored by xWave.
- Respect for rights: xWave does not knowingly include proprietary third-party content without permission.
- Disclosure: Where AI provided meaningful editorial assistance, we may note “AI-assisted editorial review.”
- Ownership: xWave retains IP in its research deliverables unless otherwise agreed with clients in writing.
8. Client Benefits of Our AI Approach
- Dependable accuracy: Manual verification prevents AI-generated errors from entering datasets.
- Sharper analysis: AI helps explore angles; human judgement provides the final interpretation.
- Stronger taxonomies: AI helps pressure-test structures for clarity and scalability.
- Consistent voice: Editing assistance improves readability while preserving xWave’s viewpoint.
- Faster operational pages: Policy and FAQ text iterates faster — with human and legal review.
9. Governance, Reviews & Change Log
Policy owner: Paul Lambert, xWave Research
Review cadence: At least quarterly, or upon material changes in AI capability or xWave practice
Change Log
- 2026-02-07: Initial publication of AI Policy page
For questions about this policy or xWave’s use of AI, contact: info@xwaveresearch.com
10. Frequently Asked Questions
Do you use AI to write your research reports?
No. AI may assist with editing for clarity, but all research findings, arguments and conclusions are human-authored.
Do you feed client materials into AI tools?
No. We do not input client-confidential data into public AI tools. If a client requests AI-assisted processing, we will agree this in writing and use appropriate enterprise controls.
Which AI tools do you use?
Tooling evolves. We use reputable LLMs and productivity tools where they add value and provide appropriate controls. The choice of tool does not change our human-led approach or manual verification standards.
Can you disable AI assistance for my custom research project?
Yes. On request, we will disable AI assistance for client work and confirm that in the statement of work.
How do you ensure accuracy if AI is involved?
All AI-assisted classifications and edits undergo manual review. We keep source links, maintain an audit trail, and prioritise human judgement.
Transparency note: This AI Policy was drafted with AI-assisted editorial support to ensure clarity, consistency and alignment with commonly used policy structures. All content, principles and final wording were reviewed, edited and approved by xWave Research; accountability for all interpretations and errors remains with xWave.