How xWave Research Uses AI: A Transparent, Human-Led Approach to Telecom Insights
Over the past two years building xWave Research, I’ve watched AI evolve rapidly. With two decades writing about the telecoms industry - first as a journalist, later as an analyst - I had a solid basis to assess where AI helps, where it falls short, and how I could use it effectively - and responsibly - in a research business. This piece explains that journey and, ultimately, xWave’s approach to AI.
xWave Research AI Policy — At a Glance
For readers who prefer the highlights, here’s our policy in brief.
| Principle | Practice |
| --- | --- |
| Human-led research | All primary research, sourcing, and verification are performed manually to guarantee accuracy. |
| AI-supported brainstorming | AI is used selectively for ideation, taxonomy refinement, and stress-testing analytical frameworks. |
| Assisted categorisation | AI accelerates classification tasks, but every entry is reviewed and validated by a human. |
| Editorial collaboration | AI acts as an editor, not an author. Final outputs always reflect human judgement. |
| Data analysis | AI helps surface possible patterns, correlations and anomalies in manually curated data, but all interpretation, significance and conclusions are determined by a human analyst. |
| Accountability | All interpretations, conclusions, and errors are mine alone. |
In essence: AI enhances xWave’s work — it never replaces human expertise.
From Early Experiments to Maturing Capabilities
My experience with AI falls into two phases: early 2023, when the first LLM chatbots arrived, and mid-to-late 2025, when agentic models, autonomous reasoning, and more reliable performance emerged.
xWave's AI Journey
2023: Inconsistent results and hallucinations → manual focus reinforced
Mid-to-late 2025: Agentic models emerge → AI as a tool for ideation and editing
Ahead: Deeper integration in pattern detection and scenario exploration → human-led, always
Phase 1: Early Underwhelming Results (2023)
The first phase was underwhelming. Results were inconsistent, hallucinations were frequent, and the time spent coaxing a usable answer often exceeded the time needed to do the task manually. I left that initial encounter with the sense that AI wasn’t ready to support my research.
Phase 2: Valuable Augmentation (Mid-Late 2025)
A year later, during a short break from xWave Research, I used AI differently - more like an enhanced search engine. It excelled at explaining scientific concepts, historical debates, and technical subjects in a way traditional search could not: fast, nuanced, and consistently informative.
Its weaknesses were equally clear. Ask for subjective judgement - say, the “best” recording of a piece of music - and the results felt generic. Ask it to explain an obscure scientific argument, and it was superb.
This duality became the basis of my practice: use AI where precision, logic, and synthesis are required, and keep human judgement where interpretation matters.
Research: Still a Human Job
When I returned to xWave Research, I asked the key question: Given AI’s strengths in technical domains, could it enhance my research? The answer was consistent: not yet, not reliably.
LLMs are excellent at reasoning and summarisation, but they are generative models - not retrieval engines. They infer likely answers rather than systematically discovering information. As a result, tasks like “find all AI‑related announcements on this company’s website” produced incomplete or misleading results.
And while scrapers and crawlers can collect pages, they can’t judge what counts as a launch, distinguish signal from noise, or track updates across the many fragmented places where telco and AI activity appears.
For these reasons, all primary research at xWave remains human‑led. Manual sourcing and verification are the only way to guarantee completeness, relevance and accuracy.
Taxonomy Development: Where AI Started Adding Real Value
The turning point came after finishing the 5G Services dataset, while developing the taxonomy for the Telco AI Services & Impact Tracker. By late 2025, AI had become far more capable at structured reasoning. Conversations about frameworks, categories, and analytical distinctions began to feel like discussions with highly experienced colleagues.
I found the same rule applied: the quality of the output is entirely dependent on the quality of the input. As the prompts and dialogue improved, the suggestions became insightful - not generic - and genuinely helped refine the dataset’s structure and value.
In short, AI proved excellent for iterative ideation, especially for complex classification problems.
Categorising Press Releases: Promising but Uneven
After manually researching roughly 240 telco AI initiatives, I tested whether AI could help categorise them. I uploaded a large, manually coded section as training data and asked the model to classify a fresh batch.
The results ranged from excellent to borderline — sometimes within the same set of five items. Some classifications differed from mine in ways that were genuinely useful; others were clearly off. This inconsistency is expected because LLMs are generative systems whose reasoning is probabilistic, not deterministic. They produce the most likely answer, not a guaranteed one.
Conclusion: AI can accelerate categorisation, but every entry must be checked by a human. Used this way, it saved time, sanity‑checked my thinking, and subtly improved the dataset.
AI in Data Analysis: Helpful Signals, Human Interpretation
With all the data collected and categorised, I also tested whether AI could help identify emerging patterns across the initiatives. Here again, it proved useful - but only in a tightly controlled way.
AI was good at highlighting:
possible clusters of similar initiatives
recurring themes or trends
anomalies that might warrant a closer look
high‑level relationships between categories
But as with categorisation, these suggestions were uneven. Some were perceptive, others superficial, and a few simply wrong. What mattered was how they served as prompts: nudges to explore a pattern, pressure‑test an assumption, or revisit a grouping.
In short, AI helped me notice things faster - and spot things I might otherwise have missed - but the interpretation, significance and conclusions were entirely mine.
This was analysis support, not analytical judgement.
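The kind of signal-surfacing described above can be approximated even without an LLM, which makes the division of labour easy to see. The sketch below uses invented, hypothetical tags and initiatives; it surfaces recurring themes, one-off anomalies, and co-occurring categories, and deliberately stops there, leaving interpretation to the analyst.

```python
from collections import Counter
from itertools import combinations

# Hypothetical tagged initiatives, for illustration; real data is manually curated.
initiatives = [
    {"telco": "A", "tags": {"genai", "customer_service"}},
    {"telco": "B", "tags": {"genai", "customer_service"}},
    {"telco": "C", "tags": {"genai", "network_ops"}},
    {"telco": "D", "tags": {"quantum"}},  # a potential anomaly
]

# Count individual tags and co-occurring tag pairs across all initiatives.
tag_counts = Counter(t for i in initiatives for t in i["tags"])
pair_counts = Counter(
    pair for i in initiatives for pair in combinations(sorted(i["tags"]), 2)
)

# Surface candidate signals; a human analyst decides whether they mean anything.
recurring_themes = [t for t, n in tag_counts.items() if n >= 2]
anomalies = [t for t, n in tag_counts.items() if n == 1]
top_pair = pair_counts.most_common(1)[0] if pair_counts else None

print(recurring_themes, anomalies, top_pair)
```

Everything the script emits is a prompt for closer human inspection, not a conclusion, mirroring the "analysis support, not analytical judgement" rule.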
Idea Development: A Collaborative Dialogue
Years of political studies and a career shaped by news, editorial and analyst brainstorms have trained me to spot “an angle” quickly.
When writing analysis, I used AI not to replace that instinct but to pressure‑test it. The back‑and‑forth often led to more nuanced insights. When I ignored its suggestions, it was usually because I knew the direction I wanted. When I followed them, I found the analysis became richer.
It felt collaborative: I was teaching it what matters in a certain domain, and it was helping me think more deeply about it.
Content Generation: Useful, But Never Autonomous
Content generation is sensitive territory. Having worked with excellent human editors at Informa, I know the value of a skilled editor. Could AI replicate that?
Sometimes, yes - astonishingly so. A paragraph would come back clearer and more fluent, exactly as a human editor might improve it. The next paragraph might be noticeably weaker.
Training the model on samples of my writing helped. The results moved closer to what an experienced editor would produce: not a rewrite, but a respectful refinement.
Does it save time? Some.
Does it improve the final product? Often.
Does the resulting text remain mine? Absolutely.
As with human editing, collaboration does not diminish authorship.
Operational Uses: Functional Content, Not Expert Insight
AI has also been helpful for purely functional tasks where the goal is clarity and completeness rather than interpretation. For example:
Drafting first‑pass boilerplate sections (such as website T&Cs summaries, FAQ copy, or AI Policy summaries), drawing on AI’s broad familiarity with commonly used structures, conventions and technical terminology.
This speeds up iteration while keeping all final legal or compliance‑relevant language under human and — where appropriate — professional legal review.
xWave Research’s AI Policy: Selective, Transparent, Human‑Led
So how does AI fit into xWave Research?
Research
All primary data collection is conducted manually. AI is not reliable enough for discovery tasks and is used only for augmentation, not sourcing.

Ideation & Taxonomy Development
AI is excellent for stress‑testing ideas and refining frameworks, provided prompts are precise and the dialogue is iterative.

Categorisation & Classification
AI is used selectively to accelerate classification, but every item is human‑checked.

Data Analysis
AI highlights potential patterns and relationships in validated data, but only humans decide what they mean and whether they matter.
Writing & Editing
AI is used as an editor and thought partner, not a writer. Final outputs always reflect human oversight and judgement.
Overall: xWave uses AI as a tool - not a substitute - for human expertise.
As Jon Agar notes in Science in the Twentieth Century and Beyond, scholars traditionally take sole responsibility for mistakes. The same applies here. For xWave Research, “all opinions, interpretations and errors are mine alone.”
Why This Matters for Clients
Clients rely on xWave Research for accuracy, clarity, and judgement - not volume. Our approach to AI reflects that. By using AI selectively and under strict human oversight, we ensure:
Dependable insights: All data is manually verified, reducing the risk of AI‑generated errors in strategic conclusions.
Sharper analysis, never automated: AI helps explore more angles; final interpretations come from sector experience.
Robust frameworks: AI helps pressure‑test categorisations, resulting in cleaner, more scalable structures for client use.
Consistent voice and standards: Using AI as an editor improves clarity while keeping the writing and perspective authentically xWave.
For clients, this is the best of both worlds: the efficiency and breadth of modern AI tools, anchored by human judgement, domain expertise, and complete accountability.
Looking Ahead: How xWave Will Evolve Its Use of AI
AI will remain an important - but carefully controlled - part of how xWave Research operates. As models improve, I plan to deepen their use in three areas:
Smarter classification and pattern detection
Next‑generation reasoning will likely improve categorisation consistency and surface cross‑dataset patterns that would otherwise take days to uncover.

More sophisticated scenario exploration
Large models are becoming capable of structured “what‑if” analysis, helping test assumptions and explore alternative strategic interpretations - while keeping the final call human.

Faster iteration on research frameworks
As taxonomies expand across CSPs, hyperscalers, and vendors, AI will help harmonise structures and highlight gaps or overlaps more quickly.
The principle won’t change: AI will support, not direct, xWave’s work. Human oversight, accountability, and judgement will continue to underpin every dataset, analysis piece, and conclusion we publish.