
Ciro Annunziata
UX Researcher
I explore how people learn, stay motivated, and grow in confidence.
I help teams turn insights into thoughtful product decisions.
Portfolio Highlights

Defining the Aha Moment in the Era of AI
A multi-phase research programme investigating when learners recognise value in AI-powered language learning — and why strong aha moments don’t always translate into Premium intent.
1. Led a four-phase programme combining usability testing, interviews, and surveys across Free and Premium users
2. Found that AI features trigger strong aha moments, but limited discoverability and unclear framing reduce their impact
3. Shaped Premium messaging, tier differentiation, and roadmap priorities for AI and vocabulary review iteration in H1 2026

Understanding Learner Motivation Over Time
A 4-week diary study exploring how adult language learners stay motivated, build habits, and experience AI-supported learning in everyday life.
1. Led a longitudinal diary study with 26 learners (A1–C1), combining in-context self-reports and product analytics
2. Identified strong intrinsic motivation alongside a clear mid-journey dip, revealing key moments for product intervention
3. Informed AI feature expansion and prioritisation across Q3–Q4 2025

Measuring Learning Impact at Scale
A large-scale efficacy study translating independent learning outcome data into actionable product insights, helping ground AI and Premium strategy in evidence rather than assumptions.
1. Led internal research synthesis and product interpretation of a six-language efficacy study conducted with an external research team
2. Analysed learning outcomes across 1,200+ users using pre/post testing, study time, and feature usage data
3. Informed evidence-based product strategy around AI features, Live Lessons, and Premium positioning

When AI Promises Learning — and Learners Expect Proof
A mixed-methods study evaluating whether AI-powered mistake review delivers meaningful learning and how trust, relevance, and visible progress shape perceived value.
1. Led end-to-end research combining moderated user testing and a large-scale global survey
2. Found that generic feedback and limited transparency erode trust, with learners expecting review of their own mistakes
3. Informed the redesign of Mistake Repair and review features toward more personalised, adaptive feedback