

An end-to-end UX project translating research insights into a clearer, more accessible, and more goal-oriented digital experience.
Owned the full UX process from discovery and usability analysis to wireframing, UI design, and user testing
Defined personas, journeys, and information architecture grounded in qualitative research
Delivered a research-backed redesign focused on accessibility, clarity, and mobile-first decision-making

A multi-phase research programme investigating when learners recognise value in AI-powered language learning, and why strong aha moments don’t always translate into Premium intent.
Led a four-phase programme combining usability testing, interviews, and surveys across Free and Premium users
Found that AI features trigger strong aha moments, but that limited discoverability and unclear framing reduce their impact
Shaped Premium messaging, tier differentiation, and roadmap priorities for AI and vocabulary review iterations in H1 2026

A large-scale efficacy study translating learning outcome data from independent testing into actionable product insights, grounding AI and Premium strategy in evidence rather than assumptions.
Led internal research synthesis and product interpretation of a six-language efficacy study conducted with an external research team
Analysed learning outcomes across 1,200+ users, drawing on pre/post test scores, study time, and feature usage data
Informed evidence-based product strategy around AI features, Live Lessons, and Premium positioning

A 4-week diary study exploring how adult language learners stay motivated, build habits, and experience AI-supported learning in everyday life.
Led a longitudinal diary study with 26 learners (A1–C1), combining in-context self-reports and product analytics
Identified strong intrinsic motivation alongside a clear mid-journey dip, revealing key moments for product intervention
Informed AI feature expansion and prioritisation across Q3–Q4 2025

A mixed-methods study evaluating whether AI-powered mistake review delivers meaningful learning, and how trust, relevance, and visible progress shape perceived value.
Led end-to-end research combining moderated user testing and a large-scale global survey
Found that generic feedback and limited transparency erode trust, as learners expect to review their own mistakes
Informed the redesign of Mistake Repair and review features toward more personalised, adaptive feedback