Ciro Annunziata

UX Researcher

I explore how people learn, stay motivated, and grow in confidence.

I help teams turn insights into thoughtful product decisions.

Portfolio Highlights

Defining the Aha Moment in the Era of AI

A multi-phase research programme investigating when learners recognise value in AI-powered language learning — and why strong aha moments don’t always translate into Premium intent.

  1. Led a four-phase research programme combining usability testing, interviews, and surveys across Free and Premium users
  2. Found that AI features trigger strong aha moments, but limited discoverability and unclear framing reduce their impact
  3. Shaped Premium messaging, tier differentiation, and roadmap priorities for AI and vocabulary review iterations in H1 2026

Understanding Learner Motivation Over Time

A 4-week diary study exploring how adult language learners stay motivated, build habits, and experience AI-supported learning in everyday life.

  1. Led a longitudinal diary study with 26 learners (A1–C1), combining in-context self-reports and product analytics
  2. Identified strong intrinsic motivation alongside a clear mid-journey dip, revealing key moments for product intervention
  3. Informed AI feature expansion and prioritisation across Q3–Q4 2025

Measuring Learning Impact at Scale

A large-scale efficacy study translating independent learning outcome data into actionable product insights, helping ground AI and Premium strategy in evidence rather than assumptions.

  1. Led internal research synthesis and product interpretation of a six-language efficacy study conducted with an external research team
  2. Analysed learning outcomes across 1,200+ users using pre/post testing, study time, and feature usage data
  3. Informed evidence-based product strategy around AI features, Live Lessons, and Premium positioning

When AI Promises Learning — and Learners Expect Proof

A mixed-methods study evaluating whether AI-powered mistake review delivers meaningful learning and how trust, relevance, and visible progress shape perceived value.

  1. Led end-to-end research combining moderated user testing and a large-scale global survey
  2. Found that generic feedback and limited transparency erode trust, with learners expecting review of their own mistakes
  3. Informed the redesign of Mistake Repair and review features toward more personalised, adaptive feedback