Elevating user retention: From chatbot to a trustworthy interactive platform
TIMELINE
Dec 2023 - Mar 2024
COMPANY
SmartPrep is an EduTech platform focused on transforming secondary education through AI-driven tools for language education.
ROLE
Product Designer working with CEO and 5 engineers
RESPONSIBILITY
As the first design hire, I led user research to uncover user pain points, built human-AI experiences from 0 to 1 to boost user retention, and established a design system for efficient handoff with cross-functional teams.
Impact
+31%
increase in engagement time
-21%
decrease in overall churn rate
$1.2M
secured seed round investment
Problem - Business Focus
Low retention rate among trial users post beta
1.1 User Retention Graph
Problem - User Focus
Low user trust leads to product abandonment, reducing retention.
1.2 User Trust & Continuous Usage Rate
Solutions
An education platform that is familiar, controllable, and transparent to teachers.
2.1 Assignment Generation
2.2 Question Card Interaction
Generate your assignment with AI
Teachers can generate entire assignments or specific questions using AI, aligned with their syllabus and learning objectives.
2.3 Assignment Insights
Faster grading and assignment analysis
Teachers can access information like AI grading suggestions, question breakdowns, common misconceptions, and more.
Personalized insight for every student
Over time, the AI will analyze student data, helping teachers uncover insights into strengths and weaknesses, and suggest areas for improvement.
Context -- Beta Product
I started with a simple chatbot
3.1 Beta Demo
But teachers were not satisfied with just a chatbot :(
Time constraints led us to pare down the beta product, allowing teachers to upload assignments and student submissions for insights via chatbot.
Research -- Data Analysis
96% of trial users drop off before acting on AI-generated insights.
Where... do we lose our users?
I mapped user retention across key touchpoints in the user journey to identify where we experienced the highest drop-off rates.
Research -- Qualitative
Users are hesitant to rely on AI as there is too much uncertainty around the technology.
Why... do we lose our users?
To uncover factors behind low adoption and high drop-off rates, I cross-referenced interview notes from users who chose not to start a trial or dropped off early.
Design Explorations
Determining Product MVP
As we decided to expand our product beyond a flat-layered chatbot, I conducted a workshop with the team to develop the information architecture of the product and discuss which sections to prioritize.
Receiving Early Feedback
As the team operated under very tight deadlines, I used many low- to mid-fidelity prototypes to gather early feedback on user experiences and assess implementation feasibility with the engineers.
Design Principles
2 core principles to reduce interaction uncertainty
Through interviews and early-stage feedback, I identified specific patterns users struggled with, which contributed to their perception of the product as untrustworthy.
Familiar Logic
Design the interface and interactions to align with existing educational tools, reducing the learning curve and making the AI feel more approachable.
AI Transparency
Clearly convey how the AI makes decisions and where AI is implemented, providing users with understandable explanations and insight into its processes.
Familiar Logic -- Why Familiarity
Building familiarity is fundamental to reducing AI uncertainty.
Familiar Logic -- Familiar Product Architecture
Teachers prefer products that offer a complete workflow, so we expanded the flow.
Expanding product coverage
Users were accustomed to using different platforms for assignments but disliked switching mid-task. Hence, we expanded our product to mimic their familiar workflow.
Familiar Logic -- Familiar Product Learning
Build familiarity and prevent early-stage drop-off with progressive disclosure.
AI Transparency -- Why Transparency?
Users trust AI only when they consistently receive reliability indicators.
Trust emerges from communication, but the question is how ...?
To explore ways to strengthen trust between users and AI products, I delved into HCI research on Medical AI Interaction, which, like EdTech AI, demands high user trust, and examined the concept of reliability indicators.
View the final solutions again
Retrospectives
Designing for Implementation Means Designing with Engineers in Mind
In a fast-paced startup environment, I learned that collaborating closely with engineers is crucial. From mapping out all edge cases to providing clear annotations and structured handoffs, making life easier for engineers means more of my designs get pushed out to users. Seeing my Figma work come to life is incredibly rewarding.
Navigating Ambiguity in Emerging Tech with Academic Research
Designing for AI comes with a lot of ambiguity due to the lack of established design patterns. Instead of relying on intuition alone, I found that academic research in HCI, especially on human-AI interaction, provided valuable guidance in shaping the cognitive models of AI interactions.
Prioritizing Designs that Drive Business Growth
At an early-stage startup, I had to prioritize designs that would have the most impact on business growth. Engaging closely with the business side, I realized that launching a functional version—even with minor design flaws—can be crucial for meeting client timelines, generating revenue, and gathering data to improve the product.