The goal of this project was to increase the number of students providing reviews of their tutors post-lesson.
A secondary goal was to gather more information about what worked well and what didn’t in the lesson.
These two goals could be at odds with each other, so we focused on the primary goal as a first step.
My role in this project was interaction and visual design. I worked on this with a senior researcher, with feedback from the design director and lead designer. Stakeholders included product, community, business, and marketing.
The design in place prior to this project is shown to the left.
We surveyed students who had taken a recent lesson but hadn't left a review to better understand why.
Time was the biggest reason students didn't rate their tutors. Poor lessons were the second, followed by a desire not to hurt their tutors' feelings.
Why students don't rate lessons
The map shows the current ecosystem of ratings on the site. It's a binary system using thumbs up (positive) or thumbs down (negative) ratings. Ratings are sprinkled across many different flows throughout the site, so we narrowed our scope to the post-lesson experience for live lessons.
We looked at other companies' rating systems, namely Uber, Lyft, OpenTable, and others in the EdTech space. It was interesting to see how they handled top-level feedback about the overall experience and then dug into categories for what went well and what didn't.
We quickly sketched different top-level ratings, including options for binary, three-point, and Likert scales.
version A, iteration 1
We explored different rating scale systems in low-fidelity wireframes to test with students, including opening up to a three-point scale without iconography.
version B, iteration 2
Another exploration was an emoji-based scale. We abandoned this idea because different emojis meant different ratings to different people.
version C, iteration 1
We also tested the widely used five-star scale.
user research round 2
version A, iteration 2
We tested a one-click button interaction for subcategories to gather more information about what worked well and what didn't in the lesson.
Students didn't feel overwhelmed by this extra step, but said they probably wouldn't click the extra button right away.
These four categories seemed to embody most of what constituted a good lesson. We tested many different categories and specializations to arrive at them.
a future state
Instead of adding subcategories to dig into what worked well and what didn't in a student review, it would be interesting to test a more interactive way of gathering in-depth feedback, such as uploading a photo of your grade or a selfie showing how much happier you are after getting help.
Another good next step would be to explore the five-star interactions further and simplify the design with less text.
- The A/B test was interrupted before reaching significance because switching rating systems affected Google SEO
- We did, however, see an uptick of about 17% in ratings over the several months the project ran
- Once the SEO issue was sorted, the next step would be to integrate subcategory feedback for a deeper understanding of reviews
- Implement the areas of delight that were stripped out, such as animated stars, to make leaving a review more enticing
- Test different areas of the platform and different timing (e.g., not just post-lesson) for when to ask for feedback