Ratings

hero-ratings2.png

project overview

About 50 percent of lessons didn’t get ratings. Reviews from students help tutors attract new students and improve their tutoring. We also wanted to hear more from students about why they weren’t reviewing their tutors.


Control-1.jpg

goals

The goal of this project was to increase the number of students providing reviews of their tutors post-lesson.

A secondary goal was to gather more information about what worked well and what didn’t in the lesson.

These two goals could be at odds with each other, so we focused on the primary goal as a first step.

My role in this project was interaction and visual design. I worked on this with a senior researcher, with feedback from the design director and lead designer. Stakeholders included product, community, business, and marketing. 

The design in place prior to this project is shown to the left.


user research round 1

We surveyed students who had taken a recent lesson without leaving a review to better understand why they weren’t rating their lessons.

Time was the biggest reason students didn’t rate their tutors. 

Poor lessons were the second most common reason, followed by a desire not to hurt their tutor’s feelings.

Why students don't rate lessons


flow.jpg

high-level flow

The map shows the current ecosystem of ratings on the site: a binary system using thumbs up (positive) or thumbs down (negative) ratings. Ratings are sprinkled across many different flows throughout the site, so we focused on the post-lesson experience for live lessons.


7.jpg

inspiration

We looked at other companies’ rating systems, including Uber, Lyft, and OpenTable, as well as others in the EdTech space. It was interesting to see how they handled top-level feedback about the overall experience and then dug into categories for what went well or what didn’t.


sketches.png

sketches

I quickly sketched different top-level rating options, including binary, three-point, and Likert scales.


wireframes

wire3.png

version A, iteration 1

We explored different rating scales in low-fidelity wireframes to test with students, including opening up to a three-point scale without iconography.

wire1.png

version B, iteration 2

Another exploration was an emoji-based scale. The same emojis meant different ratings to different people, so we abandoned this idea.

wire2.png

version C, iteration 1

We also tested the widely used five-star scale. 


1-no-star.png

learnings

The five-star system tested well in the initial user research. Students preferred the granularity of a five-point scale since lessons weren’t always simply a good or bad experience; there was a lot more color in the middle.


user research round 2

option1.jpg

version A, iteration 2

We tested a one-click button interaction for subcategories to gather more information about what worked well and what didn’t in the lesson.

Students didn’t feel overwhelmed by this extra step, but said they probably wouldn’t do the extra button click right away. 

These four categories seemed to embody most of what constituted a good lesson. We tested a lot of different categories and specializations to arrive at these four.

option2.jpg

version B, iteration 2

We also tested the subcategories on a five-point scale. Students liked being able to dig into a granular scale for the subcategories, but it did add cognitive load.


student-communication3.jpg

MVP

For the first step of this project, we A/B tested the five-star rating system against the current thumbs up/down reviews to see if the wider spectrum met the goal of increasing student feedback.

This modal helped communicate the test to students.
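
The implementation details aren’t part of this case study, but as a rough illustration of the mechanics, an A/B split like this is often done by hashing the student ID so each student stays in the same arm for the whole test. A minimal Python sketch with hypothetical names, not the platform’s actual code:

```python
import hashlib

def assign_variant(student_id: str, experiment: str = "post-lesson-rating") -> str:
    """Deterministically bucket a student into a test arm.

    Hashing (experiment, student_id) keeps the assignment stable across
    sessions without storing extra state, so a returning student never
    flips between the five-star and thumbs variants mid-test.
    """
    digest = hashlib.sha256(f"{experiment}:{student_id}".encode()).hexdigest()
    return "five_star" if int(digest, 16) % 2 == 0 else "thumbs_up_down"

# Example: the same student always lands in the same arm.
print(assign_variant("student-123"))
```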


future.png

a future state

Instead of adding subcategories to dig into what worked well and what didn’t in a student review, it’d be interesting to test a more interactive way of getting in-depth feedback, such as adding a photo of your grade or a selfie showing how much happier you are after getting help.

I also think a good next step would be to explore the five-star interactions further and simplify the design with less text.


next steps

  • The A/B test was interrupted by the Google SEO impact of switching rating systems before it reached significance (see the significance sketch after this list)
  • We did see an uptick of about 17% in ratings, and the test had been running for several months
  • After that is sorted out, the next step would be to integrate subcategory feedback for a deeper understanding of reviews
  • Implement the areas of delight that were stripped out, such as animated stars, to make providing feedback more enticing
  • Test different areas of the platform and different timing (e.g., not just post-lesson) for asking students to give feedback
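
As a rough illustration of the first bullet, “reaching significance” would mean something like a two-proportion z-test on the share of lessons rated in each arm coming in under the chosen threshold. A minimal Python sketch with made-up counts; the real experiment numbers aren’t part of this write-up:

```python
import math

def two_proportion_ztest(x1: int, n1: int, x2: int, n2: int) -> tuple[float, float]:
    """Two-sided z-test for a difference between two rating rates."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)                      # pooled rating rate
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))          # two-sided p-value
    return z, p_value

# Hypothetical counts only: a ~17% relative lift in rating rate,
# echoing the uptick mentioned above, not the project's real data.
z, p = two_proportion_ztest(x1=585, n1=1000, x2=500, n2=1000)
print(f"z = {z:.2f}, p = {p:.4f}")  # "significant" if p falls below, e.g., 0.05
```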