Content training tool for educational talking robot
ML content training tool platform.
An internal tool designed for training and validating the educational robot's content.
Role & Duration
Led design end to end
Collaborated with researchers, content strategists, and engineers
May 2017 - April 2018
Woobo is building an artificially intelligent talking robot for the education of kids aged 4 to 8. Kids can play with the robot, ask it questions, and interact with it. One of its important functions, “ASK WOOBO QUESTIONS,” encourages kids to ask the robot any question they are curious about. For this reason, the team cared deeply about validating that the content is suitable for kids' education.
As the UX designer, I worked with our researchers, content strategists, and engineers to build a machine learning content training tool that allows researchers to send questions out for manual labeling. With this labeled data, the robot's ML model can classify question content more accurately.
* To comply with my non-disclosure agreement, I have omitted and obfuscated confidential information in this case study. The information in this case study is my own and does not necessarily reflect the views of Woobo.
Why did we need manual labeling?
In our team, backend engineers used natural language processing techniques to process and analyze large amounts of natural language data. Because the robot's content is aimed at kids, we cared about the accuracy of content classification: we wanted to make sure the question content was suitable for children from four to eight years old.
However, many of the questions were “tricky” and hard to classify. To train a content classification model, a supervised learning method is preferred, and labeling the training data is a prerequisite for supervised learning: time-consuming, but essential.
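As a rough illustration of why labeled data matters (this is not Woobo's actual pipeline, and the questions, labels, and category names below are invented), a minimal supervised text classifier can only be trained once humans have attached a label to each question:

```python
from collections import Counter, defaultdict
import math

# Hypothetical manually labeled questions (invented for illustration).
labeled = [
    ("why is the sky blue", "science"),
    ("how do plants grow", "science"),
    ("what is two plus two", "math"),
    ("how many is five minus three", "math"),
]

def train(examples):
    """Train a tiny multinomial naive Bayes model from (text, label) pairs."""
    word_counts = defaultdict(Counter)   # label -> word frequencies
    label_counts = Counter()             # label -> number of examples
    vocab = set()
    for text, label in examples:
        words = text.split()
        word_counts[label].update(words)
        label_counts[label] += 1
        vocab.update(words)
    return word_counts, label_counts, vocab

def classify(model, text):
    """Pick the label with the highest log posterior probability."""
    word_counts, label_counts, vocab = model
    total = sum(label_counts.values())
    best_label, best_score = None, float("-inf")
    for label in label_counts:
        score = math.log(label_counts[label] / total)
        denom = sum(word_counts[label].values()) + len(vocab)
        for word in text.split():
            # Laplace smoothing so unseen words don't zero out the score.
            score += math.log((word_counts[label][word] + 1) / denom)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

model = train(labeled)
print(classify(model, "why do plants need the sky"))  # -> science
```

The quality of a model like this depends entirely on the quantity and accuracy of the human labels, which is why the team invested in a dedicated tool for the labeling workflow.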
What did researchers and reviewers need?
Researchers hoped the tool would let them easily monitor the reviewers' progress and make it easy to distribute and organize the unlabeled questions. Reviewers were looking for a clear process in which errors could be recognized and recovered from, which the existing interface did not yet provide. I worked with the researchers to explore potential solutions. Here are the problems with the existing interface.
How could we improve the efficiency of labeling?
1. Make the system status visible to everyone.
The existing interface did not show status to either researchers or reviewers, so some users felt unconfident about their progress and actions. Understanding the system's current state helps users feel in control, which is important in the design of this training tool.
2. Give reviewers control and freedom over the labeling process.
Common feedback on the original version was that there was no back button to return to the previous step, and many reviewers wanted to back out of unwanted actions when they felt they had made a poor choice.
3. Follow consistency and standards.
People spend a lot of time using other tools in their daily lives. This training tool should follow external consistency conventions to reduce users' learning time.
4. A consistent design experience.
User experience design is not only about the interface; it also includes content experience, product experience, and brand experience. Even though this is an internal tool, its overall design should keep visual and interaction consistency with the main product.
(More details coming soon)
Next project ...
Emotional interface design for a robot to communicate with kids in an intuitive way.