Validation and iteration
When it comes to validating the interface, I would combine qualitative
and quantitative methods, keeping the initial objectives in mind to
define clear success criteria.
I would start with in-person usability testing with 5 to 6
participants. A sample of this size typically uncovers the majority of
usability issues and reveals the prototype's main strengths and weaknesses.
Ideally, these tests would be conducted in the real-life context: on an
airplane, in this case. Context is key to uncovering potential issues
with navigation flows, layout, and interaction. If this isn't possible, I
would aim to replicate the in-flight situation as closely as possible.
Defining success criteria for each task in the test script is
essential to judge whether users achieve their goals.
Additionally, I would use behavior analytics platforms such as
Amplitude, Hotjar, and Metabase. These tools help gather insight into
how users move through the booking flow and where they convert or drop
off (see the instrumentation sketch after this list). Key metrics to
track would include:
Number of successful bookings
Abandonment rate before booking
Monetary value of successful bookings
Most booked activity
Least booked activity
Areas of the screen most clicked
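As a rough illustration, below is a minimal instrumentation sketch for a few of these events, assuming Amplitude's browser SDK (@amplitude/analytics-browser). The event names, properties, and helper functions are hypothetical placeholders and would need to follow whatever event taxonomy the team agrees on.

```typescript
// Hypothetical instrumentation sketch for the booking funnel,
// assuming Amplitude's browser SDK; event names are illustrative only.
import * as amplitude from '@amplitude/analytics-browser';

// Initialize once at app startup (the API key is a placeholder).
amplitude.init('AMPLITUDE_API_KEY');

// Fired when a user opens an activity's detail screen.
export function trackActivityViewed(activityId: string, activityName: string): void {
  amplitude.track('Activity Viewed', { activityId, activityName });
}

// Fired on a successful booking; priceUsd lets us aggregate monetary value.
export function trackBookingCompleted(activityId: string, priceUsd: number): void {
  amplitude.track('Booking Completed', { activityId, priceUsd });
}

// Fired when the user leaves the flow before paying; the step property
// supports computing abandonment rate per funnel stage.
export function trackBookingAbandoned(activityId: string, step: string): void {
  amplitude.track('Booking Abandoned', { activityId, step });
}
```

With events like these in place, the successful-booking count, abandonment rate, and most/least booked activities listed above can all be read directly from the analytics dashboard rather than estimated.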
A post-purchase survey asking users to rate their satisfaction would
also be crucial for understanding their experience and whether the
process met their expectations. The survey's bounce rate is another
important metric to consider, as it gauges how willing users are to
engage with the feedback process.
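To make those two survey metrics concrete, here is a small sketch of how bounce rate and average satisfaction could be computed. The SurveyResponse shape and helper names are assumptions for illustration, not part of any existing tool.

```typescript
// Hypothetical post-purchase survey summary; rating is null when the
// user saw the survey but skipped it.
interface SurveyResponse {
  rating: number | null; // satisfaction score from 1 to 5
}

// Share of users who saw the survey but never answered it.
export function surveyBounceRate(responses: SurveyResponse[]): number {
  if (responses.length === 0) return 0;
  const skipped = responses.filter((r) => r.rating === null).length;
  return skipped / responses.length;
}

// Average satisfaction among users who did answer.
export function averageSatisfaction(responses: SurveyResponse[]): number | null {
  const answered = responses.filter((r): r is { rating: number } => r.rating !== null);
  if (answered.length === 0) return null;
  return answered.reduce((sum, r) => sum + r.rating, 0) / answered.length;
}
```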