ScriptMaps: UX Research Case Study
ScriptMaps provides a service that connects aspiring screenwriters with paid professional support. While the site attracts visitors, few choose to sign up for any of the listed services. My goal with this research was to understand how to reduce the time it takes a visitor to understand the available services and make a selection, and in turn increase conversions. I also wanted to understand users’ pain points with the resources they currently use, and what features they would expect to see in a product like ScriptMaps.
Writer’s Block
The first step in kicking off my research was an in-depth discussion with the product’s main stakeholder. His first priority was to understand how to increase conversion and gain insight into how to build out the product further with tools that aspiring screenwriters need. We arrived at this priority after comparing the amount of traffic the site regularly received with how few visitors actually signed up for one of the options on offer.
Act 1: Goals
We established a few very specific goals after the stakeholder kick-off.
Identify general usability issues and recommendations for a more intuitive service selection process.
Determine a new interface design that yields higher conversion and user satisfaction.
Gain an understanding of who the users are and what they need.
Qualitative User Study + Demographics
The first step of my research was to understand the demographics of early-career screenwriters and their potential needs. Available information on screenwriters in general was sparse, but I was able to find a large survey from the screenwriting subreddit r/Screenwriting, which has a subscriber base of over one million. A general demographic survey from 2019 with almost six hundred responses provided some valuable insights. Here are some of the top-level findings:
73% male
60% Caucasian
75% identify as straight
Total results can be found here: r/Screenwriting Demographic Survey
I found this community to be a valuable resource and designed a more focused survey of my own, which can be reached at this link. I received a total of eight responses; while this number is small, the responses contained valuable information.
Six out of seven respondents reacted positively to the idea of paying for a service to help with their screenwriting process.
Respondents also described what features they would expect from a paid service like ScriptMaps. These ideas are important for future development and may also help increase conversions.
Empathy map
This empathy map reflects the feelings, thoughts, and actions of a single user who was tested. There is some overlap with the information in the user persona, but the empathy map provides additional data regarding an individual’s emotions and psychology. It can be shared with other team members and referred to when considering new features and designs.
Chekhov’s A/B Test
To reduce friction in the service selection process, I wanted to test a more conventional, easier-to-use layout that would contribute to higher conversion. My first thought was to redesign the options from a top-down list of items to a layout that more closely resembles an F-shaped pattern. The F-shaped pattern follows the way Western readers consume digital or printed text: left to right, then down, left to right, and so on.
“The F-shaped scanning pattern is characterized by many fixations concentrated at the top and the left side of the page…Users first read in a horizontal movement, usually across the upper part of the content area. This initial element forms the F’s top bar.” -Nielsen Norman Group
Control
The original design has each option listed from top to bottom with lengthy descriptions. This design was used during the test to compare against the treatment design.
Treatment
I created this design as the treatment, adhering to the top portion of the F-shaped pattern and reducing the length of the descriptions.
My hypothesis was that the treatment would outperform the control design in both selection time and ease of use, because of its adherence to the F-shaped pattern and its shorter, easier-to-read text. To test my hypothesis, I used mTurk to recruit respondents and administer the test, and Qualtrics as the actual mechanism to present the designs and collect data. Given the constraints on this project, this was the best option available for delivering user tests.
I devised a within-subject usability test with two separate test orders to control for order effects. Order effects, or practice effects, can occur when testers “warm up” and improve their performance over time. For example, if one A/B test were administered to 100 participants and item B had significantly better results, it could not be said with confidence that item B is actually the better design, because the improvement could be attributed to familiarity testers gained from first being shown item A. To control for this, I administered the control condition first and the treatment condition second to the first 25 participants, and the treatment condition first and the control condition second to the second group of 25 participants. Each test consisted of a single task for both interface designs, followed by two questions about the tester’s satisfaction with the amount of time the task took and with its ease.
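The counterbalancing scheme above can be sketched in code. This is a minimal illustration, not the actual study tooling (the assignment in practice was handled by splitting the mTurk batches); the function name and participant IDs are hypothetical.

```python
import random

def assign_counterbalanced_orders(participant_ids, seed=0):
    """Split participants into two equal groups with opposite condition
    orders, so practice effects are balanced across conditions."""
    rng = random.Random(seed)
    ids = list(participant_ids)
    rng.shuffle(ids)  # randomize who lands in which group
    half = len(ids) // 2
    orders = {}
    for pid in ids[:half]:
        orders[pid] = ("control", "treatment")   # group 1: control first
    for pid in ids[half:]:
        orders[pid] = ("treatment", "control")   # group 2: treatment first
    return orders

# Hypothetical 50-participant pool, as in the study design.
orders = assign_counterbalanced_orders(range(50))
firsts = [order[0] for order in orders.values()]
print(firsts.count("control"), firsts.count("treatment"))  # 25 25
```

Because each design appears equally often in the first and second position, any systematic "warm-up" advantage is spread across both conditions rather than favoring whichever one happens to come second.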
control snapshot
The main metric I was testing was the speed with which the tester could make the selection prompted by the task. I included a “first click” timer, which measured the total time it took the tester to read the instructions, scan the page, and make the selection. Although reading instructions is not something users would encounter on the actual site, the instructions appeared in both the control and treatment conditions, so the time spent reading them does not skew the comparison. Upon completing the task, testers were asked to rate their satisfaction with the amount of time the task took, and with its ease, on a scale of 1 to 5.
Testing Results
Although there were 50 total participants, only 14 test results ended up being usable: some completion times were too quick for the tester to have actually read the instructions and answered the follow-up questions honestly. This was the main constraint I found in using mTurk. From the usable data, here were the key takeaways:
The treatment condition achieved a 17% (2-second) faster response time than the control.
The treatment condition averaged a 4.5 out of 5 rating on ease of use, while the control averaged 4.4.
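For readers who want to reproduce a relative-improvement figure like the 17% above from mean first-click times, the calculation is straightforward. The means below are hypothetical placeholders chosen to be consistent with a 2-second gap, not the actual study data.

```python
def relative_improvement(control_mean, treatment_mean):
    """Percent reduction in mean first-click time, treatment vs. control."""
    return (control_mean - treatment_mean) / control_mean * 100

# Hypothetical mean first-click times in seconds (not the study's raw data).
control_mean, treatment_mean = 11.8, 9.8
print(f"{control_mean - treatment_mean:.1f}s faster, "
      f"{relative_improvement(control_mean, treatment_mean):.0f}% improvement")
```

Note that the percentage is computed relative to the control's mean, so the same absolute 2-second gap would read as a larger percentage on a faster baseline.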
Third Act: Insights, Recommendations, and Beyond
In pursuit of increasing conversions through a more refined and focused overall user experience, there are a few key takeaways to incorporate into ScriptMaps. Based on the qualitative and quantitative data and the user testing results, my recommendations are:
Consider building an interface similar to the treatment condition to reduce user response time and improve ease of use. Both the F-shaped pattern and the simplified options likely contributed to the treatment condition being the overall favorite.
Include examples of product validation like previous success stories to prove efficacy and value.
Consider the user’s inexperience in the field and build features around networking and user-to-user interaction.
Further testing and ideation should focus on the user journey from start to finish, on UX writing, and, above all, on how to validate the product in the user’s eyes. I believe a moderated or unmoderated user test with a group of individuals who more closely align with the demographic research above would yield deeper insight into how to improve the product further.