The value of user tests
Why do user tests matter? Why allocate a budget or invest time and effort into learning how to conduct them?
User tests aim to identify usability issues, understand user preferences, and gather insights to improve the overall product. These tests focus on gathering feedback from participants while they interact with a prototype or the real product, often under the supervision of a researcher tasked with summarizing the findings.
User tests provide feedback on the product's usability, functionality, and user experience:
- Product managers can validate assumptions and prioritize features;
- IT teams can identify technical issues early in the development process;
- …and the entire company can build a user-centric culture and find a better market fit.
To conduct user tests, you can hire a specialized company or consultants, or leverage internal expertise (e.g., an experienced researcher already within the company's structure). However, when you have none of these options, there is still a way: you can do it yourself.
In this article, I will explore the importance of user tests through three case studies, each highlighting a different budget level. These case studies not only demonstrate the significance of user tests but also offer insights into what can be learned from them, illustrating their value across various organizational contexts.
The budget and three case studies
When the budget is no problem
One could think that a big budget can solve all the problems when it comes to user tests. However, based on my observations, I can only partially agree with this statement.
In general, the case I would like to share is a successful one. Yet it also had its shortcomings, and it is not my favorite piece of user research I have ever done.
The business needs were complicated: we had to prepare a new financial product in response to a change in the law. The product had to be compliant and was therefore more complex. Moreover, its UX was a puzzle, as we needed to include a lot of legal documents, notifications, and disclaimers.
To evaluate the product, we decided to conduct proper user research. We hired a well-known company (one of the Big Four) and went through all the recommended preparation cycles, then iterations of tests, and finally received a complete summary with all the findings. The consultants' recommendations included not only test results but also general tips based on our case (compliance with changing financial regulations).
The results were great because, after the tests, we knew what to fix. One finding in particular was meaningful for the company because it was unexpected: when it came to this complex financial product, our assumption that we should simplify whatever we could and speed up the user's path was wrong. Users were scared when the process of obtaining a complicated financial product felt too fast and easy. They felt more comfortable and secure when there were more 'stops on the road.' In e-commerce this is usually perceived as annoying, but here it was necessary to improve the overall customer experience. This finding helped us improve the product and also allowed us to worry less about its complexity.
To conclude, I truly admired the work the consultants did, not least because the entire process was a great lesson for the whole product team. However, I would not call it the best user test I have been involved in. The reason is that it consumed a huge budget and lasted several weeks in total. It was not a very agile approach, and in this case we needed to act fast to beat the competitors. We did not act fast, and I felt we missed an opportunity.
When the budget is constrained
The second story is about a ride-hailing (taxi) app company, where we had a number of products to develop. The budget for any user tests was limited. I decided to use it in an atypical way: to conduct accessibility tests.
According to the plan, the tests were prepared internally, so the company did not have to spend money on an external consultant. However, we did spend money to hire external testers. This group included a blind person who used our app and noted problems within the product, and testers with vision impairments who were members of an association for people with this kind of disability. Naturally, we paid for all the testers' rides on top of their fees.
Importantly, the feedback we gathered helped us improve the usability of our product for all customers, not only for people with vision impairments. We were also able to fix communication flaws and navigation issues. These results confirmed the importance of accessible design and development. Interestingly, solving problems for minorities often solves them for the majority as well.
It is worth noting that we also conducted other tests internally with our app's users. This process, including the approach to testing, is described in another article; the "Quality assurance" part in particular describes the approach I decided to use in this situation:
"We divided our tests into phases:
- Firstly, two testers checked if the application worked properly in the office and if not, they reported anomalies.
- Secondly, testers went out on the streets. The app might behave differently in realistic and dynamic conditions, and other aspects of the app, like view or functionality, could have had a bigger impact. Testers rode with our cab drivers, holding phones in their hands and watching what was happening in the application.
- Thirdly, we organized a group of 60 cab drivers to check the performance of the application in all possible weather conditions on the road, during the day and at night, and on a variety of devices. Drivers could contact us by talking to us in person, calling a dedicated hotline, or sending an email. This testing proved difficult because it required analysis of various types of comments, problems, and subjective opinions. Aggregating requests and choosing which ones required attention from programmers and which should be observed further required a lot of focus and involvement from the entire team."
To conclude, by prioritizing accessibility testing and utilizing internal resources creatively, we were able to gather valuable feedback and address key issues affecting both minority and majority users.
No budget at all
First, there is no such thing as a zero budget. Your time and your team's time are already "the budget" you have. However, the situation where you lack the budget to hire an external company or consultants can be turned into a great opportunity: you can use your time to learn.
In general, I believe product managers should know how to conduct user tests and should roll up their sleeves and do it themselves from time to time. Without that, it is hard to stay in touch with the product and business reality.
What I observed is that whenever specialization kicks in, teams start to work in silos, and the overall picture becomes blurred and too high-level. User tests done by external consultants are great, but the findings are read, talked about, and then somewhat forgotten, because nobody from the team truly heard the customers, talked to them, or felt their emotions. What is on paper is a bit detached.
This opinion is based on my experience in a company where the budget fluctuated. In the first years we had the money to hire a user testing company, and we did. Later, the budget became constrained; yet, as a team, we already knew how valuable user tests are and did not want to give them up. We decided to do them ourselves and to try two different paths: internal user tests and limited tests with customers.
A) Internal user tests
As an IT team we were responsible not only for customer-facing products but also for internally developed and used software. So we were aware of our internal users, our own coworkers, and their needs and problems. Yet we based our development roadmap on general business KPIs (e.g., sales representatives' productivity KPIs, margin KPIs, and so on). Our sprints had a list of tasks to fulfill those KPIs, plus tasks regarding bugs, security, and overall improvements, but we lacked tasks about user experience.
We did not have a budget to hire a company for internal user tests (especially since this was not a high-priority problem from the business perspective). We decided to use the knowledge we had within the team. A team member with experience in user tests prepared material on what user tests are and how to prepare them, and then supervised those who wanted to learn and try their hand at it.
The user tests were treated as an internal project, with a plan and a due date. We assigned a person responsible for organizing the whole project, then asked within the team who would like to volunteer and learn how to conduct the research. Surprisingly, three team members wanted to try, which was great, as we could later compare their performance and discuss it internally.
An additional outcome of this exercise was that the team learnt not only about user tests but also about each other's personalities and capabilities. For example, the best people to talk with users were not from the development team but from the helpdesk team, as they are generally experienced in difficult conversations with users and customers!
Thanks to the internally conducted user tests, the IT team learned not only about usability improvements that the internal products needed, but also about business processes that no longer existed (yet still had not been removed from the systems) and about new ideas, which was great to know when preparing for future tasks.
In this case we also used ChatGPT, a fantastic way for team members to learn, at least to get started on the topic. We then used it to help us create a general template for a user test scenario. Having this, we added specific additional questions, as our goal was to gather users' feedback about our internal products' usability, good features, and problems.
The cost of these tests was the IT team's time, so it was not free. But the learning was valuable, and the motivational aspect of the internal knowledge sharing and learning was of even greater value.
B) Limited tests with customers
The other case was our need to validate or discard assumptions about a completely new product for customers before its official launch.
The feedback I gathered was meant to help us understand users' needs versus our own business KPIs. So the questions in the tests were not only about usability (I was not hunting solely for UX flaws) but also related to market fit and general impressions about upsell options.
I followed the general rules of conducting research, and the tests were done online. What was great about this approach, conducting research without any external parties involved, was its agility.
For example, after two test sessions I noticed that the scenario was too long, so I shortened some questions, and the next tests were smoother for participants. Another quite surprising finding was that participants missed a part where I would describe the company's mission and goals. This was particularly important to them because the company I worked for and did this research for was a green energy company with a vision of helping to fight climate change. Participants asked cautiously how our new product related to the mission and overall goals. Even more surprisingly, after hearing that context, participants were more open to talking about their needs, which meant upsell possibilities for the company. That was great knowledge to gain during usability tests.
The cost of these tests was the time of one IT team member, so again, it was not free. But the learning was valuable. The customers' feedback helped align the product roadmap. We were also assured that, in the context of this product, differences between countries were minimal and there was no need to create local versions. This outcome meant lower development and future maintenance costs.
To conclude, not only did we gather users' feedback, but this approach also turned out to have a very effective motivational aspect that brought the team together.
(Photo from my desk: work-in-progress wireframes and notes after an interview with one of the customers, first on paper, then moved to an Excel spreadsheet for further analysis.)
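If you move interview notes into a spreadsheet as described above, even a tiny script can help you see which issues come up most often. Below is a minimal sketch, assuming a hypothetical CSV export named interview_notes.csv with participant, category, and note columns; the file name and column names are illustrative, not the exact structure I used.

```python
import csv
from collections import Counter

# Hypothetical export of interview notes: one row per observation,
# with columns "participant", "category" (e.g. navigation, wording)
# and "note" (the verbatim observation).
with open("interview_notes.csv", newline="", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))

# Count how many distinct participants mentioned each category,
# which is a better signal than the raw number of notes.
mentions = Counter()
for category in {r["category"] for r in rows}:
    participants = {r["participant"] for r in rows if r["category"] == category}
    mentions[category] = len(participants)

# Print categories from most to least widely reported.
for category, count in mentions.most_common():
    print(f"{category}: reported by {count} participant(s)")
```

Counting by participants rather than by notes is a deliberate choice here: it keeps one very talkative participant from dominating the priorities.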
Lessons learnt
Lesson one: do you really need a big budget to do user tests?
I do not think you need a big budget to conduct user tests. In fact, things often work best when the budget is limited and the team gets creative, because it must find ways to gather feedback and decide what is most important to take care of.
When some budget is available, you can use it for tests with users, or use it to train your team rather than hiring external companies (or consultants) to do the tests. Your team will expand its skills (use the motivational aspect of it!), which will be useful in the future. Of course, this should be treated like any other task: it is worth planning the learning part within sprints and giving volunteers some space so they can learn.
Lesson two: listening is hard, but communicating results is even harder
Listening to users' feedback is hard, whether external consultants conduct the tests or you do them in-house. This is because, as a person who was engaged in creating the product, you are attached to it. You also know the whole backstory. "Why is the button here and not there." "Why are there no buttons." "Why did we do it this way and not that way." You know all of this, but users do not, and they can criticize every decision the product team made, which also means you. It hurts! And it is terribly hard to just listen, take notes, and not defend the product.
But what hurts even more is communicating the results internally, to the team and to the stakeholders. This part, in my opinion, is particularly hard.
First, choosing which feedback to show and which to mark as not relevant is a huge responsibility, and you can spend hours just pondering what to choose.
But what comes after that is even harder. How do you present results so the meeting or email is not skipped or boring? How do you explain results in a way the business will want to listen to, especially if they show the company needs to rethink its approach or strategy? How do you convince people the feedback is genuine and relevant, even though you tested with only a few users, not thousands? How do you back up qualitative user tests with quantitative data?
A lot of "hows" to consider. Finding the answers is very satisfying, though.
Lesson three: there are many ways to gather your users' and customers' feedback
Talk internally with other teams, including customer care, the call center, sales, helpdesk, marketing, etc. Not only will you gather feedback about customers, but also about the test process itself, and you will get a better sense of how to test :) Furthermore, you will get to know your company, the goals of other teams, and the people in general much better.
I found it especially useful to talk to helpdesks and call centers, asking about the problems users report. This was a useful source of knowledge regardless of whether I was looking for feedback from the company's customers or gathering data about internal products used by coworkers. These two places are where people tend to report the problems they face, and checking all the tickets is a great way to make sure you do not miss an important issue.
To wrap up
Agile and fast-moving teams sometimes need a stumbling block to slow down a bit. The idea is that encountering an obstacle can sometimes be beneficial as it forces you to pause or proceed more cautiously.
In software development, I have noticed that user tests can sometimes serve as this kind of "stumbling block", forcing the whole team to think about the whole product as well as the tested parts.
Food for thought :).
| Key | Value |
|---|---|
| Title | Balancing budget for user testing in product management: three case studies |
| Author | Małecka, Katarzyna |
| Publisher | Mind the Product |
| Year | 2024 |
| Link | https://www.mindtheproduct.com/balancing-budget-for-user-testing-in-product-management-three-case-studies/ |