Collaborate Better Handbook

Understanding Your Customers

Run in the right direction

by Aarron Walter


Working together to launch successful products feels like a relay race. Each team carries the baton forward swiftly, striving to make your company the first to market with a new product. But being first won’t guarantee success. What if your product isn’t what customers need or want? What if you’ve been running the race in the wrong direction?

This is how Laura Martini, a staff UX designer at Google and one of our first guests on the podcast, thinks about customer research. “You win a race at the finish line,” she told us, “not the starting block.” Though most of us instinctively feel like we should start building immediately, taking time up front to understand our customers improves our odds of reaching the finish line with a product customers will love.

In business, speed is important, but speed on the wrong trajectory only carries you further from your goal.

We heard a similar perspective when we spoke with venture capitalists like Vas Natarajan, a partner at Accel. According to Vas, understanding customers is not only essential to success in the market, but it also informs his investing decisions: 

What we wanna see at this early stage is a product hypothesis that’s been clearly informed by talking to end users. There’s a substantial difference between companies that pitch to us having done no customer research and those that have done a hundred customer interviews. Design will ultimately come from this deeper company value of wanting to put customers first. That’s really what we’re looking for when choosing to invest. —Vas Natarajan, partner at Accel

As we learn about customers and their behaviors, we can better understand and potentially anticipate their needs. This information helps de-risk investments of time and money in a new product by increasing the likelihood that the product you bring to market will attract customers.

There are many ways to learn about customers, but all fall into two categories, which we’ll look at next.

Categories of customer research

In his 1963 book Informal Sociology: A Casual Introduction to Sociological Thinking, sociologist William Bruce Cameron wrote that “not everything that can be counted counts, and not everything that counts can be counted.” He was speaking to the challenge of boiling down sociological research into a tidy, definitive summation. But when studying people, some things we can easily count (how many times they visited a web page, how long they stayed in your app, how many minutes it took them to complete a task), whereas other things (love of a product or trust inspired by your homepage) are more difficult to quantify. Whether countable or not, it’s all data and it’s all potentially informative for teams working on customer-centric products. 

Data we can sum up mathematically is called quantitative data. Data that can’t be counted, because it describes the quality of an experience, is referred to as qualitative data.

Let’s look at how the two differ and when they’re most useful.

We can sum up the difference between these data types very succinctly: quantitative data shows us what customers do (behavior), and qualitative data shows us why they do it (motivations).

Irene Au, an early design leader at Google who championed customer research in the company, occasionally got pushback from colleagues on qualitative research. 

Here’s how she handled it:

As much as people want to see data, qualitative user research is another form of data. I’ve heard people dispute the validity of qualitative user research, because it’s not large-scale. It’s not “scientific.” But qualitative user research tells you the why behind the what and it’s just as valuable. It tells a deeper story beyond just what the numbers tell you. If you only look at time spent on a particular page or a feature, you don’t really know what’s going on. Are they spending a lot of time there because it’s really interesting and engaging? Or are they spending a lot of time on that page because they’re totally confused about what to do next? 

Qualitative research can tell you a deeper story around what’s going on there. People might be skeptical initially; I think people are very easily turned on and engaged when they start to see the compelling stories. Whether that’s in the form of videos, or taking people out into the field and having them witness it themselves. That’s the approach that I’ve always taken, is to engage stakeholders very personally so that they can relate to the people that they’re building for. —Irene Au, formerly of Google

The clearest picture of your customer emerges when quantitative and qualitative research methods are combined to help you see what customers are doing and why. Used alone, either one of these research types can mislead us.

So what types of research methodologies should you use to better understand your customers? We’ve written about this topic in our free book Principles of Product Design. It’s a useful resource if you want to dive deeper. 

Customer interviews and usability testing are staples in a researcher’s toolbox, but as Eli and I have learned from our guests, some research methods present thorny problems that you should be aware of.

Problematic customer research methods

Christian Madsbjerg, cofounder of ReD Associates and author of The Moment of Clarity, has strong opinions about how we should conduct customer research, opinions forged through decades of work with top global brands. As he told us, poor customer research can doom products to failure: “Most products fail because they’re based on the wrong set of techniques to understand people.”

User experience advocates have relied on surveys and focus groups for decades to provide insight into customers’ perspectives, but Madsbjerg told us these methods are dangerously misleading because they rely on a customer’s recollection of their experience and their ability to synthesize meaning from it. When people are not in the natural context of using your product and they’re forced to recall details that Madsbjerg calls “below the threshold of awareness,” they fill gaps in their story with assumptions and observations they think will sound good to researchers: 

If you were invited into a focus group about SUV vehicles, you would sit in that room, which isn’t the normal place for you to sit, and talk to people you would not otherwise have talked to thinking about things that you wouldn’t otherwise have thought about, and probably coming up with things to say that sound smart or reasonable in the moment, but isn’t in any way what you would actually do in the situation. That happens in focus groups, but it also happens in surveys.

If you ask people 85 questions online about what they think about this and that and the other, most of those questions refer to things that are below the threshold of awareness and where people are not able to answer with any precision or any sort of quality because they’re taken out of their natural environment. That’s a problem the original creators of surveys and focus groups observed. 

Businesses of the world are spending between $30 and $40 billion a year asking people what they think about things, even though the founders themselves of those techniques warn against doing it. They’re asking people to reflect on things that they wouldn’t otherwise have reflected on. That’s why we make so many mistakes in the business world. —Christian Madsbjerg, cofounder of ReD Associates

Surveys and focus groups appeal to us because they give us the sense that we’re getting deep customer insights quickly. And that is precisely what makes them dangerous. Erika Hall, friend of the show and author of Just Enough Research, sums up the dangers of surveys well:

It is too easy to run a survey. That is why surveys are so dangerous. They are so easy to create and so easy to distribute, and the results are so easy to tally. And our poor human brains are such that information that is easier for us to process and comprehend feels more true. This is our cognitive bias. This ease makes survey results feel true and valid, no matter how false and misleading. And that ease is hard to argue with. —Erika Hall, author of Just Enough Research

Poorly crafted surveys often ask us to recall what we thought or how we felt in the past when using a product or interacting with a company. But the human mind is poorly equipped to analyze details from the past. We get better data when we observe customers in action through contextual inquiries and usability testing. And there’s no research more compelling than firsthand observation of customers. It gives us a direct connection to the emotions they feel when using a product. 

The common criticism of observational methods of research is that they take too much time and slow down a product launch. Christian Madsbjerg acknowledged this point when we spoke with him, but pointed to other ways we might think about this:

The problem with people is that you can’t just wrestle questions out of them super fast. You have to do it at a pace that is normal for them, or for us, which means that you need to get down to the pace of people. And that means you have to observe them and you have to track them and you have to understand what’s going on in their life. And there’s a limit to how much you can speed that up. 

But I think once you understand it, you can speed up other processes. I think the development process of products is much faster if you know what you’re developing and you know how it fits into the lives of people. —Christian Madsbjerg, cofounder of ReD Associates

Product managers are often the keepers of a product’s timeline and are most mindful of where progress is slipping. So you would think that Marty Cagan, one of the most seasoned and admired product executives in Silicon Valley, would advocate keeping research brief. But you would be wrong.

Research frequency

Marty Cagan has helped many companies improve their collaboration and production processes over two decades. So we were excited to have him on the show to learn what methodology trends emerged from his support of so many teams. We were fascinated to hear how central customer research—both quantitative and qualitative—is in the guidance he offers companies. 

I’m one of those people who believes every good product team does both qualitative and quantitative research every week. I believe that the product team needs to be going to the customer more than three hours a week. And if you’re working on something intense, like a redesign or a new app, you’re probably doing it more like five to ten hours a week. Now the real question is what are you doing when you meet with them? And you could do it in lots of ways. 

You can go to their home, you can have them come to your offices or your lab, or you can meet them at a comfortable location, like a Starbucks. These are normal ways of doing qualitative testing. 

And then we have quantitative techniques, like an A/B test is the standard quantitative technique, which is definitely testing with our customers. And it’s got a huge advantage over qualitative, but it’s also got a huge disadvantage over qualitative. The quantitative techniques tell us what’s happening, but there’s really no way for them to tell us why. So the qualitative can tell us why, but don’t let it fool you. We can’t tell you if you’ve really solved the problem yet. We need to test that quantitatively. So we need both. —Marty Cagan, founder and partner at Silicon Valley Product Group

Cagan is a proponent of iterative product work that is regularly informed by customer inputs. He recommends weekly customer contact because it keeps the focus on real, not assumed, customer needs and behaviors. Research is a calibration tool. It prevents a team from wasting time on seductive but irrelevant features.
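Cagan calls the A/B test the standard quantitative technique. As a rough sketch of what "testing quantitatively" looks like in practice, here is a minimal two-proportion z-test with entirely hypothetical conversion numbers; it checks whether a variant's conversion rate differs from the control by more than chance would explain:

```python
from math import sqrt, erf

def ab_test(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: did variant B convert differently from control A?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)                # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))  # standard error of the difference
    z = (p_b - p_a) / se
    # Two-tailed p-value from the standard normal CDF: Phi(x) = 0.5 * (1 + erf(x / sqrt(2)))
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p_a, p_b, z, p_value

# Hypothetical numbers: 120 of 2,000 visitors converted on A, 165 of 2,000 on B
p_a, p_b, z, p = ab_test(120, 2000, 165, 2000)
print(f"A: {p_a:.1%}  B: {p_b:.1%}  z = {z:.2f}  p = {p:.4f}")
```

The function name and sample sizes are illustrative, not from the chapter; in practice a team would more likely rely on an experimentation platform or a statistics library than hand-rolled math. Note what the result does and doesn't say: it can tell you B converted better, but, as Cagan warns, never why.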

Taking the long view

Customer research can deliver more than just insight into the usability or desirability of a product. What about research that looks beyond existing customer cohorts and products being developed? Customer research can also give us a longer view of emerging shifts in culture and slow-moving trends that can sneak up on a company at glacial speed and put it out of business. Just as Blockbuster Video was slowly crushed by Netflix as video streaming emerged, businesses will face a day of reckoning if they don’t keep a vigilant eye on the distant horizon.

Christian Madsbjerg told us that companies like General Motors, which have significant investments in the current systems of production, sales, marketing, and purchasing experience, have to conduct additional, ongoing research to see emerging trends and distant possibilities that could affect their business. Because large companies, like an aircraft carrier, turn slowly, they need a vanguard looking well ahead for disruptions and signaling for a change in course before it’s too late. 

We’ve talked about how customer research can inform the products you’re building now and help you see far into the future, but there’s another important type of research that falls right in the middle. As Airbnb learned the hard way, it’s not to be overlooked.

Our User Research template in Freehand can help consolidate and centralize your user research findings so you can test your assumptions and discover similarities and patterns across your users.

The existential risks of not getting to know customers

Airbnb has a lot going for it. It’s a customer-focused company guided by a clear mission to create a world where anyone can belong anywhere, and it has thoughtful leadership. But despite all of these advantages, it encountered what Brian Chesky, CEO and cofounder of Airbnb, called an “existential risk” stemming from a dearth of insight into how the company’s platform was used after it was launched. 

The founders of Airbnb bet their whole company on the belief that people can trust one another enough to stay in one another’s homes. As Joe Gebbia, cofounder of Airbnb, explained in a TED talk about how the company designs for trust, it all starts with first impressions. 

Hosts see a guest’s name, photo, and reviews from previous Airbnb stays along with a short message providing some context for their visit. It’s a thoughtfully designed system to help defuse our stranger-danger instincts and inspire trust. As Gebbia mentions in his talk, they partnered with Stanford on a study that helped them formulate the system. But despite the deep research and good intentions that went into the system’s design, biases and discrimination surfaced. 

African American guests saw a higher rate of booking refusals from hosts compared to white guests. The hashtag #airbnbwhileblack began trending on social media with stories of racial discrimination.

The problem, Airbnb discovered, was multidimensional. But showing a guest’s full name and face in the booking request were key contributors to racially motivated refusals. The Airbnb team did their research in the product design process, but they missed a key step: following up with customers to see what outcomes resulted from the good intentions the team brought to its work. 

Eli and I spoke with Chesky about how the company responded.

The company made some big changes in both the design and the policies of the platform. To reduce racial discrimination, they withheld guest photos and last names until a booking was approved. They also required all users to accept a community compact with zero tolerance of discrimination. As a result, a million accounts were removed from the platform for not agreeing to the community compact terms.

The company took many other steps, more than can be summarized here. But it’s likely that the risk of discrimination could have been diminished if Airbnb had asked itself, “What’s the worst that could happen on our platform, and to whom?” and conducted immediate follow-up research with customer cohorts.

As Jahan Mantin and Boyuan Gao, the founders of Project Inkblot, told us on the show, good intentions can manifest in negative impact for customers if we don’t build post-launch research into our process.

The gap between good intentions and actual impact

The majority of people creating products for customers bring good intentions to the work. But the purity of your intentions can easily become misaligned with actual impact on customers if you don’t investigate how your product affects people. Gao and Mantin described the difference between intentions and impact to us like this:

Intention is personal to you and your team. It’s just something that you’re ruminating about and having conversations with people on your team about, whereas impact is how it plays out in the world with actual people. —Boyuan Gao, cofounder of Project Inkblot

Mantin and Gao advise companies to ask a simple question to consider where things could go wrong for customers:

One of the things that we always advocate for is just asking very simply the question, “What’s the worst-case scenario and on whom?” That could be a whole brainstorming exercise that you do with your team. And it’s not that we can tell the future, but it does align our brain in a different kind of way to think about extending out who we think that we’re creating these things for, and even why. It might change our entire thinking around why we’re building this thing, or even what our approach will be around it at a very, very early stage, which can potentially alleviate a lot of the later issues. —Boyuan Gao, cofounder of Project Inkblot

Gao and Mantin have created a framework called Design for Diversity to make it easier for teams to surface assumptions and biases. The framework can help reduce the gap between your good intentions and actual impact on your customers. 

Of course, spending time with customers after your product has launched, especially customers in underrepresented groups, will help you find and respond quickly to issues you didn’t foresee.

Investing in customer research early and often will help your teams create successful products and detect warning signs that a change of course may be needed. But coordinating everyone to execute together can be a significant challenge. In Chapter 5, we’ll look at how mission, vision, and principles give teams the structure they need to work independently together. 

About the Authors

Eli Woolery
Senior Director of Design Education / InVision

Eli Woolery is the Senior Director of Design Education at InVision, and co-host of the Design Better Podcast. His design career spans both physical and digital products, and he has worked with companies ranging from startups (his own and others) to Fortune 500 companies.

In addition to his background in product and industrial design, he has been a professional photographer and filmmaker. He teaches the senior capstone class Implementation to undergraduate Product Designers at Stanford University. You can find Eli on Twitter and Medium.

Aarron Walter
Author and co-host of the Design Better Podcast

Aarron Walter is the co-host of the Design Better Podcast, and author of Designing for Emotion. He was the Director of Product on the COVID Response team at Resolve to Save Lives, and prior to that, the VP of Design Education at InVision. He founded the UX practice at Mailchimp where he helped grow the product from a few thousand users to more than 10 million. He’s the author of a number of books, the latest of which is a second edition of Designing for Emotion. Aarron’s design guidance has helped the White House, the US Department of State, and dozens of major corporations, startups, and venture capital firms.

You can find Aarron on Twitter and LinkedIn.