In the past five years, the Ad Council’s digital team has conducted more than 250 studies for a wide range of campaigns with our partner Feedback Loop. These studies have complemented our other methods of research: focus groups, moderated UX research, and communications checks, to name a few.
With Feedback Loop we’ve identified five best practices for agile UX research.
1. Get curious about your audience
All great research starts with curiosity. Ask yourself: What do I think I know about my audience? What am I unsure about? Getting curious early on helps you build empathy for your audience. It also helps you identify your biases and assumptions. To learn more about cognitive biases that impact design, check out David Dylan Thomas’s book, Design for Cognitive Bias, and his podcast, The Cognitive Bias Podcast. You can also check out our interview with him.
2. Identify your assumptions
The challenge with assumptions is that you might not realize you’re making them. Developing awareness of the assumptions you make is a skill anyone can cultivate. Pressure-test what you think you know: Do I know this because of testing we did? Have we done a recent landscape review? The assumptions you identify can inform your learning objectives.
3. Define your learning objective
Your learning objective is what you want to learn from a study. Learning objectives should ladder up to higher-level goals. Consider: What decisions do you need to make based on the results? Which stakeholders need to be involved in test creation and in reviewing results?
4. Determine the best time and way to test
The project life cycle of a digital product includes different phases. To figure out the best time to test and which method to use, ask yourself: What stimulus is needed to answer my questions? Do we need to know what people think and feel? Or what they do? For a comprehensive overview of user research methods, read “When to Use Which User-Experience Research Methods” by Christian Rohrer on Nielsen Norman Group’s website.
5. Create action standards before you test
This best practice is one we’re experimenting with, thanks to Feedback Loop’s advice. Action standards are commitments to action that you will take depending on the results of your test. This could look like the team agreeing that if 20% or more of respondents are confused by the creative, you’ll create alternatives. The key is to decide in advance what threshold is acceptable for your learning objectives, and then to follow through once the results are in.
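To make the idea concrete, here is a minimal sketch of an action standard as a decision rule. The respondent data, the “confused” flag, and the 20% threshold are illustrative assumptions, not from a real Feedback Loop study; the point is that the threshold is fixed before the test runs.

```python
# Hypothetical action standard: agreed on by the team BEFORE testing.
CONFUSION_THRESHOLD = 0.20  # 20% or more confused -> create alternatives

def needs_alternatives(confused_flags: list) -> bool:
    """Return True if the share of confused respondents meets or exceeds
    the pre-agreed threshold, triggering the committed action."""
    if not confused_flags:
        return False
    confused_rate = sum(confused_flags) / len(confused_flags)
    return confused_rate >= CONFUSION_THRESHOLD

# Example: 6 of 25 respondents (24%) found the creative confusing,
# so the team's commitment kicks in.
responses = [True] * 6 + [False] * 19
print(needs_alternatives(responses))  # True
```

Because the threshold and the action are written down up front, the team can’t quietly move the goalposts after seeing the results.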
If you get curious, identify assumptions, and define your learning objectives early on, you can test earlier in the project life cycle and prevent problems that need to be fixed later (when it’s more costly and difficult to do so). I have joked before that the research cycle should be a pattern of testing, learning, acting and celebrating, because with each research-based decision, you’re creating value for your company and your audience. That’s always worth celebrating!
For more on this, check out the webinar we recently presented about our learnings, including case studies from our campaigns.