Chapter 4. Experiencing Your Own Product
[T]he designer of a new system must not only be the implementor and the first large-scale user; the designer should also write the first user manual… If I had not participated fully in all these activities, literally hundreds of improvements would never have been made, because I would never have thought of them or perceived why they were important.
Donald Knuth
New users are frustrating beasts, like skittish deer. They are fickle and flighty and may bolt at the slightest snag in your product funnel. And how to even spot one in the first place? How to track them down and get them to stand still long enough to understand your product vision and provide you with helpful feedback? It’s much easier for most engineers to stay in their code editors and churn out more code than it is to actually seek out users and get feedback.
In this chapter, I’m going to let you off the hook a bit, to focus on getting feedback from the users that are easiest to pin down—yourself, your teammates, and others at your company. These people share in the success of your product and are pre-incentivized to help you out. Then, in [Link to Come] and [Link to Come], we’ll dive more into learning from the most important users—those in the broader world.
One of the most frequent dilemmas I face is how to divide my time between talking to users and leaning on my own insights and those of my teammates. If we know the scenarios, including the target personas and the simulations of their use cases, like an all-seeing crystal ball, shouldn’t we be able to work everything out without slowing down to talk to users? Or perhaps we simply shouldn’t think too hard, instead quickly getting iterations of our product out to users and hearing their feedback.
Facebook found early success with a "move fast and break things" approach because they had a large, forgiving user base and the most powerful software distribution mechanism ever invented: the news feed. They could ship experimental products (polls, game invites, events, groups, videos, and so on) because it was extremely easy to get them out to users and collect metrics and feedback. Many of these experiments failed, but users forgave the failures. Other experiments were massive hits that more than made up for the duds.
On the other end of the spectrum, self-driving car startups Waymo and Cruise iterated for years before shipping anything to customers. They hired professionals to mind their cars while collecting data until they had several nines of reliability.
These approaches have very little in common. The one shared aspect is this: they were both validating their products.
Note
Validate your product.
Validating your product means answering the main open concerns about it and also discovering problems you didn’t think of. Validation ranges from user surveys to testing to feedback from real-world usage.
Different types of validations have different desirable aspects:
1. Simulates intended real-world usage.
2. Simulates edge cases and subjects the product to unintended pressure.
3. Gets feedback from a target persona.
4. Can be obtained early in the product cycle.
5. Is cheap to obtain.
6. Doesn’t have adverse consequences such as harming users or eroding their trust.
Various strategies excel on different axes. Sharing storyboards with one’s target persona excels at #3, #4, and #6, but it can be hard to round up folks and get thoughtful feedback, and storyboards won’t often get at important details. Shipping prototypes to real users is great at #1 and #3, since it’s particularly good at surfacing problems you didn’t even think to worry about, “unknown unknowns.” However, it can be quite expensive and can take a long time to get to working prototypes. Different types of automated testing are often good at #5 and #6, and have other advantages depending on the type of testing, which I’ll explore below.
In this chapter, I want to focus on validation techniques that excel on the “early, cheap, and harmless” axes. These happen when we, as product designers and teammates, engage with and use our own products.
Note
Be your own first customer. This will validate your product early, cheaply, and harmlessly.
When team members use their own product, it’s called dogfooding (as in "eating your own dog food"), a term popularized within Microsoft as it developed a culture of using the latest versions of its own operating systems and compilers. Dogfooding is effective because it gets feedback from users who are incentivized to help the company succeed.
Dogfooding takes a surprising number of forms. If you’re an agritech company, it may or may not be practical to own your own farmland to test out your products. But maybe you can simulate those conditions in a laboratory or on a computer.
There’s an underappreciated dogfooding aspect to writing documentation and tests that I will highlight as I survey three types of dogfooding:
-
Write scenario tests and realistic samples that exercise the product, mirroring real world usage.
-
By virtue of writing usage guides, you will lay out what it’s like to use your product. If you’re attentive, this will give advance warning of problems.
-
Write, or get teammates to write, “friction logs” detailing the experience of dogfooding. These will both motivate you to dogfood and produce more useful output.
Figure out the mix of these techniques that will cheaply validate your product every step of the way. Apply so-called “product pressure” at all times to what you have in the works. You’ll reduce risks and ensure that the feedback you get from users couldn’t have easily been figured out on your own. Let me sell you on a few.
Tests and samples as dogfooding
Writing tests is one of the best ways of experiencing your own product once you’re ready to write code. One of the tenets of Test-Driven Development, as espoused by Kent Beck, is that tests are the boss. Rather than writing code and then writing the test to pass based on what you’ve written, write the tests before you’re biased by what you did. From a product perspective, this helps you stay closer to the user scenarios. You’ll work harder not to pervert your product goals by what was easy to implement.
In this section, I want to show you how to apply tests as dogfooding opportunities. We’ll use them to verify correctness in key scenarios, polish edge cases, and document the intended usage. In doing so we’ll force ourselves to think about what we’re building. Importantly, because tests must be maintained, they are uniquely powerful and long-lived among the dogfooding techniques.
What kinds of tests should you write?
So we can all get on the same page, I’m going to define several types of tests you may have heard of. These test types sit on a spectrum from “system tests” to “product tests,” in the sense that some are more targeted at finding infrastructure issues and others at finding common user-facing problems.
-
Unit tests test small bits of code. They are used early during development to cheaply probe edge cases and encode contracts. They run very quickly and are great for the innermost development loop. However, being so low-level makes them of limited value for product testing: each mock moves them further from what users actually experience. Because they aren’t being “tugged on” by the product puppeteer, they are the most susceptible to author bias and can end up testing irrelevant implementation details.
-
Functional tests focus on product behavior of small features of your product. They may use mocks, or preferably fakes, to simulate dependencies like other services and databases, limiting the degree to which system issues can make the tests slow and flaky, but also limiting their comprehensiveness.
-
Scenario tests are like functional tests, except they are indexed by user scenario rather than by feature and thus may hit multiple features. They ensure that whole user tasks can be completed, but again they may fake out dependencies.
-
Integration tests recognize that it’s often the interactions between components that are the most fraught. They deliberately do not stub out many dependencies so as to better sniff out system problems. However, they don’t emphasize product scenarios in the same way that scenario tests do.
-
End-to-end (e2e) tests are the most comprehensive automated tests of user-facing behavior, and they can also find unforeseen system integration problems. They exercise complete user scenarios and don’t stub out anything. They are relatively expensive to maintain and run, and can be flaky, so they tend to be reserved for the most critical scenarios. Tested samples are a form of e2e testing that also serves as documentation for your users.
-
In user acceptance testing, real users—you know, creatures with sensory organs—are putting your product through its paces and reporting problems. I’ll discuss this further when I introduce friction logging.
Your reaction might be: “Um. I don’t do all these things. I don’t even do most of them.” Which tests your team should emphasize is a team-specific decision, based on which types of tests offer the biggest bang for your buck.
To help sort through it, we can boil down the problems these tests address into system risks and product risks. Figure 4-1 roughly plots the relative strengths and weaknesses:
The effort of writing each type of test is proportional to the distance of its dot from the origin. End-to-end and user acceptance tests are the most challenging and comprehensive; unit tests are the easiest and least thorough.
Maintenance has different considerations. The more comprehensive tests are more challenging to debug because they cast a wider net. That said, unit tests that hinge on implementation details often end up needing frequent maintenance as the codebase evolves, whereas e2e tests tend to stay evergreen if the product stays backwards compatible. Then again, overly fussy e2e tests, for example ones that make very specific assertions about HTML output or enforce tight timeouts, can be challenging to maintain, too. There’s one general rule to keep in mind:
Note
All other things being equal, the more a test asserts behaviors that directly matter to your users, the easier it is to maintain.
Okay, so we’ve thought about the “buck,” but what about the “bang?” What you get out of tests depends on where most of the risks are for your product:
-
If you are writing infrastructure that is a drop-in replacement for older infrastructure, you may want to focus more on tests high on the system axis such as integration tests and functional tests. You might be able to rely on existing tests in your codebase to do most of the product testing.
-
If you’re writing novel infrastructure that hasn’t been adopted by anyone yet, you’ve got both product risks and system risks galore, and will need a spectrum of test types, including end-to-end testing.
-
Suppose instead you’re integrating widely used open source infrastructure and wrapping it in APIs that adapt it to the narrower purposes at your company. The biggest challenges are probably integrations, meaning product risks likely outweigh system risks. You might focus on scenario tests with some functional testing.
-
If you’re building a novel product, unremarkable in its system complexity, using hardened infrastructure and best practices or “paved roads” at a mature tech company, you can focus more on product risks. Lean on scenario tests.
As you can see, many situations call for applying product pressure. I don’t want to undersell system testing, but since this is a book on product thinking, I’m going to focus on scenario tests, end-to-end tests, user acceptance tests, and finally tested samples, a special form of e2e testing.
Scenario tests as automated dogfooding
Scenario tests are designed to capture real-world product use cases in test form. They are quite likely to catch important bugs simply because they represent what users will actually do. You can easily spot missing scenario tests when you encounter basic bugs. I recently used a food-tracking app that doesn’t trim whitespace from its food search terms. Suppose you search for “pot” and autocomplete to “potato ” (autocompletes add a trailing space so you can start typing the next word); the app won’t find “potato” because of the whitespace mismatch. A scenario test or user acceptance test would have caught this. And I would have been more likely to keep using the app.
Such tests are also a great way to document what your product is supposed to do. If you’re using scenario-driven design ([Link to Come]), they can directly capture the scenarios you discovered during design and validate that your code achieves what you set out to build. Otherwise, to write a test, simply start with a user flow and translate that into a test.
So, how is that different from a functional test? While both test user-facing functionality, scenario tests are indexed by use case rather than by feature. A functional test for the food searches would not have contemplated an autocompletion, but a scenario test author, thinking through the user experience, would find testing autocompleted input a fairly straightforward idea.
Here are some examples.
Suppose you’re building a searchable dashboard that displays recent workflow jobs run in your internal infrastructure. People can search for jobs by owning team, completion time, and current status—completed, running, failed, etc. The default query when people land on the page yields the fifty most recent workflows, newest first. Buttons allow people to go to the next and previous set of results.
Here’s a set of functional tests, assuming the system has been fed some test data:
-
Check that various types of queries return correct results.
-
Test for edge cases and make sure users get good diagnostics.
-
Test that queries paginate properly when given a hardcoded cursor, as sketched below.
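For instance, here’s a minimal pytest-style sketch of that pagination check; query_workflows and the seeded_db fixture are hypothetical stand-ins for your real query API and test data:

# Hypothetical functional test of pagination against a hardcoded cursor.
def test_pagination_with_hardcoded_cursor(seeded_db):
    # A cursor value baked into the test data, rather than one obtained
    # by clicking through the UI (that's the scenario test's job).
    cursor = "2024-01-15T00:00:00Z"
    page = query_workflows(db=seeded_db, limit=50, cursor=cursor)
    assert len(page.results) == 50
    # Results are newest-first, so everything should predate the cursor.
    assert all(r.started_at <= cursor for r in page.results)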
A set of scenario tests focuses less on edge cases (unless those are very common) and more on common usage:
-
Run the specific query used on the landing page, since it’s by far the most frequent. Make sure it’s correct and completes within a latency budget so that the page loads fast.
-
Select a team name from a dropdown menu and then add a search filter on it. This test verifies that, thanks to your dropdown menu, you aren’t assuming users magically know which team identifiers to search for.
-
Pull the click target from the “Next page” button and feed it directly into a paginated query. This tests the next-page button, paginated queries, and the combination of the two. Testing them individually wouldn’t ensure that the link rendered on the page actually leads to the right page transition.
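Stitched together as a pytest-style sketch, the landing-page and next-page scenarios might look like this; render_landing_page, follow_link, and the 500 ms latency budget are hypothetical stand-ins:

# Scenario: an engineer checks on last night's workflows, then pages
# back to find an older run. All helpers here are hypothetical.
def test_engineer_reviews_recent_workflows(seeded_db):
    # Landing on the page runs the default query: fifty newest workflows.
    page = render_landing_page(db=seeded_db)
    assert page.load_time_ms < 500  # assumed latency budget
    assert len(page.rows) == 50

    # Feed the actual click target of the "Next page" button back in, so
    # the rendered link and the paginated query are tested together.
    next_page = follow_link(page.next_button.href, db=seeded_db)
    assert next_page.rows[0].started_at <= page.rows[-1].started_at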
One way to think about scenario tests is “the product equivalent of integration tests”; that is, integrating or chaining various user actions together and making sure it works holistically.
Scenario tests eliminate the most important product bugs, as well as some system bugs. Use them in conjunction with functional tests, which round out your product tests by being more thorough. To save time, sometimes you can skip functional tests in favor of scenario tests. For example, the “landing page” and “next page” tests may obviate dedicated functional tests on pagination.
The process of writing each of these tests may alert you to product problems in the same way that manual dogfooding does, but it spares you a lot of manual dogfooding in perpetuity and gives you confidence that you’re not regressing the most important user scenarios with each deploy.
When writing them, capture the key elements of the scenarios. Your “user” should have a plausible intent and perform a likely chain of actions. Capture that intent in the comments as a way for other engineers to learn what really matters about your product.
End-to-end tests as automated dogfooding
End-to-end tests are like scenario tests, except they encompass more of the system’s behavior.
Continuing our workflow search example, these tests might use a headless browser to simulate actual user clicks on buttons, rather than testing the functional layer underneath the UI. They may also operate on a production or QA database rather than a fake database.
These tests tend to be flakier and fewer in number, but occupy an important niche in the testing landscape for the most important scenarios.
The most important way to keep them relatively maintainable is, as I mentioned above, to stick to assertions and assumptions that reflect reality, so that your tests don’t fail for reasons you don’t care about. One of the classic blunders is hardcoding a sleep and hoping some step will have finished by the time the sleep is done, which in my experience usually leads to intermittent timeouts. Better to receive a signal when the step is done, or at least to poll in a loop.
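For example, here’s a minimal polling helper along those lines; the get_status call is a hypothetical stand-in for however your system reports completion:

import time

def wait_until_complete(job_id, timeout_s=60.0, poll_interval_s=0.5):
    """Poll for completion instead of hardcoding a sleep."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if get_status(job_id) == "completed":  # hypothetical status call
            return
        time.sleep(poll_interval_s)
    raise TimeoutError(f"job {job_id} did not complete within {timeout_s}s")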
User acceptance testing as dogfooding
In user acceptance testing, you give employees or volunteers test scenarios to run through in pre-production environments, and then require those tests to pass before shipping. I haven’t seen this form of testing used formally very often. One place I’ve observed it was internationalization and localization, where speakers of different languages in different locales verify product behavior across a variety of circumstances that would be too dizzying for engineers to test themselves. I’ve also seen it required for compliance purposes.
Needless to say, this form of testing can be slow and expensive, but it can compensate for deficiencies in automated test frameworks.
More often, I see people mining user feedback in other ways, which I’ll explore in [Link to Come].
Samples
As I transition to talking about documentation, I want to consider tested samples, which are a hybrid of end-to-end tests and docs.
Writing samples is a great way to let your users know what to do. They extend your product thinking beyond the bounds of your company so that outside users can benefit.
Here are a few tips for writing useful samples, supposing your product is a spreadsheet program like Microsoft Excel or Google Sheets.
-
Think beyond “Hello world” and show samples that accomplish actual user needs, such as a budgeting template. This is both useful to sample readers and a great way to dogfood.
-
Subject all of your samples to automated tests so they stay working for users. Make sure your template loads without errors, plug in some numbers, and check the output (see the sketch after this list).
-
Be specific, using concrete scenarios and language. Prefill your data with a software company like “Initech” and plausible purchases like red staplers and roach killer. Users have a much easier time extrapolating from concrete things than they do disambiguating abstract terms like “item 1” or “foo.” This is why we prefer “Hello, world!” to “A message.”
-
Capture why you made certain choices in your samples so you can teach users, just as you would in good code comments.
-
Think about the discovery scenario—how will users find the right sample given a problem they are facing? You could list your template in a template catalog, and if you have a lot of templates, file it under “accounting” or “finance.”
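To illustrate the testing tip above, here’s a sketch of an automated check on a budgeting template; load_template and the cell accessors are stand-ins for whatever automation interface your spreadsheet product exposes:

# Hypothetical end-to-end check that the budgeting template keeps working.
def test_budget_template_totals_expenses():
    sheet = load_template("budgeting")  # stand-in for your template loader
    sheet.set_cell("B2", 1200)  # rent
    sheet.set_cell("B3", 300)   # groceries
    # The template's "total expenses" formula should reflect the inputs.
    assert sheet.get_cell("B10") == 1500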
The lessons in the next section on documentation will also help you write good samples.
Writing documentation
Have you ever heard the proverb, “the best way to learn something is to teach it?” That’s true of teaching products as well, and documentation is one of the most effective ways to do that. Writing down how your product works will help you see how it looks to your users and therefore how to improve it.
Even if your product is simple and doesn’t ship documentation, writing down the steps to accomplish tasks helps you internalize what your users will be facing as they interact with your interface. You’ll notice problems and improve the product. To the extent that the last section was about Test-Driven Development, this one will be about a topic I’ve found equally useful: Documentation-Driven Development. Drafts of documentation can be written earlier and more cheaply than tests—yes, before the product is ready—making them a rich source of insights.
Product documentation isn’t merely a means to an end, and alas, there are a lot of bad docs out there. We can’t just blame technical writers, even if we are lucky enough to have them on staff. Often only engineers, and sometimes product managers, know the ins and outs of our products well enough to make the key decisions about what to call attention to.
Of course, there are many important aspects to technical writing, such as grammar, tone, and flow. Rather than rehashing good books about them, I’m going to be more targeted. I’ll help you apply product thinking to create useful and discoverable docs that answer the right questions at the right times. You’ll use the same techniques as usual: thinking outside-in and internalizing the various scenarios that readers face.
Scenarios for documentation readers
Software engineers, to the extent that they write documentation at all, naturally produce reference documentation, which in its rawest form is just code comments. It’s easy to take what you’ve built, go feature by feature, and explain those features, perhaps even humble-bragging about the smart decisions you made when you implemented them.
But how often do you actually use reference guides? Of those instances, how many times did you try other types of documentation before you fell back on them?
I took my first university programming course in C++. The textbook was the third edition of Bjarne Stroustrup’s The C++ Programming Language, a comprehensive overview of the language. I was just trying to stay afloat while learning the language, and instead I got chapter after chapter of the author showing off all these cool and esoteric features of the language.
I’m actually not sure if it was a good book. Perhaps as a seasoned engineer with years of C++ experience, I might have gotten more out of it, but it was not useful for my persona then—an undergrad computer science major learning C++ from scratch. I wish my professor had had more of a concept of personas and made another choice. I wouldn’t be surprised if many promising software engineers have dropped out of Computer Science because of poorly targeted learning materials. A product as complicated as C++ begs for a careful introduction.
The book was essentially a reference guide when I needed a getting started guide. I needed to understand C++, the product.
Note
In your documentation, present the product, not the system.
Reference guides have a place, especially for complex products. But otherwise, it’s better to be “self-documenting” such that once a user has contact with a particular piece, they can figure it out without docs.
What users need most is all the stuff that’s not described on your product surface. They need to know how the pieces fit together, where to start, and what to do if something goes wrong. In other words, users need you to be editorial. Which bits really matter to most users? What are the best practices?
Especially if you’re working on a technical product, your docs people may not be equipped to understand the product well enough to be editorial. They can often help you structure your docs, improve your technical writing, and fit the house style, but they are less likely to help you understand user scenarios and make good choices about what information to present.
What, in particular, do users want? Think about a time when you’ve recently needed documentation, or needed to ask a chatbot or a coworker for advice, and I’d wager your scenario would fall into one of four categories:
-
Discovery - you know what your goal is, but not which product or feature to use to do it. Your persona is often a “new user.”
-
Usage - you know which product or feature to use, but are struggling to use it correctly. You may be a new user or an experienced one if you’re trying to use advanced features.
-
Learning - you want to deepen your knowledge of the product, learning reusable concepts that make you more effective. You likely intend to become a sticky user, if you aren’t one already.
-
Operations - something has gone wrong, or is about to go wrong, and you need to recover. You have very likely never encountered this failure mode before, so you are on unfamiliar ground.
Each of these scenarios requires different types of docs, as we will see below.
Discovery
When users are discovering your product, they want to map their scenario onto it.
First, they want to know if they are the target persona. Is your product “for them”? This offers them assurance that, as they work through its details, your design choices will work for them.
One of the most common and effective ways to do this is to provide a “persona menu” on your app or website. Today, when I go to lyft.com, a ride-hailing site, I see headings for “Rider,” “Driver,” and “Business.” These immediately signal that I’m welcome if I fall into one of these personas, and I can click into more specific documentation to learn more.
Here’s another common one: users at large companies will want to know whether you’re going after large enterprises. Talking about a specific compliance or security feature, which they may or may not know or care about, might not help as much as knowing that, in general, you’re building for enterprises and addressing the types of challenges that large companies face. Then, within that context, list the specific features as examples.
Note
Make it clear which personas you are targeting and tailor documentation to each persona.
Usage
Users who’ve moved beyond the “persona” stage of discovery may struggle to find the relevant corner of your product. A user of presentation software who wants to make items on slides appear when they click a button needs “animations,” though new users would probably not come up with that term. For this reason, Microsoft PowerPoint’s documentation has a section called “Animate or make words appear one line at a time.” Web searches for “make words appear” will now link to those docs.
Docs like those start with the use case and link to the features and steps needed to accomplish it. That makes them usage guides. Usage guides bridge the gap between what the user wants and what they’re supposed to do—the what and the how.
Categorize your most prominent use cases and create usage guides for the most important ones. The most classic and important usage guide is a getting started guide that shepherds users through initial setup and download steps and gets them to the “hello world” stage. Referring back to the Lyft example, once users realize which persona they are, they click into the corresponding getting started guide.
Build usage guides for other important scenarios as well. Alongside or in lieu of templates and samples, they are an effective way to help users accomplish key tasks. They show your users best practices and showcase features users may not have known about.
In the same way that scenario tests aren’t going to cover every feature in detail like functional tests do, usage guides won’t cover all the details either. Usage guides can link out to reference guides which cover the product feature by feature or concept by concept. Reference guides can talk details, corner cases, and advanced usage.
Learning
In the intro, I wrote that users want to allocate as little brain space as possible to learning your product. Usage guides help them because they can follow instructions without detailed memorization work.
But there are times when users need to take a step back and actually learn things. When I was recently attempting to build a new Docker container, I got hung up trying to accomplish my goal, and usage guides weren’t helping. I realized I needed to understand Docker better to make further progress, without knowing exactly what I was missing. So I read through a conceptual guide that presented important Docker primitives. I learned something very valuable for my use case—that Docker containers could nest within one another. Knowing this, the solution to my problem became straightforward.
This nicely illustrates that conceptual guides are sometimes necessary, but users consider them a backup plan. Unless users are students who want to learn your product to pass a test or get a certification, they are trying to use it to accomplish something. Thus, they only slip into “learning” mode when absolutely necessary, and the transition can be painful because it’s not always clear when that point has been reached.
As with all documentation, you need to be able to tell a story of how users will find your conceptual guide. Link to it liberally from usage guides and reference guides.
Note
Conceptual guides help users with primitives or concepts that they will use multiple times throughout your product.
Operations
Some of the most critical user scenarios arise when something goes wrong and users need to get it working again. These frustrating, emotionally charged situations can make or break the user’s day. They are memorable to boot, so they also shape the public perception of your product.
Ideally, your product itself documents what to do via actionable error messages as I discussed in Chapter 3. Okay, okay, even more ideally, you can design your product to avoid errors in the first place. Barring such unicorns and rainbows, you need operational guides.
Operational guides encompass troubleshooting guides, runbooks, and maintainer’s guides, depending on the domain of your product.
Here are a few quick tips for writing and organizing operational guides:
-
Index them by failure scenario. This can be phrased as “If X is happening…” or provided via an error code.
-
Make them discoverable, such as via search engines or page searches. Use terms likely to be searched for, and include synonyms.
-
Make them deep-linkable so that support personnel, error messages, and automated bot responses can link to remediations for specific problems. And keep those links stable!
-
Provide an easy way for users to give feedback on the guides, or even contribute to them, so that they can be continually improved. “Did this help?” style widgets are a simple example of such a mechanism.
The web of documentation
Learning should take place when it is needed, when the learner is interested.
Don Norman
Just as there are multiple types of tests, documentation is multi-faceted. Marketing docs help users determine their persona. Usage guides help people discover and use features, whereas reference guides help them once they are ready for details—and not before. Conceptual guides teach them the core primitives they need to become experts, and runbooks help when they’re having a bad day.
And yet, users seamlessly glide between these scenarios as they do their work. Discovery becomes usage once they settle on an approach. Usage becomes operations as soon as something goes wrong. They switch from usage to learning as soon as they can’t figure out how to do what they want from your usage guides and need to arrive at it from first principles.
Therefore, the most important organizing principle of docs is that they should be heavily interlinked with one another and from your product. Build “trails of breadcrumbs” that help users escalate from usage or troubleshooting scenarios to learning modes, and from problems they are facing to solutions. Usage guides should link to conceptual guides when referring to important concepts. Conceptual guides should link to illustrative samples. And so forth.
Note
Build a web of interlinked documentation.
Writing documentation as dogfooding
Imagine you’re writing an open source developer tool and you’re authoring a getting started guide telling people how to install your software package. You’ve set up a fresh environment and are installing it from scratch on your machine. You get a failure because you didn’t set up an environment variable properly. As you write down instructions for users, you realize they face a choice as to what to set the variable to. You decide to fold this into your installation script by asking users a question from which you derive the correct answer, and you delete the step from your guide.
Note
Measure the quality of your product by how short the best docs would be.
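Returning to the installer example, the folded-in question might look something like this sketch, in which the prompt, paths, variable name, and rc file are all made up for illustration:

import os

def configure_data_dir():
    # Ask the question the docs used to ask, then derive the setting
    # ourselves instead of documenting a manual step.
    answer = input("Store data on local disk or a network share? [local/network] ")
    data_dir = "/var/lib/mytool" if answer.strip().lower() == "local" else "/mnt/shared/mytool"
    with open(os.path.expanduser("~/.mytoolrc"), "a") as rc:
        rc.write(f'export MYTOOL_DATA_DIR="{data_dir}"\n')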
Or perhaps you are oncall for your SaaS service and have just supported some users whose processes ran out of memory and who were struggling to debug what happened. As you create a runbook entry to document what they should do, you realize you could have avoided the problem entirely by letting them set certain resource limits for themselves. You ship the runbook entry, but then you set about demolishing the need for it by adding those configuration options.
The process of writing documentation forces us to switch perspectives and consider the user’s journey. We get annoyed when the steps we’re documenting feel tedious or hard to explain. We want nothing more than to just delete them.
If you write thoughtfully, you can channel this pain into improving the product. The main trick is to do it early when it’s still cheap to make adjustments. Here are a few examples:
-
Write discovery guides before you start designing. Amazon famously writes fake press releases at the beginning of projects that predict how they’ll communicate publicly once the product is launched. These explain to users the value proposition of the product just like a normal press release would. Adopting this “marketing-first” practice will help you understand your target persona and evaluate your product’s value to them.
-
Write usage guides early while using early versions of your product. Noting unneeded steps or tricky decisions will catch usability problems when it’s still cheap to fix them.
-
Write conceptual descriptions of key abstractions. If it’s a struggle, perhaps you need to rename or reframe. If the concept doesn’t seem like something the user should care about, maybe you need to present something higher-level.
-
Force yourself to explain to users how to avoid pitfalls that will cause them operational toil and safety problems. This will expose weaknesses, and it may even be cheaper to fix safety issues than it is to document, communicate, and support customers.
Recently, an engineer on my team was writing a usage guide for a regression testing scenario. A user would query some recent runs from production and then pass them into a regression test function to make sure they would still work with new code changes. After he wrote up the snippet, we realized that it wasn’t paginating the production runs, and in fact that pagination wasn’t exposed at all. Therefore, most people in the real world with lots of production runs couldn’t easily accomplish that scenario. He decided to add pagination support and then come back to ship the guide.
Note that we already had reference documentation for both of the functions involved; it wasn’t until we wrote a proper guide that we realized they didn’t link up well and gained conviction that it was high priority to fix.
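To make that concrete, here’s a hedged sketch of what the repaired guide snippet might have looked like once pagination was exposed; fetch_production_runs, run_regression_tests, and the cursor fields are hypothetical stand-ins, not our actual API:

# Hypothetical guide snippet after pagination support was added: fetch
# *all* recent production runs, page by page, then regression-test them.
runs, cursor = [], None
while True:
    page = fetch_production_runs(since_days=7, cursor=cursor)  # stand-in API
    runs.extend(page.runs)
    cursor = page.next_cursor
    if cursor is None:  # no more pages
        break

run_regression_tests(runs)  # fails if any run breaks under the new code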
This kind of Documentation-Driven Development can be integrated early and deeply into your development process. Let the docs dictate what goes into a milestone. “We’ll know this product is ready when we can write this usage guide or that sample.”
Friction logging
Okay, we’ve delayed actually getting users to use your product long enough. We’ve written scenario tests and authored guides, often while using early versions of our product to make sure we’re doing it properly. But even the best product thinkers will be frequently surprised by what other humans get up to.
Proper dogfooding means using software developed by others at your own company or team, and friction logging is a great way to go about it.
A friction log is an informal document that you write as you use a product, recording any confusion or roadblocks you experience. When you’ve finished, you hand it off to the engineers and PMs responsible for the product. In the next two subsections, I’ll explain how to write a friction log and then talk about how you can leverage others to friction-log your product.
Writing friction logs
A friction log is a journal of a scenario with emphasis on the friction. Your goal is simply to report your experience with the product, which the product team will use as they see fit.
Start by introducing yourself. Note the persona you represent—are you new to the product? Or pretending to be new? What priors do you have that might inform your experience? For example, if you’re writing a friction log for an Instagram feature, your persona might be that you’re new to the platform but you’ve used TikTok before.
Next, express your intent. What are you trying to accomplish? Perhaps you’re trying to edit and post a silent Instagram Reel—a short video. This will help readers understand why you made each choice and understand when product gaps prevented you from achieving your goal. And the product team can figure out how likely it is that others will have a similar intent.
The bulk of the document will outline what you experienced. Perhaps your original video had sound, and you struggled to figure out how to turn the sound off.
Make sure you’re describing your experience rather than giving prescriptive advice—it’s likely that the team who owns the product will have a better idea of what’s possible, and they may find it easier to stomach hearing your story rather than your advice. If you do have a suggestion, you could say “I was expecting there to be a button to mute the audio like on TikTok.” This is less presumptuous than “You should add a button to mute the audio.”
Of course, it’s always helpful to praise the places where the product experience suited you. This contextualizes the friction and lets readers know what you appreciated.
When you’re done, hand it over to the team, expecting that they’ll sift through the feedback and triage any action items. (And make sure your manager knows about your good work.)
Why does the practice of friction logging improve dogfooding? In many companies, dogfooding is an aimless pursuit with no concrete outcome. Without specific norms, providing feedback to a team can feel aggressive or critical, and it can be unclear whether you’re expecting them to drop what they’re doing and fix your problem. If you nitpick, you may worry that others will find you pedantic. Unclear expectations heighten worries about offending people.
It’s easy to feel anxious about giving product feedback, especially if you’re not on the team or are new to it. Friction logging is a safe space in which you are free to comment on and nitpick others’ products. If it’s already part of your company culture, you’ll be in good company. If it’s not (yet), leverage the fact that it’s a predefined format you can find on the internet. When I introduced friction logging to my current company, I referred to existing blog posts as a way to explain what I was doing, and it was well received. Soon after, I noticed others starting to write friction logs.
Receiving friction logs
It’s very useful to receive friction logs. Knowing the ground truth of how a user experienced your product is gold.
But how do you get other people to engage with your product? Even though friction logging can be fun, people are busy and they won’t do it for nothing in return. Here are some people to consider:
-
Ask your early adopters to write friction logs.
-
Invite new teammates to friction log as a way to ramp up on the details of the product.
-
Suggest that managers and executives friction log as a way of staying grounded as the product develops.
And when you get the feedback, don’t be defensive, and treat it as a gift:
-
As mentioned above, don’t ask people to open tickets or follow any kind of process. That creates hoops to jump through and makes people hesitant to spend the time. It also sends a signal that their feedback isn’t worth your time.
-
Close the loop. If somebody’s friction log results in an improvement, let them know, ideally in a way that their manager can also take note of.
-
If you or your team are feeling overwhelmed, don’t shoot the messenger. Deal with the time crunch when triaging the product improvements implied by the feedback.
Recap
Friction logging is a fun and productive way to engage in dogfooding, and it’s something you should solicit as part of your product process.
Of course, it’s just one element of getting feedback from users, as not all of your customers are going to be willing or able to write a complete friction log. You’ll learn how to get feedback from your broader user community in more depth in [Link to Come].
Chapter summary
Throughout product design and development, seek ways to use your product and put “product pressure” on it, ensuring it will stand up to real world usage.
Testing, documentation, and friction logging are all effective ways to do this.
Not all tests are created equal—some are better suited for product testing, others for system testing. Scenario and end-to-end tests reliably document your product and ensure it continues to function, while functional tests round out your test coverage.
Similarly, not all documentation serves the same purpose. While reference docs act as a useful fallback, usage, conceptual, and operator guides are better aligned with key user scenarios. These docs should be interconnected to guide users no matter where their journey leads.
Finally, friction logging is an excellent way to get actionable and candid experiential feedback from your coworkers.
Exercises
-
Choose a Wikipedia page you’re interested in, and a line from that page. Find out who the original author of that line is and record your experience with a friction log.
-
Read the friction log I wrote in answer to the first question. Suppose you were new to the Wikipedia development team and tasked with processing that feedback and polishing the UI. What information would you gather and what improvements would you look into?
-
You’re on the WikiBlame feature crew and shoring up test coverage. (You can go to WikiBlame by picking an interesting Wikipedia page, clicking “View History”, and then selecting “Find addition/removal”). Make a test plan with at least two of the most important scenario tests. Assume your test framework allows you to simulate UI interactions.
Answers
-
Here’s my friction log for trying to blame a line of Wikipedia. Make sure yours recorded a bit about your persona and intent and that it reported your experience without being too overt about suggesting concrete improvements.
I’ve mostly used Wikipedia as a reader, with very few edits to my name, so I’m very new to the power user tools. I noticed that the listing of one of the Puzzle Hunts had an issue with the links from one of the entries that I’m not sure how to fix. I’d like to contact the original editor.
I’m a software engineer used to git blame, so I’m hoping for something similar. I first clicked “View history”, but it wasn’t in a usable format, and there was no “blame” button. I thought maybe “View source” would do the trick, and looked for “blame” and similar, but that didn’t work either.
I went back to the revision history and looked more carefully. “Find edits by user” didn’t work because I didn’t know the username. At this stage, I consulted Google, who told me to click “Find addition/removal.”
On that page, entitled WikiBlame, I found a search box for a term, so I picked a phrase from the line of text I was interested in and searched for that. I found a list of entries like so:
Comparing differences in 02:48, 31 August 2005 between 210 and 211 while coming from 196:OO
Comparing differences in 04:34, 11 August 2005 between 217 and 218 while coming from 210:XX
I didn’t understand much of this, especially the numbers and the XX and OO, but at the bottom of the list, I found:
Insertion found between 02:37, 31 August 2005 and 02:41, 31 August 2005: here
Insertion sounded like the origin of the line, so I clicked the here link. I couldn’t find a user name anywhere! After a minute of searching, I found an IP address, and I realized that it meant that an anonymous user had posted the edit from that IP. (It would have been clearer if it were labeled.)
If I gave this to the Wikipedia team, they would sift through my friction log. They might ignore my feedback that there was no “blame,” reasoning that most editors aren’t software engineers. On the other hand, my confusion that the IP address was standing in for a username seems generally valid, and it might prompt them to put a label there such as Author.
-
If I were on the app team, I would tend to ignore the feedback about blame, because only the software engineer persona would resonate with that terminology, unless it’s more common among Wikipedia editors than I think. I’d be more inclined to clean up the “Comparing differences” text, which seems fairly esoteric. And labeling the IP address/username field as an author seems like a fairly straightforward usability win, especially if anonymous editors are common.
-
Here are the two tests I chose. Are the ones you picked as common, or more so?
I assume most people arrive at WikiBlame via the View History page, so I want to test that flow. I’d create a simple test page with a couple of edits, then render the corresponding View History page. I’d search the page for “Find addition/removal” and pull out the corresponding link to WikiBlame. Now I want to make sure a simple search will work, especially given all those pre-populated boxes, which seem easy to break. So I would plug a search term into the Search box and simulate a click of Start. I’d do a very light verification of the results, as I’ve probably got functional tests that check the results list more thoroughly.
I imagine the most popular pages are also very popular for people to use WikiBlame, so I want to test scalability. I’d look at how many edits there were to a page like Taylor Swift and programmatically generate a test page with a similarly long edit history and edits scattered throughout the document. Then I’d initiate a search query on that page and make sure it completes within the timeout. I’d also check the results and make sure the correct number of edits for that term were returned.