Metrics, Strategy and Getting More Satisfaction

Over the past few months I’ve been attempting to isolate metrics that best inform the context models I use for content strategy projects.

I’ve made this attempt because clients have been asking our community to quantify the value that content strategy is bringing to their annual spend. It’s also one of the things I hear new content strategists (both the independents and agency folks) asking about. We all know our work is important, but justifying it to clients is how we continue to show relevance.

Measurement and Optimization Cycle for Contextually Relevant Content Strategy

Not surprisingly, there is no measurement plan that will prove without doubt that content strategy is responsible for a site or business’ success, but based on the implementation of a few different measurement plans, I think it’s safe to say that content strategists can lean on at least four types of metrics to accurately demonstrate the fruits of their labor. They are:

1. Measures of user perception (satisfaction)
2. Task completion (user defined)
3. Measures of key business objectives (traditional metrics measurement)
4. Post visit behavior

The combination and examination of the four data sources is not only valuable; it’s crucial to the optimization of a sound communication strategy.
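For those who like to see that combination concretely, here’s a rough sketch of rolling the four sources into a single quarterly scorecard. The metric names, the 0–100 normalization and the numbers are entirely hypothetical — the point is only that the four types get examined together, not in isolation:

```python
# Hypothetical quarterly scorecard combining the four metric types.
# Scores are normalized to 0-100; names, weights and values are illustrative only.
quarter = {
    "perception": 72,       # avg. satisfaction survey score
    "task_completion": 81,  # % of users reporting task success
    "business_kpis": 64,    # index of conversions, leads, etc.
    "post_visit": 59,       # return visits, referrals, follow-up actions
}

def scorecard_summary(scores):
    """Return the overall average and the weakest area to prioritize."""
    overall = sum(scores.values()) / len(scores)
    weakest = min(scores, key=scores.get)
    return round(overall, 1), weakest

overall, weakest = scorecard_summary(quarter)
# the weakest area is where the next round of content work should focus
```

Even a toy rollup like this makes the argument of this post visible: a healthy KPI number can coexist with a sagging perception or post-visit score, and it’s the latter two that flag the long-term problems.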

We need what I typically refer to as “perception measures” to show the true value of content strategy because success isn’t as simple as providing insights into what drives improvement in business behaviors. As I’ve noted several times in past posts, content strategy has to do heavier lifting by adding contextual relevance to business goals, which equates to informing the creation of content that helps improve the bottom line AND satisfy user tasks. Task completion becomes crucial because it directly correlates to something even more valuable than quarterly gains — loyalty.

Beyond The Bottom Line

Most companies have well-established business success metrics for their websites and measure for them consistently, but few measure the quality of the site experience as a separate and distinct concept. That’s a mistake, because it’s in the qualitative measures where you can make a more informed decision on whether your content strategy is providing any real return on investment.

Without dedicated perception metrics, it’s nearly impossible to determine whether an experience actually got better or how changes in a content strategy influenced the site’s impact on business performance.

Perception metrics should reveal which aspects of the experience customers aren’t happy with and what prevents visitors from accomplishing their tasks. While we don’t get direct insight into what the exact issues and solutions might be, perception metrics shed light on areas where content strategists are likely to provide value and we can also postulate the correlations this data has with shifts in our KPIs.

The Value of Perception

Site perception (satisfaction score) in its most basic form is a user’s critique of the overall quality of the site’s content. So it follows that a clear understanding of perception will provide us with the best qualitative data needed to adjust our site/content/brand to be more contextually relevant for users.

These metrics become crucial to separate from KPI metrics, because while business metrics help us to understand current market conditions and how to optimize for the current demand, it’s perception that gives us a window into long term brand health, loyalty and consideration. As someone who always attempts to tie his work to the bottom line, I always enjoy seeing lift in KPIs, but tend to concern myself more with the qualitative measures. It’s the latter that ultimately keep businesses, nonprofits, etc. in business.
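Since “satisfaction score” comes up a lot in this post, here’s a minimal sketch of one common way to compute it from survey responses — a rescaled Likert mean plus a “top-box” percentage. The sample responses are invented, and this is only one of several reasonable scoring conventions:

```python
# Minimal sketch: turning 5-point Likert survey responses into a 0-100
# satisfaction score plus a "top-box" percentage (illustrative only).
def satisfaction_score(responses):
    """responses: list of ints 1 (very dissatisfied) .. 5 (very satisfied)."""
    if not responses:
        raise ValueError("no responses collected")
    mean = sum(responses) / len(responses)
    score = (mean - 1) / 4 * 100             # rescale the 1-5 mean onto 0-100
    top_box = sum(r >= 4 for r in responses) / len(responses) * 100
    return round(score, 1), round(top_box, 1)

score, top_box = satisfaction_score([5, 4, 4, 3, 5, 2, 4])
# score is the rescaled mean; top_box is the % answering 4 or 5
```

Whatever convention you pick, pick it once and keep it stable — a baseline is only a baseline if the math doesn’t change quarter to quarter.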

What Should Be Measured?

There are a variety of tools out there that can assist in gauging user perception of and satisfaction with a website. Intercept surveys, panels and user interviews are most common, but whether you’re using a vendor-based solution or a do-it-yourself approach, a good content strategist or analyst must pose the right survey questions that gather:

Satisfaction as it relates to the overall site experience
Overall site experience satisfaction might seem like the no-brainer metric when measuring perception, but I’m consistently shocked to learn in discussions with clients that they’re doing no post-visit surveying and have no idea how a site is performing beyond the old standbys. Time on site doesn’t necessarily equal engagement and I’d argue that nine times out of ten it equates to confusion. Understanding a user’s general feeling about the site, its navigation and how they’re left “feeling” after they’ve experienced it is huge.

Satisfaction as it relates to task completion
It’s a little-known fact that people use websites to do stuff and complete some kind of task. This might seem like a novel concept, but it seems to escape a lot of designers, content creators or content managers that users are arriving at our sites with questions that need answers, causes that need effects and darkness that needs light. Is your site giving them all the information they need to leave feeling satisfied? Are things organized in a logical fashion? Are labels correct? Do users expect to find content that is missing? Is there too much ‘window dressing’ preventing the completion of tasks?

Satisfaction as it relates to content quality
Why don’t more researchers ask if people think the content is shit? Marketers especially (and I’m speaking as a marketer remember) are terrible culprits of this. It can’t be the creative. I’m a copywriter! I’ve got an Effie! Who the Effie Cares? Is the content written using the user’s common phrases and language or your client’s? Does the content leave the user with more questions or provide them with a clear path for deeper engagement if applicable?
The quality and task completion measures should always be joined in your reporting documents, because more often than not, if you have a quality problem, it will cause problems with task completion and ultimately, overall perception.

Satisfaction as it relates to “other” factors
I typically dislike “other” categories, but it really is the best way to sum up what we’d like to understand about a user’s post-site experience. It’s in surveying these behaviors that we can better understand what users do with your content AFTER they visit your site and what their intent is in using another source of content to complete their tasks. This is especially useful for e-commerce, higher education or non-profit sites. If they made donations or completed parts of an application, did they return at a later time? Did they find another experience that was more satisfying? If so, why?
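The earlier point about joining the quality and task-completion measures in your reporting can be sanity-checked numerically. Here’s a rough sketch that computes a plain Pearson correlation between the two ratings across respondents — the ratings are made up, and in real reporting you’d want far more than six data points:

```python
# Rough sketch: checking whether per-respondent content-quality ratings
# track task-completion ratings (Pearson r); all values are hypothetical.
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

quality   = [2, 3, 5, 4, 1, 5]   # per-respondent content-quality rating (1-5)
completed = [1, 2, 5, 4, 2, 5]   # same respondents' task-completion rating (1-5)
r = pearson_r(quality, completed)
# r near 1 suggests quality problems and task failures move together
```

A strong positive r is exactly the pattern described above: fix the quality problem and the task-completion and overall-perception numbers tend to follow.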

Keep It Simple, Then Evolve

Basically, it boils down to the questions you ask your users. I’ve long said content strategy needs to channel its inner anthropologist to better understand our users. Taking quarterly stock of site satisfaction and perception is just one way we can all start to better understand our users’ unique needs and tasks. It doesn’t take a lot to get started. Simple surveys that take less than five minutes to complete are the most appropriate ways to get an early read on user satisfaction.

Once a baseline is established, kick your governance and optimization plan into high gear and measure, measure, measure some more. Getting more satisfaction is more than swapping out the creative. Our field is becoming increasingly scientific and understanding these basic user perception metrics is the first step in developing stronger use cases for our content strategies.

Wanna talk about it or start sharing some testing methodologies? Comments are below … let’s start the conversation.

Content Strategy Gut Checks: First Impressions Testing

Content Strategy Gut Checks: First Impressions Testing is the third in a series of six posts discussing the testing of content and content strategy models in usability and user testing. Did you miss the first two posts?

Read Part One: The Café Test
Read Part Two: The Focus Group

You’ve got butterflies in your stomach. It’s a nervous, happy, scared out of your mind (but deliriously excited all at the same time) rush. You’ve spared no expense in sprucing yourself up and have taken care to be sure everything is enticing to the eye.

No doubt about it, you’re looking hot. But when users start knocking at your virtual door for their first date, will your content be the horrible garlic breath that turns them off or will they find the spark that keeps them coming back for more of what only you can truly offer?

Just as in dating or a job interview, a first impression can be the most lasting, which is why taking the time to test for them is crucial — both for the visuals and the content.

When To Use First Impressions Testing

As far as I know, “First Impressions Testing” isn’t exactly a formal “usability” test. I’ve always used it as a field test that can be combined with, or performed separately from, the Café Test.

This test is best used early in the web design process or when you need to capture first impressions of a new addition to a site. I also find it valuable for form and e-mail testing. The first impressions gathered are analyzed to determine whether initial reactions have colored a user’s feeling about the remainder of the site/email/etc. First impressions testing that is specific to content should be focused on subjective measures, which could include:

• A user’s satisfaction or dissatisfaction with page content
• A user’s comfort and understanding of content concepts
• A user’s thoughts and impressions about the tone and context of the content within the design
• A user’s self-reported thoughts about the purpose of the site and content

How To Get Started

First impressions testing can be performed in a variety of environments and in a variety of ways. There are a few remote services that provide this type of usability testing (e.g. Optimal Workshop’s ChalkMark). You could also contract a testing lab if you don’t have a lot of strong experts in house, but I’m of the opinion that more often than not you don’t need a formal lab to perform first impressions testing.

Testing by Trinity

Setup for a first impressions test is similar to the café test. You can stage in a high-traffic area, like a café (preferably one where your target user might be) to approach potential users, or invite a selection of existing users to a conference room in your office, etc. You can also do this test remotely through a conferencing application. Just be sure to test users one at a time.

If they’ll allow you to do so, take video or photos. If you’re using a laptop, use the onboard camera to record facial expressions. You don’t need a separate moderator, but it helps to have someone take notes when you reach the question portion of the test.

Your willing participant should be seated facing your device of choice with nothing on the screen and then shown the homepage/page/application/etc. for five to 90 seconds. If I were only testing the design, I’d do five to ten seconds maximum, but since we’re talking content here, give them a bit longer to see what they focus on first.

Once the time is up, hide the site and ask the user to begin relating everything they can recall from the page.

Questions, Questions, Questions

When asking the participant to relate their first impressions, focus your questions on subjective measures. Be sure not to be too leading or to use any language that might influence their answers. You want a true first impression, not something you’ve potentially influenced. Ask them to recall everything they can from their short experience with the testing material. Questions can include but aren’t limited to:

• What was the purpose of the [content] on the site?
• What were the key takeaways of what you read/saw/heard?
• Did you understand the content on the page?
• What were the first things you noticed when the page appeared?
• Can you recall or describe the mood of the site?
• How does your overall impression of this [content] influence your perception of the site/product/etc?


Key deliverables from a first impressions test will be qualitative reports. It’s fine to distill a day’s worth of testing into a single report, but sessions can be broken out by individual if you wish.

If you videotape the session, use clips and captures in your reporting to bring back to designers and content stakeholders. Just make sure you capture all of the thoughts, feelings and end with how those impressions color a user’s opinion of what the experience is as a whole.

Summing It Up

Testing first impressions for the content of the site is tricky because a user may naturally be drawn to site visuals prior to diving into the content. That being said, any qualitative data you gather during first impressions testing should be taken for what it is — a field test.

Use those impressions to be sure you have the right calls to action, the right amount of space allocated for content and the right mix of visuals to put content in the right context based on user expectations. No one wants to be the one with the garlic breath, and you don’t want your user’s first impressions to cloud his or her perception of what you have to offer down the road. So test to be sure you can make a good first impression before you toss yourself to the world.

“Disgust” (photo) by Jeremy Brooks. Used via CC BY-NC 2.0 License.

“Testing” (photo) by Rebecca Partington. Used via CC BY-SA 2.0 License

Content Strategy Gut Checks: The Focus Group

Content Strategy Gut Checks: The Focus Group is the second in a series of six posts discussing the testing of content and content strategy models in usability and user testing. Did you miss the first post? Read Part One: Content Strategy Gut Checks: The Café Test.

“The wise man doesn’t give the right answers, he poses the right questions.” — Claude Levi-Strauss

Conference Room - Austrian National Library, Vienna Augustinertrakt (photo by Stefan Strahammer)

The way I see it, Claude Levi-Strauss’ statement sums up how I view testing, and (in a way) content strategy. The questions we craft and ask of users are crucial for informing the type of data we ultimately use and produce for our digital experiences. And when it comes to posing those questions to gut check our content and content strategy, one of the best tools at our disposal is the focus group.

For starters, I think it’s important to point out that focus groups are NOT usability tests. Focus groups are what I refer to as user tests or user dialogues, which have very different goals from usability testing. A good usability test should focus more on observation and provide us with answers about how well a user was able to use and experience both the interface and the data itself. Conversely, if we want to assess users’ thoughts, feelings or attitudes about a product or our Web site, we’d leverage a focus group.

It’s the difference between what users say and what they do — and in the best of all possible worlds, we’ll be able to leverage both insights when planning for content strategy.

When To Use Focus Groups

I personally believe focus groups should be performed early in any web project to both help discover insights into your target audience and prove out any assumptions you might have made regarding their Personal-Behavioral Context or Situational Context (did you really think you’d get through a post on my site without me making a plug for context?).

Also, consider using a focus group if:

• You need more insights into specific user situations that may require content
• You have little or no knowledge about your target market, its content expectations or its web/wireless/mobile habits
• You’re developing a new part of the site or rolling out a new content feature but aren’t sure what the reaction will be

How To Get Started

Invite 6 to 12 people to participate in each focus group session. Depending on your budget or the scope of your project, you may need several sessions to get a representative sampling of your targets. Pre-screen via questionnaire to be sure participants are from your target. It is absolutely crucial that the people you invite be from your target demographic. If they aren’t, you’re wasting time and client dollars.

In your invitation to participants, do your best to provide a high-level agenda and include any issues that you’ll be tackling during your session. The focus group itself should last 60 to 90 minutes (any longer and you’d best buy them a meal and plan to bring them back the next day).

Prepare up front. State the purpose of the focus group and provide an outline of the day’s activities. If it’s required by your client (or if you just want to cover all your bases), have them sign a consent form after you’ve given your explanation (e.g.: [PDF]) and bring it to the group. Collect your consent forms prior to giving participants access to other members of the group or the focus group room.

Set up the focus group in a room or location that offers little to no distraction. You want the participants’ full attention since the end results will be analytical reports. Try to seat your test group around a table to encourage conversation. You want the group to be able to chat, and if it’s a circular or oval table, you’ll have a better vantage point to document facial reactions or pose immediate follow-up questions.

Before you begin the questioning, ask the participants to introduce themselves and/or wear nametags. Focus groups are a tag-team effort (you need a strong moderator and someone to document findings, discreetly if possible). It’s the moderator’s job to be aware of the energy in the room. They also need to step in if one person is dominating a conversation and allow for cognitive breaks when it appears they’re needed. The moderator has to keep discussions flowing and keep the group focused on the issues you want to document.

The recorder should focus only on documenting the findings, capturing facial expressions, audio, notes on findings, etc. I’ve always found it helpful to color code notes per participant or attach a headshot to individual notes when I’m acting as a recorder in a focus group.
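The pre-screening step described above can be sketched as a simple questionnaire filter. The field names and criteria here are entirely hypothetical — swap in whatever defines your actual target demographic:

```python
# Hypothetical pre-screen: keep only respondents whose questionnaire
# answers match the target demographic before inviting them to a session.
def pre_screen(respondents, target):
    """Keep respondents who match every target criterion."""
    return [r for r in respondents
            if all(r.get(field) == value for field, value in target.items())]

respondents = [
    {"name": "A", "age_band": "25-34", "uses_mobile": True},
    {"name": "B", "age_band": "45-54", "uses_mobile": True},
    {"name": "C", "age_band": "25-34", "uses_mobile": False},
]
target = {"age_band": "25-34", "uses_mobile": True}
invited = pre_screen(respondents, target)
# only respondents matching every criterion get an invitation
```

Mechanical as it looks, this is the step that protects the budget: anyone who slips through the filter is a seat at the table producing data you can’t use.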

Questions, Questions, Questions

A focus group will only be successful if the questions asked are open and neutral. The wording is crucial, so channel your inner Claude Levi-Strauss and be the wise man. Pay special attention to the inflection and tone taken when posing a question to the group. The wrong wording or inflection might taint the responses.

Ask the target audience how they use the web and what types of content they expect to find on your Web site. If it’s a new section, get their thoughts on how successful their efforts have been in seeking the proposed content. If they have used that type of content, document their experiences. What worked? What didn’t? What would they have preferred to see? It’s here that we start to find the bits that we can apply to situational context and scratch at those oh so elusive behaviors.

Other questions a moderator could pose to a focus group that help influence content strategy include:

• What types of content do you expect to find when accessing [BRAND, SERVICE, TASK] using a [NAME of DEVICE]?
• Describe a positive experience you’ve had with [BRAND, SERVICE, TASK]. What made it a positive experience?
• Describe a negative experience you’ve had with [BRAND, SERVICE, TASK]. What made it a negative experience?
• Describe a [SITUATION, NEED, TASK] that required [BRAND, SERVICE, WEB SITE, TASK].

There are tons of questions and paths that can be followed, but that would be a MUCH longer post.


Focus groups are for gathering thoughts, feelings or attitudes. That means you need qualitative analysis reports, written for each session. The reports should contain the relevant background of the participants who attended individual groups. Dedicate a single report to each session and be sure to include any responses you gathered in the screening questionnaire in your reporting as well.

If you videotape the session, use clips and captures in your reporting to emphasize thoughts and support any hypothesis you had about content, data and design needs.

The more data you have to work with, the easier it will be to make relevant Behavioral/Situational personas to apply to your content strategy project. Client deliverables might include an executive summary, or quotes and images from the session, but the full report should be more useful to your design team and content/digital strategists.

Summing It Up

Focus groups are meant to help predict consumer responses to a site or feature. It’s crucial to know how consumers feel about a project prior to really getting down to the heavier design phase, and focus groups are a great forum for getting those feelings out into the open. They require patience and a really solid moderator who can manage conflict and keep the group on the task at hand.

I fully believe great focus groups can be done independent of agencies that specialize in them. An independent content strategist need only be sure he or she has a specific plan and goals in mind prior to doing the focus group. If you don’t think you can handle the moderation, find someone internally who can manage conflict or multiple personalities.

Even if you do select an agency to perform your focus group testing, make sure you have influence over the questions asked. A good content strategist should walk away knowing the situations that will call for content and have a better idea of the mix that will be needed to address those situations.

What are your experiences with focus groups? Do you find them useful in planning for content strategies? Drop your thoughts into the comments below.