Metrics, Strategy and Getting More Satisfaction

Over the past few months I’ve been attempting to isolate metrics that best inform the context models I use for content strategy projects.

I’ve made this attempt because clients have been asking our community to quantify the value that content strategy is bringing to their annual spend. It’s also one of the things I hear new content strategists (both the independents and agency folks) asking about. We all know our work is important, but justifying it to clients is how we continue to show relevance.

Measurement and Optimization Cycle for Contextually Relevant Content Strategy

Not surprisingly, there is no measurement plan that will prove beyond doubt that content strategy is responsible for a site's or business's success. But based on the implementation of a few different measurement plans, I think it's safe to say that content strategists can lean on at least four types of metrics to accurately demonstrate the fruits of their labor. They are:

1. Measures of user perception (satisfaction)
2. Task completion (user defined)
3. Measures of key business objectives (traditional metrics measurement)
4. Post visit behavior

The combination and examination of these four data sources is not only valuable; it's crucial to the optimization of a sound communication strategy.

We need what I typically refer to as “perception measures” to show the true value of content strategy because success isn’t as simple as providing insights into what drives improvement in business behaviors. As I’ve noted several times in past posts, content strategy has to do heavier lifting by adding contextual relevance to business goals, which equates to informing the creation of content that helps improve the bottom line AND satisfies user tasks. Task completion becomes crucial because it directly correlates to something even more valuable than quarterly gains — loyalty.
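To make the four data sources concrete, here is a minimal sketch of how per-session records might be rolled up into one comparable report. The field names and sample data are purely illustrative, not drawn from any particular survey tool or analytics platform.

```python
from statistics import mean

# Hypothetical per-session records combining survey answers and analytics flags.
sessions = [
    {"satisfaction": 8, "task_completed": True,  "converted": True,  "returned": True},
    {"satisfaction": 6, "task_completed": True,  "converted": False, "returned": True},
    {"satisfaction": 3, "task_completed": False, "converted": False, "returned": False},
    {"satisfaction": 9, "task_completed": True,  "converted": True,  "returned": True},
    {"satisfaction": 4, "task_completed": False, "converted": False, "returned": False},
]

def summarize(records):
    """Roll the four metric types into a single report for side-by-side reading."""
    return {
        "avg_satisfaction": mean(r["satisfaction"] for r in records),        # 1. perception
        "task_completion_rate": mean(r["task_completed"] for r in records),  # 2. task completion
        "conversion_rate": mean(r["converted"] for r in records),            # 3. business KPI
        "return_rate": mean(r["returned"] for r in records),                 # 4. post-visit behavior
    }

report = summarize(sessions)
print(report)
```

The point of putting all four numbers in one report is exactly the examination argued for above: a KPI lift that arrives alongside falling satisfaction tells a very different story than one accompanied by rising task completion.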

Beyond The Bottom Line

Most companies have well-established business success metrics for their websites and measure them consistently, but few measure the quality of the site experience as a separate and distinct concept. That’s a mistake, because it’s in the qualitative measures that you can make a more informed decision on whether your content strategy is providing any real return on investment.

Without dedicated perception metrics, it’s nearly impossible to determine whether an experience actually got better or how changes in a content strategy influenced the site’s impact on business performance.

Perception metrics should reveal which aspects of the experience customers aren’t happy with and what prevents visitors from accomplishing their tasks. While we don’t get direct insight into what the exact issues and solutions might be, perception metrics shed light on areas where content strategists are likely to provide value and we can also postulate the correlations this data has with shifts in our KPIs.

The Value of Perception

Site perception (satisfaction score), in its most basic form, is a user’s critique of the overall quality of the site’s content. So it follows that a clear understanding of perception will provide us with the best qualitative data needed to adjust our site/content/brand to be more contextually relevant for users.

These metrics are crucial to separate from KPI metrics: while business metrics help us understand current market conditions and how to optimize for current demand, it’s perception that gives us a window into long-term brand health, loyalty and consideration. As someone who always attempts to tie his work to the bottom line, I enjoy seeing lift in KPIs, but I tend to concern myself more with the qualitative measures. It’s the latter that ultimately keep businesses, nonprofits, etc. in business.

What Should Be Measured?

There are a variety of tools that can assist in gauging perception of and satisfaction with a website. Intercept surveys, panels and user interviews are the most common, but whether you’re using a vendor-based solution or a do-it-yourself approach, a good content strategist or analyst must pose the right survey questions to gather:

Satisfaction as it relates to the overall site experience
Overall site experience satisfaction might seem like the no-brainer metric when measuring perception, but I’m consistently shocked to join client discussions and learn that they’re doing no post-visit surveying and have no idea how the site is performing beyond the old standbys. Time on site doesn’t necessarily equal engagement; I’d argue that nine times out of ten it equates to confusion. Understanding a user’s general feeling about the site, its navigation and how they’re left “feeling” after they’ve experienced it is huge.

Satisfaction as it relates to task completion
It’s a little-known fact that people use websites to do stuff and complete some kind of task. This might seem like a novel concept, but it seems to escape a lot of designers, content creators and content managers that users are arriving at our sites with questions that need answers, causes that need effects and darkness that needs light. Is your site giving them all the information they need to leave feeling satisfied? Are things organized in a logical fashion? Are labels correct? Do users expect to find content that is missing? Is there too much ‘window dressing’ preventing the completion of tasks?

Satisfaction as it relates to content quality
Why don’t more researchers ask if people think the content is shit? Marketers especially (and I’m speaking as a marketer remember) are terrible culprits of this. It can’t be the creative. I’m a copywriter! I’ve got an Effie! Who the Effie Cares? Is the content written using the user’s common phrases and language or your client’s? Does the content leave the user with more questions or provide them with a clear path for deeper engagement if applicable?
The quality and task completion measures should always be joined in your reporting documents, because more often than not a quality problem will cause problems with task completion and, ultimately, overall perception.

Satisfaction as it relates to “other” factors
I typically dislike “other” categories, but it really best sums up what we’d like to understand about a user’s post-visit experience. It’s in surveying these behaviors that we can better understand what users do with your content AFTER they visit your site and what their intent is in using another source of content to complete their tasks. This is especially useful for e-commerce, higher education or non-profits. If they added donations or completed parts of an application, did they return at a later time? Did they find another experience that was more satisfying? If so, why?

Keep It Simple, Then Evolve

Basically, it boils down to the questions you ask your users. I’ve long said content strategy needs to channel its inner anthropologist to better understand our users. Taking quarterly stock of site satisfaction and perception is just one way we can all start to better understand our users’ unique needs and tasks. It doesn’t take a lot to get started. Simple surveys that take less than five minutes to complete are the most appropriate way to get an early read on user satisfaction.
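A simple starter survey might pose just one Likert-scale question per satisfaction category discussed above and convert the answers to a 0–100 baseline you can track quarter over quarter. This is a sketch only; the question wording and scoring scheme are illustrative assumptions, not a standard instrument.

```python
# One illustrative question per satisfaction category; answers are 1-5
# ("Strongly disagree" through "Strongly agree").
survey = [
    ("overall", "Overall, I am satisfied with my visit to this site."),
    ("task",    "I was able to complete what I came here to do."),
    ("quality", "The content answered my questions in language I understand."),
    ("other",   "I will not need another site or source to finish this task."),
]

def score(responses):
    """Map 1-5 Likert answers onto a 0-100 baseline per category."""
    return {key: (value - 1) / 4 * 100 for key, value in responses.items()}

# Hypothetical single-respondent answers keyed by category.
baseline = score({"overall": 4, "task": 5, "quality": 3, "other": 2})
print(baseline)
```

Once you have a baseline like this, re-running the same four questions each quarter gives you the trend line to measure your governance and optimization work against.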

Once a baseline is established, kick your governance and optimization plan into high gear and measure, measure, measure some more. Getting more satisfaction is more than swapping out the creative. Our field is becoming increasingly scientific and understanding these basic user perception metrics is the first step in developing stronger use cases for our content strategies.

Wanna talk about it or start sharing some testing methodologies? Comments are below … let’s start the conversation.

Blowing With The Wind Of Chaaaaange….

What a difference a year makes. At this time last year, I was working on the biggest content strategy project I had ever taken on while doing work for a client that had recently announced they were ending their 91-year relationship with my agency.

As I began wrapping up my time working on the Chevrolet project I was met with more change. I took on new clients in the Centers for Disease Control, Carrier, Olympic Paint and Stain and OnStar.

Along the way I also learned that I was going to be a father for the first time.

During the past 8 months I’ve been very fortunate to have the time to begin writing about things I love (context, neuro, etc.), working on projects that I’m passionate about and preparing for the arrival of my son. However, I was recently approached with the opportunity to take on a new challenge working on a brand that I have long admired. This opportunity allows me to be much closer to home and my family, while giving me the chance to work with a team of digital professionals leading the way in online communication.

So it’s both with a heavy heart and an excited mind that I announce that I’ll be joining Team Detroit to serve as an Enterprise Digital Strategist working on the Ford North America account. Though I no longer have content strategy in my title, rest assured that I’ll continue to be practicing it and forcing it upon my new friends and colleagues.

I can’t begin to say enough good things about the team at Campbell Ewald. The decision to leave the agency and a firmly established content strategy practice was not an easy one to come to, but the promise of an 11-mile commute and the chance to help shape one of the world’s most iconic brands was an opportunity that I couldn’t pass up. The good news is that my departure has left a vacancy in a fantastic group. Should you want to take an opportunity to work here in the Mitten with the likes of Chris Moritz, Jinita Shah, Arthur Mitchell and tons more smart folks, you should check out this job description and speak kindly of me when you inquire.

What does this change mean for this space? Absolutely nothing. I’ll continue to be talking about context, testing and presenting my independent research and thoughts here. I’m recovering from Confab and hope to post lots and lots before baby Eizans greets us. Stay tuned.

Content Strategy Gut Checks: First Impressions Testing

Content Strategy Gut Checks: First Impressions Testing is the third in a series of six posts discussing the testing of content and content strategy models in usability and user testing. Did you miss the first two posts?

Read Part One: The Café Test
Read Part Two: The Focus Group

You’ve got butterflies in your stomach. It’s a nervous, happy, scared out of your mind (but deliriously excited all at the same time) rush. You’ve spared no expense in sprucing yourself up and have taken care to be sure everything is enticing to the eye.

No doubt about it, you’re looking hot. But when users start knocking at your virtual door for their first date, will your content be the horrible garlic breath that turns them off or will they find the spark that keeps them coming back for more of what only you can truly offer?

Just as in dating or a job interview, a first impression can be the most lasting, which is why taking the time to test for them is crucial — both for the visuals and the content.

When To Use First Impressions Testing

As far as I know, “First Impressions Testing” isn’t exactly a formal “usability” test. I’ve always used it as a field test that can be combined with, or performed separately from, the Café Test.

These tests are best used early in the web design process or when you need to capture first impressions of a new addition to a site. I also find them valuable for form and e-mail testing. The first impressions gathered are analyzed to determine whether initial reactions have colored a user’s feelings about the remainder of the site/email/etc. First impressions testing that is specific to content should focus on subjective measures, which could include:

• A user’s satisfaction or dissatisfaction with page content
• A user’s comfort and understanding of content concepts
• A user’s impressions of the tone and their understanding of the content’s context within the design
• A user’s self-reported thoughts about the purpose of the site and content

How To Get Started

First impressions testing can be performed in a variety of environments and in a variety of ways. There are a few remote services that provide this type of usability testing (e.g. Optimal Workshop’s ChalkMark). You could also contract a testing lab if you don’t have a lot of strong experts in house, but I’m of the opinion that more often than not you don’t need a formal lab to perform first impressions testing.

Testing by Trinity

Setup for a first impressions test is similar to the café test. You can stage in a high-traffic area, like a café (preferably one where your target user might be), to approach potential users, or invite a selection of existing users to a conference room in your office, etc. You can also do this test remotely through a conferencing application. Just be sure to test users one at a time.

If they’ll allow you to do so, take video or photos. If you’re using a laptop, use the onboard camera to record facial expressions. You don’t need a separate moderator, but it helps to have someone take notes when you reach the question portion of the test.

Your willing participant should be seated facing your device of choice with nothing on the screen and then shown the homepage/page/application/etc. for five to 90 seconds. If I were only testing the design, I’d do five to ten seconds maximum, but since we’re talking content here, give them a bit longer to see what they focus on first.

Once the time is up, hide the site and ask the user to begin relating everything they can recall from the page.

Questions, Questions, Questions

When asking the participant to relate their first impressions, focus your questions on subjective measures. Be sure not to be too leading or to use any language that might influence their answers. You want a true first impression, not something you’ve potentially influenced. Ask them to recall everything they can from their short experience with the testing material. Questions can include but aren’t limited to:

• What was the purpose of the [content] on the site?
• What were the key takeaways of what you read/saw/heard?
• Did you understand the content on the page?
• What were the first things you noticed when the page appeared?
• Can you recall or describe the mood of the site?
• How does your overall impression of this [content] influence your perception of the site/product/etc?
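When the same questions are posed to every participant, even a rough tally of the answers makes patterns easier to spot in the qualitative report. Here is a minimal sketch; the participant responses are hypothetical and the "first thing noticed" labels are whatever shorthand your note-taker uses.

```python
from collections import Counter

# Hypothetical notes from five participants: the first element each recalled
# after the timed exposure (answers to "What did you notice first?").
first_recalled = ["hero image", "headline", "hero image", "nav menu", "hero image"]

# Tally which elements dominate first impressions across the session.
recall_tally = Counter(first_recalled)
top_element, top_count = recall_tally.most_common(1)[0]
print(f"{top_count} of {len(first_recalled)} participants noticed the {top_element} first")
```

If most participants recall the visuals and almost none recall the content, that’s a finding worth flagging to both designers and content stakeholders.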


Key deliverables from a first impressions test will be qualitative reports. It’s fine to consolidate a day’s worth of testing into a single report, but sessions can be broken out by individual if you wish.

If you videotape the session, use clips and captures in your reporting to bring back to designers and content stakeholders. Just make sure you capture all of the thoughts, feelings and end with how those impressions color a user’s opinion of what the experience is as a whole.

Summing It Up

Testing first impressions for the content of the site is tricky because a user may naturally be drawn to site visuals prior to diving into the content. That being said, any qualitative data you gather during first impressions testing should be taken for what it is — a field test.

Use those impressions to be sure you have the right calls to action, the right amount of space allocated for content and the right mix of visuals to put content in the right context based on user expectations. No one wants to be the one with the garlic breath, and you don’t want your user’s first impressions to cloud his or her perception of what you have to offer down the road. So test to be sure you can make a good first impression before you toss yourself to the world.

“Disgust” (photo) by Jeremy Brooks. Used via CC BY-NC 2.0 License.

“Testing” (photo) by Rebecca Partington. Used via CC BY-SA 2.0 License.