Category: testing
Shamir Duverseau

Maximize Data with Lean Thinking

Build. Measure. Learn. Those three words are at the core of Lean methodology, a way of doing business that incorporates elements from Six Sigma, agile development, design thinking, and other sources. Lean is a modern business application of thinking with a longer history in manufacturing, originating in the Toyota Production System in the 1950s. It has since been used by successful startups and large corporations alike, across industries. Lean’s continuous improvement cycle enables companies to make meaningful progress by making the best use of customer data and intelligence.

When it comes to the digital experience, Lean thinking can be a tool of immeasurable power. From acquiring qualified traffic to converting those prospects into customers to retaining those customers to build lifetime value, a Lean viewpoint can help optimize every touchpoint of the customer journey. This is especially true in considered-purchase industries, which is why Lean is now at the heart of how we at Smart Panda Labs help our clients drive customer lifetime value.

Here’s how.

Build

Everyone knows that building products and services that meet customer needs is a primary goal of any business. But customer needs are varied and nuanced, requiring answers to a long list of questions. If you wait to answer all the questions at once, or worse, assume you already know the answers, you risk high costs and wasted time at best. At worst, you risk the failure of an initiative, a division, or an entire organization.

This is why the term “minimum viable product,” or MVP, has become so popular and so important. A tenet of Lean and Agile methodologies, an MVP is a product with just enough features to satisfy early customers and provide feedback for future product development. Each iteration of this streamlined product or service is meant to answer a question or two, or meet a set of demands, but not all demands at once.

We have learned the value of MVPs for our clients’ products as well as our own. So, we build new services and processes, not as faits accomplis, but as MVPs in order to ensure that we are meeting client needs.

Measure

Objectivity does not come easily to modern day organizations. While gathering unbiased data is becoming easier, there remains a persistent risk of a biased interpretation of the data.

Lean accounts for this through customer-centric experimentation and measurement, allowing customer interactions and feedback to live at the center of the story. Actionable metrics inform whether your customer is experiencing your product in the way you hypothesize, or if you need to pivot. Either way, customer data and creative intelligence are guiding your decisions, thus maximizing the results.

Our own actionable metrics include feedback from our clients. How do they feel our innovation is helping them? Is it making things easier or harder? Is it aiding them in meeting goals or communicating with teams? The answers to these questions, along with many others, will help us to know whether or not we are moving in the right direction. And these decisions can be based on real feedback, and not simply cool ideas that we fall in love with but bring no benefit to the client.

Learn

“If you cannot fail, you cannot learn.” Eric Ries, the author of The Lean Startup, makes this simple but important point. Not everything works out the way you envisioned. Lean tells us that with every failure comes a wonderful opportunity to learn and iterate. The key is to embrace the opportunity.

For example, one of our clients engaged us to run an experiment on their website. The first test we helped them run failed miserably and quickly. It was designed to be a quick win … but turned out to be far from it. However, the resulting learnings from this failure yielded another experiment that was impactful in both its effect on the business goals (adding seven figures of incremental revenue for the year) and the additional customer insights it yielded.

Failure can’t always be the primary concern. Whether or not we are learning from these failures is what matters. We use our learnings to improve products and services on behalf of our clients, and also to improve the client experience we provide. What makes us better at our jobs also makes for better relationships.

Build. Measure. Learn. This is the backbone of how we harness data and creative intelligence to help our clients drive value from their customers, and it is becoming the method by which we serve our clients, period. If you are reading this, you are more than likely someone’s client. Should you expect any less?

 

Key Takeaways:

  • Lean methodology is a continuous improvement approach that enables companies to make meaningful progress by getting the best use of customer data and intelligence.
  • A key tenet of Lean is the “minimum viable product,” or MVP, which encourages the release of a product with just enough features to satisfy early customers and provide feedback for future product development.
  • Lean also emphasizes customer-centric experimentation and measurement, so that customer data and creative intelligence are guiding decision making.
  • Lean tells us that with every failure comes a wonderful opportunity to learn and iterate. The key is to embrace the opportunity.
  • As applied to digital marketing strategy, a Lean viewpoint can help optimize every touchpoint of the digital experience—from acquiring qualified traffic to converting those prospects into customers to retaining those customers to build lifetime value.
  • Lean and its backbone of Build, Measure, and Learn is now at the heart of how we improve products and services for clients. It also informs how we improve the overall experience we provide our clients.
Category: testing
Shamir Duverseau

Data, Diversity, and Design

In his best-selling 2005 book Blink: The Power of Thinking Without Thinking, Malcolm Gladwell discusses how humans think without thinking. Choices that seem to be made in an instant—in the blink of an eye—actually aren’t as simple as they seem.

How does this process impact the digital experience? Does diversity in design make a difference? What key role does design play in this process? And how do we measure this and tie it to meeting and exceeding business goals?

These were some of the questions we tackled earlier this month at the dmi: Diversity in Design conference in Washington D.C. Smart Panda Labs Co-Founder Cheryl Myers and I led a session on how design—in particular, design representative of diversity—can and should be informed by data gleaned from digital experimentation.

Rapid cognition and thin-slicing

We began our session with an anecdote Gladwell presents in his introductory chapter of Blink. In 1983, an art dealer named Gianfranco Becchina approached the J. Paul Getty Museum in California claiming to have a marble statue known as a “kouros,” dating from the sixth century B.C. Becchina’s asking price for the statue was $10 million. The Getty took the kouros on loan and began a thorough investigation to authenticate it. From scientific evidence of its age to the bevy of documentation of the statue’s recent history and provenance, there was ample proof of the statue’s authenticity. The Getty concluded its investigation and agreed to buy the statue.

The kouros went up on display, receiving glowing reviews. However, the statue did not look right to a few people – namely the Italian art historian Federico Zeri (who served on the Getty’s board of trustees), Evelyn Harrison (a foremost expert on Greek sculpture), and Thomas Hoving (the former director of the Metropolitan Museum of Art in New York). They were each taken to see the sculpture, and in what seemed like an instant, each concluded that something was off about it: the statue was a fake.

The Getty launched a further investigation and found inconsistencies in the documents that supposedly proved the kouros’ provenance. It discovered that the statue actually most resembled a forged kouros that came from a workshop in Rome in the early 1980s. It turned out that dolomite could be aged in a matter of a few months using potato mold. The sculpture was indeed a fake.

“When [the art historians] looked at the kouros and felt an ‘intuitive repulsion,’ they were absolutely right,” writes Gladwell. “In the first two seconds of looking—in a single glance—they were able to understand more about the essence of the statue than the team at the Getty was able to understand after fourteen months.”

At the heart of Blink is the concept of rapid cognition, or “thin-slicing,” the process by which people make quick assessments using a limited amount of evidence. For better or worse, a staggering number of our decisions result from thin-slicing and instinctive hunches about how to act. While the conscious mind is good at studying a wide range of evidence and drawing conclusions from it, our “adaptive unconscious” is adept at assessing a very small amount of evidence about the external world—a “thin slice”—and then forming an instinctive response.

Gladwell is clear that rapid cognition is often imperfect and sometimes dangerous. After all, this is how many prejudicial decisions are made. However, he argues that rapid cognition plays a valuable role in human behavior—a role that’s too often ignored.

Designing with diversity in mind

As part of a firm specializing in optimizing digital experiences, my colleagues and I must be keenly aware of the rapid cognition and thin-slicing that happen as a very natural part of digital engagement. Just as the art and antiquities experts brought their own expertise and personal experiences to bear in their snap judgment of the kouros, consumers are similarly informed by their own knowledge and experience when they interact with a brand’s website, for example. Everything about us, including our ethnicity, gender, geography, and age, affects our worldview. In our digital exchanges, we must be aware that the impressions we make on users may not be the ones we intend.

So how does this understanding of human cognition square with our roles as designers and digital strategists? And what do brands and businesses need to bear in mind? Just as our workforces need to be diverse and inclusive in order to better reflect the perspectives of our audiences and consumers, so should our digital experiences reflect the realities of those for whom we are designing.

During our session at the dmi conference, we shared a series of stock photos and website landing pages and asked our audience to share their impressions. The exercise helped to build upon our previous discussion of thin-slicing, and it also demonstrated that diversity is relative.

What is diverse to someone from a rural and perhaps less racially diverse area of the country or the world is markedly different from what is diverse to someone from an urban center teeming with diversity. How do you balance such relativity with a desire to make design as personal as possible?

In pursuit of digital experiences that resonate, be data driven

What we see matters. But the question is, how much? Instead of making assumptions about your users, think of yourself as a student of the digital experiences you provide.

Experimentation, testing, and choosing a “learn-it-all” mindset over a “know-it-all” one (see Stanford psychologist Carol Dweck’s best-selling book, Mindset) are winning at some of the largest and most successful companies.

Take Microsoft CEO Satya Nadella, who recently said about the mindset he is implementing at Microsoft: “Some people can call it rapid experimentation, but more importantly, we call it ‘hypothesis testing.’ Instead of saying ‘I have an idea,’ what if you said ‘I have a new hypothesis, let’s go test it, see if it’s valid, ask how quickly can we validate it.’ And if it’s not valid, move on to the next one.”

Or Amazon founder Jeff Bezos, who says, “Our success is a function of how many experiments we do per year, per month, per week, per day.”

Or Mark Zuckerberg, who said of Facebook, “One of the things I’m most proud of, and I think the key to our success, is this testing framework we’ve built.”

If you want to understand to what degree diversity plays a role in the products or services you’re offering, test it, and let the data reveal the answer. For example, change the images on your site to demonstrate differing kinds of diversity, such as gender, ethnicity, age, ability, and intersectionality—overlapping aspects of social categorizations—as much as possible. You may also want to highlight ADA compliance, as another example. Facebook data may be helpful to you in terms of understanding some of the interests and perspectives of your target audiences, and you can consider including some of that content on your site. Throughout this process, we recommend keeping your key performance indicators (KPIs) top of mind and maintaining authenticity—your goal here is to surface diversity without being disingenuous.

Now it’s time to put your efforts to the test. Here are the five steps we suggest in the experimentation process:

  1. Define your audiences
  2. Consider what diversity is for each audience
  3. Test—A/B testing, focus groups, and usability labs are all examples of types of tests
  4. Read reactions, not explanations (think “adaptive unconscious” vs. conscious)
  5. Measure the impact and analyze the results

On the fourth step, the point I am trying to make is that a user’s initial reaction, in the form of a rating, for example, is more useful data with respect to a digital experience than a conscious explanation; that instant reaction more closely mirrors how decisions are made in such a context. In Blink, Gladwell shares examples of how this works in other contexts as well.
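To make the testing step a little more concrete, here is a minimal sketch of how an A/B split might be implemented. Hashing a user ID together with an experiment name gives each visitor a stable bucket, so returning visitors always see the same variant. The function, experiment name, and variant names are illustrative assumptions, not something from the article or a specific testing product.

```python
import hashlib

def assign_variant(user_id: str, experiment: str, variants: list) -> str:
    """Deterministically assign a user to a variant.

    Hashing the user ID with the experiment name gives every user a
    stable bucket, so the same visitor always sees the same variant.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]

# Example: split traffic across a control and two hypothetical
# imagery treatments for a homepage experiment.
variants = ["control", "imagery_a", "imagery_b"]
print(assign_variant("user-42", "homepage-imagery", variants))
```

In practice a testing platform handles this assignment for you; the point of the sketch is that the split is random across users but deterministic per user, which keeps the experience consistent and the data clean.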

The impact of the changes you are testing can be measured in many ways, such as overall satisfaction (feedback, surveys, net promoter scores), site engagement, social media engagement, and conversion rates. Analyze the data to see if changes you’re making to your digital experience are moving the needle and helping you meet your KPIs.
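As one concrete example of the satisfaction metrics mentioned above, a net promoter score can be computed from standard 0-10 survey ratings (promoters rate 9-10, detractors 0-6). The survey responses below are hypothetical, purely to show the arithmetic.

```python
def net_promoter_score(ratings):
    """NPS: % promoters (9-10) minus % detractors (0-6) on a 0-10 scale."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)

# Hypothetical post-change survey responses.
print(net_promoter_score([10, 9, 9, 8, 7, 6, 10, 4, 9, 10]))
```

Tracking a score like this before and after a change gives you one simple, comparable number to line up against your KPIs.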

Then, use your findings to evangelize the value of diversity throughout your organization.

KEY TAKEAWAYS

  • Rapid cognition plays a valuable role in human behavior and has a lot to do with how consumers experience digital products and services. “Thin-slicing” happens as a very natural part of digital engagement.
  • Everything about us, including our ethnicity, gender, geography, and age, affects our worldview. In our digital exchanges, we must be aware that the impressions we make on users may not be the ones we intend.
  • If you want to understand to what degree diversity plays a role in the products or services you’re offering, test it, and let the data reveal the answer.
  • The impact of the changes you are testing can be measured in many ways, such as overall satisfaction (feedback, surveys, net promoter scores), site engagement, social media engagement, and conversion rates. Analyze the data to see if changes you’re making to your digital experience are moving the needle and helping you meet your KPIs.
  • Just as our workforces need to be diverse and inclusive in order to better reflect the perspectives of our audiences and consumers, so should our digital experiences reflect the realities of those for whom we are designing.
  • Use your findings to evangelize the value of diversity throughout your organization.
Category: testing
Shamir Duverseau

What We Learned from Opticon ’17 — and Office Space

My colleagues and I recently returned from Optimizely’s annual conference, Opticon 2017, in Las Vegas. The focus of this year’s event was on building and scaling a culture of experimentation across teams, channels, products and devices. The three-day conference was attended by more than 1,200 executives, product managers, marketers, testing novices, experimentation gurus and developers. As well as four smart pandas.

Experimentation—from simple A/B tests to rigorous analysis—is at the core of everything we do at Smart Panda Labs, and it’s how we’re able to optimize digital experiences that drive ROI and customer loyalty for our clients. Suffice it to say that we were like kids in a candy store (or, pandas in a bamboo oasis) at this year’s Opticon.

Here are our top three takeaways from the conference. And because we love drawing analogies to cult classics almost as much as we love testing, we’re waxing nostalgic for Office Space on this one.

Don’t relegate testing to the basement

Successful experimentation doesn’t happen in isolation. It’s not Milton in the basement, alone with his red stapler. Testing requires buy-in from leadership and a team-wide understanding of its value. Give testing a seat at the table and it can become an integral, mission-critical part of an enterprise’s ability to meet goals, solve problems and make decisions. Of course, getting this level of buy-in takes time and effort. In the overwhelming majority of organizations, this means it will be a journey, not a light switch.

Call in “the Bobs”

Sometimes testing strategy and effective conversion rate optimization (CRO) require the help of experts. Even the notorious consultants from Office Space have a place sometimes. Believe it or not, Optimizely supplements their own team of testing gurus with the B2B experts at FunnelEnvy to help them increase the number of tests they can run, the speed with which they can implement them and the resulting insights and iterations. The right outside experts can help companies achieve much greater ROI from their testing efforts.

Avoid printer rage

While testing may start with button colors and headline copy, it’s much more than that. As your experimentation evolves beyond simple tests, so grows its complexity. You may need to factor in elements like revenue management, inventory or pricing algorithms. Without the right tools or planning, testing can become a behemoth undertaking that never works right. It’s the always-jamming printer that you love to hate. However, with the right technology, proper implementation and continuous integration you can manage these complexities and execute tests that improve the ROI of an entire process. And you can leave your baseball bat at home. (You know you want to see that scene again.)

If you’re ready to up the ante on your company’s experimentation but could benefit from expert guidance, Smart Panda Labs can provide smart testing, thoughtful analysis, marketing technology support and strategic optimizations. Don’t hesitate to contact us. We promise we’re not at all like the Bobs.

Key Takeaways:

  1. Successful experimentation doesn’t happen in isolation. Take the time and effort to get buy-in from leadership and the entire marketing team. This is how testing can become an integral part of your company’s decision making.
  2. Ask for help. Even Optimizely enlists outside consultants to help achieve the greatest possible ROI from their tests.
  3. As testing sophistication increases, so does complexity. You need the right tools and expert implementation planning to keep your tests from underperforming (or failing altogether).

 

Category: testing
Cheryl Myers

Small Step For Conversion, Giant Leap For UX

This is not our first public display of affection for testing—we love tests, and we love to talk about them. For example, we’ve shared a compelling testing case study on Optimizely’s blog.

Now, it’s my turn.

As a specialist in user experience (UX) and content strategy at Smart Panda Labs, I have developed a close relationship with testing. This invaluable strategy places data at the heart of digital decision-making, providing insights that inform all kinds of choices. For example, the test I’m about to share with you was not designed to focus on your typical testing metric, conversions, but rather to understand how design changes can measurably impact the way customers interact with a user interface.

50 (ok, 2) Shades of Grey

A large real estate client had a series of landing pages, designed by a third party, showing floor plans of available luxury apartments. The landing pages were clean and minimal, and a series of numbered circles indicated how many floor plan options were available to view. However, while a medium grey represented the active floor plan, the circles that represented the subsequent plans were presented in a lighter shade of grey. When the client asked us to improve the design to help increase user interaction, we immediately gravitated to the circles.

We felt that the contrast between the circle that represented the floor plan being actively displayed and the other circles was not enough; the minimal contrast made it seem as if the circles weren’t clickable and were instead merely visual cues of the number of available floor plans. We pitched a test to see if greater color contrast would achieve the desired goal.

It’s important for me to mention that we didn’t want to measure the effect this change had on the time users spent on the site, as time on site can be both good and bad (users can be spending more time because they are engaged or because they are confused and searching!). So, we focused purely on interactions—were they or were they not looking at all the floor plan options?

Baseline

Variation 1

Variation 2

We tested the original floor plan module design against two new variations that each created more contrast, one using a slightly darker shade of grey and one that incorporated a maroon color. This test rapidly showed that the darker grey and the maroon were outperforming the original. Interestingly, the new variations were in a dead heat with each other, each showing almost a 12% improvement over the original. Users were clearly more likely to view additional floor plans when there was less ambiguity.
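As a rough illustration of how one might check whether an interaction-rate lift like this is more than noise, the sketch below runs a two-proportion z-test. The visitor and interaction counts are made up, chosen only to mirror a roughly 12% relative lift; they are not the actual numbers from this test.

```python
import math

def two_proportion_z(clicks_a, n_a, clicks_b, n_b):
    """Two-sided two-proportion z-test for an A/B interaction-rate lift.

    Returns (relative lift of B over A, two-sided p-value).
    """
    p_a, p_b = clicks_a / n_a, clicks_b / n_b
    pooled = (clicks_a + clicks_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided normal tail probability via the complementary error function.
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return p_b / p_a - 1, p_value

# Hypothetical counts: 5,000 visitors per arm, interactions observed.
lift, p = two_proportion_z(1000, 5000, 1120, 5000)
print(f"lift = {lift:.1%}, p = {p:.4f}")
```

Most testing platforms report significance for you, but running the arithmetic yourself is a useful sanity check before declaring a winner, especially when two variations appear to be in a dead heat.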

Comparison of Baseline and Variation results

This test demonstrates how testing can be used not only to optimize for conversions but to help solve creative problems, like UX design and user interaction. A simple change, paired with a deliberate testing strategy, can help make the subjective objective and inform future design decisions. A test like this can also help make the case for simple and relatively inexpensive optimizations that improve an experience, perhaps while other large-scale changes are in progress or when time or resources are scarce.

CRO and UX Go Hand in Hand

The moral of the story: small changes are easy to test and can lead to significantly more engagement and, therefore, a better overall user experience. These improvements can make a site or campaign more attuned to the user’s needs and a user journey more enjoyable. This approach, in turn, can prove to be a great supplement to your CRO efforts. In other words, focus on the person and all else will follow.

Key Takeaways

  • Use testing to help make subjective decisions. Allow the visitor’s opinion to carry the most weight.
  • People need to have a good experience before they can make a decision or convert. Use testing to optimize not only the decision but also the experience.
  • Make sure the goals of your test are clear and unambiguous—don’t test for results that could be misinterpreted.