Matches, Dots, and Glasses: The Psychology of Problem Solving

The first step in solving any problem is getting a good handle on what the problem really is. That's harder than it sounds.

What gets in the way of our ability to do this effectively is being blinded by what is…versus what could be. Our own worldview gets in the way.

The way around this? You must reframe the problem. Diversity of perspective is a powerful way to avoid being blinded by your own point of view. Check out this great video by Edward O'Neill that illustrates this point.

Deliver Insights Hemingway Style

Blaise Pascal once said, "I have made this letter longer than usual, only because I have not had time to make it shorter." It is much harder to write something succinct than a long and rambling communiqué.

The same can be said for insights. After all, as CX and insights professionals we are in the business of delivering and acting on information, not slinging unexamined data and tabs. More importantly, though, we need to deliver insights that are persuasive, engaging, and catalysts for action. The most advanced and eloquent analysis is worthless if not communicated well.

Stacks of analytical output and spreadsheets can be impressive, but equally bewildering to the consumer of the information. What people want to hear is what has been learned and what should be done. We are bombarded by data every day.

Our job is to sift through the deluge of data and turn it into information. However, what you still often see is the 80-page PowerPoint deck. Thunk!

The logic seems to be that we are not doing our job unless we display every aspect of an issue and show that we have done our homework by looking at every possible angle. The heft of the deck seems to fit the bill for some.

For most clients that is way off the mark. As in good UX design, strip away everything that is not contributing or useful. Executives want to know "what did you find and what should we do?" If you can explain that in five minutes in a persuasive way, then you have done your job. That's not easy though.

The Heath brothers have an excellent tutorial on how to get to sticky ideas in Made to Stick, and Jon Steel nails how to distill it down in Perfect Pitch. The hard work in delivering insights isn't in data prep or even analysis…it's extracting the story. It takes a lot of time and thought to get down to the essence and the implications.

The business of consumer insight isn't 4th grade math where you have to show your work. We are hired because, ostensibly, we know the craft. We don't have to prove it by boring our audience with slide after slide of support. That is behind the curtain. The tables have been examined, the analyses have been run, the charts have been reviewed. This pre-presentation work has been done.

What is now required is to perfect the story: to distill down to the bare essence and find a parsimonious story that is compelling, entertaining, and persuasive. We are scientists. We are artists too. The effective ones are also talented journalists and entertainers.

If My Answers Frighten You: How to Write Good Questions

Will people like my new product idea? What do customers like or dislike about their current experience? How can I improve on my existing product? These are all great questions, and their answers are hard to come by through secondary research.

In my previous article, I discussed quick and inexpensive ways to conduct secondary research (research that someone has already collected for you). Unfortunately, many times you can't get the exact questions answered that you need, so you have to get the feedback yourself. In the discipline of consumer behavior this is called primary research.

If My Answers Frighten You…

As any attorney will tell you, getting good answers requires asking good questions. In my first article on this topic, I talked about the importance of asking the right business question. In this article I will discuss the necessity of asking the right survey questions.

Writing survey questions seems like a straightforward endeavor, but there are head-smacking mistakes you can make that can't be corrected after the data is collected. By way of illustration, I tried to write the World's Worst Survey Question. Let's take a look.


Dissecting the World’s Worst Survey Question

While somewhat farcical, I have seen questions not far off from this. So let's dissect this atrocity one piece at a time.

The first problem is that it's a loaded question. Much like a lawyer asking Colonel Mustard on the witness stand, "…so where were you after you killed Professor Plum with a wrench in the study?", it would be bounced out of court before the good Colonel could open his mouth. The question assumes that the respondent buys suntan oil. In the northern latitudes that is a rare purchase, and for many, not a purchase at all!

Next, I have conflated price and value for the money. They are related but distinct concepts. You must tease them apart; otherwise neither you nor the respondent will have a clear idea of what is being evaluated.

In addition to mixing price and value together, we have no reference point for price. How much is it? Offering up a price with no reference point makes it hard for the consumer to render any judgment. When asking about price specifically, provide other price points for customers to evaluate. I don't know about you, but I don't regularly keep track of suntan oil prices.

The larger issue, in general, is that asking pricing questions is tricky business. No one likes to spend money, so there is always an inherent bias around the importance of price that drives imprecise results. When possible, it is best to get at the pricing issue through indirect methods such as choice studies. People are just not honest when it comes to "how much would you pay" and other direct pricing-type questions. Such questions have low predictive validity.

Next we turn to the scaling.  First, the scale is reversed with the good rating (exceeds my expectations) on the left while the poor rating (below my expectation) is on the right.  While a subject of some debate, most researchers would agree that a low to high order is preferred.  Why?  Almost everything in the Western world is righty-tighty, lefty-loosey.  That is: higher is always on the right, and lower is always on the left.  It makes it easier for the respondent and will ensure more accurate measurements.

Which brings me to the scale itself. Expectations scales are notoriously poor for measuring evaluative attitudes. The reason is that the starting point is a mystery. For example, say you expected this article to stink, but you feel that it is merely mediocre. I have exceeded your expectations! Does that mean it's good? No, sadly it does not.

Finally, I could fill a four-drawer metal file cabinet with articles about what the right number of points on a scale might be. Let me save you some time and summarize the outcome of this corpus of research. The answer is: it depends. The considerations are: the level of involvement the respondent has with the issue in question, the levels of differentiation (also known as just noticeable differences) that can be detected, and the goal of the research in the first place.

If I am conducting a compliance study and want to know the cleanliness of the bathrooms, I might ask “Were the restrooms clean?” with the response scale being “☐ no ☐ yes.”

In the case of taste testing a new Central Coast Pinot Noir with sommeliers, I might ask "How would you evaluate the dryness of this Central Coast Pinot Noir?" and provide a 10-point hedonic scale anchored by "Extremely Dry" and "Extremely Sweet," with several other anchor points in between. Experts who are highly engaged in a product category will see shades of distinction others might not.

Reconstructing the World’s Worst Survey Question

So let's fix my World's Worst Survey Question. Let's see if we can resurrect the intent in a more accurate form.

As I mentioned, the pricing question is a whole different kettle of fish, where we can use a within- or between-subjects experimental design. Alternatively, there is a class of analytics called Choice Models or Discrete Choice Models, which utilize techniques such as Conjoint Analysis to dampen unwanted bias in the data when it comes to understanding price elasticities. MaxDiff can also be adapted for this use and is easier to deploy and analyze. However, all that is a topic for another day.
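To make the MaxDiff idea concrete, here is a minimal sketch of count-based best-worst scoring: respondents see small sets of items and pick the "best" and "worst" in each set, and a simple score is (times chosen best minus times chosen worst) divided by times shown. The item names and data below are purely illustrative, and production studies typically estimate utilities with logit or hierarchical Bayes models rather than raw counts.

```python
from collections import defaultdict

def maxdiff_scores(tasks):
    """Count-based MaxDiff scores.

    tasks: list of (shown_items, best_item, worst_item) tuples,
    one per choice task. Returns {item: (best - worst) / shown}.
    """
    best = defaultdict(int)
    worst = defaultdict(int)
    shown = defaultdict(int)
    for items, b, w in tasks:
        for item in items:
            shown[item] += 1   # how often the item appeared
        best[b] += 1           # picked as "best" in this task
        worst[w] += 1          # picked as "worst" in this task
    return {i: (best[i] - worst[i]) / shown[i] for i in shown}

# Hypothetical data: three tasks over four product attributes
tasks = [
    (["price", "quality", "brand", "scent"], "quality", "brand"),
    (["price", "quality", "brand", "scent"], "price", "scent"),
    (["price", "quality", "brand", "scent"], "quality", "brand"),
]
print(maxdiff_scores(tasks))
```

Because each respondent must trade items off against one another, the resulting scores are relative preferences on a common scale, which sidesteps the "everything is important" bias that plagues direct rating questions.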

So as you can see, writing a survey can seem simple, but you will want to avoid common mistakes. There are several good books on the topic, with alluring titles such as Improving Survey Questions by Floyd Fowler and the classic Internet, Mail, and Mixed-Mode Surveys by Don Dillman and Jolene Smyth. Both are excellent sources. Of course, you can always contact me. I will be sure to exceed your expectations.