BEHAVE 2016: One Paid Marketer's Takeaways

5 MINUTE READ | May 19, 2016

Christian Buckler

In April 2016, I went undercover. Tasked with the dual missions of acquiring testing strategies for laymen and learning more about on-site testing & conversion optimization, I snuck into (read: bought a ticket for and attended) the BEHAVE Conference in our sunny capital of Austin, Texas.

Overall, the BEHAVE conference was a great experience, featuring a mix of speakers and panels from in-house and agency teams, big names and small, accompanied by keynotes from researchers and authors in the human behavior space. Broadly, the conversations centered on a few key areas:

  • Improving testing methodologies and practices

  • Takedowns and reviews of site interfaces to spark testing ideas

  • Consumer funnels and UX

  • Key concepts for understanding consumer behavior

In this brief dossier, I’ve assembled my key takeaways as a paid marketer in this environment, along with thoughts helpful for future attendees of BEHAVE events.

On the first day of the conference, I had the opportunity to attend pre-conference “bootcamp”-style sessions in a smaller breakout format. The Optimost team gave a stellar overview of how they brainstorm, organize, plan, measure and report on tests. This session was a wealth of information, but one of my favorite pieces was their emphasis on standardized, concise hypotheses for tests:

Because we observed [data] and [feedback],

we believe that [doing]

will result in [outcome].

This format turns the friction-inducing step of outlining a hypothesis into a Mad Libs-esque exercise: identify the basic data points, then plug them into a standardized, scalable format. Simple, actionable takeaway. I won’t spoil their fun, but when viewed as a piece of the Optimost team’s testing process, standardizing hypotheses like this is invaluable.
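To show how literal the Mad Libs comparison is, here’s a minimal sketch in Python (my own illustration, not anything from the Optimost session) that plugs four inputs into the standardized template:

    def hypothesis(data, feedback, change, outcome):
        """Fill the standardized hypothesis template with four inputs."""
        return (f"Because we observed {data} and {feedback}, "
                f"we believe that {change} will result in {outcome}.")

    # Hypothetical example inputs:
    print(hypothesis(
        data="a 60% mobile cart-abandonment rate",
        feedback="survey complaints about checkout length",
        change="reducing mobile checkout to a single page",
        outcome="a higher mobile conversion rate",
    ))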

“If you torture your data, it’ll confess to anything.” – Peep Laja, Conversion XL

Peep Laja gave enthusiastic, entertaining talks in both the bootcamp sessions and the larger presentations, and everyone (myself included) appreciated his to-the-point, no-B.S. approach to testing and analysis. Peep outlined the basics of his approach to auditing site analytics for testing ideas, which I will definitely be revisiting for some of our own clients. But again, one key takeaway stood out as immediately actionable.

Peep cautioned against being overly reliant on p-values alone, and instead recommended gauging test results by three quick checks (sketched in code after this list):

  1. Did the test gather sufficient volume? Based on prior traffic, did each test group reach the sample size calculated in advance for statistical significance? Put another way, did you end your test too early?

  2. Did the test run for at least two business cycles? Running for at least two full cycles accounts for weekday/weekend and seasonal trends; for most businesses, a cycle is a Sunday-through-Saturday week.

  3. Only then: what does the p-value indicate?
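As a rough illustration of checks 1 and 3 (a sketch of my own, not code from Peep’s talk), here is how the pre-test sample-size calculation and the post-test p-value might look in Python for a two-variant conversion test; the baseline rate and lift below are made-up numbers:

    import math
    from scipy.stats import norm

    def required_sample_size(p_base, min_lift, alpha=0.05, power=0.8):
        """Visitors needed per variant to detect an absolute lift of
        min_lift over baseline conversion rate p_base (two-sided z-test)."""
        p_var = p_base + min_lift
        z_alpha = norm.ppf(1 - alpha / 2)   # significance threshold
        z_beta = norm.ppf(power)            # desired statistical power
        variance = p_base * (1 - p_base) + p_var * (1 - p_var)
        return math.ceil((z_alpha + z_beta) ** 2 * variance / min_lift ** 2)

    def p_value(conv_a, n_a, conv_b, n_b):
        """Two-sided p-value from a pooled two-proportion z-test."""
        p_a, p_b = conv_a / n_a, conv_b / n_b
        p_pool = (conv_a + conv_b) / (n_a + n_b)
        se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
        z = (p_b - p_a) / se
        return 2 * (1 - norm.cdf(abs(z)))

    # Check 1: calculate the required volume BEFORE the test starts,
    # then confirm each variant actually reached it (no ending early).
    print(required_sample_size(0.05, 0.01))  # ~8,155 visitors per variant

    # Check 2 is a calendar check, not a math check: at least two full
    # business cycles (e.g. two Sunday-through-Saturday weeks).

    # Check 3: only then read the p-value.
    print(p_value(410, 8200, 492, 8200))     # ~0.005: significant at alpha=0.05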

One of the last keynote speakers was Jonah Berger, Wharton School professor and author of Contagious: Why Things Catch On. A key emphasis of his talk was avoiding what he referred to as “the drunk under the streetlight problem”: a drunk man is discovered late at night looking for his keys under a street lamp. When asked why he is looking in that particular place, instead of answering “this is where I think I left them,” he answers “this is where the light is.”

As an advertiser, even in a direct-response context, it can be painfully easy to fall into the trap of only driving users to the site and stopping there. Often our standard data views don’t account for site outages, landing page tests, or long-running UX issues. But if we paid marketers are to best serve our clients, we need to be aware of the on-site environment. It would be helpful to ask more questions like:

  • Do our clients have an on-site testing or conversion optimization team? If so, can we set up a call to meet them?

  • Do we have access to a calendar of all scheduled site flips and changes, along with an idea (in advance) of how on-site creative will change?

  • Have we run test purchases on multiple devices to observe the user flow? How do our remarketing audiences line up with stages of the on-site customer journey?

  • Are we being, or could we be, notified of all site outages and issues?

And this is just scratching the surface. In our search to be more effective marketers, growing our understanding of all influences on the brand and the customer experience will almost always net out in a stronger client-partner relationship.

So, should you attend a future BEHAVE event? Of course you should! I deeply enjoyed my time at the BEHAVE conference, and gained a lot of understanding both from the presenters and from my fellow attendees. This conference sits in the sweet spot of offering actionable advice to seasoned CRO teams as well as approachable discussions for newbies and visitors to the CRO world.

