5 bad habits of digital marketers
By: iMedia News Bureau

In my former life, I taught college-level physics at Brandeis University. While teaching was not always easy, we did have a simple accountability system: passing and failing. If you don't do the work, or don't do it well enough, you don't pass. It is a very simple concept. As chief scientist of my company, I see many marketing campaigns and the actions taken within them, and I often find myself comparing my teaching experience to my current work. As an industry, we have adopted some bad habits that make the academic in me cringe. With that said, here are five habits that would probably earn you a failing grade in my class.

Spurious logic

Digital marketers continue to scorecard themselves with the last-touch attribution model. Flash back to your Intro to Macroeconomics class. If you wrote a paper on demand creation with the thesis that "people buy what they buy solely because of the last advertisement they saw," your conclusion could easily be this -- all the shoes sold in Macy's Union Square today are attributable to the homeless guy standing in front of Macy's holding up a sign that says "20 percent off shoes today." I give this kind of flawed logic a failing grade.
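
To see how lopsided that scoring is, here is a minimal sketch in Python contrasting last-touch attribution with an even multi-touch split. The conversion path and channel names are invented for illustration, and the even split is just one alternative model, not a recommendation.

    # Toy comparison of last-touch vs. even multi-touch attribution.
    # The conversion path below is hypothetical.
    path = ["display_ad", "search_ad", "email", "storefront_sign"]  # ordered touchpoints

    def last_touch(touchpoints):
        # Give 100 percent of the credit to the final touchpoint.
        return {t: (1.0 if i == len(touchpoints) - 1 else 0.0)
                for i, t in enumerate(touchpoints)}

    def linear(touchpoints):
        # Split credit evenly across every touchpoint.
        share = 1.0 / len(touchpoints)
        return {t: share for t in touchpoints}

    print("last-touch:", last_touch(path))  # the storefront sign gets all the credit
    print("linear:    ", linear(path))      # each touchpoint gets 25 percent

Under last-touch, the sign outside the store collects every conversion dollar; any model that spreads credit across the path tells a very different story.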

Inadequate proof

Most people in interactive have suffered through a logic course or two. This is ironic given that we work in a multi-billion-dollar industry centered on the singular assumption that because I read an article about a car, I am going to buy one. But what if I am simply a car enthusiast? Or maybe I am comparing prices because I want to sell my car.

The way we define behavior and data is laughable. In the offline world, there are entire disciplines and companies with real math behind their definitions (see, for example, Acxiom, TARGUSInfo, or any customer relationship management company). Yet, in digital, we have waved a magic wand and invented new definitions with no such grounding. In a recent poll of a few hundred of our industry colleagues, we found that 90 percent of respondents were at best somewhat confident in the actual performance of their data, and one in four marketers said they had no confidence at all. Clearly, when we rely on inadequate proof, we should get a failing grade.

Conflicts of interest

Transport yourself to your Philosophy 101 class or any sociology class. Consider the following premise: "The people of Athens should give the keys to their creativity, and their economy, to the Spartans so that the Spartans can then fairly evaluate the information and decide where to draw economic and geographic boundaries. This is because the Spartans have a sign outside the city wall that says 'Do no evil.'" How should we grade this premise?

Now think about where you currently put your conversion data and where you go to decide if a publisher, data source, or inventory source is performing well. Interactive is rife with conflicts of interest, from Google's stranglehold on search, to data platforms that buy and sell data and then call themselves neutral. If we learned anything from our sociology and philosophy courses, it is that people will always choose to act in their own self-interest. I ask: Is this where we want to be as an industry? I posit: Until we learn to do the work ourselves, we will continue to be dependent on the self-interested business models of others.

Imprecise testing methodologies

In academic science labs, there are multiple sources of data. There are controls and variables. Good results are reproducible and consistent. Scientists track data coming from all sources -- we do not just cherry-pick one data source or one result. In media, every day we see examples of campaigns where one data source dominates a budget. It is an all-or-nothing game: Either a publisher is the best thing since sliced bread, or it is the worst website ever. The publisher is not necessarily at fault.

More likely, there was no A/B testing to isolate the variables. No one asked whether the inventory was good or bad. Was it the audience or the data? How about the creative? The truth is we don't know. Without empiricism and reproducible results, what grade should we give our testing methodologies?
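
For contrast, here is a minimal sketch, in Python with invented numbers, of what isolating a single variable could look like: same audience, same inventory, only the creative differs, and the difference in conversion rates is checked with a standard two-proportion z-test. The counts and the choice of test are illustrative assumptions, not a prescribed methodology.

    import math

    # Hypothetical A/B test: everything held constant except the creative.
    conv_a, n_a = 120, 10_000  # conversions, impressions for creative A
    conv_b, n_b = 155, 10_000  # conversions, impressions for creative B

    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under the null hypothesis
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))  # two-sided

    print(f"A: {p_a:.2%}  B: {p_b:.2%}  z = {z:.2f}  p = {p_value:.4f}")

The point is not the particular test; it is that only by changing one thing at a time, and demanding a reproducible, statistically meaningful difference, can we say whether the creative, the audience, or the inventory moved the number.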

Bad math

When did counting become so complicated? Why do we report that we have 100 million unique users on our site who represent 50 percent of the internet population? For starters, it is just not true. The reality is that we are counting cookies, not users -- cookies get cleared and reset, and one person browsing on multiple devices and browsers shows up as many cookies. That same number really represents about one-eighth of the monthly pool. So when privacy advocates cry out for more transparency and greater consumer control, who is to blame for all the misconceptions about what we actually know about users and their data? What grade do we give ourselves here?
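
The arithmetic is easy to sketch. Taking the rough one-eighth figure above as an assumed monthly cookies-per-user ratio, a quick Python calculation shows the scale of the overcount; the ratio is an assumption used only to illustrate the point.

    # Cookies are not people: one person clears cookies and uses multiple
    # browsers and devices, appearing as many "uniques" in a month.
    cookies_counted = 100_000_000   # what the report calls "unique users"
    cookies_per_user = 8            # assumed monthly ratio (the ~1/8 figure)

    estimated_users = cookies_counted / cookies_per_user
    print(f"reported 'uniques': {cookies_counted:,}")
    print(f"estimated people:   {estimated_users:,.0f}")  # ~12.5 million, not 100 million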

Summer school: Learning from our lessons

I raise these issues not to point out how bad we are at what we do, but to say that we need to work a little harder at being transparent and accountable. If we get serious about our work, the dollars in traditional media will not be able to compete with the science and methodologies that digital enables. As an industry, we need to move to the next level, finish our dusty dissertations, and finally graduate.

Matt Curcio is the chief scientist of Aggregate Knowledge. He writes for iMediaConnection.com, where this article was originally published.