Tip: Be careful about applying results of previous tests

Anyone who has seen the infamous Terence Kawaja "Crowded Landscape" slide can see how interconnected the ad technology space is. It's easy to forget that, at the end of the day, we are all interested in putting an ad in front of a user. Very often, many players are vying for the same inventory and the same user, which means marketers end up going up against themselves as multiple media buys converge on the same site through different paths across the crowded ecosystem. When this happens during a testing event, every participating party (save perhaps the publisher) loses: the test participants get skewed performance numbers, and the client loses out on efficiency and control. The situation is exacerbated when last-touch attribution is the measure of choice. Advertisers who have run last-touch head-to-head tests without audience segmentation should therefore think carefully about how they apply those results as they begin to adopt more advanced attribution models.
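
To make the overlap problem concrete, here is a minimal sketch (the buy names and conversion paths are hypothetical, invented purely for illustration) of how last-touch hands full credit to whichever overlapping buy happens to serve last:

```python
# Illustrative only: each conversion path lists the media buys that touched
# the user, in order. All names and data here are hypothetical.
conversion_paths = [
    ["buy_a", "buy_b"],  # both buys reached this user; buy_b served last
    ["buy_a", "buy_b"],
    ["buy_b", "buy_a"],
]

last_touch_credit = {}
for path in conversion_paths:
    winner = path[-1]  # last-touch: 100% of the conversion to the final touch
    last_touch_credit[winner] = last_touch_credit.get(winner, 0) + 1

print(last_touch_credit)  # {'buy_b': 2, 'buy_a': 1}
```

When buys overlap like this, the "winner" of the test reflects who served last, not who actually drove the conversion.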

The strategies that make a player very successful at last-touch often make them very poor at multi-touch. We all know the fastest road to credit under last-touch: remarket like crazy, hammer frequency at low cost, and push off risk through CPC/CPA buying wherever possible. Those strategies are far less rewarded under a real lift-based attribution model, and you can't simply assume that rankings in one set of abilities translate directly into rankings in another. It's like saying, "I know that the ability to run a long distance is not the best measure of the athlete I'm looking for, but if I evaluate this marathon runner and this pole vaulter on their ability to run 26.2 miles as fast as possible, I'll learn which one is the better athlete." The rank order of that contest simply will not tell you what you want to know. If you change your goal (say you decide that what you want is not the ability to run 26.2 miles fast, but something quite different, like downhill skiing), you really have to run another test, because the results of the first analysis have little bearing on what you want to accomplish.
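
A toy example makes the point. The sketch below (hypothetical vendors and conversion paths, invented for illustration) scores the same set of paths two ways: last-touch, and a simple even-split multi-touch rule standing in for a more advanced model. The rank order flips:

```python
from collections import Counter

# Hypothetical data: vendor_r chases the last touch (heavy remarketing),
# while vendor_p tends to appear earlier in the path.
paths = [
    ["vendor_p", "vendor_p", "vendor_p", "vendor_r"],
    ["vendor_p", "vendor_p", "vendor_r"],
    ["vendor_p", "vendor_r"],
    ["vendor_p", "vendor_p", "vendor_p", "vendor_r"],
]

# Last-touch: the final touch gets 100% of each conversion.
last_touch = Counter(path[-1] for path in paths)

# Even-split multi-touch: every touch shares the credit equally.
multi_touch = Counter()
for path in paths:
    for vendor in path:
        multi_touch[vendor] += 1 / len(path)

print(last_touch)   # Counter({'vendor_r': 4}) -- vendor_r sweeps last-touch
print(multi_touch)  # vendor_p ~2.67 vs vendor_r ~1.33 -- the ranking flips
```

An even split is, of course, a crude stand-in for a real lift-based model, but it is enough to show that the winner of a last-touch bake-off is not a safe pick once you change the yardstick.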

In short, if you are thinking about moving to a new attribution model, be careful about applying the results of previous tests, because they may no longer apply. For more information on a new attribution model, check out Xuhui's latest ClickZ column: New Year, New Attribution Model.