# AnalyticBridge

A Data Science Central Community

# Measuring the success of automated bidding campaigns

People with experience in black box trading are welcome to answer this question too.

The problem is as follows: keywords are purchased on an advertising network (a pay-per-click search engine such as Google), a bid is placed every day for each keyword, and revenue is generated when a click converts (e.g. when the user subscribes to a newsletter).

Each day, we know how much money is made, per traffic segment. The question is: what important metrics should be used to assess the quality of a bidding campaign? Total profit is of course very important, but it is not the only metric. For instance, if positive days generate a \$1000 gain and negative days a \$950 loss (profit = \$1000 - \$950 = \$50), that is worse than if the numbers were an \$80 gain, a \$45 loss, and a \$35 profit. So the gain/loss ratio is critical. What is this ratio called? What other ratios should be considered?
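As a side note, the gross-gain-to-gross-loss ratio described above is commonly called the *profit factor* in trading. A minimal sketch of computing it for the two scenarios in the question (the helper function and figures are illustrative, not a specific campaign):

```python
def profit_factor(gross_gain, gross_loss):
    """Gross gains divided by gross losses; values well above 1 are robust."""
    return gross_gain / gross_loss

# The two scenarios from the question: identical sign of profit,
# very different robustness.
print(profit_factor(1000, 950))  # ~1.05: barely profitable
print(profit_factor(80, 45))     # ~1.78: much healthier
```

The second campaign makes less absolute profit in the example but survives a much larger deterioration in conversion rates before turning negative.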


### Replies to This Discussion

Vincent:

I have been interested in this topic as well. I came across the following paper by some Google researchers. It takes a Google viewpoint, but it introduces some terminology and math that might be helpful. I have also found similar papers that bring in game theory to reason about this problem.

Interestingly enough, the law of large numbers factors heavily into the usefulness of any such analysis. High specificity does not seem to be a strong suit of either Yahoo's or Google's algorithms. Many of these large systems are based on tournament-style rules, and high specificity does not work well with popularity contests, so Yahoo and Google fail miserably there. The higher the specificity, the 'sparser' the universe becomes, and proper ad placement becomes more brittle.
Hi Theodore,

Thank you for this very interesting article, showing the search engine viewpoint.

One thing that I forgot to mention is that measuring total profit is itself tricky:
• should you use one month of data, one week, one quarter, or a data-driven time period (which investors might not like)?
• should you base your conclusions on at least 100 transactions, 500, or 50,000?
• what if most of the gains take place during the first half of the period and most of the losses during the second half?
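The last question above can be checked directly by splitting the daily P&L series in half and comparing sub-period profits. A minimal sketch (the daily figures are made up for illustration):

```python
# Illustrative daily profit/loss series for a campaign.
daily_pnl = [120, -60, 80, -45, 150, -30, 95, -200, 10, -80, 20, -55]

half = len(daily_pnl) // 2
first, second = daily_pnl[:half], daily_pnl[half:]

total = sum(daily_pnl)
print("total profit:", total)
print("first-half profit:", sum(first))
print("second-half profit:", sum(second))

# A large gap between the halves suggests the total is driven by one
# sub-period and may not be stable going forward.
```

Here the total is positive, but the second half loses money: exactly the situation where total profit alone is misleading.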
Vincent:

Deriving a measurement from a confluence of discrete events requires determining whether or not there is some scale similarity in the data (fractal behavior). To gain some insight into the data, I would experiment with different event/time windows and see whether they yield similar tendencies. Any structure projected onto this data, derived from first principles or from customer behavior, could modulate the bidding through a feedback mechanism. That way you don't need to know; you only need to measure and adjust.
That's what we are going to do: A/B/C testing with analysis of variance to make sure some tests do indeed perform better. I am going to read more on Taguchi tests (see e.g. www.sitespect.com), as this seems very relevant to this problem when approached through A/B/C testing.
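A minimal sketch of the analysis-of-variance step for an A/B/C test, assuming the test statistic is one-way ANOVA on daily profit per bidding strategy. The data are synthetic; in practice `scipy.stats.f_oneway` returns the F value and p-value directly, but the statistic is simple enough to compute by hand:

```python
from statistics import mean

# Synthetic daily profits for three hypothetical bidding strategies.
groups = {
    "A": [50, 60, 55, 65, 58],
    "B": [70, 80, 75, 85, 78],
    "C": [52, 62, 57, 67, 60],
}

samples = list(groups.values())
all_obs = [x for g in samples for x in g]
grand = mean(all_obs)

k = len(samples)   # number of strategies
n = len(all_obs)   # total number of observations

# Between-group and within-group sums of squares.
ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in samples)
ss_within = sum((x - mean(g)) ** 2 for g in samples for x in g)

# F statistic: ratio of between-group to within-group mean squares.
f_stat = (ss_between / (k - 1)) / (ss_within / (n - k))
print(f"F = {f_stat:.2f}")  # compare against an F(k-1, n-k) critical value
```

A large F relative to the F(k-1, n-k) critical value indicates that at least one strategy's mean daily profit genuinely differs from the others, rather than varying by chance.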