
Highlighting Brand And Product Features As a Function of Probability Of Response From Nested Conjoint Studies

Inform your marketing team which set of features to highlight, within a product, in a conversion campaign for current customers or prospects. The design can be extended to gauge copy and image influence before campaign targeting.

When we are done you will be able to simulate the response rate to a campaign as a function of how you mix together emphasis on brand features, such as copy, images, etc. This is a great opportunity to partner with creative types and marketing managers in a cost-efficient research design.

Featured Industry Co-Author(s): Glenn Ross, Don Wedding

In this example we will use the term “product,” but if your brand is itself the product, or a feature of the product you are selling, which it is more often than not, then you can simply work that into the design.

We are not changing the product features, but rather using marketing intelligence to construct a compelling presentation of which features to sell based on one’s likelihood to respond to a product campaign.  

The impact of this method can be reflected in innovative dashboard techniques [5].

The Solution

If you do not present to executives, there will always be someone willing to proselytize your research if it is worthwhile, so make it easy for them: keep the study pitch clear and concise, sell the benefits, and keep the cost low. If you are a behavioral scientist, this will simply add to your mystique and may justify your decision to wear black all the time.

The method discussed here is useful because the features of some products, such as car insurance, mobile pricing plans, checking accounts, and health insurance, are more often than not the result of serious deliberation. This makes it unrealistic to adjust product features on a campaign-by-campaign basis.

So the solution is to choose which specific product features to highlight, either directly in ad copy and graphics, or perhaps more subtly and indirectly through various persuasion techniques designed to bypass critical thinking.

We are going to highlight brand features depending on the probability scored by a response model. Those most likely to convert in a campaign might have different needs and wants than those least likely to respond.
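To make the scoring step concrete, here is a minimal, numpy-only sketch of a logistic response model. Everything here is simulated and the predictor names (tenure, prior purchases) are illustrative, not from the article; in practice you would fit your own customer attributes with your modeling tool of choice.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
# Hypothetical, standardized predictors -- illustrative stand-ins only.
tenure = rng.normal(0, 1, n)
purchases = rng.normal(0, 1, n)
X = np.column_stack([np.ones(n), tenure, purchases])

# Simulated ground truth: response odds rise with both predictors.
p_true = 1 / (1 + np.exp(-(-1.0 + 0.8 * tenure + 0.5 * purchases)))
y = rng.binomial(1, p_true)

# Fit logistic regression by gradient ascent on the log-likelihood
# (kept dependency-free; a real project would use a fitted library model).
beta = np.zeros(3)
for _ in range(3000):
    p = 1 / (1 + np.exp(-X @ beta))
    beta += 0.1 * X.T @ (y - p) / n

# Scored probability of response -- the quantity the rest of the method ranks on.
p_respond = 1 / (1 + np.exp(-X @ beta))
```

The scored `p_respond` vector is what later steps stratify and rank against.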

Practical Introduction:

Building out products with varying features often involves intense market research, which has to reconcile customer needs and preferences with a firm's ability to actually deliver on them profitably.

It is truly a unique optimization problem to maximize profit within the constraints of a complex network of resources while also maximizing customer attractiveness. We will address this in another research brief, but this lesson is necessary in order to make more advanced maneuvers.

Intense research efforts can result in a combinatorial explosion of different choices for consumers, which poses a validity problem for researchers [1]. Various techniques have been developed to deal with these situations in Conjoint Analysis (CA) research, but they have fancy names that are hard to pronounce.
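The explosion is easy to quantify: the number of full profiles is the product of the level counts across features. The feature and level counts below are made-up examples, but the arithmetic is the point.

```python
# Full-factorial conjoint designs blow up multiplicatively:
# profiles = product of the number of levels per feature.
from math import prod

levels = {  # hypothetical insurance-style features and level counts
    "deductible": 4,
    "monthly_price": 5,
    "coverage_tier": 3,
    "support_channel": 3,
}
profiles = prod(levels.values())
print(profiles)  # 4 * 5 * 3 * 3 = 180 full profiles, far too many to rank exhaustively
```

This is exactly why fractional designs and other CA shortcuts exist.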

No matter what CA method is used, the method discussed here assumes that the brand features are orthogonal, i.e., unrelated. This can be confirmed using the data from the brand feature study. The levels of the features can, however, be correlated. For example, a deductible that falls as the policy price rises is less problematic than a case where minutes on a mobile plan are, in the eye of the consumer, always exactly as important as the number of texts per month. CA forces these choices, but since we are modeling choices as a function of the probability to respond, two correlated features would offer a compelling alternative explanation for the variability in choice ranks, one that has nothing to do with response probability.
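One simple way to check the orthogonality assumption from the brand feature study data is to inspect pairwise correlations of the respondents' importance rankings. The rank matrix below is synthetic and the four-feature setup is illustrative; with real survey data you would substitute the observed rankings.

```python
import numpy as np

rng = np.random.default_rng(1)
n_respondents = 200
# Each row is one respondent's ranking (1 = most important) of 4 brand features.
ranks = np.array([rng.permutation(4) + 1 for _ in range(n_respondents)])

# Pearson correlation on ranks (a Spearman-style check across respondents).
corr = np.corrcoef(ranks, rowvar=False)

# Caveat: ranks within a row sum to a constant, so columns carry a baseline
# negative correlation (about -1/3 with 4 features). Flag feature pairs whose
# correlation deviates far from that baseline, not just from zero.
```

Pairs that consistently move together in the rankings are candidates for the "correlated features" problem described above.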

Oddly enough, this is a nested CA when you consider it as a whole, but since all the work has been done and product specifications have been considered, you only have to complete the final leg.

You could do this as a nested part of an original CA to optimize cash flow, but we will build on this as part of another research brief because there will be constraints per your organizational infrastructure assets. If you are thinking constraint-based optimization-ish, please get out of my head because I am about to type the method below.  


  1. Construct a product targeting model, either a look-alike model or a response model. All the usual rules for building models apply. Here we will use logistic regression as an example, but the response can be continuous. 
  2. Construct a survey asking participants to rank-order their preferences among the brand features of the product you are marketing. Keep it simple. Information overload will make the survey responses less useful if the study participant is exhausted [1]. Additionally, working memory holds about 5-9 pieces of information, and as a decision sciences researcher I have to tell you that a comparison is itself a piece of information. So if there are two features, the third piece of information is which feature is more important than the other. This is different from comparing whole products, which also involves a process-of-elimination cognitive step.  
  3. Stratify your survey by the model predictors so that survey results are, in concept, a function of the probability to respond. If that is not practical, then penalize your targeting model using the Targeting Utility method for biased samples before going to step 5 [2].
  4. Deploy the survey campaign to the target population. Cast a wide net. Expect low response rates for prospects. Also, use a reputable vendor for this function, as you do not want multiple requests for survey responses to be part of your brand experience.
  5. If your sample is stratified, this is high-fidelity information, congratulations! Do step 6 and generalize the insights of the sample to campaign targets as a function of the targeting model. If you used the Targeting Utility technique for biased samples, know that math tricks are not a substitute for good research design; this is not as high-fidelity as a well-stratified study.
  6. For this step and step 7, Targeting Utility and Response Probability will be referred to as the Rank Function, just to keep it short. Create a model specific to your application to express the Rank Function as a function of the product and brand assets, and be sure to control for interactions using one of two methods: either introduce interactions into the model as inputs, or create models within tiers (e.g., deciles) of the Rank Function. If your Rank Function is not a probability (e.g., expected value, odds to double risk) and is more of a linear model type, then the ranks are already on the same scale, so you can directly compare the coefficients to one another. Slam-dunk, go to step 8! 
  7. For logistic models or "logit" link functions, use your ROC curve and simulate the results. This is fun for business partners who enjoy the more interactive aspects of modeling using simulators and spreadsheets. It is even more fun if you are the only partner whose laptop has all the formulas in it. 
  8. Target those with the highest Rank Function; you can still use decile analysis for the Targeting Utility.
  9. Explain all of this to your business partners in 250 words or fewer with some slick graphics. Then you are done.
  10. Final Tip: If you want to explain these in terms of Random Effects, do so with extreme caution [6].
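Steps 6 through 8 can be sketched as a decile-style simulation: rank targets by the scored probability and look at how the response rate falls off as you go deeper into the list. The scores and outcomes below are simulated stand-ins; in practice they come from your fitted model and observed campaign responses.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10_000
# Simulated stand-ins: model scores and outcomes consistent with those scores.
score = rng.beta(2, 5, n)            # pretend probabilities from a targeting model
responded = rng.binomial(1, score)   # simulated campaign outcomes

# Rank best prospects first, then split into deciles for the classic lift view.
order = np.argsort(-score)
deciles = np.array_split(order, 10)
rates = [responded[d].mean() for d in deciles]

# The response rate should decline across deciles; that decline is the
# evidence business partners need before trusting the Rank Function.
```

Handing partners the `rates` list (or a chart of it) is the spreadsheet-simulator experience step 7 describes.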


Questions, Comments, Smart Remarks? Always welcome. 



[1] Ramirez, Jose Manuel (2009). "Measuring: from Conjoint Analysis to Integrated Conjoint Experiments". Journal of Quantitative Methods for Economics and Business Administration 9: 28–43. ISSN 1886-516X.


[2] Flanigan et al. (2012). "Targeting Utility: Penalizing Response Models As a Function of Practical Bias In Multivariate Survey Response To Increase Customer Relevance For Targeting Applications In Digital and Offline Advertising".




Dashboard Implications:

[6] M. Biderman, University of Tennessee at Chattanooga (Genius, Mentor, Life Coach at 5-year intervals; personal communication):

"A popular distinction is in terms of the number of values that variable has.  If it has just a few values, say two or three or four, so that in an experiment, performance at all possible values can be compared, then it's a fixed effect.  On the other hand, if the variable whose effects are being assessed has many values, so that only a sample of the possible values can be included in the research and thus performance in only a sample of the values can be compared, then it's a random effect.  This is what I use for students.  It means that for random effects we must generalize from the sample of values to the population of values of the variable. 
The rub with that definition concerns variables like IQ, for example, which have many values but which are often analyzed as fixed effect variables.  So another way of thinking about the problem is that a fixed effect is an independent variable whose theoretical effect would be the same from experiment to experiment.  Estimates of the effect might differ from experiment to experiment, but in the population, that variable's effect is a constant. Note that IQ, for example, has specific values that are repeatable.  You could conduct two experiments with persons with the exact same IQs.  A random effect, on the other hand, is a variable whose theoretical effect would be expected to vary from experiment to experiment, because the values of this variable cannot be replicated from experiment to experiment, because the variable itself is random. "
