WTP space models

Open In Colab

Import xlogit

The cell below installs and imports xlogit and checks whether a GPU is available. A GPU is not strictly required, but it can speed up computations for WTP-space models with random parameters.

[1]:
!pip install xlogit
from xlogit import MixedLogit
MixedLogit.check_if_gpu_available()
Requirement already satisfied: xlogit in /usr/local/lib/python3.10/dist-packages (0.2.7)
Requirement already satisfied: numpy>=1.13.1 in /usr/local/lib/python3.10/dist-packages (from xlogit) (1.23.5)
Requirement already satisfied: scipy>=1.0.0 in /usr/local/lib/python3.10/dist-packages (from xlogit) (1.11.4)
1 GPU device(s) available. xlogit will use GPU processing
[1]:
True

Yogurt Dataset

This dataset comprises revealed-preference data on 2,412 choices among four yogurt brands. It has a panel structure, with multiple choice situations observed for each of 100 households. Because households face different numbers of choice situations, the panel is unbalanced, which xlogit is capable of handling. Originally introduced by Jain et al. (1994), this dataset was ported from the logitr package for R.

Read data

[2]:
import pandas as pd
import numpy as np
df = pd.read_csv("https://raw.githubusercontent.com/arteagac/xlogit/master/examples/data/yogurt_long.csv")
df
[2]:
id choice feat price chid alt
0 1 0 0 8.1 1 dannon
1 1 0 0 6.1 1 hiland
2 1 1 0 7.9 1 weight
3 1 0 0 10.8 1 yoplait
4 1 1 0 9.8 2 dannon
... ... ... ... ... ... ...
9643 100 0 0 12.2 2411 yoplait
9644 100 0 0 8.6 2412 dannon
9645 100 0 0 4.3 2412 hiland
9646 100 1 0 7.9 2412 weight
9647 100 0 0 10.8 2412 yoplait

9648 rows × 6 columns

Convert column to dummy representation

The brand, stored in the alt column, needs to be converted to a dummy representation in order to be included in the model.

[3]:
df["brand_yoplait"] = 1*(df["alt"] == "yoplait")
df["brand_hiland"] = 1*(df["alt"] == "hiland")
df["brand_weight"] = 1*(df["alt"] == "weight")
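An equivalent way to create these columns is pandas' get_dummies, sketched here on a small stand-in frame (the notebook's df, with the same alt values, works the same way):

```python
import pandas as pd

# Equivalent dummy coding via pd.get_dummies, shown on a stand-in frame
demo = pd.DataFrame({"alt": ["dannon", "hiland", "weight", "yoplait"]})
dummies = pd.get_dummies(demo["alt"], prefix="brand").astype(int)
demo = pd.concat([demo, dummies], axis=1)
print(demo[["alt", "brand_yoplait", "brand_hiland", "brand_weight"]])
```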

Estimate model in WTP space

Note that you need to provide a scale_factor, which in this case is the price column. For models in WTP space, xlogit uses the negative of the price column.
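As a quick intuition for the role of the scale factor, here is a minimal numeric sketch of the standard WTP-space utility specification, U = scale × (WTP·x − price), with illustrative values only (this is not xlogit's internal code):

```python
import numpy as np

# WTP-space utility: U = scale * (wtp . x - price)
# Illustrative values; "scale" plays the role of _scale_factor
scale = 0.46
wtp = np.array([2.37, 1.95])   # WTP for feat and brand_yoplait
x = np.array([1.0, 1.0])       # attribute values for one alternative
price = 9.8
utility = scale * (wtp @ x - price)
```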

[4]:
varnames = ["feat", "brand_yoplait", "brand_hiland", "brand_weight"]
wtp = MixedLogit()
wtp.fit(X=df[varnames],
        y=df["choice"],
        varnames=varnames,
        ids=df["chid"],
        alts=df["alt"],
        panels=df['id'],
        randvars={"feat": "n", "brand_yoplait": "n", "brand_hiland": "n", "brand_weight": "n"},
        scale_factor=df["price"],
        n_draws=1000
        )

wtp.summary()
GPU processing enabled.
Optimization terminated successfully.
    Message: CONVERGENCE: REL_REDUCTION_OF_F_<=_FACTR*EPSMCH
    Iterations: 84
    Function evaluations: 104
Estimation time= 20.2 seconds
---------------------------------------------------------------------------
Coefficient              Estimate      Std.Err.         z-val         P>|z|
---------------------------------------------------------------------------
feat                    2.3717675     0.6195132     3.8284377      0.000132 ***
brand_yoplait           1.9544038     0.6661769     2.9337607       0.00338 **
brand_hiland          -12.1382781     1.4991326    -8.0968678      8.85e-16 ***
brand_weight           -8.6411789     1.5387448    -5.6157323      2.18e-08 ***
sd.feat                 2.3618763     0.6012260     3.9284337      8.79e-05 ***
sd.brand_yoplait        8.2849425     1.0365091     7.9931208      2.02e-15 ***
sd.brand_hiland         6.1774711     1.1621855     5.3153917      1.16e-07 ***
sd.brand_weight         8.2961161     1.1133869     7.4512428      1.28e-13 ***
_scale_factor           0.4564795     0.0408925    11.1629027      3.01e-28 ***
---------------------------------------------------------------------------
Significance:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Log-Likelihood= -1247.199
AIC= 2512.398
BIC= 2564.492

Provide alternative starting values

You can pass starting values to xlogit’s fit method using the init_coeff argument. The most important aspect to consider when passing starting values is to follow the same order in which xlogit lists the parameters in the summary table. The order of the coefficients is varnames + sd of varnames + scale_factor. An easy way to figure out the order of the coefficients is to run a test estimation and follow the order of the coefficients in the summary table.
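For the four varnames used below (all of them random), the expected layout of init_coeff can be sketched as follows (arbitrary values, for illustration only):

```python
import numpy as np

# Order expected by init_coeff for this WTP-space model:
# [means of varnames] + [sds of random varnames] + [scale_factor]
means = np.array([2.2, 1.9, -12.1, -7.3])  # feat, brand_yoplait, brand_hiland, brand_weight
sds = np.array([2.6, 8.4, 5.6, 7.4])       # sd.feat, sd.brand_yoplait, ...
init = np.concatenate([means, sds, [1.0]]) # 4 means + 4 sds + 1 scale
```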

The code below first estimates a model in preference space and then uses the estimated parameters as starting values for the model in WTP space.

Estimate model in preference space

[5]:
# Estimate a mixed logit model in preference space to obtain starting values for the WTP space model
varnames = ["price", "feat", "brand_yoplait", "brand_hiland", "brand_weight"]
ml = MixedLogit()
ml.fit(X=df[varnames],
       y=df["choice"],
       varnames=varnames,
       ids=df["chid"],
       alts=df["alt"],
       panels=df['id'],
       randvars={"feat": "n", "brand_yoplait": "n", "brand_hiland": "n", "brand_weight": "n"},
       n_draws=1000)

ml.summary()
GPU processing enabled.
Optimization terminated successfully.
    Message: The gradients are close to zero
    Iterations: 48
    Function evaluations: 54
Estimation time= 6.1 seconds
---------------------------------------------------------------------------
Coefficient              Estimate      Std.Err.         z-val         P>|z|
---------------------------------------------------------------------------
price                  -0.4564785     0.0397931   -11.4712971      1.07e-29 ***
feat                    1.0826681     0.2101995     5.1506685      2.81e-07 ***
brand_yoplait           0.8921554     0.1375538     6.4858654      1.07e-10 ***
brand_hiland           -5.5409249     0.4190916   -13.2212731      1.41e-38 ***
brand_weight           -3.9445356     0.2405401   -16.3986622      2.22e-57 ***
sd.feat                 1.0781550     0.2361639     4.5652820      5.24e-06 ***
sd.brand_yoplait        3.7818750     0.1950585    19.3884147       6.1e-78 ***
sd.brand_hiland         2.8199068     0.3515460     8.0214458      1.61e-15 ***
sd.brand_weight         3.7870137     0.1869526    20.2565465       2.2e-84 ***
---------------------------------------------------------------------------
Significance:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Log-Likelihood= -1247.199
AIC= 2512.398
BIC= 2564.492

Obtain starting values by formatting estimates from the ML model

[6]:
# Divide all estimates by the negative of the price coefficient
coef = ml.coeff_
coef = coef / -coef[0]

# Append 1 as the starting value of the scale parameter
coef = np.append(coef, 1)

# Drop the price coefficient
coef = coef[1:]
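The transformation above can be checked on a toy vector. Assuming the coefficient order [price, means…, standard deviations…] (as in the summary table of the preference-space model), the price entry maps to −1 before it is dropped:

```python
import numpy as np

# Toy coefficient vector ordered as [price, 4 means, 4 sds],
# mirroring ml.coeff_ from the cell above (values illustrative)
coeff = np.array([-0.46, 1.08, 0.89, -5.54, -3.94, 1.08, 3.78, 2.82, 3.79])
start = coeff / -coeff[0]      # price entry becomes -1; others become WTPs
start = np.append(start, 1.0)  # starting value for the scale parameter
start = start[1:]              # drop the price entry
# Result: 4 WTP means + 4 sds + 1 scale = 9 starting values
```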

Estimating the WTP Space Model using the starting values

[7]:
# Use init_coeff to provide starting values
# Use scale_factor to specify price as the scale parameter
varnames = ["feat", "brand_yoplait", "brand_hiland", "brand_weight"]
wtp = MixedLogit()
wtp.fit(X=df[varnames],
        y=df["choice"],
        varnames=varnames,
        ids=df["chid"],
        alts=df["alt"],
        panels=df['id'],
        randvars={"feat": "n", "brand_yoplait": "n", "brand_hiland": "n", "brand_weight": "n"},
        init_coeff=coef,
        scale_factor=df["price"],
        n_draws=4000)

wtp.summary()
GPU processing enabled.
Optimization terminated successfully.
    Message: CONVERGENCE: REL_REDUCTION_OF_F_<=_FACTR*EPSMCH
    Iterations: 73
    Function evaluations: 85
Estimation time= 50.4 seconds
---------------------------------------------------------------------------
Coefficient              Estimate      Std.Err.         z-val         P>|z|
---------------------------------------------------------------------------
feat                    2.2219166     0.6459974     3.4395134      0.000593 ***
brand_yoplait           1.9045552     0.7059636     2.6978096       0.00703 **
brand_hiland          -12.0965444     1.4722644    -8.2162851      3.38e-16 ***
brand_weight           -7.3143202     1.1454247    -6.3856841      2.04e-10 ***
sd.feat                 2.6308963     0.6602489     3.9847040      6.96e-05 ***
sd.brand_yoplait        8.3661417     1.0923714     7.6586971       2.7e-14 ***
sd.brand_hiland         5.6314080     1.0507816     5.3592564      9.15e-08 ***
sd.brand_weight         7.3543321     0.8909297     8.2546716      2.48e-16 ***
_scale_factor           0.4638383     0.0414612    11.1872810      2.32e-28 ***
---------------------------------------------------------------------------
Significance:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Log-Likelihood= -1247.314
AIC= 2512.627
BIC= 2564.721

Use a large number of random draws

xlogit supports estimation with very large numbers of random draws. If the number of draws is too large for the data to fit in GPU memory, use the batch_size parameter to split the processing into multiple batches. For instance, with batch_size=1000, xlogit processes 1,000 random draws at a time. This avoids overflowing GPU memory, as xlogit processes one batch at a time, computes the likelihood contributions, and averages them at the end, which does not affect the final estimates or log-likelihood. You can also increase batch_size depending on your GPU memory size. The example below estimates a model with 10,000 draws using batches of 2,000 random draws.
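Why batching does not change the results can be seen with plain numpy: averaging simulated probabilities over equally sized batches of draws gives the same result as averaging over all draws at once (a sketch, not xlogit's internal code):

```python
import numpy as np

rng = np.random.default_rng(0)
# Simulated probabilities for 5 observations across 10,000 draws
probs = rng.uniform(size=(5, 10_000))

# All draws at once vs. 5 batches of 2,000 draws averaged afterwards
full = probs.mean(axis=1)
batched = np.mean([b.mean(axis=1) for b in np.split(probs, 5, axis=1)], axis=0)
print(np.allclose(full, batched))  # → True
```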

[8]:
varnames = ["feat", "brand_yoplait", "brand_hiland", "brand_weight"]
wtp = MixedLogit()
wtp.fit(X=df[varnames],
        y=df["choice"],
        varnames=varnames,
        ids=df["chid"],
        alts=df["alt"],
        panels=df['id'],
        randvars={"feat": "n", "brand_yoplait": "n", "brand_hiland": "n", "brand_weight": "n"},
        n_draws=10000,
        batch_size=2000,
        init_coeff=coef,
        scale_factor=df["price"],
        verbose=2)

wtp.summary()
GPU processing enabled.
Optimization terminated successfully.
    Message: CONVERGENCE: REL_REDUCTION_OF_F_<=_FACTR*EPSMCH
    Iterations: 40
    Function evaluations: 58
Estimation time= 139.6 seconds
---------------------------------------------------------------------------
Coefficient              Estimate      Std.Err.         z-val         P>|z|
---------------------------------------------------------------------------
feat                    2.1505100     0.6509423     3.3036876      0.000968 ***
brand_yoplait           1.3023788     0.7384744     1.7636072        0.0779 .
brand_hiland          -11.8644846     1.4151361    -8.3839882      8.58e-17 ***
brand_weight           -6.8533816     1.2996173    -5.2733846      1.46e-07 ***
sd.feat                 2.6531236     0.7187810     3.6911432      0.000228 ***
sd.brand_yoplait        8.0006492     1.0914629     7.3302070      3.12e-13 ***
sd.brand_hiland         5.6846516     1.1605500     4.8982392      1.03e-06 ***
sd.brand_weight         8.9836146     1.2620149     7.1184695      1.43e-12 ***
_scale_factor           0.4610077     0.0417999    11.0289098      1.25e-27 ***
---------------------------------------------------------------------------
Significance:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Log-Likelihood= -1245.232
AIC= 2508.464
BIC= 2560.558

References

Jain, Dipak C, Naufel J Vilcassim, and Pradeep K Chintagunta. 1994. “A Random-Coefficients Logit Brand-Choice Model Applied to Panel Data.” Journal of Business & Economic Statistics 12 (3): 317–28.

The Yogurt dataset example was kindly developed by [@chevrotin](https://github.com/chevrotin).