
🎯 Contextual Bandits

Linear Models for Context-Aware Decision Making

💡 What are Contextual Bandits?

Contextual bandits extend the multi-armed bandit problem by incorporating context (features) into the decision-making process. Instead of there being a single globally best action, the optimal action depends on the current situation.

The Key Extension: At each time step, we observe a context vector (e.g., user features, time of day, product attributes) and choose an action based on both the context and our learned model. This enables personalization and generalization across similar contexts.
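The loop described above can be sketched in a few lines. This is an illustrative toy environment, not part of any library: the names (`true_theta`, `step`) and the linear-reward setup are assumptions made just to show that the best action changes with the context.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical environment: 2 actions, 3-dimensional contexts.
# Each action's expected reward is a hidden linear function of the context.
true_theta = np.array([[1.0, 0.0, 0.5],    # action 0
                       [0.0, 1.0, -0.5]])  # action 1

def step(context, action):
    """Reward = linear mean plus Gaussian noise."""
    return true_theta[action] @ context + rng.normal(0.0, 0.1)

# The contextual-bandit loop: observe a context, choose an action, get a reward.
for t in range(3):
    context = rng.normal(size=3)
    # A real agent would score actions with a *learned* model; using the
    # true parameters here just demonstrates context-dependent decisions.
    best = int(np.argmax(true_theta @ context))
    r = step(context, best)
    print(t, best, round(float(r), 3))
```

Note that which row of `true_theta` wins depends entirely on the drawn context, which is exactly what makes personalization possible.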


🎯 Key Takeaways

1. Context Matters

Contextual bandits enable personalized decisions by incorporating features of the current situation

2. Linear Models Work Well

LinUCB and LinTS use linear regression to model reward as a function of context

3. Confidence-Based Exploration

LinUCB uses ridge regression confidence bounds; LinTS uses Bayesian posterior sampling

4. Real-World Applications

Powers recommendation systems, ad placement, personalized medicine, and more

5. Bridge to Full RL

Contextual bandits sit one step below full reinforcement learning, which additionally models state transitions between decisions
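Takeaways 2 and 3 can be made concrete with a short sketch of disjoint LinUCB, assuming one ridge-regression model per arm. The class name, parameter names, and the tiny environment below are all illustrative, not from any particular library; `alpha` scales the confidence bonus.

```python
import numpy as np

class LinUCB:
    """Minimal sketch of disjoint LinUCB: one ridge regression per arm,
    with an upper-confidence bonus driving exploration."""

    def __init__(self, n_arms, dim, alpha=1.0, lam=1.0):
        self.alpha = alpha
        # Per-arm Gram matrix (ridge-regularized) and reward-weighted context sum.
        self.A = np.stack([lam * np.eye(dim) for _ in range(n_arms)])
        self.b = np.zeros((n_arms, dim))

    def choose(self, x):
        scores = []
        for A, b in zip(self.A, self.b):
            A_inv = np.linalg.inv(A)
            theta = A_inv @ b                              # ridge estimate
            bonus = self.alpha * np.sqrt(x @ A_inv @ x)    # confidence width
            scores.append(theta @ x + bonus)
        return int(np.argmax(scores))

    def update(self, arm, x, reward):
        self.A[arm] += np.outer(x, x)
        self.b[arm] += reward * x

# Tiny usage example with a hypothetical environment:
# arm 0 pays off on x[0], arm 1 on x[1].
rng = np.random.default_rng(1)
bandit = LinUCB(n_arms=2, dim=3)
for _ in range(200):
    x = rng.normal(size=3)
    arm = bandit.choose(x)
    mean_reward = x[0] if arm == 0 else x[1]
    bandit.update(arm, x, mean_reward + 0.1 * rng.normal())
```

LinTS would replace the deterministic bonus in `choose` with a draw from the Bayesian posterior over `theta` (a Gaussian with covariance proportional to `A_inv`), then act greedily on the sampled parameters.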