Motivated by the phenomenon of strategic agents gaming a recommendation system to maximize the number of times they are recommended to users, we study a strategic variant of the linear contextual bandit problem, where the arms strategically misreport privately observed contexts to the learner. We treat the algorithm design problem as one of \emph{mechanism design} under uncertainty and propose the Optimistic Grim Trigger Mechanism (OptGTM), which minimizes regret while simultaneously incentivizing the agents to be approximately truthful. We show that OptGTM achieves sublinear regret even though the agents are unrestricted in their ability to game the learning algorithm by misreporting contexts, whereas failing to account for the strategic nature of the agents results in linear regret. However, we show that a trade-off between incentive compatibility and regret minimization is unavoidable. More broadly, this work provides insight into the intersection of online learning and mechanism design.