Learning a complex task such as low-level robot manoeuvres while preventing failure of monocular SLAM is challenging for both robots and humans. The data-driven identification of basic motion strategies that prevent monocular SLAM failure is a largely unexplored problem. We devise a computational model for representing and inferring such strategies, formulated as a Markov decision process whose reward function encodes both the goal of the task and information about the strategy. We show how this reward function can be learnt from expert demonstrations using Inverse Reinforcement Learning. The resulting framework makes it possible to identify how a few chosen parameters affect the quality of monocular SLAM estimates. The learnt reward function captures the information contained in the expert demonstrations and the underlying expert strategy, and its structure admits an intuitive explanation. We also show a significant improvement in performance over an intuitive hand-crafted reward function.
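To make the core idea concrete, the following is a minimal toy sketch (not the paper's actual setup, SLAM task, or algorithm) of recovering a linear reward from expert demonstrations by matching feature expectations, a standard Inverse Reinforcement Learning scheme. The chain MDP, one-hot state features, and the always-move-right expert are all illustrative assumptions.

```python
import numpy as np

# Hypothetical 1-D chain MDP: 5 states with one-hot features, two actions
# (0 = left, 1 = right). The expert always moves right, towards state 4.
# We recover a linear reward w . phi(s) by gradient ascent on the
# difference between expert and learner feature expectations.

N_STATES, N_ACTIONS, HORIZON = 5, 2, 6

def step(s, a):
    # Deterministic chain dynamics with clamped ends.
    return max(0, s - 1) if a == 0 else min(N_STATES - 1, s + 1)

def feature_expectations(trajs):
    # With one-hot features, feature expectations are mean state-visit counts.
    mu = np.zeros(N_STATES)
    for traj in trajs:
        for s in traj:
            mu[s] += 1.0
    return mu / len(trajs)

def greedy_policy(w):
    # Finite-horizon value iteration under reward w (paid on the next state).
    V = np.zeros(N_STATES)
    for _ in range(HORIZON):
        Q = np.array([[w[step(s, a)] + V[step(s, a)] for a in range(N_ACTIONS)]
                      for s in range(N_STATES)])
        V = Q.max(axis=1)
    return Q.argmax(axis=1)

def rollout(policy, s0=0):
    traj = [s0]
    for _ in range(HORIZON):
        traj.append(step(traj[-1], policy[traj[-1]]))
    return traj

# Expert demonstrations: a policy that always moves right.
expert_trajs = [rollout(np.ones(N_STATES, dtype=int)) for _ in range(10)]
mu_expert = feature_expectations(expert_trajs)

# Learn reward weights by matching expert feature expectations.
w = np.zeros(N_STATES)
for _ in range(50):
    mu_learner = feature_expectations([rollout(greedy_policy(w))])
    w += 0.1 * (mu_expert - mu_learner)

print(int(np.argmax(w)))  # learned reward peaks at the expert's goal state
```

In this sketch the learnt weights are largest at the state the expert steers towards, which is the sense in which an estimated reward can "capture the inherent expert strategy"; the paper applies the same principle to parameters affecting monocular SLAM quality rather than to a toy chain.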