Video games have become an attractive testbed for evaluating AI systems: they capture some aspects of real-world complexity (rich visual stimuli and non-trivial decision policies) while abstracting away other sources of complexity (e.g., sensory transduction and motor planning). Some AI researchers have reported human-level performance for their systems, but we still have very little insight into how humans actually learn to play video games. This talk will present empirical data on human video game learning, indicating that humans learn very differently from most current AI systems, particularly those based on deep learning. Humans can induce object-oriented, relational models from a small amount of experience, and these models allow them to learn quickly, explore intelligently, plan efficiently, and generalize flexibly. These aspects of human-like learning can be captured by a model that learns through a form of program induction.
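
To make the last point concrete, here is a minimal, self-contained sketch of one thing "program induction" can mean in a game setting: enumerating a small space of candidate object-oriented rules and keeping the program that best explains a handful of observed transitions. This is purely illustrative and is not the speaker's actual model; the `Rule` class, the observation data, and the hypothesis space are all invented for the example.

```python
# Illustrative toy only (not the talk's model): induce a tiny
# "program" of object-oriented rules from a few game observations.
from dataclasses import dataclass
from itertools import product

@dataclass(frozen=True)
class Rule:
    """Hypothetical rule: touching an object of `kind` changes reward by `delta`."""
    kind: str
    delta: int

    def predict(self, kind_touched: str) -> int:
        return self.delta if kind_touched == self.kind else 0

# Observed transitions: (object kind touched, observed reward change).
# Data is invented purely for illustration.
observations = [("coin", 1), ("coin", 1), ("spike", -1), ("coin", 1)]

# Hypothesis space: every (kind, delta) combination we consider.
kinds = {k for k, _ in observations}
hypotheses = [Rule(k, d) for k, d in product(kinds, (-1, 0, 1))]

def score(program: list[Rule]) -> int:
    """Count the observations that a candidate program explains exactly."""
    def predict(kind: str) -> int:
        return sum(rule.predict(kind) for rule in program)
    return sum(predict(kind) == reward for kind, reward in observations)

# Enumerate small programs (here: pairs of rules) and keep the best
# scorer -- a crude stand-in for more sophisticated program search.
programs = [list(pair) for pair in product(hypotheses, repeat=2)]
best = max(programs, key=score)

print("Induced program:", best)
print("Predicted reward for touching a coin:",
      sum(rule.predict("coin") for rule in best))
```

Because the induced hypothesis is a symbolic program over object kinds rather than a pixel-level policy, it immediately transfers to new level layouts containing the same objects, which is one way a program-induction learner can generalize from very little experience.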