Real-time modeling of human travel patterns is essential to many applications. Numerous data-driven techniques have been proposed to model such patterns, but most are driven by data from a single view, e.g., a transportation view or a cellphone view, which leads to overfitting of these single-view models. To address this issue, we propose coMobile, a human mobility modeling technique based on a generic multi-view learning framework. In coMobile, we first improve the performance of single-view models using tensor decomposition with correlated contexts, and then integrate these improved single-view models for multi-view learning, iteratively obtaining mutually reinforced knowledge about real-time human mobility at urban scale. We implement coMobile on an extremely large dataset from the Chinese city of Shenzhen, including data on taxi, bus, and subway passengers along with cellphone users, capturing 27 thousand vehicles and 10 million urban residents. The results show that our approach outperforms a single-view model by 51% on average. This paper was published in ACM SIGSPATIAL 2015.
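The abstract mentions improving single-view models via tensor decomposition with correlated contexts. As a minimal illustration only (the paper's actual tensor construction and algorithm are not given here), the sketch below assumes a 3-mode mobility tensor, e.g., region x time x context, and factorizes it with a standard CP (CANDECOMP/PARAFAC) decomposition fitted by alternating least squares; the function names `khatri_rao` and `cp_als` are our own, not from the paper.

```python
import numpy as np

def khatri_rao(X, Y):
    # Column-wise Kronecker product: (m, R) and (n, R) -> (m*n, R).
    return np.einsum('ir,jr->ijr', X, Y).reshape(-1, X.shape[1])

def cp_als(T, rank, n_iters=100, seed=0):
    """Rank-`rank` CP decomposition of a 3-mode tensor T via alternating
    least squares. Returns the three factor matrices and the reconstruction."""
    rng = np.random.default_rng(seed)
    I, J, K = T.shape
    A = rng.standard_normal((I, rank))
    B = rng.standard_normal((J, rank))
    C = rng.standard_normal((K, rank))
    for _ in range(n_iters):
        # Update each factor in turn, holding the other two fixed:
        # each step is a linear least-squares problem on a mode unfolding.
        A = np.linalg.lstsq(khatri_rao(B, C),
                            T.reshape(I, -1).T, rcond=None)[0].T
        B = np.linalg.lstsq(khatri_rao(A, C),
                            T.transpose(1, 0, 2).reshape(J, -1).T,
                            rcond=None)[0].T
        C = np.linalg.lstsq(khatri_rao(A, B),
                            T.transpose(2, 0, 1).reshape(K, -1).T,
                            rcond=None)[0].T
    T_hat = np.einsum('ir,jr,kr->ijk', A, B, C)
    return A, B, C, T_hat

# Illustrative use: recover a synthetic low-rank "mobility" tensor.
rng = np.random.default_rng(1)
T = np.einsum('ir,jr,kr->ijk',
              rng.standard_normal((6, 3)),   # e.g., 6 regions
              rng.standard_normal((5, 3)),   # e.g., 5 time slots
              rng.standard_normal((4, 3)))   # e.g., 4 contexts
A, B, C, T_hat = cp_als(T, rank=3)
rel_err = np.linalg.norm(T - T_hat) / np.linalg.norm(T)
```

In a setting like coMobile's, the value of such a factorization is that the low-rank structure lets sparse, incomplete observations from one view be smoothed and missing entries estimated before the views are combined.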