The Multi-camera Multiple People Tracking (MMPTRACK) dataset contains around 5 hours of video for training and 1.5 hours for validation. The dataset is fully annotated with person bounding boxes and corresponding person IDs. All videos are recorded with well-calibrated cameras placed at different angles around the scene, with the guarantee that the cameras' fields of view are connected (each camera's FoV overlaps with that of at least one other camera). All videos were collected in Microsoft indoor labs, where we built five simulated environments: retail, lobby, industry, cafe, and office. A total of 28 people participated in the recordings (14 in training, 7 in validation, and 7 in testing). All participants were paid and signed an agreement to release their data to the public for research use. The participants span various age, gender, and race groups.
We also provide ID annotations across the five simulated environments of MMPTRACK for retrieval evaluation, denoted as MMP-Retrieval.
Currently we provide:
- camera frames (rgb_[frameid]_[cameraid].jpg),
- bounding box annotations for each camera view (rgb_[frameid]_[cameraid].json), and
- person footpoint annotations in discretized coordinates in the world coordinate system (topdown_[frame_id].csv).
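As a minimal sketch, frames can be paired with their per-camera annotations purely by file name, following the patterns above. The directory path here is hypothetical; the dataset only specifies the file-name patterns:

```python
from pathlib import Path

# Hypothetical location of one environment's training split.
data_dir = Path("mmptrack/train/retail")

# Each frame image rgb_[frameid]_[cameraid].jpg shares its stem with
# the matching bounding box file rgb_[frameid]_[cameraid].json.
for img in sorted(data_dir.glob("rgb_*_*.jpg")):
    ann = img.with_suffix(".json")
    if ann.exists():
        print(f"{img.name} -> {ann.name}")
```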
```
# bounding box annotations for camera view, json format
{
    "4": [478, 40, 630, 206],
    ...
    'id': [l, t, r, b],
    ...
}

# person footpoint annotations, csv format
4, 253, 139, 904.15
...
id, x, y, z (ignore)
...
```
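As an illustration, both annotation formats can be parsed with the Python standard library. This is a sketch assuming the files for one frame are already downloaded; the file names below are hypothetical examples of the patterns above:

```python
import csv
import json

# Hypothetical file names following the naming patterns above.
bbox_path = "rgb_00001_01.json"
footpoint_path = "topdown_00001.csv"

# Bounding boxes: a dict mapping person id -> [left, top, right, bottom].
with open(bbox_path) as f:
    boxes = json.load(f)
for person_id, (l, t, r, b) in boxes.items():
    print(f"person {person_id}: box=({l}, {t}, {r}, {b})")

# Footpoints: one "id, x, y, z" row per person in discretized world
# coordinates; the z column can be ignored.
with open(footpoint_path) as f:
    for person_id, x, y, _z in csv.reader(f):
        print(f"person {person_id}: footpoint=({x}, {y})")
```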
Visualization code and instructions for using the calibration files can be found here.
The datasets can only be used for research purposes. The recipient of the datasets must agree to the terms and conditions.
Please cite the following paper when using our benchmark.
MMPTRACK: Large-scale Densely Annotated Multi-camera Multiple People Tracking Benchmark. arXiv:2111.15157
```
@misc{han2021mmptrack,
  title={MMPTRACK: Large-scale Densely Annotated Multi-camera Multiple People Tracking Benchmark},
  author={Xiaotian Han and Quanzeng You and Chunyu Wang and Zhizheng Zhang and Peng Chu and Houdong Hu and Jiang Wang and Zicheng Liu},
  year={2021},
  eprint={2111.15157},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}
```