Original:
UMDFaces is a face dataset divided into two parts:
Still Images - 367,888 face annotations for 8,277 subjects.
Video Frames - Over 3.7 million annotated video frames from over 22,000 videos of 3,100 subjects.
The dataset contains 367,888 face annotations for 8,277 subjects divided into 3 batches. We provide human curated bounding boxes for faces. We also provide the estimated pose (yaw, pitch, and roll), locations of twenty-one keypoints, and gender information generated by a pre-trained neural network.
The second part contains 3,735,476 annotated video frames extracted from a total of 22,075 videos for 3,107 subjects. Again, we also provide the estimated pose (yaw, pitch, and roll), locations of twenty-one keypoints, and gender information generated by a pre-trained neural network.
Translation:
Overview
UMDFaces is a face dataset divided into two parts:
Still Images - 367,888 face annotations for 8,277 subjects.
Video Frames - Over 3.7 million annotated video frames from more than 22,000 videos of 3,100 subjects.
Part 1 - Still Images
The dataset contains 367,888 face annotations for 8,277 subjects, divided into 3 batches. We provide human-curated bounding boxes for the faces. We also provide the estimated pose (yaw, pitch, and roll), the locations of twenty-one keypoints, and gender information generated by a pre-trained neural network.
In addition, we also release a new face verification test protocol based on batch 3.
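As a quick illustration, here is a minimal Python sketch of how one record from a batch annotation CSV might be read and the corresponding face cropped out. The file name (umdfaces_batch3_ultraface.csv), the image root directory, and the column names (FILE, SUBJECT_ID, FACE_X, FACE_Y, FACE_WIDTH, FACE_HEIGHT, YAW, PITCH, ROLL, PR_MALE) are assumptions made for illustration and may not match the released files exactly.

```python
import pandas as pd
from PIL import Image

# Assumed paths and column names; the actual CSV schema of the
# UMDFaces batches may differ, so treat these as placeholders.
ANNOTATION_CSV = "umdfaces_batch3/umdfaces_batch3_ultraface.csv"  # hypothetical file name
IMAGE_ROOT = "umdfaces_batch3"

annotations = pd.read_csv(ANNOTATION_CSV)
row = annotations.iloc[0]

# Human-curated face bounding box (x, y, width, height in pixels).
x, y = row["FACE_X"], row["FACE_Y"]
w, h = row["FACE_WIDTH"], row["FACE_HEIGHT"]

# Estimated head pose (degrees) and gender score from the pre-trained network.
yaw, pitch, roll = row["YAW"], row["PITCH"], row["ROLL"]
pr_male = row["PR_MALE"]

# Crop the annotated face out of the full image.
image = Image.open(f"{IMAGE_ROOT}/{row['FILE']}")
face = image.crop((x, y, x + w, y + h))
face.save("face_crop.jpg")

print(f"subject={row['SUBJECT_ID']}, yaw={yaw:.1f}, pitch={pitch:.1f}, "
      f"roll={roll:.1f}, P(male)={pr_male:.2f}")
```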
Part 2 - Video Frames
The second part contains 3,735,476 annotated video frames extracted from a total of 22,075 videos for 3,107 subjects. Again, we also provide the estimated pose (yaw, pitch, and roll), the locations of twenty-one keypoints, and gender information generated by a pre-trained neural network.
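Likewise, because every video frame comes with an estimated pose, a small sketch like the one below could be used to keep only roughly frontal frames for each subject. The file name and column names (YAW, PITCH, ROLL, SUBJECT_ID) are again assumptions rather than the exact released schema.

```python
import pandas as pd

# Hypothetical annotation file for the video-frame part of the dataset.
VIDEO_CSV = "umdfaces_videos/umdfaces_videos_ultraface.csv"

frames = pd.read_csv(VIDEO_CSV)

# Keep only roughly frontal faces using the estimated pose angles (degrees).
frontal = frames[
    (frames["YAW"].abs() < 15)
    & (frames["PITCH"].abs() < 15)
    & (frames["ROLL"].abs() < 15)
]

# Count how many near-frontal frames each subject contributes.
per_subject = frontal.groupby("SUBJECT_ID").size().sort_values(ascending=False)
print(per_subject.head())
```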
You can download the dataset from the official website; I have also shared a copy on Baidu Netdisk. Follow my WeChat official account and reply "2020102002" to get the download link.