Automatic posture change analysis of lactating sows by action localisation and tube optimisation from untrimmed depth videos
Biosystems Engineering ( IF 4.4 ) Pub Date : 2020-06-01 , DOI: 10.1016/j.biosystemseng.2020.04.005
Chan Zheng , Xiaofan Yang , Xunmu Zhu , Changxin Chen , Lina Wang , Shuqin Tu , Aqing Yang , Yueju Xue

The automatic detection of postures and posture changes in lactating sows with a computer-vision system has substantial potential for assessing their maternal abilities, enhancing their welfare and productivity, and reducing the risk of piglets being crushed. The objectives of this study were to (1) detect frame-level sow postures, (2) temporally localise posture-change actions, and (3) generate spatio-temporal action tubes parsed from long, untrimmed depth videos. Depth videos of five batches of lactating sows were recorded with a Kinect camera from a top view on a commercial farm. Three batches were used for training and validation, and the other two for testing. Four postures (standing, sitting, ventral lying, and lateral lying) were detected automatically, with a mean average precision (mAP) of 0.927. Clip-level localisation of eight posture-change actions achieved a mAP of 0.774 at a temporal intersection over union (tIoU) threshold of ≥ 0.5. A tube optimisation algorithm was then used to optimise and smooth the action tubes; at a mean tube IoU ≥ 0.8, the video-level mAP improved markedly, from 0.313 to 0.796. An error analysis deepened understanding of the causes of errors in action detection. The system was applied to two days of video from different sows, yielding the temporal regularity of posture-change probability, comparing action characteristics, and discerning maternal differences between the sows. The methodology can be applied in large-scale deployments for learning livestock action preferences and behavioural traits, thereby enhancing welfare and productivity on a farm.
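The clip-level evaluation above scores a predicted posture-change action as correct when its temporal overlap with the ground-truth segment meets a tIoU threshold (here ≥ 0.5). As a minimal sketch of that metric (the function name and frame-interval representation are illustrative assumptions, not the authors' code), the standard tIoU computation between two temporal segments is:

```python
def temporal_iou(pred, gt):
    """Temporal intersection over union of two [start, end] segments (in frames).

    pred, gt: sequences of two numbers, start <= end.
    Returns 0.0 for non-overlapping or degenerate segments.
    """
    inter_start = max(pred[0], gt[0])
    inter_end = min(pred[1], gt[1])
    intersection = max(0.0, inter_end - inter_start)
    union = (pred[1] - pred[0]) + (gt[1] - gt[0]) - intersection
    return intersection / union if union > 0 else 0.0

# A prediction spanning frames 100-220 against a ground truth of 120-240
# overlaps for 100 frames out of a 140-frame union, i.e. tIoU ≈ 0.714,
# which would count as a correct localisation at the tIoU >= 0.5 threshold.
print(temporal_iou([100, 220], [120, 240]))
```

The same intersection-over-union idea, averaged over the frames of a tube, underlies the mean tube-IoU ≥ 0.8 criterion used for the video-level evaluation.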

Updated: 2020-06-01