As computers are used ever more frequently at work and in daily life, sedentary behaviour has become increasingly common: office workers spend up to 71% of their working hours seated, and outside working hours they spend further long stretches sitting while using phones or computers or watching television [1]. Sedentary behaviour raises the risk of type 2 diabetes and cardiovascular disease and even increases all-cause mortality [2]. In addition, poor sitting posture loads the lumbar spine and can lead to myofascial pain syndrome, causing musculoskeletal pain and discomfort in the lower back [3]. In adolescents, prolonged poor sitting posture aggravates muscular imbalance and can cause scoliosis; in severe cases it alters spinal morphology, resulting in Scheuermann's disease and impaired pulmonary function [4]. Monitoring and recognizing sitting posture so that users can be alerted to poor posture and prolonged sitting is therefore essential.
At present, sitting posture classification and recognition relies mainly on images or wearable sensors. Image-based methods use cameras to capture posture images or video and classify the pose: Bustamante et al. classified pictures captured by fixed cameras and by drones to distinguish standing, sitting, lying and other postures [5], and Li et al. reconstructed skeleton graphs from images for sitting posture monitoring [6]. Because image recognition is affected by environmental factors such as viewing angle and illumination, many researchers instead collect posture data with inertial measurement units (IMUs), which must be attached to the body [7]; pressure sensors can also be mounted directly on the seat surface, avoiding the poor comfort and inconvenient fixation of body-worn triaxial accelerometers [8].
Recognition accuracy varies with the number of pressure sensors and the classification algorithm used. Wang et al. placed an 8×8 pressure-sensing array on both the seat and the backrest and built a low-complexity spiking neural network that recognizes 15 sitting postures with 88.52% accuracy [9]. Wan et al. used a 32×32 pressure-sensing array and a support vector machine (SVM) to recognize 4 postures with 89.60% accuracy [10]. Farhani et al. arranged 7 pressure-sensor units in a triangle and used a decision tree (DT) to recognize 7 postures with 94.00% accuracy [11]. Jaffery et al. recognized 5 postures with 95.41% accuracy using a 3×3 pressure-sensor matrix and machine learning algorithms [12]. Ran et al. used an 11×13 pressure-sensing array and a five-layer artificial neural network (ANN) to recognize 7 postures with 97.07% accuracy [13]. Hu et al. used 6 pressure-sensor units (1 on the backrest, 2 on the left and right armrests, 3 on the seat) to recognize 7 postures with 97.78% accuracy [14]. Ahmad et al. acquired pressure data from 16 array units and recognized 4 postures with an accuracy of 99.03% [15]. Ma et al. used 12 pressure-sensor units and a DT to recognize 5 postures with 99.47% accuracy [16]. Fan et al. applied a convolutional neural network (CNN) to heat maps collected by a 44×52 pressure-sensing array and recognized 5 postures with 99.82% accuracy [17].
This paper presents a sitting posture monitoring and recognition system that performs recognition on two representations of the same pressure data: a one-dimensional array and the corresponding heat map. The one-dimensional array is classified with conventional machine learning algorithms, while the heat map is classified after feature extraction by a lightweight CNN. In addition, guided by the body's load distribution in different postures, the array area is reduced and the sensors are re-laid as a distributed array, cutting cost while improving classification efficiency.
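Both representations are derived from the same frame: each readout of the 44×52 array is a flat vector of 2,288 pressure values that can also be rendered as a heat-map image. The sketch below is a minimal illustration of that conversion, assuming a row-major readout order and the standard NumPy/Matplotlib toolchain; it stands in for, but is not, the system's actual acquisition code.

```python
import numpy as np
import matplotlib.pyplot as plt

ROWS, COLS = 44, 52  # sensing-array size used in this work (44 x 52 = 2,288 cells)

def frame_to_heatmap(frame_1d, out_path="heatmap.png"):
    """Reshape one flattened pressure frame into a 2-D map and save it as an image."""
    assert frame_1d.size == ROWS * COLS, "expected 2,288 pressure values per frame"
    grid = frame_1d.reshape(ROWS, COLS)  # row-major order is an assumption, not stated in the paper
    plt.imshow(grid, cmap="jet", interpolation="nearest")
    plt.axis("off")
    plt.savefig(out_path, bbox_inches="tight", pad_inches=0)
    plt.close()

# Random values stand in for a real frame from the acquisition board.
frame_to_heatmap(np.random.rand(ROWS * COLS))
```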
1 Overview of the Sitting Posture Monitoring and Recognition System
2 Classification Algorithms
3 Experiments and Result Analysis
4 Conclusion
In this paper, a sitting posture monitoring and recognition system was designed around a flexible pressure-sensing array: a 44×52 sensing array and an acquisition board collect the data, which are visualized as heat maps. Conventional machine learning algorithms are trained on the one-dimensional data of length 2,288, and a classifier is trained on features of the image data extracted by a pretrained lightweight CNN; the system accurately recognizes 7 sitting postures: upright, leaning forward, leaning backward, leaning left, leaning right, left leg crossed, and right leg crossed. The accuracies of several machine learning algorithms and the relationship between sensing-array configuration and performance were also compared. The results show that the logistic regression (LR) algorithm performs best: predicting the class of a single sample takes only 0.996 ms, the accuracy reaches 99.83%, and the model occupies only 1.2 MB. To further improve real-time classification efficiency and reduce hardware cost, the sensors were re-laid as a distributed array over key body-contact regions; with a 12×12 array and LR, the 7 postures are still correctly classified while the array is 93.7% smaller than the original, the time to recognize a single sample falls from 0.996 ms to 0.208 ms, the classification accuracy is 99.64%, and the model occupies only 0.033 MB.
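The reported LR figures (per-sample prediction time and accuracy) can be reproduced in outline with a few lines of scikit-learn. The sketch below is only a sketch: it uses random placeholder data in place of the recorded pressure frames and default hyperparameters, since the exact training settings are not given in this section.

```python
import time
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Placeholder data standing in for the recorded frames: each row is one
# flattened 44x52 frame (2,288 values); labels are the 7 posture classes.
rng = np.random.default_rng(0)
X = rng.random((700, 2288))
y = rng.integers(0, 7, size=700)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

clf = LogisticRegression(max_iter=1000)  # default solver/penalty; the paper's settings may differ
clf.fit(X_train, y_train)
print("accuracy:", clf.score(X_test, y_test))

# Average single-sample prediction latency over the test set.
start = time.perf_counter()
for sample in X_test:
    clf.predict(sample.reshape(1, -1))
elapsed_ms = (time.perf_counter() - start) * 1000 / len(X_test)
print(f"mean single-sample prediction time: {elapsed_ms:.3f} ms")
```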
The system sends the classification result to a host computer. When the result is leaning left, leaning right, leaning forward, leaning backward, or crossed legs, the user is warned of the poor posture and told which posture it is; if the built-in timer records continuous pressure on the sensors for more than 2 hours, the user is reminded to stand up and move around. The system is well suited to deployment on embedded devices for sitting posture monitoring and recognition, offering a small memory footprint, fast recognition, and high accuracy.
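A minimal sketch of the host-side alert logic described above is given below, assuming hypothetical read_frame, classify, and notify callbacks in place of the system's real interfaces. It flags the leaning and crossed-leg classes and issues a stand-up reminder once the array has been under continuous load for more than 2 hours.

```python
import time

BAD_POSTURES = {"lean_left", "lean_right", "lean_forward", "lean_backward",
                "left_leg_crossed", "right_leg_crossed"}
SEDENTARY_LIMIT_S = 2 * 3600  # continuous-sitting threshold: 2 hours

def monitor(read_frame, classify, notify):
    """Poll pressure frames, warn on poor posture, and remind the user to stand up."""
    seated_since = None
    while True:
        frame = read_frame()              # one 44x52 pressure frame (hypothetical callback)
        if frame.max() > 0:               # array under load: someone is seated
            seated_since = seated_since or time.time()
            posture = classify(frame)     # returns one of the 7 posture labels (hypothetical)
            if posture in BAD_POSTURES:
                notify(f"Poor sitting posture detected: {posture}")
            if time.time() - seated_since > SEDENTARY_LIMIT_S:
                notify("You have been seated for over 2 hours; please stand up and move around.")
                seated_since = time.time()  # restart the timer after the reminder
        else:
            seated_since = None           # user left the chair: reset the timer
        time.sleep(1)
```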
References
[1] BRIERLEY M L, CHATER A M, SMITH L R, et al. The effectiveness of sedentary behaviour reduction workplace interventions on cardiometabolic risk markers: A systematic review[J]. Sports Medicine, 2019, 49(11): 1739-1767.
[2] MURTAGH E M, MURPHY M H, MILTON K, et al. Interventions outside the workplace for reducing sedentary behaviour in adults under 60 years of age[DB/OL]. (2020-07-17)[2023-08-05]. https://www.cochranelibrary.com/cdsr/doi/10.1002/14651858.CD012554.pub2/full.
[3] SOMAYEH T, MOHSEN R, MOHAMMAD A, et al. Recommended maximum holding time of common static sitting postures of office workers[J]. International Journal of Occupational Safety and Ergonomics, 2023, 29(2): 847-854.
[4] 罗朝淑.青少年要警惕特发性脊柱侧凸[N].科技日报, 2023-05-10(8).
[5] BUSTAMANTE A, BELMONTE L, PEREIRA A, et al. Bio-inspired systems and applications: from robotics to ambient intelligence. IWINAC 2022. Lecture notes in computer science[C]. New York: Springer, Cham, 2022. https://doi.org/10.1007/978-3-031-06527-9_48.
[6] LI L, YANG G, LI Y, et al. Abnormal sitting posture recognition based on multi-scale spatiotemporal features of skeleton graph[J]. Engineering Applications of Artificial Intelligence, 2023, 123(Part B): 106374.
[7] 张晶莹,刘芳羽,尚林乐,等.基于三轴传感器的坐姿监测系统开发[J].电脑知识与技术,2023,19(10):35-39.
[8] 周瑞,李时维,李绍成,等. 压阻式柔性压力传感器阵列信号采集系统设计[J].传感器与微系统,2021,40(9):104-107.
[9] WANG J, HAFIDH B, DONG H, et al. Sitting posture recognition using a spiking neural network[J]. IEEE Sensors Journal, 2021, 21(2): 1779-1786.
[10] WAN Q, ZHAO H, LI J, et al. Hip positioning and sitting posture recognition based on human sitting pressure image[J]. Sensors, 2021, 21: 426.
[11] FARHANI G, ZHOU Y, DANIELSON P, et al. Implementing machine learning algorithms to classify postures and forecast motions when using a dynamic chair[J]. Sensors, 2022, 22(1): 400.
[12] JAFFERY M, ASHRAF M, ALMOGREN A, et al. FSR-based smart system for detection of wheelchair sitting postures using machine learning algorithms and techniques[J]. Journal of Sensors, 2022: 1901058.
[13] RAN X, WANG C, XIAO Y, et al. A portable sitting posture monitoring system based on a pressure sensor array and machine learning[J]. Sensors and Actuators A: Physical, 2021, 331: 112900.
[14] HU Q, TANG X, TANG W. A smart chair sitting posture recognition system using flex sensors and FPGA implemented artificial neural network[J]. IEEE Sensors Journal, 2020, 20(14): 8007-8016.
[15] AHMAD J, SIDEN J, ANDERSSON H. A proposal of implementation of sitting posture monitoring system for wheelchair utilizing machine learning methods[J]. Sensors, 2021, 21(19): 6349.
[16] MA C, LI W, GRAVINA R, et al. Posture detection based on smart cushion for wheelchair users[J]. Sensors, 2017, 17(4): 719.
[17] FAN Z, HU X, CHEN W, et al. A deep learning based 2-dimensional hip pressure signals analysis method for sitting posture recognition[J]. Biomedical Signal Processing and Control, 2022, 73: 103432.
[18] 白世琪.基于人体工程学的坐垫压力分布研究[D].杭州:浙江理工大学,2013.
[19] 奚广生.座椅接触面压力场分布与坐姿识别的研究[D].哈尔滨:哈尔滨理工大学,2016.
[20] RAY S. A quick review of machine learning algorithms[C]// 2019 international conference on machine learning, big data, cloud and parallel computing (COMITCon). Faridabad, India: IEEE, 2019: 35-39.
[21] DONG S, WANG P, ABBAS K. A survey on deep learning and its applications[J]. Computer Science Review, 2021, 40: 100379.
[22] LI Z, LIU F, YANG W, et al. A survey of convolutional neural networks: Analysis, applications, and prospects[J]. IEEE Transactions on Neural Networks and Learning Systems, 2022, 33(12): 6999-7019.
[23] ZHOU Y, CHEN S, WANG Y, et al. Review of research on lightweight convolutional neural networks[C]// 2020 IEEE 5th information technology and mechatronics engineering conference (ITOEC). Chongqing,China: IEEE, 2020: 1713-1720.
[24] SZEGEDY C, LIU W, JIA Y, et al. Going deeper with convolutions[J]. arXiv preprint arXiv: 1409.4842, 2014.
[25] IANDOLA F N, HAN S, MOSKEWICZ M, et al. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size[J]. arXiv preprint arXiv: 1602.07360, 2016.
[26] ZHANG X, ZHOU X, LIN M, et al. ShuffleNet: An extremely efficient convolutional neural network for mobile devices[J]. arXiv preprint arXiv: 1707.01083, 2017.
[27] HOWARD A, ZHU M, CHEN B, et al. MobileNets: Efficient convolutional neural networks for mobile vision applications[J]. arXiv preprint arXiv: 1704.04861, 2017.
[28] SANDLER M, HOWARD A, ZHU M, et al. MobileNetV2: Inverted residuals and linear bottlenecks[J]. arXiv preprint arXiv: 1801.04381, 2018.
[29] ZOPH B, VASUDEVAN V, SHLENS J, et al. Learning transferable architectures for scalable image recognition[J]. arXiv preprint arXiv: 1707.07012, 2017.
[30] TAN M, CHEN B, PANG R, et al. MnasNet: Platform-aware neural architecture search for mobile[C]// 2019 IEEE/CVF conference on computer vision and pattern recognition (CVPR). Long Beach, CA, USA: IEEE, 2019: 2815-2823.
Li Jin1  Zhang Yiyue2  Li Yan1  Sun Junjun1  Zhang Ruqi3
1. Beijing College, Beijing Institute of Technology  2. School of Integrated Circuits and Electronics, Beijing Institute of Technology  3. School of Cyberspace Science and Technology, Beijing Institute of Technology