Title | Understand Human Behaviours with IoT Sensors: From Physical to Physiological Sensing
Name |
Name (Pinyin) | XIE Wentao
Student ID | 11951009
Degree Type | Doctoral (PhD)
Degree Discipline | Computer Science and Engineering
Advisor |
Advisor Affiliation | Department of Computer Science and Engineering
External Advisor | ZHANG Qian (张黔)
External Advisor Affiliation | The Hong Kong University of Science and Technology
Thesis Defence Date | 2024-08-02
Thesis Submission Date | 2024-09-03
Degree-Granting Institution | The Hong Kong University of Science and Technology
Degree-Granting Location | Hong Kong
Abstract | This thesis explores the design of Internet of Things (IoT) sensing systems to understand human behaviours in both physical and physiological aspects. The objectives of this thesis are three-fold: (i) to study and showcase new sensing principles that capture physical and physiological human behaviours, (ii) to design innovative algorithms and techniques to analyse the resulting physical and physiological indicators, and (iii) to demonstrate various applications where behavioural sensing can be enabled with IoT sensors. These objectives are fulfilled through five research attempts, each of which presents a complete development cycle of problem formulation, technical design, system implementation, application showcase, and empirical study. The first two attempts focus on physical sensing and propose two novel eyewear interaction methods enabled by facial and rim gestures, achieved with miniature ultrasonic and vibrational sensors. The last three attempts focus on physiological sensing and present a series of new home-based pulmonary health management systems using ubiquitous speakers, earphones, and depth cameras.
Keywords |
Language | English
Training Category | Joint training programme
Year of Enrolment | 2019
Year of Degree Conferral | 2024-09
参考文献列表 | [1] X. Xu, J. Yu, Y. Chen, Y. Zhu, L. Kong, and M. Li, “Breathlistener: Fine-grained breathing monitoring in driving environments utilizing acoustic signals,” in Pro- ceedings of the 17th Annual International Conference on Mobile Systems, Applications, and Services. ACM, 2019, pp. 54–66. [2] A. Wang, J. E. Sunshine, and S. Gollakota, “Contactless infant monitoring using white noise,” in The 25th Annual International Conference on Mobile Computing and Networking, 2019, pp. 1–16. [3] B. L. Graham, I. Steenbruggen, M. R. Miller, I. Z. Barjaktarevic, B. G. Cooper, G. L. Hall, T. S. Hallstrand, D. A. Kaminsky, K. McCarthy, M. C. McCormack et al., “Stan- dardization of spirometry 2019 update. an official american thoracic society and eu- ropean respiratory society technical statement,” American journal of respiratory and critical care medicine, vol. 200, no. 8, pp. e70–e88, 2019. [4] K. L. Wood, “Airflow, lung volumes, and flow-volume loop —pulmonary dis- orders —msd manual professional edition,” https://www.msdmanuals.com/ professional/pulmonary-disorders/tests-of-pulmonary-function-pft/airflow,-lung-volumes,-and-flow-volume-loop, 2020, accessed Aug 23, 2021. [5] J. B. Sterner, M. J. Morris, J. M. Sill, and J. A. Hayes, “Inspiratory flow-volume curve evaluation for detecting upper airway disease,” Respiratory care, vol. 54, no. 4, pp. 461–466, 2009. [6] MarketsandMarkets, “Internet of things (iot) market size, statistics, trends, forcasts,” 2024. [Online]. Available: https://www.marketsandmarkets.com/ Market-Reports/internet-of-things-market-573.html [7] S. Jalaliniya and D. Mardanbegi, “Seamless interaction with scrolling contents on eyewear computers using optokinetic nystagmus eye movements,” in Proceed- ings of the Ninth Biennial ACM Symposium on Eye Tracking Research & Applications. Charleston South Carolina: ACM, Mar. 2016, pp. 295–298. [8] S. Ahn and G. Lee, “Gaze-Assisted Typing for Smart Glasses,” in Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology, ser. UIST ’19. New York, NY, USA: Association for Computing Machinery, Oct. 2019, pp. 857–869. [9] P.-S. Ku, T.-Y. Wu, and M. Y. Chen, “EyeExpression: exploring the use of eye expres- sions as hands-free input for virtual and augmented reality devices,” in Proceedings of the 23rd ACM Symposium on Virtual Reality Software and Technology. Gothenburg Sweden: ACM, Nov. 2017, pp. 1–2. [10] ——, “EyeExpress: Expanding Hands-free Input Vocabulary using Eye Expres- sions,” in The 31st Annual ACM Symposium on User Interface Software and Technology Adjunct Proceedings. Berlin Germany: ACM, Oct. 2018, pp. 126–127. [11] P. Graybill and M. Kiani, “Eyelid Drive System: An Assistive Technology Employ- ing Inductive Sensing of Eyelid Movement,” IEEE Transactions on Biomedical Circuits and Systems, vol. 13, no. 1, pp. 203–213, Feb. 2019. [12] K. Masai, K. Kunze, and M. Sugimoto, “Eye-based Interaction Using Embedded Optical Sensors on an Eyewear Device for Facial Expression Recognition,” in Pro- ceedings of the Augmented Humans International Conference, ser. AHs ’20. New York, NY, USA: Association for Computing Machinery, Jun. 2020, pp. 1–10. [13] A. Tanwear, X. Liang, Y. Liu, A. Vuckovic, R. Ghannam, T. Böhnert, E. Paz, P. P. Freitas, R. Ferreira, and H. Heidari, “Spintronic Sensors Based on Magnetic Tun- nel Junctions for Wireless Eye Movement Gesture Control,” IEEE Transactions on Biomedical Circuits and Systems, vol. 14, no. 6, pp. 1299–1310, Dec. 2020. [14] S. Hickson, N. Dufour, A. Sud, V. Kwatra, and I. 
Essa, “Eyemotion: Classifying Fa- cial Expressions in VR Using Eye-Tracking Cameras,” in 2019 IEEE Winter Conference on Applications of Computer Vision (WACV). Waikoloa Village, HI, USA: IEEE, Jan. 2019, pp. 1626–1635. [15] S. Rostaminia, A. Lamson, S. Maji, T. Rahman, and D. Ganesan, “W!NCE: Unobtru- sive Sensing of Upper Facial Action Units with EOG-based Eyewear,” Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, vol. 3, no. 1, pp. 1–26, Mar. 2019. [16] W. Xie, Q. Zhang, and J. Zhang, “Acoustic-based Upper Facial Action Recognition for Smart Eyewear,” Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiq- uitous Technologies, vol. 5, no. 2, pp. 1–28, Jun. 2021. [17] K. Masai, K. Kunze, D. Sakamoto, Y. Sugiura, and M. Sugimoto, “Face Commands - User-Defined Facial Gestures for Smart Glasses,” in 2020 IEEE International Sympo- sium on Mixed and Augmented Reality (ISMAR), Nov. 2020, pp. 374–386, iSSN: 1554- 7868. [18] D. J. Matthies, A. Woodall, and B. Urban, “Prototyping Smart Eyewear with Capac- itive Sensing for Facial and Head Gesture Detection,” in Adjunct Proceedings of the 2021 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2021 ACM International Symposium on Wearable Computers. Virtual USA: ACM, Sep. 2021, pp. 476–480. [19] Y. Igarashi, K. Futami, and K. Murao, “Silent Speech Eyewear Interface: Silent Speech Recognition Method using Eyewear with Infrared Distance Sensors,” in Pro- ceedings of the 2022 ACM International Symposium on Wearable Computers, ser. ISWC ’22. New York, NY, USA: Association for Computing Machinery, Dec. 2022, pp. 33–38. [20] T. Nukarinen, J. Kangas, O. pakov, P. Isokoski, D. Akkil, J. Rantala, and R. Raisamo, “Evaluation of HeadTurn: An Interaction Technique Using the Gaze and Head Turns,” in Proceedings of the 9th Nordic Conference on Human-Computer Interaction, ser. NordiCHI ’16. New York, NY, USA: Association for Computing Machinery, Oct. 2016, pp. 1–8. [21] S. Yi, Z. Qin, E. Novak, Y. Yin, and Q. Li, “GlassGesture: Exploring head gesture interface of smart glasses,” in IEEE INFOCOM 2016 - The 35th Annual IEEE Interna- tional Conference on Computer Communications, Apr. 2016, pp. 1–9. [22] Y. Yan, C. Yu, X. Yi, and Y. Shi, “HeadGesture: Hands-Free Input Approach Lever- aging Head Movements for HMD Devices,” Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, vol. 2, no. 4, pp. 198:1–198:23, Dec. 2018. [23] A. Esteves, D. Verweij, L. Suraiya, R. Islam, Y. Lee, and I. Oakley, “SmoothMoves: Smooth Pursuits Head Movements for Augmented Reality,” in Proceedings of the 30th Annual ACM Symposium on User Interface Software and Technology, ser. UIST ’17. New York, NY, USA: Association for Computing Machinery, Oct. 2017, pp. 167–178. [24] J. Lee, H.-S. Yeo, M. Dhuliawala, J. Akano, J. Shimizu, T. Starner, A. Quigley, W. Woo, and K. Kunze, “Itchy nose: discreet gesture interaction using EOG sensors in smart eyewear,” in Proceedings of the 2017 ACM International Symposium on Wearable Com- puters, ser. ISWC ’17. New York, NY, USA: Association for Computing Machinery, Sep. 2017, pp. 94–97. [25] K. Yamashita, T. Kikuchi, K. Masai, M. Sugimoto, B. H. Thomas, and Y. Sugiura, “CheekInput: turning your cheek into an input surface by embedded optical sensors on a head-mounted display,” in Proceedings of the 23rd ACM Symposium on Virtual Reality Software and Technology, ser. VRST ’17. New York, NY, USA: Association for Computing Machinery, Nov. 
2017, pp. 1–8. [26] K. Masai, Y. Sugiura, and M. Sugimoto, “FaceRubbing: Input Technique by Rubbing Face using Optical Sensors on Smart Eyewear for Facial Expression Recognition,” in Proceedings of the 9th Augmented Human International Conference, ser. AH ’18. New York, NY, USA: Association for Computing Machinery, Feb. 2018, pp. 1–5. [27] Y. Weng, C. Yu, Y. Shi, Y. Zhao, Y. Yan, and Y. Shi, “FaceSight: Enabling Hand-to- Face Gesture Interaction on AR Glasses with a Downward-Facing Camera Vision,” in Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, ser. CHI ’21. New York, NY, USA: Association for Computing Machinery, May 2021,pp. 1–14. [28] “Glass,” 2023, accessed Mar 13, 2022. [Online]. Available: https://www.google. com/glass/start/ [29] “Vuzix | Heads-Up, Hands-Free AR Smart Glasses,” 2023, accessed Mar 10, 2022. [Online]. Available: https://www.vuzix.com/ [30] “Augmented Reality and Mixed Reality | by MOVERIO | Epson.com | Epson US,” 2023, accessed Mar 10, 2022. [Online]. Available: https://epson.com/ moverio-augmented-reality [31] “Rokid Glass 2 | Everyday AR Glasses Build for Enterprises,” 2023, accessed Mar 10, 2022. [Online]. Available: https://rokid.ai/products/rokid-glass-2/ [32] “Smart Glasses by soloso˝ | Personalize your Audio & Style with AirGo 2,” Jan.2023, accessed Mar 10, 2022. [Online]. Available: https://solosglasses.com/ [33] “Iristick - Smart glasses built for every industry,” 2023, accessed Mar 10, 2022. [Online]. Available: https://iristick.com/ [34] L. Paredes, A. Ipsita, J. C. Mesa, R. V. Martinez Garrido, and K. Ramani, “StretchAR: Exploiting Touch and Stretch as a Method of Interaction for Smart Glasses UsingWearable Straps,” Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiqui- tous Technologies, vol. 6, no. 3, pp. 134:1–134:26, Sep. 2022. [35] W. Xie, J. Zhang, and Q. Zhang, “Transforming eyeglass rim into touch panel using piezoelectric sensors,” in Proceedings of the 28th Annual International Conference on Mobile Computing And Networking. Sydney NSW Australia: ACM, Oct. 2022, pp. 838–840. [36] E. Brasier, O. Chapuis, N. Ferey, J. Vezien, and C. Appert, “ARPads: Mid-air Indirect Input for Augmented Reality,” in 2020 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), Nov. 2020, pp. 332–343, iSSN: 1554-7868. [37] M. J. Kim and A. Bianchi, “Exploring Pseudo Hand-Eye Interaction on the Head- Mounted Display,” in Proceedings of the Augmented Humans International Conference 2021, ser. AHs ’21. New York, NY, USA: Association for Computing Machinery, Jul. 2021, pp. 251–258. [38] H.-S. Yeo, J. Lee, W. Woo, H. Koike, A. J. Quigley, and K. Kunze, “JINSense: Repur- posing Electrooculography Sensors on Smart Glass for Midair Gesture and Context Sensing,” in Extended Abstracts of the 2021 CHI Conference on Human Factors in Com- puting Systems, ser. CHI EA ’21. New York, NY, USA: Association for Computing Machinery, May 2021, pp. 1–6. [39] H. Bai, G. Lee, and M. Billinghurst, “Using 3D hand gestures and touch input for wearable AR interaction,” in CHI ’14 Extended Abstracts on Human Factors in Com- puting Systems, ser. CHI EA ’14. New York, NY, USA: Association for Computing Machinery, Apr. 2014, pp. 1321–1326. [40] M. D. Barrera-Machuca, A. Cassinelli, and C. Sandor, “Context-Based 3D Grids for Augmented Reality User Interfaces,” in Adjunct Proceedings of the 33rd Annual ACM Symposium on User Interface Software and Technology, ser. UIST ’20 Adjunct. New York, NY, USA: Association for Computing Machinery, Oct. 
2020, pp. 73–76. [41] S. Lin, H. F. Cheng, W. Li, Z. Huang, P. Hui, and C. Peylo, “Ubii: Physical World Interaction Through Augmented Reality,” IEEE Transactions on Mobile Computing, vol. 16, no. 3, pp. 872–885, Mar. 2017, conference Name: IEEE Transactions on Mo- bile Computing. [42] H. J. Chae, J.-i. Hwang, and J. Seo, “Wall-based Space Manipulation Technique for Efficient Placement of Distant Objects in Augmented Reality,” in Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology, ser. UIST ’18. New York, NY, USA: Association for Computing Machinery, Oct. 2018, pp. 45–52. [43] C. Harrison, D. Tan, and D. Morris, “Skinput: appropriating the body as an input surface,” in Proceedings of the 28th international conference on Human factors in comput- ing systems - CHI ’10. Atlanta, Georgia, USA: ACM Press, 2010, p. 453. [44] Y. Zhang, J. Zhou, G. Laput, and C. Harrison, “SkinTrack: Using the Body as an Electrical Waveguide for Continuous Finger Tracking on the Skin,” in Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems. San Jose California USA: ACM, May 2016, pp. 1491–1503. [45] Y. Suzuki, K. Sekimori, B. Shizuki, and S. Takahashi, “Touch Sensing on the Forearm Using the Electrical Impedance Method,” in 2019 IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops), Mar. 2019,pp. 255–260. [46] Y. Zhang, W. Kienzle, Y. Ma, S. S. Ng, H. Benko, and C. Harrison, “ActiTouch: Ro- bust Touch Detection for On-Skin AR/VR Interfaces,” in Proceedings of the 32nd An- nual ACM Symposium on User Interface Software and Technology, ser. UIST ’19. New York, NY, USA: Association for Computing Machinery, Oct. 2019, pp. 1151–1159. [47] A. Mujibiya, X. Cao, D. S. Tan, D. Morris, S. N. Patel, and J. Rekimoto, “The sound of touch: on-body touch and gesture sensing based on transdermal ultrasound propa-gation,” in Proceedings of the 2013 ACM international conference on Interactive tabletops and surfaces. St. Andrews Scotland, United Kingdom: ACM, Oct. 2013, pp. 189–198. [48] R. Nandakumar, V. Iyer, D. Tan, and S. Gollakota, “Fingerio: Using active sonar for fine-grained finger tracking,” in Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems. New York, NY, USA: ACM, 2016, p. 1515–1525. [49] S. Yun, Y.-C. Chen, H. Zheng, L. Qiu, and W. Mao, “Strata: Fine-grained acoustic- based device-free tracking,” in Proceedings of the 15th Annual International Conference on Mobile Systems, Applications, and Services, ser. MobiSys ’17. New York, NY, USA: ACM, 2017, p. 15–28. [50] C. Zhang, Q. Xue, A. Waghmare, S. Jain, Y. Pu, S. Hersek, K. Lyons, K. A. Cunefare,O. T. Inan, and G. D. Abowd, “SoundTrak: Continuous 3D Tracking of a Finger Using Active Acoustics,” Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, vol. 1, no. 2, pp. 30:1–30:25, Jun. 2017. [51] G. Laput, R. Xiao, and C. Harrison, “ViBand: High-Fidelity Bio-Acoustic Sensing Using Commodity Smartwatch Accelerometers,” in Proceedings of the 29th Annual Symposium on User Interface Software and Technology, ser. UIST ’16. New York, NY, USA: Association for Computing Machinery, Oct. 2016, pp. 321–333. [52] C. Zhang, J. Yang, C. Southern, T. E. Starner, and G. D. Abowd, “WatchOut: extend- ing interactions on a smartwatch with inertial sensing,” in Proceedings of the 2016 ACM International Symposium on Wearable Computers, ser. ISWC ’16. New York, NY, USA: Association for Computing Machinery, Sep. 2016, pp. 136–143. 
[53] W. Chen, L. Chen, Y. Huang, X. Zhang, L. Wang, R. Ruby, and K. Wu, “Taprint: Se- cure Text Input for Commodity Smart Wristbands,” in The 25th Annual International Conference on Mobile Computing and Networking, ser. MobiCom ’19. New York, NY, USA: Association for Computing Machinery, Aug. 2019, pp. 1–16. [54] G. Laput and C. Harrison, “Sensing Fine-Grained Hand Activity with Smart- watches,” in Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. Glasgow Scotland Uk: ACM, May 2019, pp. 1–13. [55] W. Chen, L. Chen, M. Ma, F. S. Parizi, S. Patel, and J. Stankovic, “ViFin: Harness Passive Vibration to Continuous Micro Finger Writing with a Commodity Smart- watch,” Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Tech- nologies, vol. 5, no. 1, pp. 45:1–45:25, Mar. 2021. [56] Q. Zhang, D. Wang, R. Zhao, Y. Yu, and J. Jing, “Write, Attend and Spell: Streaming End-to-end Free-style Handwriting Recognition Using Smartwatches,” Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, vol. 5, no. 3,pp. 138:1–138:25, Sep. 2021. [57] X. Xu, J. Gong, C. Brum, L. Liang, B. Suh, S. K. Gupta, Y. Agarwal, L. Lind- sey, R. Kang, B. Shahsavari, T. Nguyen, H. Nieto, S. E. Hudson, C. Maalouf, J. S. Mousavi, and G. Laput, “Enabling Hand Gesture Customization on Wrist-Worn De- vices,” in CHI Conference on Human Factors in Computing Systems. New Orleans LA USA: ACM, Apr. 2022, pp. 1–19. [58] X. Lin, Y. Chen, X.-W. Chang, X. Liu, and X. Wang, “SHOW: Smart Handwriting on Watches,” Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, vol. 1, no. 4, pp. 151:1–151:23, Jan. 2018. [59] M. Zhang, Q. Dai, P. Yang, J. Xiong, C. Tian, and C. Xiang, “iDial: Enabling a Virtual Dial Plate on the Hand Back for Around-Device Interaction,” Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, vol. 2, no. 1, pp. 55:1– 55:20, Mar. 2018. [60] J. Gong, A. Gupta, and H. Benko, “Acustico: Surface Tap Detection and Localiza- tion using Wrist-based Acoustic TDOA Sensing,” in Proceedings of the 33rd AnnualACM Symposium on User Interface Software and Technology. New York, NY, USA: Association for Computing Machinery, Oct. 2020, pp. 406–419. [61] J. McIntosh, A. Marzo, and M. Fraser, “SensIR: Detecting Hand Gestures with a Wearable Bracelet using Infrared Transmission and Reflection,” in Proceedings of the 30th Annual ACM Symposium on User Interface Software and Technology, ser. UIST ’17. New York, NY, USA: Association for Computing Machinery, Oct. 2017, pp. 593–597. [62] J. Gong, Z. Xu, Q. Guo, T. Seyed, X. A. Chen, X. Bi, and X.-D. Yang, “WrisText: One-handed Text Entry on Smartwatch using Wrist Gestures,” in Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. Montreal QC Canada: ACM, Apr. 2018, pp. 1–14. [63] J. Gong, X. Yang, T. Seyed, J. U. Davis, and X.-D. Yang, “Indutivo: Contact-Based, Object-Driven Interactions with Inductive Sensing,” in Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology, ser. UIST ’18. New York, NY, USA: Association for Computing Machinery, Oct. 2018, pp. 321–333. [64] L. H. Lee, N. Y. Yeung, T. Braud, T. Li, X. Su, and P. Hui, “Force9: Force-assisted Miniature Keyboard on Smart Wearables,” in Proceedings of the 2020 International Conference on Multimodal Interaction. Virtual Event Netherlands: ACM, Oct. 2020,pp. 232–241. [65] Y. Zhang, Y. Chen, H. Yu, X. Yang, R. Sun, and B. 
Zeng, “A Feature Adaptive Learn- ing Method for High-Density sEMG-Based Gesture Recognition,” Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, vol. 5, no. 1, pp. 1–26, Mar. 2021. [66] D. Kim and C. Harrison, “EtherPose: Continuous Hand Pose Tracking with Wrist- Worn Antenna Impedance Characteristic Sensing,” in The 35th Annual ACM Sympo- sium on User Interface Software and Technology. Bend OR USA: ACM, Oct. 2022, pp. 1–12. [67] Z. Yang, Y.-L. Wei, S. Shen, and R. R. Choudhury, “Ear-ar: indoor acoustic aug- mented reality on earphones,” in Proceedings of the 26th Annual International Confer- ence on Mobile Computing and Networking, 2020, pp. 1–14. [68] D. J. Matthies, B. A. Strecker, and B. Urban, “Earfieldsensing: A novel in-ear electric field sensing to enrich wearable gesture input through facial expressions,” in Pro- ceedings of the 2017 CHI Conference on Human Factors in Computing Systems, 2017, pp. 1911–1922. [69] D. Ma, A. Ferlini, and C. Mascolo, “Oesense: employing occlusion effect for in-ear human sensing,” in Proceedings of the 19th Annual International Conference on Mobile Systems, Applications, and Services, ser. MobiSys ’21. New York, NY, USA: Associa- tion for Computing Machinery, 2021, p. 175–187. [70] J. Prakash, Z. Yang, Y.-L. Wei, H. Hassanieh, and R. R. Choudhury, “Earsense: ear- phones as a teeth activity sensor,” in Proceedings of the 26th Annual International Con- ference on Mobile Computing and Networking, 2020, pp. 1–13. [71] A. Ferlini, A. Montanari, C. Mascolo, and R. Harle, “Head Motion Tracking Through in-Ear Wearables,” in Proceedings of the 1st International Workshop on Earable Comput- ing. London United Kingdom: ACM, Sep. 2019, pp. 8–13. [72] Y. Yan, C. Yu, Y. Shi, and M. Xie, “PrivateTalk: Activating Voice Input with Hand- On-Mouth Gesture Detected by Bluetooth Earphones,” in Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology, ser. UIST ’19. New York, NY, USA: Association for Computing Machinery, Oct. 2019, pp. 1013–1020. [73] X. Xu, H. Shi, X. Yi, W. Liu, Y. Yan, Y. Shi, A. Mariakakis, J. Mankoff, and A. K. Dey, “EarBuddy: Enabling On-Face Interaction via Wireless Earbuds,” in Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, ser. CHI ’20. New York, NY, USA: Association for Computing Machinery, Apr. 2020, pp. 1–14. [74] Y. Wang, J. Ding, I. Chatterjee, F. Salemi Parizi, Y. Zhuang, Y. Yan, S. Patel, and Y. Shi, “FaceOri: Tracking Head Position and Orientation Using Ultrasonic Ranging on Earphones,” in Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, ser. CHI ’22. New York, NY, USA: Association for Computing Machinery, Apr. 2022, pp. 1–12. [75] L. Ge, Q. Zhang, J. Zhang, and H. Chen, “EHTrack: Earphone-Based Head Tracking via Only Acoustic Signals,” IEEE Internet of Things Journal, pp. 1–1, 2023, conference Name: IEEE Internet of Things Journal. [76] T. Arakawa, T. Koshinaka, S. Yano, H. Irisawa, R. Miyahara, and H. Imaoka, “Fast and accurate personal authentication using ear acoustics,” in 2016 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA). IEEE, 2016, pp. 1–4. [77] Y. Gao, W. Wang, V. V. Phoha, W. Sun, and Z. Jin, “Earecho: Using ear canal echo for wearable authentication,” Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, vol. 3, no. 3, pp. 1–24, 2019. [78] Z. Wang, S. Tan, L. Zhang, Y. Ren, Z. Wang, and J. 
Yang, “Eardynamic: An ear canal deformation based continuous user authentication using in-ear wearables,” Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, vol. 5, no. 1, pp. 1–27, 2021. [79] H. Chen, F. Li, and Y. Wang, “Echotrack: Acoustic device-free hand tracking on smart phones,” in IEEE INFOCOM 2017-IEEE Conference on Computer Communica- tions, 2017, pp. 1–9. [80] R. Nandakumar, A. Takakuwa, T. Kohno, and S. Gollakota, “CovertBand: Activity Information Leakage using Music,” Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, vol. 1, no. 3, pp. 1–24, Sep. 2017. [81] S. Yun, Y.-C. Chen, and L. Qiu, “Turning a mobile device into a mouse in the air,” in Proceedings of the 13th Annual International Conference on Mobile Systems, Applications, and Services, ser. MobiSys ’15. New York, NY, USA: ACM, 2015, p. 15–29. [82] W. Mao, M. Wang, W. Sun, L. Qiu, S. Pradhan, and Y.-C. Chen, “Rnn-based room scale hand motion tracking,” in The 25th Annual International Conference on Mobile Computing and Networking, ser. MobiCom ’19. New York, NY, USA: ACM, 2019. [83] K. Sun, T. Zhao, W. Wang, and L. Xie, “VSkin: Sensing Touch Gestures on Surfaces of Mobile Devices Using Acoustic Signals,” in Proceedings of the 24th Annual Interna- tional Conference on Mobile Computing and Networking. New Delhi India: ACM, Oct. 2018, pp. 591–605. [84] C. Chen, Y. Chen, Y. Han, H.-Q. Lai, and K. R. Liu, “Achieving centimeter-accuracy indoor localization on wifi platforms: A frequency hopping approach,” IEEE Inter- net of Things Journal, vol. 4, no. 1, pp. 111–121, 2016. [85] W. Ruan, Q. Z. Sheng, L. Yang, T. Gu, P. Xu, and L. Shangguan, “Audiogest: En- abling fine-grained hand gesture detection by decoding echo signal,” in Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing, ser. UbiComp ’16. New York, NY, USA: ACM, 2016, p. 474–485. [86] K. Ling, H. Dai, Y. Liu, and A. X. Liu, “UltraGesture: Fine-Grained Gesture Sens- ing and Recognition,” in 2018 15th Annual IEEE International Conference on Sensing, Communication, and Networking (SECON). Hong Kong: IEEE, Jun. 2018, pp. 1–9. [87] Y. Wang, J. Shen, and Y. Zheng, “Push the limit of acoustic gesture recognition,” in IEEE INFOCOM 2020-IEEE Conference on Computer Communications, 2020, pp. 566– 575. [88] H. Kim, A. Byanjankar, Y. Liu, Y. Shu, and I. Shin, “Ubitap: Leveraging acoustic dispersion for ubiquitous touch interface on solid surfaces,” in Proceedings of the16th ACM Conference on Embedded Networked Sensor Systems, ser. SenSys ’18. New York, NY, USA: Association for Computing Machinery, 2018, p. 211–223. [89] X. Xu, J. Yu, Y. chen, Q. Hua, Y. Zhu, Y.-C. Chen, and M. Li, “Touchpass: towards behavior-irrelevant on-touch user authentication on smartphones leveraging vibra- tions,” in Proceedings of the 26th Annual International Conference on Mobile Computing and Networking, ser. MobiCom ’20. New York, NY, USA: Association for Computing Machinery, 2020. [90] X. Sun, L. Qiu, Y. Wu, Y. Tang, and G. Cao, “Sleepmonitor: Monitoring respiratory rate and body position during sleep using smartwatch,” Proc. ACM Interact. Mob. Wearable Ubiquitous Technol., vol. 1, no. 3, sep 2017. [91] D. Liaqat, M. Abdalla, P. Abed-Esfahani, M. Gabel, T. Son, R. Wu, A. Gershon,F. Rudzicz, and E. D. Lara, “Wearbreathing: Real world respiratory rate monitoring using smartwatches,” Proc. ACM Interact. Mob. Wearable Ubiquitous Technol., vol. 3, no. 2, jun 2019. [92] T. Hao, C. Bi, G. Xing, R. 
Chan, and L. Tu, “MindfulWatch: A Smartwatch-Based System For Real-Time Respiration Monitoring During Meditation,” Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, vol. 1, no. 3, pp. 57:1–57:19, Sep. 2017. [93] Y.-D. Lin, Y.-H. Chien, and Y.-S. Chen, “Wavelet-based embedded algorithm for res- piratory rate estimation from ppg signal,” Biomedical Signal Processing and Control, vol. 36, pp. 138–145, 2017. [94] D. J. Meredith, D. Clifton, P. Charlton, J. Brooks, C. W. Pugh, and L. Tarassenko, “Photoplethysmographic derivation of respiratory rate: a review of relevant physi- ology,” J Med Eng Technol, vol. 36, no. 1, pp. 1–7, Jan 2012. [95] C.-H. I. Shih, N. Tomita, Y. X. Lukic, . H. Reguera, E. Fleisch, and T. Kowatsch,“Breeze: Smartphone-based Acoustic Real-time Detection of Breathing Phases for a Gamified Biofeedback Breathing Training,” Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, vol. 3, no. 4, pp. 1–30, Dec. 2019. [96] W. Xie, Q. Hu, J. Zhang, and Q. Zhang, “EarSpiro: Earphone-based Spirometry for Lung Function Assessment,” Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, vol. 6, no. 4, pp. 188:1–188:27, Jan. 2023. [97] T. Ahmed, M. M. Rahman, E. Nemati, M. Y. Ahmed, J. Kuang, and A. J. Gao, “Re- mote breathing rate tracking in stationary position using the motion and acoustic sensors of earables,” in Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, ser. CHI ’23. New York, NY, USA: Association for Computing Machinery, 2023. [98] F. Adib, H. Mao, Z. Kabelac, D. Katabi, and R. C. Miller, “Smart homes that monitor breathing and heart rate,” in Proceedings of the 33rd Annual ACM Conference on Hu- man Factors in Computing Systems, ser. CHI ’15. New York, NY, USA: Association for Computing Machinery, 2015, p. 837–846. [99] Z. Yang, P. H. Pathak, Y. Zeng, X. Liran, and P. Mohapatra, “Monitoring vital signs using millimeter wave,” in Proceedings of the 17th ACM International Symposium on Mobile Ad Hoc Networking and Computing, ser. MobiHoc ’16. New York, NY, USA: Association for Computing Machinery, 2016, p. 211–220. [100] T. Wang, D. Zhang, Y. Zheng, T. Gu, X. Zhou, and B. Dorizzi, “C-fmcw based con- tactless respiration detection using acoustic signal,” Proc. ACM Interact. Mob. Wear- able Ubiquitous Technol., vol. 1, no. 4, jan 2018. [101] Y. Gong, Q. Zhang, B. H. NG, and W. Li, “Breathmentor: Acoustic-based diaphrag- matic breathing monitor system,” Proc. ACM Interact. Mob. Wearable Ubiquitous Tech- nol., vol. 6, no. 2, jul 2022. [102] T. Wang, Z. Wang, X. Liu, W. Liu, L. Wang, Y. Zheng, J. Hu, T. Gu, and D. Zhang, “Omniresmonitor: Omnimonitoring of human respiration using acoustic multipath reflection,” IEEE Transactions on Mobile Computing, pp. 1–14, 2023. [103] W. Xie, R. Tian, J. Zhang, and Q. Zhang, “Noncontact Respiration Detection Lever- aging Music and Broadcast Signals,” IEEE Internet of Things Journal, vol. 8, no. 4, pp. 2931–2942, Feb. 2021, conference Name: IEEE Internet of Things Journal. [104] R. Nandakumar, S. Gollakota, and N. Watson, “Contactless sleep apnea detection on smartphones,” in Proceedings of the 13th Annual International Conference on Mobile Systems, Applications, and Services, ser. MobiSys ’15. New York, NY, USA: ACM, 2015, pp. 45–57. [105] T. Wang, D. Zhang, Y. Zheng, T. Gu, X. Zhou, and B. Dorizzi, “C-fmcw based con- tactless respiration detection using acoustic signal,” Proc. ACM Interact. Mob. 
Wear- able Ubiquitous Technol., vol. 1, no. 4, pp. 170:1–170:20, Jan. 2018. [106] R. Ravichandran, E. Saba, K. Chen, M. Goel, S. Gupta, and S. N. Patel, “Wibreathe: Estimating respiration rate using wireless signals in natural settings in the home,” in 2015 IEEE International Conference on Pervasive Computing and Communications (Per- Com), March 2015, pp. 131–139. [107] H. Abdelnasser, K. A. Harras, and M. Youssef, “Ubibreathe: A ubiquitous non- invasive wifi-based breathing estimator,” in Proceedings of the 16th ACM International Symposium on Mobile Ad Hoc Networking and Computing, ser. MobiHoc ’15. New York, NY, USA: ACM, 2015, pp. 277–286. [108] X. Liu, J. Cao, S. Tang, and J. Wen, “Wi-sleep: Contactless sleep monitoring via wifi signals,” in 2014 IEEE Real-Time Systems Symposium, Dec 2014, pp. 346–355. [109] H. Wang, D. Zhang, J. Ma, Y. Wang, Y. Wang, D. Wu, T. Gu, and B. Xie, “Human respiration detection with commodity wifi devices: Do user location and body ori-entation matter?” in Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing, ser. UbiComp ’16. New York, NY, USA: ACM, 2016, pp. 25–36. [110] F. Adib, H. Mao, Z. Kabelac, D. Katabi, and R. C. Miller, “Smart homes that monitor breathing and heart rate,” in Proceedings of the 33rd Annual ACM Conference on Hu- man Factors in Computing Systems, ser. CHI ’15. New York, NY, USA: ACM, 2015,pp. 837–846. [111] S. Yue, H. He, H. Wang, H. Rahul, and D. Katabi, “Extracting Multi-Person Res- piration from Entangled RF Signals,” Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, vol. 2, no. 2, Jul. 2018. [112] Y. Zeng, D. Wu, R. Gao, T. Gu, and D. Zhang, “FullBreathe: Full Human Respira- tion Detection Exploiting Complementarity of CSI Phase and Amplitude of WiFi Signals,” Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Tech- nologies, vol. 2, no. 3, pp. 1–19, Sep. 2018. [113] Y. Zeng, D. Wu, J. Xiong, E. Yi, R. Gao, and D. Zhang, “FarSense: Pushing the Range Limit of WiFi-based Respiration Sensing with CSI Ratio of Two Antennas,” Proceed- ings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, vol. 3, no. 3, pp. 1–26, Sep. 2019. [114] C.-Y. Hsu, A. Ahuja, S. Yue, R. Hristov, Z. Kabelac, and D. Katabi, “Zero-effort in- home sleep and insomnia monitoring using radio signals,” Proc. ACM Interact. Mob. Wearable Ubiquitous Technol., vol. 1, no. 3, pp. 59:1–59:18, Sep. 2017. [115] U. M. Khan, L. Rigazio, and M. Shahzad, “Contactless monitoring of ppg using radar,” Proc. ACM Interact. Mob. Wearable Ubiquitous Technol., vol. 6, no. 3, sep 2022. [116] T. Zheng, Z. Chen, S. Zhang, C. Cai, and J. Luo, “MoRe-Fi: Motion-robust and Fine- grained Respiration Monitoring via Deep-Learning UWB Radar,” in Proceedings ofthe 19th ACM Conference on Embedded Networked Sensor Systems. Coimbra Portugal: ACM, Nov. 2021, pp. 111–124. [117] F. Benetazzo, A. Freddi, A. Monteriù, and S. Longhi, “Respiratory rate detection al- gorithm based on rgb-d camera: theoretical background and experimental results,” Healthcare Technology Letters, vol. 1, no. 3, pp. 81–86, 2014. [118] Y. Nam, Y. Kong, B. Reyes, N. Reljin, and K. H. Chon, “Monitoring of heart and breathing rates using dual cameras on a smartphone,” PLOS ONE, vol. 11, pp. 1–15, 03 2016. [119] C. Massaroni, D. S. Lopes, D. Lo Presti, E. Schena, and S. 
Silvestri, “Contactless monitoring of breathing patterns and respiratory rate at the pit of the neck: A single camera approach,” Journal of Sensors, vol. 2018, 2018. [120] M.-Z. Poh, D. J. McDuff, and R. W. Picard, “Advancements in noncontact, multipa- rameter physiological measurements using a webcam,” IEEE Transactions on Biomed- ical Engineering, vol. 58, no. 1, pp. 7–11, 2011. [121] C. G. Scully, J. Lee, J. Meyer, A. M. Gorbach, D. Granquist-Fraser, Y. Mendelson, andK. H. Chon, “Physiological parameter monitoring from optical recordings with a mobile phone,” IEEE Transactions on Biomedical Engineering, vol. 59, no. 2, pp. 303– 306, 2012. [122] E. C. Larson, M. Goel, G. Boriello, S. Heltshe, M. Rosenfeld, and S. N. Patel, “Spiros- mart: using a microphone to measure lung function on a mobile phone,” in Proceed- ings of the 2012 ACM Conference on ubiquitous computing, 2012, pp. 280–289. [123] M. Goel, E. Saba, M. Stiber, E. Whitmire, J. Fromm, E. C. Larson, G. Borriello, andS. N. Patel, “Spirocall: Measuring lung function over a phone call,” in Proceedings of the 2016 CHI conference on human factors in computing systems, 2016, pp. 5675–5685. [124] X. Song, B. Yang, G. Yang, R. Chen, E. Forno, W. Chen, and W. Gao, “Spirosonic: monitoring human lung function via acoustic sensing on commodity smartphones,” in Proceedings of the 26th Annual International Conference on Mobile Computing and Networking, 2020, pp. 1–14. [125] S. Kaiser, A. Parks, P. Leopard, C. Albright, J. Carlson, M. Goel, D. Nassehi, and E. C. Larson, “Design and learnability of vortex whistles for managing chronic lung func- tion via smartphones,” in Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing, 2016, pp. 569–580. [126] A. Adhikari, A. Hetherington, and S. Sur, “mmflow: Facilitating at-home spirom- etry with 5g smart devices,” in 2021 18th Annual IEEE International Conference on Sensing, Communication, and Networking (SECON). IEEE, 2021, pp. 1–9. [127] X. Yin, K. Huang, E. Forno, W. Chen, H. Huang, and W. Gao, “Ptease: Objective air- way examination for pulmonary telemedicine using commodity smartphones,” in Proceedings of the 21st Annual International Conference on Mobile Systems, Applications and Services, ser. MobiSys ’23. New York, NY, USA: Association for Computing Machinery, 2023, p. 110–123. [128] M. Chu, T. Nguyen, V. Pandey, Y. Zhou, H. N. Pham, R. Bar-Yoseph, S. Radom-Aizik,R. Jain, D. M. Cooper, and M. Khine, “Respiration rate and volume measurements using wearable strain sensors,” NPJ Digit Med, vol. 2, p. 8, 2019. [129] P. Sharma, X. Hui, J. Zhou, T. B. Conroy, and E. C. Kan, “Wearable radio-frequency sensing of respiratory rate, respiratory volume, and heart rate,” NPJ Digit Med, vol. 3, p. 98, 2020. [130] P. Nguyen, X. Zhang, A. Halbower, and T. Vu, “Continuous and fine-grained breath- ing volume monitoring from afar using wireless signals,” in IEEE INFOCOM 2016 - The 35th Annual IEEE International Conference on Computer Communications, 2016, pp. 1–9. [131] T. Zheng, Z. Chen, S. Zhang, C. Cai, and J. Luo, “More-fi: Motion-robust and fine- grained respiration monitoring via deep-learning uwb radar,” in Proceedings of the 19th ACM Conference on Embedded Networked Sensor Systems, ser. SenSys ’21. New York, NY, USA: Association for Computing Machinery, 2021, p. 111–124. [132] V. Soleimani, M. Mirmehdi, D. Damen, S. Hannuna, M. Camplani, J. Viner, andJ. 
Dodd, “Remote pulmonary function testing using a depth sensor,” in 2015 IEEE Biomedical Circuits and Systems Conference (BioCAS), Oct. 2015, pp. 1–4. [133] S. Ostadabbas, N. Sebkhi, M. Zhang, S. Rahim, L. J. Anderson, F. E.-H. Lee, andM. Ghovanloo, “A Vision-Based Respiration Monitoring System for Passive Airway Resistance Estimation,” IEEE Transactions on Biomedical Engineering, vol. 63, no. 9,pp. 1904–1913, Sep. 2016, conference Name: IEEE Transactions on Biomedical Engi- neering. [134] C. Sharp, V. Soleimani, S. Hannuna, M. Camplani, D. Damen, J. Viner, M. Mirmehdi, and J. W. Dodd, “Toward Respiratory Assessment Using Depth Measurements from a Time-of-Flight Sensor,” Frontiers in Physiology, vol. 8, 2017. [135] V. Soleimani, M. Mirmehdi, D. Damen, J. Dodd, S. Hannuna, C. Sharp, M. Camplani, and J. Viner, “Remote, Depth-Based Lung Function Assessment,” IEEE Transactions on Biomedical Engineering, vol. 64, no. 8, pp. 1943–1958, Aug. 2017, conference Name: IEEE Transactions on Biomedical Engineering. [136] V. Soleimani, M. Mirmehdi, D. Damen, M. Camplani, S. Hannuna, C. Sharp, andJ. Dodd, “Depth-Based Whole Body Photoplethysmography in Remote Pulmonary Function Testing,” IEEE Transactions on Biomedical Engineering, vol. 65, no. 6, pp. 1421–1431, Jun. 2018, conference Name: IEEE Transactions on Biomedical Engineer- ing. [137] W. Imano, K. Kameyama, M. Hollingdal, J. Refsgaard, K. Larsen, C. Topp, S. H. Kronborg, J. D. Gade, and B. Dinesen, “Non-Contact Respiratory Measurement Us-ing a Depth Camera for Elderly People,” Sensors, vol. 20, no. 23, p. 6901, Jan. 2020, number: 23 Publisher: Multidisciplinary Digital Publishing Institute. [138] J. M. Harte, C. K. Golby, J. Acosta, E. F. Nash, E. Kiraci, M. A. Williams, T. N. Ar- vanitis, and B. Naidu, “Chest wall motion analysis in healthy volunteers and adults with cystic fibrosis using a novel Kinect-based motion tracking system,” Medical & Biological Engineering & Computing, vol. 54, no. 11, pp. 1631–1640, 2016. [139] K. Oh, C. S. Shin, J. Kim, and S. K. Yoo, “Level-Set Segmentation-Based Respiratory Volume Estimation Using a Depth Camera,” IEEE Journal of Biomedical and Health Informatics, vol. 23, no. 4, pp. 1674–1682, Jul. 2019, conference Name: IEEE Journal of Biomedical and Health Informatics. [140] U. Ha, S. Assana, and F. Adib, “Contactless seismocardiography via deep learning radars,” in Proceedings of the 26th Annual International Conference on Mobile Computing and Networking, ser. MobiCom ’20. New York, NY, USA: Association for Computing Machinery, Sep. 2020, pp. 1–14. [141] F. Wang, F. Zhang, C. Wu, B. Wang, and K. J. R. Liu, “ViMo: Multi-person Vital Sign Monitoring using Commodity Millimeter Wave Radio,” IEEE Internet of Things Journal, pp. 1–1, 2020. [142] C. Wang, L. Xie, W. Wang, Y. Chen, Y. Bu, and S. Lu, “RF-ECG: Heart Rate Variability Assessment Based on COTS RFID Tag Array,” Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, vol. 2, no. 2, pp. 1–26, Jul. 2018. [143] X. Wang, C. Yang, and S. Mao, “TensorBeat: Tensor Decomposition for Monitoring Multiperson Breathing Beats with Commodity WiFi,” ACM Transactions on Intelli- gent Systems and Technology, vol. 9, no. 1, pp. 1–27, Oct. 2017. [144] Y. Gu, X. Zhang, Z. Liu, and F. Ren, “WiFi-Based Real-Time Breathing and HeartRate Monitoring during Sleep,” in 2019 IEEE Global Communications Conference (GLOBECOM). Waikoloa, HI, USA: IEEE, Dec. 2019, pp. 1–6. [145] K. Qian, C. Wu, F. Xiao, Y. Zheng, Y. Zhang, Z. Yang, and Y. 
Liu, “Acousticcardio- gram: Monitoring Heartbeats using Acoustic Signals on Smart Devices,” in IEEE INFOCOM 2018 - IEEE Conference on Computer Communications. Honolulu, HI: IEEE, Apr. 2018, pp. 1574–1582. [146] F. Zhang, Z. Wang, B. Jin, J. Xiong, and D. Zhang, “Your Smart Speaker Can "Hear" Your Heartbeat!” Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiqui- tous Technologies, vol. 4, no. 4, pp. 1–24, Dec. 2020. [147] A. Wang, D. Nguyen, A. R. Sridhar, and S. Gollakota, “Using smart speakers to contactlessly monitor heart rhythms,” Communications Biology, vol. 4, no. 1, pp. 1– 12, Mar. 2021, number: 1 Publisher: Nature Publishing Group. [148] E. J. Wang, J. Zhu, M. Jain, T.-J. Lee, E. Saba, L. Nachman, and S. N. Patel, “Seismo: Blood Pressure Monitoring using Built-in Smartphone Accelerometer and Camera,” in Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, ser. CHI ’18. New York, NY, USA: Association for Computing Machinery, Apr. 2018,pp. 1–9. [149] N. Bui, N. Pham, J. J. Barnitz, Z. Zou, P. Nguyen, H. Truong, T. Kim, N. Farrow,A. Nguyen, J. Xiao, R. Deterding, T. Dinh, and T. Vu, “eBP: A Wearable System For Frequent and Comfortable Blood Pressure Monitoring From User’s Ear,” in The 25th Annual International Conference on Mobile Computing and Networking. Los Cabos Mexico: ACM, Oct. 2019, pp. 1–17. [150] Y. Cao, H. Chen, F. Li, and Y. Wang, “Crisp-BP: continuous wrist PPG-based blood pressure measurement,” in Proceedings of the 27th Annual International Conference on Mobile Computing and Networking, ser. MobiCom ’21. New York, NY, USA: Associ- ation for Computing Machinery, Oct. 2021, pp. 378–391. [151] Y. Liang, A. Zhou, X. Wen, W. Huang, P. Shi, L. Pu, H. Zhang, and H. Ma, “airbp: Monitor your blood pressure with millimeter-wave in the air,” ACM Trans. Internet Things, vol. 4, no. 4, nov 2023. [152] Z. Shi, T. Gu, Y. Zhang, and X. Zhang, “mmbp: Contact-free millimetre-wave radar based approach to blood pressure measurement,” in Proceedings of the 20th ACM Conference on Embedded Networked Sensor Systems, ser. SenSys ’22. New York, NY, USA: Association for Computing Machinery, 2023, p. 667–681. [153] H. Nakano, K. Hirayama, Y. Sadamitsu, A. Toshimitsu, H. Fujita, S. Shin, andT. Tanigawa, “Monitoring Sound To Quantify Snoring and Sleep Apnea Severity Using a Smartphone: Proof of Concept,” Journal of Clinical Sleep Medicine : JCSM : Official Publication of the American Academy of Sleep Medicine, vol. 10, no. 1, pp. 73–78, Jan. 2014. [154] T. Rosenwein, E. Dafna, A. Tarasiuk, and Y. Zigel, “Breath-by-breath detection of apneic events for OSA severity estimation using non-contact audio recordings,” in 2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biol- ogy Society (EMBC), Aug. 2015, pp. 7688–7691, iSSN: 1558-4615. [155] H. Xu, W. Song, H. Yi, L. Hou, C. Zhang, B. Chen, Y. Chen, and S. Yin, “Nocturnal snoring sound analysis in the diagnosis of obstructive sleep apnea in the Chinese Han population,” Sleep and Breathing, vol. 19, no. 2, pp. 599–605, May 2015. [156] G. Korompili, L. Kokkalas, S. A. Mitilineos, N.-A. Tatlas, and S. M. Potirakis, “De- tecting Apnea/Hypopnea Events Time Location from Sound Recordings for Pa- tients with Severe or Moderate Sleep Apnea Syndrome,” Applied Sciences, vol. 11, no. 15, p. 6888, Jul. 2021. [157] H. E. Romero, N. Ma, G. J. Brown, and E. A. 
Hill, “Acoustic Screening for Obstruc- tive Sleep Apnea in Home Environments Based on Deep Neural Networks,” IEEE Journal of Biomedical and Health Informatics, vol. 26, no. 7, pp. 2941–2950, Jul. 2022. [158] C. Yang, G. Cheung, V. Stankovic, K. Chan, and N. Ono, “Sleep Apnea Detection via Depth Video and Audio Feature Learning,” IEEE Transactions on Multimedia, vol. 19, no. 4, pp. 822–835, Apr. 2017, conference Name: IEEE Transactions on Multimedia. [159] M. P. Bonnesen, H. B. D. Sorensen, and P. Jennum, “Mobile Apnea Screening Sys- tem for at-home Recording and Analysis of Sleep Apnea Severity,” in 2018 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Jul. 2018, pp. 457–460, iSSN: 1558-4615. [160] I. Ferrer-Lluis, Y. Castillo-Escario, J. M. Montserrat, and R. Jané, “Automatic Event Detector from Smartphone Accelerometry: Pilot mHealth Study for Obstructive Sleep Apnea Monitoring at Home,” in 2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Jul. 2019, pp. 4990–4993, iSSN: 1558-4615. [161] S. Yue, Y. Yang, H. Wang, H. Rahul, and D. Katabi, “Bodycompass: Monitoring sleep posture with wireless signals,” Proc. ACM Interact. Mob. Wearable Ubiquitous Technol., vol. 4, no. 2, jun 2020. [162] J. Liu, Y. Chen, Y. Wang, X. Chen, J. Cheng, and J. Yang, “Monitoring vital signs and postures during sleep using wifi signals,” IEEE Internet of Things Journal, vol. 5, no. 3, pp. 2071–2084, 2018. [163] M. Zhao, S. Yue, D. Katabi, T. S. Jaakkola, and M. T. Bianchi, “Learning sleep stages from radio signals: A conditional adversarial architecture,” in Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017, ser. Proceedings of Machine Learning Research, D. Precup and Y. W. Teh, Eds., vol. 70. PMLR, 2017, pp. 4100–4109. [164] Everysight, “About raptor - everysight,” 2020. [Online]. Available: https://everysight.com/about-raptor/ [165] Huawei, “Huawei x gentle monster eyewear ii,” 2020. [Online]. Available: https://consumer.huawei.com/en/wearables/gentle-monster-eyewear2/ [166] Amazon, “Official site: What is alexa?” 2020. [Online]. Available: https://developer.amazon.com/en-US/alexa [167] B. Ashtiani and I. S. MacKenzie, “Blinkwrite2: an improved text entry method using eye blinks,” in Proceedings of the 2010 symposium on eye-tracking research & applications, 2010, pp. 339–345. [168] O. Tuisku, V. Rantanen, and V. Surakka, “Longitudinal study on text entry by gazing and smiling,” in Proceedings of the Ninth Biennial ACM Symposium on Eye Tracking Research & Applications, ser. ETRA ’16. New York, NY, USA: ACM, 2016, p. 253– 256. [169] P. Ekman, W. V. Friesen, and J. C. Hager, “Facial action coding system: The manual on cd rom,” A Human Face, Salt Lake City, pp. 77–254, 2002. [170] M. Hamedi, S.-H. Salleh, M. Astaraki, and A. M. Noor, “Emg-based facial gesture recognition through versatile elliptic basis function neural network,” Biomedical en- gineering online, vol. 12, no. 1, p. 73, 2013. [171] D. Denney and C. Denney, “The eye blink electro-oculogram,” The British journal of ophthalmology, vol. 68, no. 4, pp. 225–228, Apr 1984. [172] J. MEME, “J!ns meme: The world’s first wearable eyewear that lets you see yourself,” 2020. [Online]. Available: https://jins-meme.com/en/ [173] L. M. DelRosso, R. B. Berry, S. E. Beck, M. H. Wagner, and C. L. Marcus, Pediatric Sleep Pearls E-Book. Elsevier Health Sciences, 2016. [174] K. Masai, Y. Sugiura, M. 
Ogata, K. Kunze, M. Inami, and M. Sugimoto, “Facial ex- pression recognition in daily life by embedded photo reflective sensors on smarteyewear,” in Proceedings of the 21st International Conference on Intelligent User Inter- faces, ser. IUI ’16. New York, NY, USA: ACM, 2016, p. 317–326. [175] Y. Kim, “Detection of eye blinking using doppler sensor with principal component analysis,” IEEE Antennas and Wireless Propagation Letters, vol. 14, pp. 123–126, 2015. [176] W. Wang, A. X. Liu, and K. Sun, “Device-free gesture tracking using acoustic sig- nals,” in Proceedings of the 22nd Annual International Conference on Mobile Computing and Networking, ser. MobiCom ’16. New York, NY, USA: ACM, 2016, p. 82–94. [177] J. Kwon, D.-H. Kim, W. Park, and L. Kim, “A wearable device for emotional recog- nition using facial expression and physiological response,” in 2016 38th Annual In- ternational Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), 2016, pp. 5765–5768. [178] K. Yamamoto, K. Toyoda, and T. Ohtsuki, “Doppler sensor-based blink duration estimation by analysis of eyelids closing and opening behavior on spectrogram,” IEEE Access, vol. 7, pp. 42 726–42 734, 2019. [179] A. Gruebler and K. Suzuki, “Measurement of distal emg signals using a wearable device for reading facial expressions,” in 2010 Annual International Conference of the IEEE Engineering in Medicine and Biology, 2010, pp. 4594–4597. [180] ——, “Design of a wearable device for reading positive expressions from facial emg signals,” IEEE Transactions on Affective Computing, vol. 5, no. 3, pp. 227–237, 2014. [181] M. Salehi and J. Proakis, “Digital communications,” McGraw-Hill Education, vol. 31,p. 32, 2007. [182] D. Tse and P. Viswanath, Fundamentals of wireless communication. Cambridge uni- versity press, 2005. [183] A. V. Oppenheim, A. S. Willsky, and S. H. Nawab, Signals and Systems (2nd Edition). Pearson, 1996. [184] B. Zhou, M. Elbadry, R. Gao, and F. Ye, “Battracker: High precision infrastructure- free mobile device tracking in indoor environments,” in Proceedings of the 15th ACM Conference on Embedded Network Sensor Systems, ser. SenSys ’17. New York, NY, USA: ACM, 2017. [185] Q. Lin, Z. An, and L. Yang, “Rebooting ultrasonic positioning systems for ultrasound-incapable smart devices,” in The 25th Annual International Conference on Mobile Computing and Networking, ser. MobiCom ’19. New York, NY, USA: ACM, 2019. [186] W. Mao, J. He, and L. Qiu, “Cat: High-precision acoustic motion tracking,” in Pro- ceedings of the 22nd Annual International Conference on Mobile Computing and Network- ing, ser. MobiCom ’16. New York, NY, USA: ACM, 2016, p. 69–81. [187] C. Finn, P. Abbeel, and S. Levine, “Model-agnostic meta-learning for fast adapta- tion of deep networks,” in Proceedings of the 34th International Conference on Machine Learning, ser. Proceedings of Machine Learning Research, vol. 70. International Convention Centre, Sydney, Australia: PMLR, 06–11 Aug 2017, pp. 1126–1135. [188] W. Jiang, C. Miao, F. Ma, S. Yao, Y. Wang, Y. Yuan, H. Xue, C. Song, X. Ma, D. Kout- sonikolas, W. Xu, and L. Su, “Towards environment independent device free human activity recognition,” in Proceedings of the 24th Annual International Conference on Mo- bile Computing and Networking, ser. MobiCom ’18. New York, NY, USA: ACM, 2018,p. 289–304. [189] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. 
Wojna, “Rethinking the incep- tion architecture for computer vision,” in Proceedings of the IEEE conference on com- puter vision and pattern recognition, 2016, pp. 2818–2826. [190] A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin,N. Gimelshein, L. Antiga, A. Desmaison, A. Kopf, E. Yang, Z. DeVito, M. Raison,A. Tejani, S. Chilamkurthy, B. Steiner, L. Fang, J. Bai, and S. Chintala, “Pytorch: An imperative style, high-performance deep learning library,” in Advances in Neural Information Processing Systems, vol. 32. Curran Associates, Inc., 2019, pp. 8026–8037. [191] S. J. Pan and Q. Yang, “A survey on transfer learning,” IEEE Transactions on knowl- edge and data engineering, vol. 22, no. 10, pp. 1345–1359, 2010. [192] R. Pi, “Raspberry pi 3 model b+,” 2020. [Online]. Available: https://www. raspberrypi.org/products/raspberry-pi-3-model-b-plus/ [193] Seeed, “Respeaker 4-mic linear array kit for raspberry pi,” 2020. [On- line]. Available: https://wiki.seeedstudio.com/ReSpeaker_4-Mic_Linear_Array_ Kit_for_Raspberry_Pi/ [194] Xiaomi, “Mi in-ear headphones pro,” 2020. [Online]. Available: https://www.mi. com/my/headphonespro/ [195] R. Pi, “Raspberry pi 4 model b,” 2020. [Online]. Available: https://www. raspberrypi.org/products/raspberry-pi-4-model-b/ [196] B. Farnsworth, “Facial action coding system (facs) –a visual guidebook,” 2019. [Online]. Available: https://imotions.com/blog/facial-action-coding-system/ [197] CHargerLAB, “Power-z km001 usb power tester voltage current ripple dual type-c meter,” 2020. [Online]. Available: http://www.chargerlab.com/ power-z-km001-usb-power-tester-voltage-current-ripple-dual-type-c-meter/ [198] G. Technologies, “Developer kits for gap 8,” 2020. [Online]. Available: https://greenwaves-technologies.com/developer-kits/ [199] X. Xu, J. Yu, Y. chen, Q. Hua, Y. Zhu, Y.-C. Chen, and M. Li, “Touchpass: Towards behavior-irrelevant on-touch user authentication on smartphones leveraging vibra- tions,” in Proceedings of the 26th Annual International Conference on Mobile Computing and Networking, ser. MobiCom ’20. New York, NY, USA: ACM, 2020. [200] Y. Zheng, Y. Zhang, K. Qian, G. Zhang, Y. Liu, C. Wu, and Z. Yang, “Zero-effort cross-domain gesture recognition with wi-fi,” in Proceedings of the 17th Annual In- ternational Conference on Mobile Systems, Applications, and Services, ser. MobiSys ’19. New York, NY, USA: ACM, 2019, p. 313–325. [201] A. Colaço, A. Kirmani, H. S. Yang, N.-W. Gong, C. Schmandt, and V. K. Goyal, “Mime: compact, low power 3D gesture sensing for interaction with head mounted displays,” in Proceedings of the 26th annual ACM symposium on User interface software and technology, ser. UIST ’13. New York, NY, USA: Association for Computing Machinery, Oct. 2013, pp. 227–236. [202] G. Heidemann, I. Bax, and H. Bekel, “Multimodal interaction in an augmented real- ity scenario,” in Proceedings of the 6th international conference on Multimodal interfaces, ser. ICMI ’04. New York, NY, USA: Association for Computing Machinery, Oct. 2004, pp. 53–60. [203] W.-J. Tseng, L.-Y. Wang, and L. Chan, “FaceWidgets: Exploring Tangible Interaction on Face with Head-Mounted Displays,” in Proceedings of the 32nd Annual ACM Sym- posium on User Interface Software and Technology, ser. UIST ’19. New York, NY, USA: Association for Computing Machinery, Oct. 2019, pp. 417–427. [204] D. A. Norman, The design of everyday things. [New York]: Basic Books, 2002. [205] P.-S. Ku, Q. Shao, T.-Y. Wu, J. Gong, Z. Zhu, X. Zhou, and X.-D. 
Yang, “ThreadSense: Locating Touch on an Extremely Thin Interactive Thread,” in Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. Honolulu HI USA: ACM, Apr. 2020, pp. 1–12. [206] M. Ono, B. Shizuki, and J. Tanaka, “Touch & activate: adding interactivity to existing objects using active acoustic sensing,” in Proceedings of the 26th annual ACM symposium on User interface software and technology, ser. UIST ’13. New York, NY, USA: Association for Computing Machinery, Oct. 2013, pp. 31–40. [207] T. Rodrigues-Marinho, N. Pereira, V. Correia, D. Miranda, S. Lanceros-Méndez, andP. Costa, “Transparent piezoelectric polymer-based materials for energy harvesting and multitouch detection devices,” ACS Applied Electronic Materials, vol. 4, no. 1, pp. 287–296, 2022. [208] C. Qiu, B. Wang, N. Zhang, S. Zhang, J. Liu, D. Walker, Y. Wang, H. Tian, T. R. Shrout, Z. Xu et al., “Transparent ferroelectric crystals with ultrahigh piezoelectric- ity,” Nature, vol. 577, no. 7790, pp. 350–354, 2020. [209] Y.-C. Tung and K. G. Shin, “EchoTag: Accurate Infrastructure-Free Indoor Location Tagging with Smartphones,” in Proceedings of the 21st Annual International Conference on Mobile Computing and Networking - MobiCom ’15. Paris, France: ACM Press, 2015,pp. 525–536. [210] R. Dutta, A. B. J. Kokkeler, R. v. d. Zee, and M. J. Bentum, “Performance of chirped- fsk and chirped-psk in the presence of partial-band interference,” in 2011 18th IEEE Symposium on Communications and Vehicular Technology in the Benelux (SCVT), 2011,pp. 1–6. [211] J. C. Liando, A. Gamage, A. W. Tengourtius, and M. Li, “Known and unknown facts of lora: Experiences from a large-scale measurement study,” ACM Trans. Sen. Netw., vol. 15, no. 2, feb 2019. [212] G. Park and D. J. Inman, “Structural health monitoring using piezoelectric impedance measurements,” Philosophical Transactions of the Royal Society A: Mathe- matical, Physical and Engineering Sciences, vol. 365, no. 1851, pp. 373–392, Dec. 2006, publisher: Royal Society. [213] M. Sato, I. Poupyrev, and C. Harrison, “Touché: enhancing touch interaction on humans, screens, liquids, and everyday objects,” in Proceedings of the SIGCHI Con- ference on Human Factors in Computing Systems. Austin Texas USA: ACM, May 2012,pp. 483–492. [214] M. Sato, R. S. Puri, A. Olwal, Y. Ushigome, L. Franciszkiewicz, D. Chandra,I. Poupyrev, and R. Raskar, “Zensei: Embedded, Multi-electrode Bioimpedance Sensing for Implicit, Ubiquitous User Recognition,” in Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems. Denver Colorado USA: ACM, May 2017, pp. 3972–3985. [215] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2016,pp. 770–778. [216] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin, “Attention is all you need,” Advances in neural information process- ing systems, vol. 30, 2017. [217] B. K. Iwana and S. Uchida, “An empirical survey of data augmentation for time series classification with neural networks,” PLOS ONE, vol. 16, no. 7, p. e0254841, Jul. 2021, publisher: Public Library of Science. [218] “Home | PUI Audio,” 2023, accessed Mar 10, 2022. [Online]. Available: https://puiaudio.com/ [219] “Homepage | Focusrite,” 2023, accessed Mar 10, 2022. [Online]. Available: https://focusrite.com/en [220] D. P. Kingma and J. 
Ba, “Adam: A method for stochastic optimization,” arXiv preprint arXiv:1412.6980, 2014. [221] A. Bangor, P. T. Kortum, and J. T. Miller, “An Empirical Evaluation of the System Usability Scale,” International Journal of Human–Computer Interaction, vol. 24, no. 6, pp. 574–594, Jul. 2008. [222] J. Shin, S. Lee, T. Gong, H. Yoon, H. Roh, A. Bianchi, and S.-J. Lee, “MyDJ: Sensing Food Intakes with an Attachable on Your Eyeglass Frame,” in CHI Conference on Human Factors in Computing Systems. New Orleans LA USA: ACM, Apr. 2022, pp. 1–17. [223] D. Kim, K. Park, and G. Lee, “AtaTouch: Robust Finger Pinch Detection for a VR Controller Using RF Return Loss,” in Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. Yokohama Japan: ACM, May 2021, pp. 1–9. [224] Y. Zhuang, Y. Wang, Y. Yan, X. Xu, and Y. Shi, “ReflecTrack: Enabling 3D Acoustic Position Tracking Using Commodity Dual-Microphone Smartphones,” in The 34th Annual ACM Symposium on User Interface Software and Technology. Virtual Event USA: ACM, Oct. 2021, pp. 1050–1062. [225] “Buy Apple Watch Series 8,” 2023, accessed Mar 10, 2022. [Online]. Available: https://www.apple.com/shop/buy-watch/apple-watch [226] J. Lin, L. Zhu, W.-M. Chen, W.-C. Wang, C. Gan, and S. Han, “On-device training under 256KB memory,” 2022. [227] M. Zhao, F. Adib, and D. Katabi, “Emotion recognition using wireless signals,” in Proceedings of the 22nd Annual International Conference on Mobile Computing and Networking, ser. MobiCom ’16. New York, NY, USA: ACM, 2016, pp. 95–108. [228] Amazon, “Echo (3rd Gen) - smart speaker with Alexa.” [Online]. Available: https://www.amazon.com/all-new-Echo/dp/B07R1CXKN7 [229] M. Ueda, K. Ashihara, and H. Takahashi, “How high-frequency do children hear?” in Internoise 2016, 45th International Congress and Exposition of Noise Control Engineering, 2016, pp. 21–24. [230] S. E. Trehub, B. A. Schneider, B. A. Morrongiello, and L. A. Thorpe, “Developmental changes in high-frequency sensitivity: Original papers,” Audiology, vol. 28, no. 5, pp. 241–249, 1989. [231] R. R. Fay and L. A. Wilber, “Hearing in vertebrates: A psychophysics databook,” The Journal of the Acoustical Society of America, vol. 86, no. 5, p. 2044, 1989. [232] D. Warfield, “The study of hearing in animals,” Methods of animal experimentation, IV, pp. 43–143, 2012. [233] A. Wang and S. Gollakota, “MilliSonic: Pushing the limits of acoustic motion tracking,” in Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. ACM, 2019, p. 18. [234] O. O. Oyerinde and S. H. Mneney, “Review of channel estimation for wireless communication systems,” IETE Technical Review, vol. 29, no. 4, pp. 282–298, 2012. [235] H. Rahul, S. S. Kumar, and D. Katabi, “MegaMIMO: Scaling wireless capacity with user demand,” in Proc. ACM SIGCOMM, vol. 4, no. 4.3, 2012, p. 1. [236] A. V. Oppenheim, J. R. Buck, and R. W. Schafer, Discrete-time signal processing, Vol. 2. Upper Saddle River, NJ: Prentice Hall, 2001. [237] M. Pukkila, “Channel estimation modeling,” Nokia Research Center, vol. 17, p. 66, 2000. [238] “Schuhfried - biofeedback,” https://www.schuhfried.com/biofeedback/, 2018, accessed April 1, 2018. [239] “Coal workers’ health surveillance program: Spirometry,” https://www.cdc.gov/niosh/topics/cwhsp/coalminerhealth.html, 2018, accessed Mar 28, 2022. [240] “Occupational lung diseases,” https://www.cedars-sinai.org/health-library/diseases-and-conditions/o/occupational-lung-diseases.html, 2022, accessed Mar 28, 2022.
[241] “EasyOne Air,” https://nddmed.com/products/spirometers/easyone-air, 2022, accessed Aug 4, 2022. [242] “uBreath spirometer system (PF680),” https://www.e-linkcare.com/spirometer-system-pf680-product/, 2021, accessed Aug 23, 2021. [243] M. Kryger, F. Bode, R. Antic, and N. Anthonisen, “Diagnosis of obstruction of the upper and central airways,” The American journal of medicine, vol. 61, no. 1, pp. 85–93, 1976. [244] A. K. Das, L. D. Davanzo, G. J. Poiani, P. G. Zazzali, A. T. Scardella, M. L. Warnock, and N. H. Edelman, “Variable extrathoracic airflow obstruction and chronic laryngotracheitis in Gulf War veterans,” Chest, vol. 115, no. 1, pp. 97–101, 1999. [245] D. Vilozni, O. Efrati, A. Barak, Y. Yahav, A. Augarten, and L. Bentur, “Forced inspiratory flow volume curve in healthy young children,” Pediatric pulmonology, vol. 44, no. 2, pp. 105–111, 2009. [246] A. Glazova, V. Korenbaum, A. Kostiv, O. Kabancova, A. Tagiltcev, and S. Shin, “Measurement and estimation of human forced expiratory noise parameters using a microphone with a stethoscope head and a lapel microphone,” Physiological measurement, vol. 39, no. 6, p. 065006, 2018. [247] V. I. Korenbaum, I. A. Pochekutova, A. E. Kostiv, V. V. Malaeva, M. A. Safronova, O. I. Kabantsova, and S. N. Shin, “Human forced expiratory noise. Origin, apparatus and possible diagnostic applications,” The Journal of the Acoustical Society of America, vol. 148, no. 6, pp. 3385–3391, 2020. [248] A. Dyachenko, G. Lyubimov, I. Skobeleva, and M. Strongin, “Generalization of the mathematical model of lungs for describing the intensity of the tracheal sounds during forced expiration,” Fluid Dynamics, vol. 46, no. 1, pp. 16–23, 2011. [249] G. Pressler, J. Mansfield, H. Pasterkamp, and G. Wodicka, “Detection of respiratory sounds within the ear canal,” in Proceedings of the Second Joint 24th Annual Conference and the Annual Fall Meeting of the Biomedical Engineering Society] [Engineering in Medicine and Biology, vol. 2. IEEE, 2002, pp. 1529–1530. [250] T. Barreiro and I. Perillo, “An approach to interpreting spirometry,” American family physician, vol. 69, no. 5, pp. 1107–1114, 2004. [251] “Spirolab - spirometry, oximetry, mobile health,” https://www.spirometry.com/prodotti/spirolab/, 2021, accessed Aug 23, 2021. [252] R. Qin, J. An, J. Xie, R. Huang, Y. Xie, L. He, H. Xv, G. Qian, and J. Li, “FEF25-75% is a more sensitive measure reflecting airway dysfunction in patients with asthma: a comparison study using FEF25-75% and FEV1%,” The Journal of Allergy and Clinical Immunology: In Practice, vol. 9, no. 10, pp. 3649–3659, 2021. [253] (2021, August) What is a pulmonary function test? [Online]. Available: https://www.morgansci.com/support/what-is-a-pulmonary-function-test/ [254] D. S. Kwon, Y. J. Choi, T. H. Kim, M. K. Byun, J. H. Cho, H. J. Kim, and H. J. Park, “FEF25-75% values in patients with normal lung function can predict the development of chronic obstructive pulmonary disease,” International Journal of Chronic Obstructive Pulmonary Disease, vol. 15, p. 2913, 2020. [255] E. Breatnach, G. C. Abbott, and R. G. Fraser, “Dimensions of the normal human trachea,” American Journal of Roentgenology, vol. 142, no. 5, pp. 903–906, 1984. [256] S. Mehta and H. Myat, “The cross-sectional shape and circumference of the human trachea,” Annals of the Royal College of Surgeons of England, vol. 66, no. 5, p. 356, 1984. [257] F. Van Herpe and D. Crighton, “Noise generation by turbulent flow in ducts,” Le Journal de Physique IV, vol. 4, no. C5, pp.
C5–947, 1994. [258] X. Huang, A. Acero, H.-W. Hon, and R. Reddy, Spoken language processing: A guide to theory, algorithm, and system development. Prentice Hall PTR, 2001. [259] J. A. Hartigan and M. A. Wong, “Algorithm AS 136: A k-means clustering algorithm,” Journal of the Royal Statistical Society. Series C (Applied Statistics), vol. 28, no. 1, pp. 100–108, 1979. [260] F. P. Romero, D. C. Piñol, and C. R. Vázquez-Seisdedos, “DeepFilter: an ECG baseline wander removal filter using deep learning techniques,” arXiv preprint arXiv:2101.03423, 2021. [261] “AirPods Pro - Apple (UK),” https://www.apple.com/uk/airpods-pro/, 2021, accessed Aug 25, 2021. [262] “SPU0414HR5H-SB-7,” https://www.digikey.com/en/products/detail/knowles/SPU0414HR5H-SB-7/2420969, 2021, accessed Aug 15, 2021. [263] “Miniso Marvel earphones,” https://www.miniso-au.com/en-au/product/145169/marvel-earphones, 2021, accessed Aug 14, 2021. [264] “Audacity,” https://www.audacityteam.org, 2021, accessed Aug 15, 2021. [265] Google. (2021) Google Colab. [Online]. Available: https://colab.research.google.com/ [266] “Taking a spirometry test,” https://www.youtube.com/watch?v=Zs8Fs5HaJHs, 2018, accessed Apr 7, 2022. [267] K. F. Rabe, S. Hurd, A. Anzueto, P. J. Barnes, S. A. Buist, P. Calverley, Y. Fukuchi, C. Jenkins, R. Rodriguez-Roisin, C. Van Weel et al., “Global strategy for the diagnosis, management, and prevention of chronic obstructive pulmonary disease: GOLD executive summary,” American journal of respiratory and critical care medicine, vol. 176, no. 6, pp. 532–555, 2007. [268] C. Vieyra and R. Vieyra, Physics Toolbox Suite, Vieyra Software, 2021. [Online]. Available: https://www.vieyrasoftware.net/physics-toolbox-sensor-suite [269] Y. Zhang and Q. Yang, “A survey on multi-task learning,” IEEE Transactions on Knowledge and Data Engineering, pp. 1–1, 2021. [270] M. Fernandes, A. Cukier, and M. I. Z. Feltrim, “Efficacy of diaphragmatic breathing in patients with chronic obstructive pulmonary disease,” Chronic respiratory disease, vol. 8, no. 4, pp. 237–244, 2011. [271] American Association of Cardiovascular & Pulmonary Rehabilitation, Guidelines for pulmonary rehabilitation programs. Human Kinetics, 2011. [272] “Breathing exercises | American Lung Association,” 2022, accessed Nov 22, 2023. [Online]. Available: https://www.lung.org/lung-health-diseases/wellness/breathing-exercises [273] Y.-F. Chen, X.-Y. Huang, C.-H. Chien, and J.-F. Cheng, “The effectiveness of diaphragmatic breathing relaxation training for reducing anxiety,” Perspectives in Psychiatric Care, vol. 53, no. 4, pp. 329–336, 2017. [274] M. Yokogawa, T. Kurebayashi, T. Ichimura, M. Nishino, H. Miaki, and T. Nakagawa, “Comparison of two instructions for deep breathing exercise: non-specific and diaphragmatic breathing,” Journal of Physical Therapy Science, vol. 30, no. 4, pp. 614–618, 2018. [275] K. K.-Y. Yau and A. Y. Loke, “Effects of diaphragmatic deep breathing exercises on prehypertensive or hypertensive adults: A literature review,” Complementary Therapies in Clinical Practice, vol. 43, p. 101315, May 2021. [276] “Diaphragmatic breathing exercises and benefits,” 2022, accessed Jan 17, 2024. [Online]. Available: https://my.clevelandclinic.org/health/articles/9445-diaphragmatic-breathing [277] D. M. Halpin, G. J. Criner, A. Papi, D. Singh, A. Anzueto, F. J. Martinez, A. A. Agusti, and C. F. Vogelmeier, “Global initiative for the diagnosis, management, and prevention of chronic obstructive lung disease.
The 2020 GOLD science committee report on COVID-19 and chronic obstructive pulmonary disease,” American journal of respiratory and critical care medicine, vol. 203, no. 1, pp. 24–36, 2021. [278] Y. Li, H. Qian, K. Yu, Y. Huang et al., “Nonadherence in home-based pulmonary rehabilitation program for COPD patients,” Canadian respiratory journal, vol. 2020, 2020. [279] “SenseEcho,” 2023, accessed Nov 22, 2023. [Online]. Available: https://www.sensecho.com/ [280] H. Takamoto, H. Nishine, S. Sato, G. Sun, S. Watanabe, K. Seokjin, M. Asai, M. Mineshita, and T. Matsui, “Development and Clinical Application of a Novel Non-contact Early Airflow Limitation Screening System Using an Infrared Time-of-Flight Depth Image Sensor,” Frontiers in Physiology, vol. 11, 2020. [281] O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks for biomedical image segmentation,” in Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015: 18th International Conference, Munich, Germany, October 5-9, 2015, Proceedings, Part III 18. Springer, 2015, pp. 234–241. [282] A. Kirillov, E. Mintun, N. Ravi, H. Mao, C. Rolland, L. Gustafson, T. Xiao, S. Whitehead, A. C. Berg, W.-Y. Lo et al., “Segment anything,” arXiv preprint arXiv:2304.02643, 2023. [283] A. E. Holland, C. J. Hill, A. Y. Jones, and C. F. McDonald, “Breathing exercises for chronic obstructive pulmonary disease,” Cochrane Database Syst Rev, vol. 10, p. CD008250, Oct. 2012. [284] T. A. Santino, G. S. Chaves, D. A. Freitas, G. A. Fregonezi, and K. M. Mendonça, “Breathing exercises for adults with asthma,” Cochrane Database Syst Rev, vol. 3, no. 3, p. CD001277, Mar. 2020. [285] D. K. Chitkara, M. Van Tilburg, W. E. Whitehead, and N. J. Talley, “Teaching diaphragmatic breathing for rumination syndrome,” Am J Gastroenterol, vol. 101, no. 11, pp. 2449–2452, Nov. 2006. [286] W. P. Yamaguti, R. C. Claudino, A. P. Neto, M. C. Chammas, A. C. Gomes, J. M. Salge, H. T. Moriya, A. Cukier, and C. R. Carvalho, “Diaphragmatic breathing training program improves abdominal motion during natural breathing in patients with chronic obstructive pulmonary disease: a randomized controlled trial,” Arch Phys Med Rehabil, vol. 93, no. 4, pp. 571–577, Apr. 2012. [287] M. A. R. Ahad, A. D. Antar, and O. Shahid, “Vision-based action understanding for assistive healthcare: A short review,” in CVPR Workshops, 2019, pp. 1–11. [288] “Azure Kinect DK - develop AI models | Microsoft Azure,” 2023, accessed Nov 24, 2023. [Online]. Available: https://azure.microsoft.com/en-us/products/kinect-dk [289] M. Tölgyessy, M. Dekan, L. Chovanec, and P. Hubinský, “Evaluation of the Azure Kinect and its comparison to Kinect v1 and Kinect v2,” Sensors, vol. 21, no. 2, p. 413, 2021. [290] G. Kurillo, E. Hemingway, M.-L. Cheng, and L. Cheng, “Evaluating the accuracy of the Azure Kinect and Kinect v2,” Sensors, vol. 22, no. 7, p. 2469, 2022. [291] A. Lopez Paredes, Q. Song, and M. H. Conde, “Performance evaluation of state-of-the-art high-resolution time-of-flight cameras,” IEEE Sensors Journal, vol. 23, no. 12, pp. 13711–13727, 2023. [292] S. Woo, J. Park, J.-Y. Lee, and I. S. Kweon, “CBAM: Convolutional block attention module,” in Proceedings of the European conference on computer vision (ECCV), 2018, pp. 3–19. [293] M. Kowalski, J. Naruniec, and M. Daniluk, “Livescan3D: A fast and inexpensive 3D data acquisition system for multiple Kinect v2 sensors,” in 2015 International Conference on 3D Vision, 2015, pp. 318–325. [294] I. Loshchilov and F.
Hutter, “Fixing weight decay regularization in Adam,” ArXiv, vol. abs/1711.05101, 2017. [Online]. Available: https://api.semanticscholar.org/CorpusID:3312944 [295] “Pulmonary function tests,” 2024, accessed Jan 29, 2024. [Online]. Available: https://www.hopkinsmedicine.org/health/treatment-tests-and-therapies/pulmonary-function-tests [296] “Spirometry | ADInstruments,” 2023, accessed Nov 25, 2023. [Online]. Available: https://www.adinstruments.com/research/human/respiratory/spirometry/ [297] “Inspiratory capacity,” 2023, accessed Nov 25, 2023. [Online]. Available: https://www.ncbi.nlm.nih.gov/mesh/?term=Inspiratory+Capacity [298] W. Jiang, C. Miao, F. Ma, S. Yao, Y. Wang, Y. Yuan, H. Xue, C. Song, X. Ma, D. Koutsonikolas, W. Xu, and L. Su, “Towards Environment Independent Device Free Human Activity Recognition,” in Proceedings of the 24th Annual International Conference on Mobile Computing and Networking. New Delhi India: ACM, Oct. 2018, pp. 289–304. [299] “Use Face ID on your iPhone or iPad Pro,” 2024, accessed Jan 26, 2024. [Online]. Available: https://support.apple.com/en-hk/HT208109 [300] Q. Yang, Y. Liu, T. Chen, and Y. Tong, “Federated machine learning: Concept and applications,” ACM Trans. Intell. Syst. Technol., vol. 10, no. 2, Jan. 2019. |
Source database | Manual submission
|
Item type | Dissertation |
Item identifier | http://sustech.caswiz.com/handle/2SGJ60CL/804948 |
Collection | College of Engineering_Department of Computer Science and Engineering |
Recommended citation (GB/T 7714) |
Xie WT. Understand Human Behaviours with IoT Sensors: From Physical to Physiological Sensing [D]. 香港: 香港科技大学, 2024.
|