[1] The National Institutes of Health, U.S. Rapid Acceleration of Diagnostics[EB/OL]. [2021-01-30]. https://www.nih.gov/research-training/medical-research-initiatives/radx.
[2] The National Institutes of Health, U.S. NIH Delivering New COVID-19 Testing Technologies to Meet U.S. Demand[EB/OL]. [2021-01-30]. https://www.nih.gov/news-events/news-releases/nih-delivering-new-covid-19-testing-technologies-meet-us-demand.
[3] KESZOCZE O, WILLE R, DRECHSLER R. Exact Design of Digital Microfluidic Biochips[M]. Springer, 2019.
[4] GRIMMER A, KLEPIC B, HO T Y, et al. Sound Valve-control for Programmable Microfluidic Devices[C]//Asia and South Pacific Design Automation Conference. IEEE, 2018: 40-45.
[5] POLLACK M G, SHENDEROV A D, FAIR R B. Electrowetting-based Actuation of Droplets for Integrated Microfluidics[J]. Lab on a Chip, 2002, 2(2): 96-101.
[6] ZHONG Z, LI Z, CHAKRABARTY K, et al. Micro-electrode-dot-array Digital Microfluidic Biochips: Technology, Design Automation, and Test Techniques[J]. IEEE Transactions on Biomedical Circuits and Systems, 2019, 13(2): 292-313.
[7] MOMTAHEN S, TAAJOBIAN M, JAHANIAN A. Drug Discovery Applications: A Customized Digital Microfluidic Biochip Architecture/CAD Flow[J]. IEEE Nanotechnology Magazine, 2019, 13(5): 25-34.
[8] Government Accountability Office, U.S. Newborn Screening Timeliness: Most States Had Not Met Screening Goals, but Some Are Developing Strategies to Address Barriers[EB/OL]. [2016-12-15]. https://www.gao.gov/products/GAO-17-196.
[9] HOPCROFT J, SCHWARTZ J, SHARIR M. On the Complexity of Motion Planning for Multiple Independent Objects; PSPACE-Hardness of the “Warehouseman’s Problem”[J]. The International Journal of Robotics Research, 1984, 3(4): 76-88.
[10] SU F, CHAKRABARTY K, FAIR R B. Microfluidics-based Biochips: Technology Issues, Implementation Platforms, and Design-Automation Challenges[J]. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 2006, 25(2): 211-223.
[11] DONG C, CHEN T, GAO J, et al. On the Droplet Velocity and Electrode Lifetime of Digital Microfluidics: Voltage Actuation Techniques and Comparison[J]. Microfluidics and Nanofluidics, 2015, 18(4): 673-683.
[12] MNIH V, KAVUKCUOGLU K, SILVER D, et al. Playing Atari with Deep Reinforcement Learning[EB/OL]. arXiv preprint arXiv:1312.5602, 2013.
[13] SILVER D, SCHRITTWIESER J, SIMONYAN K, et al. Mastering the Game of Go Without Human Knowledge[J]. Nature, 2017, 550(7676): 354-359.
[14] GU S, HOLLY E, LILLICRAP T, et al. Deep Reinforcement Learning for Robotic Manipulation with Asynchronous Off-policy Updates[C]//IEEE International Conference on Robotics and Automation. IEEE, 2017: 3389-3396.
[15] HE D, XIA Y, QIN T, et al. Dual Learning for Machine Translation[C]//Advances in Neural Information Processing Systems: volume 29. Curran Associates, Inc., 2016.
[16] BOHRINGER K F. Modeling and Controlling Parallel Tasks in Droplet-based Microfluidic Systems[J]. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 2006, 25(2): 334-344.
[17] SU F, HWANG W, CHAKRABARTY K. Droplet Routing in the Synthesis of Digital Microfluidic Biochips[C]//Design Automation & Test in Europe Conference: volume 1. IEEE, 2006: 1-6.
[18] CHO M, PAN D Z. A High-Performance Droplet Routing Algorithm for Digital Microfluidic Biochips[J]. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 2008, 27(10): 1714-1724.
[19] YUH P H, YANG C L, CHANG Y W. BioRoute: A Network-flow-based Routing Algorithm for the Synthesis of Digital Microfluidic Biochips[J]. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 2008, 27(11): 1928-1941.
[20] HUANG T W, HO T Y. A Fast Routability- and Performance-driven Droplet Routing Algorithm for Digital Microfluidic Biochips[C]//IEEE International Conference on Computer Design. IEEE, 2009: 445-450.
[21] LI Z, LAI K Y T, CHAKRABARTY K, et al. Droplet Size-Aware and Error-Correcting Sample Preparation Using Micro-Electrode-Dot-Array Digital Microfluidic Biochips[J]. IEEE Transactions on Biomedical Circuits and Systems, 2017, 11(6): 1380-1391.
[22] PAN I, DASGUPTA P, RAHAMAN H, et al. Ant Colony Optimization Based Droplet Routing Technique in Digital Microfluidic Biochip[C]//International Symposium on Electronic System Design. IEEE, 2011: 223-229.
[23] PAN I, SAMANTA T. Efficient Droplet Router for Digital Microfluidic Biochip Using Particle Swarm Optimizer[C]//International Conference on Communication and Electronics System Design: volume 8760. SPIE, 2013: 472-481.
[24] JUÁREZ J, BRIZUELA C, MARTÍNEZ I, et al. A Genetic Algorithm for the Routing of Droplets in DMFB: Preliminary Results[C]//IEEE International Conference on Systems, Man, and Cybernetics. IEEE, 2014: 3808-3815.
[25] JUÁREZ J, BRIZUELA C A, MARTÍNEZ-PÉREZ I M. An Evolutionary Multi-objective Optimization Algorithm for the Routing of Droplets in Digital Microfluidic Biochips[J]. Information Sciences, 2018, 429: 130-146.
[26] YUH P H, SAPATNEKAR S, YANG C L, et al. A Progressive-ILP based Routing Algorithm for Cross-referencing Biochips[C]//ACM/IEEE Design Automation Conference. IEEE, 2008: 284-289.
[27] YUH P H, LIN C C Y, HUANG T W, et al. A SAT-based Routing Algorithm for Cross-referencing Biochips[C]//International Workshop on System Level Interconnect Prediction. IEEE, 2011: 1-7.
[28] KESZOCZE O, WILLE R, CHAKRABARTY K, et al. A General and Exact Routing Methodology for Digital Microfluidic Biochips[C]//IEEE/ACM International Conference on Computer-Aided Design. IEEE, 2015: 874-881.
[29] LIANG T C, ZHONG Z, BIGDELI Y, et al. Adaptive Droplet Routing in Digital Microfluidic Biochips using Deep Reinforcement Learning[C]//International Conference on Machine Learning. PMLR, 2020: 6050-6060.
[30] LIANG T C, ZHOU J, CHAN Y S, et al. Parallel Droplet Control in MEDA Biochips using Multi-agent Reinforcement Learning[C]//International Conference on Machine Learning. PMLR, 2021: 6588-6599.
[31] DEVLIN S, YLINIEMI L, KUDENKO D, et al. Potential-based Difference Rewards for Multi-agent Reinforcement Learning[C]//International Conference on Autonomous Agents and Multi-agent Systems. 2014: 165-172.
[32] SU F, CHAKRABARTY K. High-Level Synthesis of Digital Microfluidic Biochips[J]. ACM Journal on Emerging Technologies in Computing Systems, 2008, 3(4).
[33] LI Z, HO T Y, LAI K Y T, et al. High-level Synthesis for Micro-electrode-dot-array Digital Microfluidic Biochips[C]//ACM/EDAC/IEEE Design Automation Conference. 2016: 1-6.
[34] LIANG T C, ZHONG Z, PAJIC M, et al. Extending the Lifetime of MEDA Biochips by Selective Sensing on Microelectrodes[J]. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 2020, 39(11): 3531-3543.
[35] ZHONG Z, LIANG T C, CHAKRABARTY K. Enhancing the Reliability of MEDA Biochips using IJTAG and Wear Leveling[J]. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 2021, 40(10): 2063-2076.
[36] VERHEIJEN H, PRINS M. Reversible Electrowetting and Trapping of Charge: Model and Experiments[J]. Langmuir, 1999, 15(20): 6616-6620.
[37] SU F, OZEV S, CHAKRABARTY K. Test Planning and Test Resource Optimization for Droplet-based Microfluidic Systems[J]. Journal of Electronic Testing, 2006, 22(2): 199-210.
[38] XU T, CHAKRABARTY K. Functional Testing of Digital Microfluidic Biochips[C]//IEEE International Test Conference. IEEE, 2007: 1-10.
[39] DRYGIANNAKIS A I, PAPATHANASIOU A G, BOUDOUVIS A G. On the Connection between Dielectric Breakdown Strength, Trapping of Charge, and Contact Angle Saturation in Electrowetting[J]. Langmuir, 2009, 25(1): 147-152.
[40] HO T Y, ZENG J, CHAKRABARTY K. Digital Microfluidic Biochips: A Vision for Functional Diversity and More Than Moore[C]//IEEE/ACM International Conference on Computer-Aided Design. IEEE, 2010: 578-585.
[41] BUSONIU L, BABUSKA R, DE SCHUTTER B. A Comprehensive Survey of Multiagent Reinforcement Learning[J]. IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), 2008, 38(2): 156-172.
[42] ALMAHDI S, YANG S Y. An Adaptive Portfolio Trading System: A Risk-return Portfolio Optimization using Recurrent Reinforcement Learning with Expected Maximum Drawdown [J]. Expert Systems with Applications, 2017, 87: 267-279.
[43] GOTTESMAN O, JOHANSSON F, KOMOROWSKI M, et al. Guidelines for Reinforcement Learning in Healthcare[J]. Nature Medicine, 2019, 25(1): 16-18.
[44] MNIH V, KAVUKCUOGLU K, SILVER D, et al. Human-level Control Through Deep Reinforcement Learning[J]. Nature, 2015, 518(7540): 529-533.
[45] SILVER D, HUANG A, MADDISON C J, et al. Mastering the Game of Go with Deep Neural Networks and Tree Search[J]. Nature, 2016, 529(7587): 484-489.
[46] BUŞONIU L, BABUŠKA R, DE SCHUTTER B. Multi-agent Reinforcement Learning: An Overview[M]//Innovations in Multi-agent Systems and Applications - 1. Springer, 2010: 183-221.
[47] SUTTON R S, BARTO A G. Reinforcement Learning: An Introduction[M]. MIT Press, 2018.
[48] KAELBLING L P, LITTMAN M L, MOORE A W. Reinforcement Learning: A Survey[J]. Journal of Artificial Intelligence Research, 1996, 4: 237-285.
[49] VAN OTTERLO M, WIERING M. Reinforcement Learning and Markov Decision Processes[M]//Reinforcement Learning: State-of-the-Art. Springer, 2012: 3-42.
[50] ZHANG K, YANG Z, BAŞAR T. Multi-agent Reinforcement Learning: A Selective Overview of Theories and Algorithms[J]. Handbook of Reinforcement Learning and Control, 2021: 321-384.
[51] HU J, WELLMAN M P. Multiagent Reinforcement Learning: Theoretical Framework and an Algorithm[C]//International Conference on Machine Learning. Morgan Kaufmann, 1998: 242-250.
[52] SCHULMAN J, LEVINE S, ABBEEL P, et al. Trust Region Policy Optimization[C]//International Conference on Machine Learning. PMLR, 2015: 1889-1897.
[53] BUSONIU L, BABUSKA R, DE SCHUTTER B. Multi-agent Reinforcement Learning: A Survey[C]//International Conference on Control, Automation, Robotics and Vision. IEEE, 2006: 1-6.
[54] LAURENT G J, MATIGNON L, LE FORT-PIAT N. The World of Independent Learners is not Markovian[J]. International Journal of Knowledge-based and Intelligent Engineering Systems, 2011, 15(1): 55-64.
[55] LOWE R, WU Y I, TAMAR A, et al. Multi-agent Actor-Critic for Mixed Cooperative-Competitive Environments[C]//Advances in Neural Information Processing Systems: volume 30. Curran Associates, Inc., 2017.
[56] WATKINS C J, DAYAN P. Q-learning[J]. Machine Learning, 1992, 8(3): 279-292.
[57] SILVER D, LEVER G, HEESS N, et al. Deterministic Policy Gradient Algorithms[C]//International Conference on Machine Learning. PMLR, 2014: 387-395.
[58] WILLIAMS R J. Simple Statistical Gradient-following Algorithms for Connectionist Reinforcement Learning[J]. Machine Learning, 1992, 8(3): 229-256.
[59] MNIH V, BADIA A P, MIRZA M, et al. Asynchronous Methods for Deep Reinforcement Learning[C]//International Conference on Machine Learning. PMLR, 2016: 1928-1937.
[60] SUTTON R S. Integrated Architectures for Learning, Planning, and Reacting based on Approximating Dynamic Programming[M]//Machine Learning Proceedings 1990. Elsevier, 1990: 216-224.
[61] BRAFMAN R I, TENNENHOLTZ M. R-max - A General Polynomial Time Algorithm for Near-optimal Reinforcement Learning[J]. Journal of Machine Learning Research, 2002, 3(10): 213-231.
[62] BROWNE C B, POWLEY E, WHITEHOUSE D, et al. A Survey of Monte Carlo Tree Search Methods[J]. IEEE Transactions on Computational Intelligence and AI in Games, 2012, 4(1): 1-43.
[63] OLIEHOEK F A, SPAAN M T, VLASSIS N. Optimal and Approximate Q-value Functions for Decentralized POMDPs[J]. Journal of Artificial Intelligence Research, 2008, 32: 289-353.
[64] OLIEHOEK F A, AMATO C. A Concise Introduction to Decentralized POMDPs[M]. Springer, 2016.
[65] PANAIT L, LUKE S. Cooperative Multi-agent Learning: The State of the Art[J]. Autonomous Agents and Multi-agent Systems, 2005, 11(3): 387-434.
[66] TUYLS K, WEISS G. Multiagent Learning: Basics, Challenges, and Prospects[J]. AI Magazine, 2012, 33(3): 41-52.
[67] SUNEHAG P, LEVER G, GRUSLYS A, et al. Value-Decomposition Networks For Cooperative Multi-agent Learning based on Team Reward[C]//International Conference on Autonomous Agents and Multi-agent Systems. 2018: 2085-2087.
[68] SARTORETTI G, KERR J, SHI Y, et al. PRIMAL: Pathfinding via Reinforcement and Imitation Multi-agent Learning[J]. IEEE Robotics and Automation Letters, 2019, 4(3): 2378-2385.
[69] HAUSKNECHT M, STONE P. Deep Recurrent Q-learning for Partially Observable MDPs[C]//AAAI Fall Symposium Series. AAAI, 2015: 29-37.