Title
基于搜索的深度神经网络量化方法研究 (Research on Search-Based Quantization Methods for Deep Neural Networks)
Alternative Title
SEARCH-BASED QUANTIZATION METHODS FOR DEEP NEURAL NETWORKS
Name
彭福
Name (Pinyin)
PENG Fu
Student ID
11930584
Degree Type
Master
Degree Discipline
0809 Electronic Science and Technology
Discipline Category / Professional Degree Category
08 Engineering
Supervisor
唐珂 (TANG Ke)
Supervisor's Affiliation
Department of Computer Science and Engineering
Thesis Defense Date
2022-05-08
Thesis Submission Date
2022-06-18
Degree-Granting Institution
南方科技大学 (Southern University of Science and Technology)
Degree-Granting Location
Shenzhen
Abstract

Quantization of deep neural networks is a simple yet effective approach to model compression and has become a topic of intense research. Although a wide variety of quantization methods have been proposed, only a few of them can quantize gradients, which is what gives a method the potential to train quantized neural networks on low-end devices. This work aims to propose search algorithms, built on an evolutionary framework, that train quantized neural networks without requiring any gradient information, so that training becomes feasible on low-end devices.
To this end, the methodology of this thesis is as follows. First, the deep neural network quantization problem is cast as a large-scale discrete optimization problem. Then, weight quantization functions and activation quantization functions are designed to construct the search space and the objective function. Next, evolutionary algorithms suited to the characteristics of the problem are devised. Finally, to strengthen the search capability on large-scale optimization problems, a cooperative co-evolution framework is introduced to accelerate the search.
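To make the formulation concrete, here is a minimal Python sketch (not the thesis code) of casting weight quantization as a discrete optimization problem: each decision variable indexes a level in an assumed low-bit codebook, and the objective is the training loss of the network assembled from those levels. The names LEVELS and forward_fn are hypothetical placeholders.

import numpy as np

LEVELS = np.array([-1.0, 0.0, 1.0])  # assumed ternary codebook; other bit-widths would use more levels

def decode(codes, scale):
    # Map integer decision variables (indices into LEVELS) to real-valued weights.
    return scale * LEVELS[codes]

def objective(codes, scale, forward_fn, batch_x, batch_y):
    # Fitness of one candidate: cross-entropy loss of the network whose weights
    # are assembled from the discrete codes. forward_fn stands in for the
    # quantized network's forward pass (activation quantization happens inside it).
    logits = forward_fn(decode(codes, scale), batch_x)
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    return -np.mean(np.log(probs[np.arange(len(batch_y)), batch_y] + 1e-12))

Because the objective is evaluated by a forward pass alone, no gradient information is ever required, which is what makes derivative-free evolutionary search applicable here.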
Specifically, the results and contributions of this work are as follows:
(1) A novel estimation of distribution algorithm based on cooperative co-evolution (EDA+CC for short) is proposed to train quantized neural networks. A survey of the literature indicates that this is the first work to apply evolutionary algorithms to search for the optimal low-bit weights of quantized deep neural networks (QDNNs); a minimal sketch of the EDA+CC idea is given after this list.
(2) A novel estimation of distribution algorithm based on layer-wise search (LEDA for short) is proposed to train quantized neural networks. Compared with EDA+CC, LEDA targets an optimization problem whose search space is one million times larger, and it introduces a new decision-variable grouping strategy (see the grouping sketch after this list).
(3) Experiments validate the effectiveness of the proposed algorithms. Notably, compared with full-precision floating-point networks, the quantized networks trained by EDA+CC and LEDA suffer no significant drop in accuracy, and in some cases match full precision. This demonstrates the great potential of evolutionary algorithms for neural network quantization.
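For contribution (1), the following is a minimal sketch, assuming a PBIL-style discrete EDA combined with random-grouping cooperative co-evolution; it illustrates the general EDA+CC idea rather than the thesis implementation, and eda_cc, evaluate, and all parameter names are hypothetical. evaluate could be bound to the quantization objective sketched above.

import numpy as np

rng = np.random.default_rng(0)

def eda_cc(n_vars, n_levels, evaluate, n_groups=10, pop=50, iters=100, lr=0.1):
    prob = np.full((n_vars, n_levels), 1.0 / n_levels)  # one categorical distribution per variable
    context = rng.integers(n_levels, size=n_vars)       # best-so-far full solution (context vector)
    best_f = evaluate(context)
    for _ in range(iters):
        # Cooperative co-evolution: split the variables into groups and optimize
        # each group in turn while the rest of the context vector stays fixed.
        for group in np.array_split(rng.permutation(n_vars), n_groups):
            cand = np.tile(context, (pop, 1))
            for j in group:                              # resample only this group's variables
                cand[:, j] = rng.choice(n_levels, size=pop, p=prob[j])
            fits = np.array([evaluate(c) for c in cand])
            elite = cand[fits.argmin()]                  # the loss is minimized
            if fits.min() < best_f:
                best_f, context = fits.min(), elite.copy()
            for j in group:                              # EDA update: shift the model toward the elite
                prob[j] = (1 - lr) * prob[j] + lr * np.eye(n_levels)[elite[j]]
    return context, best_f

With evaluate set to such an objective, this loop searches the low-bit weight space directly, using nothing but forward-pass evaluations.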
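For contribution (2), the name LEDA suggests that decision variables are grouped layer by layer instead of randomly; the sketch below shows such a layer-wise grouping under that assumption (the exact grouping rule in the thesis may differ). Each group would then be optimized in turn by an EDA subroutine like the one above.

def layer_wise_groups(layer_sizes):
    # Partition the flat decision vector so each group holds exactly one layer's
    # weights; layer_sizes lists the number of weights per layer.
    groups, start = [], 0
    for size in layer_sizes:
        groups.append(range(start, start + size))
        start += size
    return groups

# Toy example: a 3-layer network with 6, 4, and 2 weights per layer.
print(layer_wise_groups([6, 4, 2]))  # -> [range(0, 6), range(6, 10), range(10, 12)]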

Keywords
Language
Chinese
Training Category
Independently trained
Year of Enrollment
2019
Year Degree Conferred
2022-06
Degree Assessment Subcommittee
Department of Computer Science and Engineering
Chinese Library Classification Number
TP391.4
Source Repository
Manual submission
Document Type
Thesis
Identifier
http://sustech.caswiz.com/handle/2SGJ60CL/335956
Collection
College of Engineering - Department of Computer Science and Engineering
Recommended Citation (GB/T 7714)
彭福. 基于搜索的深度神经网络量化方法研究[D]. 深圳: 南方科技大学, 2022.
Files in This Item
11930584-彭福-计算机科学与工程 (24147 KB), open access type: restricted