[1] MCMAHAN B, MOORE E, RAMAGE D, et al. Communication-efficient learning of deep networks from decentralized data[C]//Artificial Intelligence and Statistics. PMLR, 2017: 1273-1282.
[2] YANG Q, LIU Y, CHENG Y, et al. Federated learning[M]. Beijing: Publishing House of Electronics Industry, 2020. (in Chinese)
[3] YANG Q, LIU Y, CHEN T, et al. Federated machine learning: Concept and applications[J]. ACM Transactions on Intelligent Systems and Technology, 2019, 10(2): 1-19.
[4] LIU F, WU X, GE S, et al. Federated learning for vision-and-language grounding problems[C]//AAAI Conference on Artificial Intelligence. 2020: 11572-11579.
[5] XIE P, WU B, SUN G. Bayhenn: Combining bayesian deep learning and homomorphic encryption for secure dnn inference[J]. arXiv preprint arXiv:1906.00639, 2019.
[6] BONAWITZ K, IVANOV V, KREUTER B, et al. Practical secure aggregation for privacy preserving machine learning[C]//ACM SIGSAC Conference on Computer and Communications Security. 2017: 1175-1191.
[7] KONEČNỲ J, MCMAHAN H B, RAMAGE D, et al. Federated optimization: Distributed machine learning for on-device intelligence[J]. arXiv preprint arXiv:1610.02527, 2016.
[8] KAIROUZ P, MCMAHAN H B, AVENT B, et al. Advances and open problems in federated learning[J]. arXiv preprint arXiv:1912.04977, 2019.
[9] LI T, SAHU A K, TALWALKAR A, et al. Federated learning: Challenges, methods, and future directions[J]. IEEE Signal Processing Magazine, 2020, 37(3): 50-60.
[10] KONEČNỲ J, MCMAHAN H B, YU F X, et al. Federated learning: Strategies for improving communication efficiency[J]. arXiv preprint arXiv:1610.05492, 2016.
[11] ROTHCHILD D, PANDA A, ULLAH E, et al. Fetchsgd: Communication-efficient federated learning with sketching[C]//International Conference on Machine Learning. PMLR, 2020: 8253-8265.
[12] DIAO E, DING J, TAROKH V. Heterofl: Computation and communication efficient federated learning for heterogeneous clients[J]. arXiv preprint arXiv:2010.01264, 2020.
[13] LI T, SAHU A K, ZAHEER M, et al. Federated optimization in heterogeneous networks[J]. arXiv preprint arXiv:1812.06127, 2018.
[14] KARIMIREDDY S P, KALE S, MOHRI M, et al. Scaffold: Stochastic controlled averaging for federated learning[C]//International Conference on Machine Learning. PMLR, 2020: 5132-5143.
[15] IOFFE S, SZEGEDY C. Batch normalization: Accelerating deep network training by reducing internal covariate shift[C]//International Conference on Machine Learning. PMLR, 2015: 448-456.
[16] ZHU L, LIU Z, HAN S. Deep leakage from gradients[J]. Advances in Neural Information Processing Systems, 2019, 32.
[17] HITAJ B, ATENIESE G, PEREZ-CRUZ F. Deep models under the gan: Information leakage from collaborative deep learning[C]//ACM SIGSAC Conference on Computer and Communications Security. 2017: 603-618.
[18] BAGDASARYAN E, VEIT A, HUA Y, et al. How to backdoor federated learning[C]//International Conference on Artificial Intelligence and Statistics. PMLR, 2020: 2938-2948.
[19] SUN Z, KAIROUZ P, SURESH A T, et al. Can you really backdoor federated learning?[J]. arXiv preprint arXiv:1911.07963, 2019.
[20] TORREY L, SHAVLIK J. Transfer learning[M]//Handbook of research on machine learning applications and trends: Algorithms, methods, and techniques. IGI Global, 2010: 242-264.
[21] WANG K, MATHEWS R, KIDDON C, et al. Federated evaluation of on-device personalization[J]. arXiv preprint arXiv:1910.10252, 2019.
[22] ARIVAZHAGAN M G, AGGARWAL V, SINGH A K, et al. Federated learning with personalization layers[J]. arXiv preprint arXiv:1912.00818, 2019.
[23] COLLINS L, HASSANI H, MOKHTARI A, et al. Exploiting shared representations for personalized federated learning[C]//International Conference on Machine Learning. PMLR, 2021: 2089-2099.
[24] SHAMSIAN A, NAVON A, FETAYA E, et al. Personalized federated learning using hypernetworks[C]//International Conference on Machine Learning. PMLR, 2021: 9489-9502.
[25] VILALTA R, DRISSI Y. A perspective view and survey of meta-learning[J]. Artificial Intelligence Review, 2002, 18(2): 77-95.
[26] FINN C, ABBEEL P, LEVINE S. Model-agnostic meta-learning for fast adaptation of deep networks[C]//International Conference on Machine Learning. PMLR, 2017: 1126-1135.
[27] JIANG Y, KONEČNỲ J, RUSH K, et al. Improving federated learning personalization via model agnostic meta learning[J]. arXiv preprint arXiv:1909.12488, 2019.
[28] T DINH C, TRAN N, NGUYEN J. Personalized federated learning with moreau envelopes[J]. Advances in Neural Information Processing Systems, 2020, 33: 21394-21405.
[29] LI X C, ZHAN D C, SHAO Y, et al. Fedphp: Federated personalization with inherited private models[C]//Joint European Conference on Machine Learning and Knowledge Discovery in Databases. Springer, 2021: 587-602.
[30] GUO B, MEI Y, XIAO D, et al. Pfl-moe: Personalized federated learning based on mixture of experts[J]. arXiv preprint arXiv:2012.15589, 2020.
[31] DENG Y, KAMANI M M, MAHDAVI M. Adaptive personalized federated learning[J]. arXiv preprint arXiv:2003.13461, 2020.
[32] HUANG Y, CHU L, ZHOU Z, et al. Personalized cross-silo federated learning on non-iid data[C]//AAAI Conference on Artificial Intelligence. 2021: 7865-7873.
[33] ZHANG M, SAPRA K, FIDLER S, et al. Personalized federated learning with first order model optimization[J]. arXiv preprint arXiv:2012.08565, 2020.
[34] HINTON G, VINYALS O, DEAN J, et al. Distilling the knowledge in a neural network[J]. arXiv preprint arXiv:1503.02531, 2015.
[35] GOU J, YU B, MAYBANK S J, et al. Knowledge distillation: A survey[J]. International Journal of Computer Vision, 2021, 129(6): 1789-1819.
[36] ZADEH A, PU P. Multimodal language analysis in the wild: Cmu-mosei dataset and interpretable dynamic fusion graph[C]//the 56th Annual Meeting of the Association for Computational Linguistics. 2018.
[37] ELLIS J G, JOU B, CHANG S F. Why we watch the news: A dataset for exploring sentiment in broadcast video news[C]//International Conference on Multimodal Interaction. 2014: 104-111.
[38] PORIA S, HAZARIKA D, MAJUMDER N, et al. Meld: A multimodal multi-party dataset for emotion recognition in conversations[J]. arXiv preprint arXiv:1810.02508, 2018.
[39] CAI Y, CAI H, WAN X. Multi-modal sarcasm detection in twitter with hierarchical fusion model[C]//the 57th Annual Meeting of the Association for Computational Linguistics. 2019: 2506-2515.
[40] HE K, ZHANG X, REN S, et al. Deep residual learning for image recognition[C]//IEEE Conference on Computer Vision and Pattern Recognition. 2016: 770-778.
[41] ZADEH A, CHEN M, PORIA S, et al. Tensor fusion network for multimodal sentiment analysis[J]. arXiv preprint arXiv:1707.07250, 2017.
[42] ZADEH A, LIANG P P, PORIA S, et al. Multi-attention recurrent network for human communication comprehension[C]//AAAI Conference on Artificial Intelligence. 2018: 5642-5649.
[43] ZADEH A, LIANG P P, MAZUMDER N, et al. Memory fusion network for multi-view sequential learning[C]//AAAI Conference on Artificial Intelligence. 2018: 5634-5641.
[44] FALLAH A, MOKHTARI A, OZDAGLAR A. Personalized federated learning with theoretical guarantees: A model-agnostic meta-learning approach[J]. Advances in Neural Information Processing Systems, 2020, 33: 3557-3568.
[45] SMITH V, CHIANG C K, SANJABI M, et al. Federated multi-task learning[J]. Advances in Neural Information Processing Systems, 2017, 30: 4424-4434.
[46] ZHANG Y, XIANG T, HOSPEDALES T M, et al. Deep mutual learning[C]//IEEE Conference on Computer Vision and Pattern Recognition. 2018: 4320-4328.
[47] REDDI S, CHARLES Z, ZAHEER M, et al. Adaptive federated optimization[J]. arXiv preprint arXiv:2003.00295, 2020.
[48] SHAMIR O, SREBRO N, ZHANG T. Communication-efficient distributed optimization using an approximate newton-type method[C]//International Conference on Machine Learning. PMLR, 2014: 1000-1008.
[49] YU W, XU H, MENG F, et al. Ch-sims: A chinese multimodal sentiment analysis dataset with fine-grained annotation of modality[C]//the 58th Annual Meeting of the Association for Computational Linguistics. 2020: 3718-3727.
[50] DEVLIN J, CHANG M W, LEE K, et al. Bert: Pre-training of deep bidirectional transformers for language understanding[J]. arXiv preprint arXiv:1810.04805, 2018.
[51] DEGOTTEX G, KANE J, DRUGMAN T, et al. Covarep—a collaborative voice analysis repository for speech technologies[C]//IEEE International Conference on Acoustics, Speech and Signal Processing. 2014: 960-964.
[52] MCFEE B, RAFFEL C, LIANG D, et al. Librosa: Audio and music signal analysis in python[C]//the 14th Python in Science Conference. 2015: 18-25.
[53] ZHANG K, ZHANG Z, LI Z, et al. Joint face detection and alignment using multitask cascaded convolutional networks[J]. IEEE Signal Processing Letters, 2016, 23(10): 1499-1503.
[54] BALTRUSAITIS T, ZADEH A, LIM Y C, et al. Openface 2.0: Facial behavior analysis toolkit[C]//IEEE International Conference on Automatic Face & Gesture Recognition. 2018: 59-66.
[55] LECUN Y, BOSER B, DENKER J S, et al. Backpropagation applied to handwritten zip code recognition[J]. Neural Computation, 1989, 1(4): 541-551.
[56] HOCHREITER S, SCHMIDHUBER J. Long short-term memory[J]. Neural Computation, 1997, 9(8): 1735-1780.
[57] ZHANG W, LI R, ZENG T, et al. Deep model based transfer and multi-task learning for biological image analysis[J]. IEEE Transactions on Big Data, 2016, 6(2): 322-333.
[58] LIU W, MEI T, ZHANG Y, et al. Multi-task deep visual-semantic embedding for video thumbnail selection[C]//IEEE Conference on Computer Vision and Pattern Recognition. 2015: 3707-3715.
[59] CARUANA R. Multitask learning[J]. Machine Learning, 1997, 28(1): 41-75.
[60] ZHANG Y, YANG Q. A survey on multi-task learning[J]. IEEE Transactions on Knowledge and Data Engineering, 2021.