Title

Exploring Leaked Information in Privacy-Aware Machine Learning

Name
Name (Pinyin)
ZHANG Kaiyue
Student ID
11960004
Degree Type
Doctor of Philosophy
Major
Computer Science and Engineering
Supervisor
SONG Xuan (宋轩)
Supervisor's Affiliation
Department of Computer Science and Engineering
Thesis Defense Date
2024-07-10
Thesis Submission Date
2024-09-03
Degree-Granting Institution
University of Technology Sydney
Place of Degree Conferral
Australia
Abstract

Privacy-aware machine learning is a significant field that integrates privacy concerns into the design, development, and deployment of machine learning algorithms. Emerging techniques such as federated learning and machine unlearning play a crucial role in safeguarding user data: they keep raw data inaccessible to servers and uphold the right to be forgotten during machine learning processes. However, although these methods disclose only model parameters and never the data itself, they still carry a risk of privacy leakage. This thesis focuses on the information leaked through changes in the parameters of privacy-aware machine learning models. First, we study information leakage in federated learning, where parameter changes stem from aggregating the gradients of local users. By exploiting the leaked gradients together with prior knowledge of human mobility, we propose a novel reconstruction attack that recovers local users' private data. Next, we examine information leakage in machine unlearning, where parameter changes stem from removing the influence of a small amount of user data from the model. Leveraging the relationship between the leaked parameter changes and the unlearned data, we present a forgotten-sample reconstruction algorithm built on a self-designed Conditional Matching Generative Adversarial Network, which reconstructs private unlearned samples. Furthermore, for complex unlearning reconstruction tasks, where parameter changes are large and the unlearned samples contain intricate details, we propose a hierarchical reconstruction attack driven by pre-trained models. Finally, to strengthen privacy protection and defend against the aforementioned reconstruction attacks in privacy-aware machine learning, we investigate potential counter-attack strategies and develop a new unlearning strategy based on secure substitution.
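The gradient-leakage risk summarized above can be made concrete with a minimal, hypothetical sketch (this is an illustration of the general principle, not the thesis's actual attack, which targets mobility data and deep models): for a linear model with a bias term trained with MSE loss on a single sample, the gradients a federated-learning client would upload reveal the private input exactly, because the weight gradient is the outer product of the bias gradient and the input.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Private" sample held by a federated-learning client (hypothetical toy data).
x_true = rng.normal(size=3)          # private input features
y = rng.normal(size=2)               # target

# Shared linear model y_hat = W x + b with loss L = 0.5 * ||W x + b - y||^2.
W = rng.normal(size=(2, 3))
b = rng.normal(size=2)

# Gradients the client would upload to the server.
residual = W @ x_true + b - y
grad_W = np.outer(residual, x_true)  # dL/dW = r x^T
grad_b = residual                    # dL/db = r

# Server-side reconstruction: since dL/dW = (dL/db) x^T, the private
# input follows directly from the leaked gradients.
x_rec = grad_W.T @ grad_b / (grad_b @ grad_b)

print(np.allclose(x_rec, x_true))    # True: exact recovery
```

For deep models no such closed form exists, which is why gradient-inversion attacks instead optimize a dummy input until its gradients match the leaked ones; the closed-form linear case shows why that optimization has a meaningful target.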
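Similarly, the leakage carried by parameter changes in machine unlearning can be illustrated with a deliberately simple example (an illustration only, assuming a toy "model" that is just the per-feature mean of its training data): an observer who sees the parameters before and after one record is exactly unlearned can reconstruct that record from the difference alone.

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(size=(10, 4))           # hypothetical user records

# A trivially simple "model": the per-feature mean of the training set.
theta_full = data.mean(axis=0)            # parameters before unlearning
theta_unlearned = data[1:].mean(axis=0)   # parameters after removing data[0]

# Reconstruction from the two parameter vectors:
# n * theta_full - (n - 1) * theta_unlearned = the deleted record.
n = len(data)
x_forgotten = n * theta_full - (n - 1) * theta_unlearned

print(np.allclose(x_forgotten, data[0]))  # True: the "forgotten" record leaks
```

Real unlearned models relate to the removed data far less directly, which is why the thesis resorts to learned generative reconstructions rather than algebra, but the toy case shows that parameter differences are not privacy-neutral.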

Keywords
Language
English
Training Category
Joint Training
Year of Enrollment
2020
Year of Degree Conferral
2024-07
Source Repository
Manual submission
Item Type
Thesis
Identifier
http://sustech.caswiz.com/handle/2SGJ60CL/833842
Collection
College of Engineering_Department of Computer Science and Engineering
Recommended Citation
GB/T 7714
Zhang KY. Exploring Leaked Information in Privacy-Aware Machine Learning[D]. Australia: University of Technology Sydney, 2024.
Files in This Item
File Name/Size | Document Type | Version | Access | License | Action
11960004-张恺玥-计算机科学与工 (18991KB) -- Restricted access -- Request full text

Unless otherwise stated, all content in this repository is protected by copyright, and all rights are reserved.