
Trustworthy Federated Learning Series - AIRS in the AIR

June 28, 2022, 9:00 ~ 11:00
Online event (Huodongxing Live)
AIRS (Shenzhen Institute of Artificial Intelligence and Robotics for Society)


    Event Details


    Bo Li

    Dr. Bo Li is an assistant professor in the Department of Computer Science at the University of Illinois at Urbana–Champaign. She is the recipient of the MIT Technology Review TR-35 Award, the Alfred P. Sloan Research Fellowship, the NSF CAREER Award, the IJCAI Computers and Thought Award, the Dean's Award for Excellence in Research, the C.W. Gear Outstanding Junior Faculty Award, the Intel Rising Star Award, the Symantec Research Labs Fellowship, the Rising Star Award, research awards from tech companies such as Amazon, Facebook, Intel, and IBM, and best paper awards at several top machine learning and security conferences. Her research focuses on both theoretical and practical aspects of trustworthy machine learning, security, privacy, and game theory. She has designed several scalable frameworks for robust machine learning and privacy-preserving data publishing systems. Her work has been featured by major publications and media outlets such as Nature, Wired, Fortune, and The New York Times.

    Her website: http://boli.cs.illinois.edu/


    Trustworthy Federated Learning

    Abstract

    Advances in machine learning have led to the rapid and widespread deployment of learning-based inference and decision-making in safety-critical applications, such as autonomous driving and security diagnostics. Current machine learning systems, however, assume that training and test data follow the same, or similar, distributions, and do not consider active adversaries manipulating either distribution. Recent work has demonstrated that motivated adversaries can circumvent anomaly detection or other machine learning models at test time through evasion attacks, or can inject well-crafted malicious instances into the training data to induce errors at inference time through poisoning attacks, especially in the distributed learning setting. In this talk, I will describe my recent research on security and privacy problems in federated learning, with a focus on potential certifiable defense approaches. We will also discuss other defense principles for developing practical learning systems with robustness guarantees.
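
    To make the poisoning threat concrete, here is a minimal, hypothetical sketch (plain numpy, a bare FedAvg aggregator) of a model-replacement style poisoning attack, in which a single malicious client scales its update so that the averaged global model lands on a target of the attacker's choosing. The names and setup are illustrative assumptions, not the specific attacks or defenses covered in the talk.

    import numpy as np

    def fedavg(updates):
        # Server-side step: average the clients' model updates.
        return np.mean(updates, axis=0)

    rng = np.random.default_rng(0)
    dim, n_clients = 10, 100

    # Benign clients send small, honest updates.
    benign = [0.01 * rng.standard_normal(dim) for _ in range(n_clients - 1)]

    # The attacker picks a target update and scales its contribution by the
    # number of clients, so that after averaging the global update equals
    # the target exactly (a "model replacement" style poisoning attack).
    target = np.ones(dim)
    malicious = n_clients * target - sum(benign)

    global_update = fedavg(benign + [malicious])
    print(np.allclose(global_update, target))  # True: one client steered the round

    Defenses in this space typically bound each client's influence on the aggregate, for example by norm-clipping updates or by replacing the plain mean with a robust aggregation rule.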

     


    Peter Kairouz

    Peter Kairouz is a research scientist at Google, where he coordinates research efforts on federated learning and privacy-preserving technologies. Before joining Google, he was a Postdoctoral Research Fellow at Stanford University. He received his Ph.D. in electrical and computer engineering from the University of Illinois at Urbana-Champaign (UIUC). He is the recipient of the 2012 Roberto Padovani Scholarship from Qualcomm's Research Center, the 2015 ACM SIGMETRICS Best Paper Award, the 2021 ACM Conference on Computer and Communications Security (CCS) Best Paper Award, the 2015 Qualcomm Innovation Fellowship Finalist Award, and the 2016 Harold L. Olesen Award for Excellence in Undergraduate Teaching from UIUC.

      


    Towards Sparse Federated Analytics: Location Heatmaps under Distributed Differential Privacy with Secure Aggregation

    Abstract

    I will start this talk with an overview of federated learning and analytics and their core data minimization principles. I will then describe how privacy can be strengthened using complementary techniques such as differential privacy, secure multi-party computation, and privacy auditing methods. I will spend much of the talk describing how we can carefully combine technologies like differential privacy and secure aggregation to obtain formal distributed privacy guarantees without fully trusting the server to add noise. As a main example, I will present a scalable federated analytics algorithm for learning geolocation heatmaps with distributed differential privacy via secure aggregation. Evaluation on public location datasets shows that this approach successfully generates metropolitan-scale heatmaps from millions of user samples with a worst-case client communication overhead that is significantly smaller than that of existing state-of-the-art private protocols of similar accuracy.
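
    As a rough illustration of the distributed-noise idea, the sketch below has each client add a small share of Gaussian noise to its one-hot location vector before a sum that stands in for secure aggregation, so the server only ever sees an aggregate that already carries the full noise needed for a differential privacy guarantee. All parameters are illustrative assumptions; practical protocols typically use discrete noise and finite-field arithmetic so that the noise survives the secure aggregation step.

    import numpy as np

    rng = np.random.default_rng(0)
    n_cells, n_clients = 16, 10_000   # flattened grid of heatmap cells
    sigma_total = 8.0                 # noise std the released aggregate should carry

    locations = rng.integers(0, n_cells, size=n_clients)

    def client_report(cell):
        onehot = np.zeros(n_cells)
        onehot[cell] = 1.0
        # Each client adds a sigma_total / sqrt(n) share of noise; summed over
        # n clients the aggregate carries noise of std sigma_total per cell,
        # which is what the privacy guarantee for the released sum rests on.
        return onehot + rng.normal(0.0, sigma_total / np.sqrt(n_clients), n_cells)

    # A plain sum stands in for secure aggregation: the server learns only
    # this total, never the per-client vectors.
    noisy_heatmap = sum(client_report(c) for c in locations)

    true_heatmap = np.bincount(locations, minlength=n_cells)
    print(np.abs(noisy_heatmap - true_heatmap).max())  # error on the order of sigma_total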




