Algorithmic Perspectives on Certification of Machine Learning
Prof. Xiaowei Huang (University of Liverpool)
Machine learning has proven practical for solving complex problems that could not be solved before, but it has also been found to be not without shortfalls. Therefore, before its adoption in safety-critical applications, machine learning and machine-learning-enabled systems need to be certified; that is, a written assurance (i.e., a certificate) must be provided to justify that they meet specific requirements. This talk will provide an overview of the certification of machine learning from an algorithmic perspective on dealing with its vulnerabilities. This includes efforts on falsification, explanation, verification, enhancement, reliability estimation, and runtime monitoring, addressing known risks in the machine learning development cycle, such as generalisation, uncertainty, robustness, poisoning, backdoor, and privacy-related attacks.
Xiaowei Huang is Professor of Computer Science at the University of Liverpool, where he leads the Trustworthy Autonomous Cyber-Physical Systems laboratory and serves as Research Lead of the School of EEECS. His research interests span AI safety and security, verification and validation of learning-enabled systems, explainable AI, and formal methods. He has produced seminal work on the safety verification of deep learning and published the textbook “Machine Learning Safety”. He has published 100+ papers, most of which appeared in top conferences such as AAAI, IJCAI, CAV, ICSE, NeurIPS, CVPR, ICRA, and IROS. He has co-chaired the AISafety workshop series since 2018, and his research is supported by EPSRC, the EU, Innovate UK, and Dstl (MoD).