A testing approach for dependable Machine Learning systems
Abstract
To be used in a critical system, a software or hardware component must come with strong evidence that the designer's intents have been correctly captured and implemented. This activity is already complex and expensive for classical systems, despite a very large corpus of verification methods and tools. It is even more complicated for systems embedding Machine Learning (ML) algorithms, owing both to the nature of the functions implemented using ML and to the nature of the ML algorithms themselves. This paper focuses on one specific verification technique, testing, for which we propose a four-pronged approach combining performance testing, robustness testing, worst-case testing, and bias testing.