Towards Certifying Trustworthy Machine Learning Systems

2021 
Machine Learning (ML) is increasingly deployed in complex application domains, replacing human decision-making. While ML has been surprisingly successful, there are fundamental concerns about wide-scale deployment without humans in the loop. A critical question is the trustworthiness of such ML systems. Although there is research towards making ML systems more trustworthy, many challenges remain. In this position paper, we discuss the challenges and limitations of current proposals. We focus on a more adversarial approach, borrowing ideas from the certification of security software under the Common Criteria. While it is unclear how to obtain strong trustworthiness assurances for ML systems, we believe this approach can further increase the level of trust.