Risk score learning for COVID-19 contact tracing apps.

2021 
Digital contact tracing apps for COVID-19, such as the one developed by Google and Apple, need to estimate the risk that a user was infected during a particular exposure, in order to decide whether to notify the user to take precautions, such as entering quarantine or requesting a test. Such risk score models contain numerous parameters that must be set by the public health authority. Although expert guidance for how to set these parameters has been provided (e.g. https://github.com/lfph/gaen-risk-scoring/blob/main/risk-scoring.md), it is natural to ask whether we could do better using a data-driven approach. This can be particularly useful when the risk factors of the disease change, e.g., due to the evolution of new variants or the adoption of vaccines. In this paper, we show that machine learning methods can be used to automatically optimize the parameters of the risk score model, provided we have access to exposure and outcome data. Although such data is already being collected in an aggregated, privacy-preserving way by several health authorities, in this paper we limit ourselves to simulated data, so that we can systematically study the different factors that affect the feasibility of the approach. In particular, we show that the parameters become harder to estimate when there is more missing data (e.g., due to infections that were not recorded by the app). Nevertheless, the learning approach outperforms a strong manually designed baseline.
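
To make the idea concrete, the sketch below shows one simplified way such parameter learning could look. It simulates per-user exposures binned by attenuation bucket and infectiousness level, generates infection outcomes from an exponential dose-response model, and then recovers the bucket weights by maximum likelihood. The bucket layout, the dose-response link, the parameter values, and the optimizer are illustrative assumptions, not the paper's exact model or code.

# Illustrative sketch only: a simplified risk-score model fit to simulated
# exposure/outcome data by maximum likelihood. All structural choices here
# (bucket counts, exponential dose-response link, parameter values) are
# assumptions made for clarity, not the paper's actual model.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
N_USERS, N_ATT, N_INF = 5000, 3, 3        # users, attenuation buckets, infectiousness levels

# True (hidden) weights used only to simulate data.
true_att = np.array([1.0, 0.5, 0.1])      # near / medium / far attenuation weights
true_inf = np.array([0.3, 1.0, 2.0])      # low / standard / high infectiousness weights

# Each user's exposure history is summarized as minutes spent in each
# (attenuation bucket, infectiousness level) cell.
minutes = rng.gamma(shape=1.0, scale=15.0, size=(N_USERS, N_ATT, N_INF))

def infection_prob(minutes, att_w, inf_w, scale):
    """Exponential dose-response: p = 1 - exp(-scale * sum_ij att_i * inf_j * minutes_nij)."""
    dose = np.einsum("nij,i,j->n", minutes, att_w, inf_w)
    return 1.0 - np.exp(-scale * dose)

# Simulate noisy infection outcomes from the true model.
y = rng.binomial(1, infection_prob(minutes, true_att, true_inf, scale=0.002))

# Learn log-parameters (so the weights stay positive) by minimizing the
# Bernoulli negative log-likelihood of the observed outcomes.
def neg_log_lik(theta):
    att_w = np.exp(theta[:N_ATT])
    inf_w = np.exp(theta[N_ATT:N_ATT + N_INF])
    p = np.clip(infection_prob(minutes, att_w, inf_w, np.exp(theta[-1])), 1e-6, 1 - 1e-6)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

theta0 = np.zeros(N_ATT + N_INF + 1)
theta0[-1] = -5.0                         # start with a small dose scale so p is not saturated
result = minimize(neg_log_lik, theta0, method="L-BFGS-B")

att_hat = np.exp(result.x[:N_ATT])
print("learned attenuation weights (up to scale):", att_hat / att_hat.max())
print("true attenuation weights (up to scale):   ", true_att / true_att.max())

Because the weights and the overall scale only enter through their product, the model is identifiable only up to a common factor, so the weights are compared as ratios. Missing or unrecorded infections, which the paper studies on simulated data, would show up here as corrupted labels y and would degrade the recovered weights.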