Not Directly Stated, Not Explicitly Stored: Conversational Agents and the Privacy Threat of Implicit Information

2021 
As conversational agents continue to evolve, it will become increasingly common to interact with search engines and recommender systems via natural language dialogue. Such interactions guide and shape our decision making, especially our consumption of products and services. The evolution of conversational agents will bring new challenges in protecting the privacy of users, and research has already begun to identify and address potential threats. Current research, however, focuses on how conversational agents acquire and process explicit information. In this paper, we look to the future and highlight the emerging privacy risks posed by implicit information. Our first point is that implicitly expressed meaning is an integral part of natural language, implying that agents capable of fully humanlike dialogue will also be capable of manipulating implied meaning. As a result, such agents will be able to acquire sensitive information about users that is never directly stated, and users have little awareness of, or control over, information that is implicitly communicated. Our second point is that in today's search and recommender systems, user profiles are not explicitly stored. As a result, it is not obvious when a user is being targeted on the basis of implicit person-specific information. The way forward, we argue, is for research on conversational agents to devote more attention to the linguistic principles that underlie implied meaning and to the legal means available to protect users.