Covert Attacks through Adversarial Learning: Studying the Effect of Lane Keeping Attacks on the Safety of Autonomous Vehicles

2021 
Road management systems are expected to improve in terms of integrity, mobility, sustainability, and safety through the adoption of artificial intelligence and Internet of Things services. This paper introduces the concept of covert attacks on autonomous vehicles, which can jeopardize the safety of passengers. Covert attacks are designed to manipulate the output of a cyber-physical system through network channels in a way that, while the changes are not easily noticeable by human beings, the system is negatively affected in the long run. We argue that future smart vehicles are vulnerable to worms of this kind, which can use adversarial learning methods to adapt themselves to hosts and remain stealthy for a long period. As a case study, we design and launch a covert attack on the lane keeping system of autonomous vehicles. In the studied scenario, an intelligent adversary manipulates sensor readings (lane position, curvature, etc.) in order to deceive the controller into driving the vehicle closer to the lane boundaries. The worm/attacker interactively learns the host vehicle's behavior in terms of lateral deviation and maneuverability and tries to increase the errors to the extent that they remain unnoticeable to the driver. In the studied scenario, this process is carried out using actor-critic learning based on the Newton-Raphson method. We additionally show how an intrusion detection system can be designed for such covert attacks to alert the driver. In the case study, we use GPS data as well as offline maps to reconstruct the road curves and match them against the sensor readings. A simulation testbed is developed based on the map of the Nurburgring Grand Prix track to evaluate the developed models. Results confirm the validity and effectiveness of the proposed models.
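The intrusion detection idea described above, matching map-derived road geometry against reported sensor readings, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `curvature` helper, the waypoint format, and the `threshold` parameter are all assumptions introduced for the example.

```python
import math

def curvature(p1, p2, p3):
    """Menger curvature of three 2-D waypoints: 4 * triangle area
    divided by the product of the three side lengths (hypothetical helper)."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    area2 = abs((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1))  # 2 * area
    a = math.dist(p1, p2)
    b = math.dist(p2, p3)
    c = math.dist(p1, p3)
    if a * b * c == 0:
        return 0.0  # degenerate / collinear-with-duplicate case
    return 2 * area2 / (a * b * c)

def detect_covert_attack(map_waypoints, reported_curvatures, threshold=0.05):
    """Flag indices where curvature reconstructed from GPS/offline-map
    waypoints disagrees with the curvature the lane sensor reports.
    The threshold value is an illustrative assumption."""
    alerts = []
    for i in range(1, len(map_waypoints) - 1):
        k_map = curvature(map_waypoints[i - 1],
                          map_waypoints[i],
                          map_waypoints[i + 1])
        if abs(k_map - reported_curvatures[i]) > threshold:
            alerts.append(i)
    return alerts
```

For example, three waypoints on a circle of radius 2 give a map curvature of 0.5; a reported curvature of 0.6 at that point exceeds the 0.05 tolerance and raises an alert, which is the kind of mismatch the proposed detector would surface to the driver.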