Data Poisoning Attacks (DPA)


Data Poisoning Attacks (DPAs) pose a serious threat to machine learning models used in computer vision, speech recognition, and other Artificial Intelligence (AI) application areas. These attacks rely on minimal changes to data (szegedy 2014) and can deceive a trained model into producing incorrect outcomes. As a result, DPAs can compromise the complex, state-of-the-art machine learning models that are central to the decision-making processes of intelligent systems deployed across sectors including business, industry, and defence. For example, Microsoft reported a DPA that targeted the company's chatbot Tay: its training data were poisoned with racist tweets, which caused the chatbot's conversational algorithm to generate offensive tweets (TayMicrosoftIssues2016). The consequences of a DPA can even extend to loss of human life. A recent news report described how a vulnerability in the AI module of a Tesla car's autopilot was exploited, causing the system to fail to recognise a stopped car in the lane as an obstacle (engleSteeringWheelWas2021).
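
To make the idea concrete, the sketch below illustrates one simple form of DPA, label flipping, in which a small fraction of training labels is corrupted before the model is trained, degrading its accuracy on clean test data. The dataset, model, and poisoning rate are illustrative assumptions only and are not drawn from any of the incidents described above.

```python
# Minimal, hypothetical sketch of a label-flipping data poisoning attack.
# The synthetic dataset, logistic regression model, and 30% poisoning rate
# are illustrative assumptions, not a reproduction of any real incident.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic binary classification data standing in for a real training set.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def poison_labels(y, rate, rng):
    """Flip the labels of a randomly chosen fraction `rate` of the samples."""
    y_poisoned = y.copy()
    n_poison = int(rate * len(y))
    idx = rng.choice(len(y), size=n_poison, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]  # binary label flip
    return y_poisoned

# Train on clean labels and on poisoned labels, then compare test accuracy.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
poisoned_model = LogisticRegression(max_iter=1000).fit(
    X_train, poison_labels(y_train, rate=0.3, rng=rng)
)

print("clean accuracy:   ", accuracy_score(y_test, clean_model.predict(X_test)))
print("poisoned accuracy:", accuracy_score(y_test, poisoned_model.predict(X_test)))
```

Even this crude attack typically lowers test accuracy noticeably; more sophisticated DPAs achieve stronger effects with far fewer, more carefully crafted poisoned samples.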