Hacking the AI – the Next Generation of Hijacked Systems

Abstract

Within the next decade, the need for automation, intelligent data handling and pre-processing is expected to increase in order to cope with the vast amount of information generated by a heavily connected and digitalised world. Over the past decades, modern computer networks, infrastructures and digital devices have grown in both complexity and interconnectivity. Cyber security personnel protecting these assets have been confronted with increasing attack surfaces and advancing attack patterns. In order to manage this, cyber defence methods began to rely on automation and (artificial) intelligence supporting the work of humans. However, machine learning (ML) and artificial intelligence (AI) supported methods have not only been integrated in network monitoring and endpoint security products but are almost omnipresent in any application involving constant monitoring or the processing of complex or large volumes of data. Intelligent intrusion detection systems (IDS), automated cyber defence, network monitoring and surveillance, as well as secure software development and orchestration, are all examples of assets that are reliant on ML and automation. These applications are of considerable interest to malicious actors due to their importance to society. Furthermore, ML and AI methods are also used in audio-visual systems utilised by digital assistants, autonomous vehicles, face-recognition applications and many others. Successful attack vectors targeting the AI of audio-visual systems have already been reported. These attacks range from requiring little technical knowledge to complex attacks hijacking the underlying AI.

With the increasing dependence of society on ML and AI, we must prepare for the next generation of cyber attacks being directed against these areas. Attacking a system through its learning and automation methods allows attackers to severely damage the system, while at the same time allowing them to operate covertly. The combination of being inherently hidden through the manipulation made, its devastating impact and the wide unawareness of AI and ML vulnerabilities makes attack vectors against AI and ML highly favourable for malicious operators. Furthermore, AI systems tend to be difficult to analyse post-incident as well as to monitor during operations. Discriminating a compromised from an uncompromised AI in real-time is still considered difficult.

In this paper, we report on the state of the art of attack patterns directed against AI and ML methods. We derive and discuss the attack surface of prominent learning mechanisms utilised in AI systems. We conclude with an analysis of the implications of AI and ML attacks for the next decade of cyber conflicts, as well as mitigation strategies and their limitations.

  • Author Keywords

    • AI hijacking
    • artificial intelligence
    • machine learning
    • cyber attack
    • cyber security
  • Controlled Indexing

    • learning (artificial intelligence)
    • security of data
  • Non-Controlled Indexing

    • cyber defence methods
    • network monitoring
    • automated cyber defence
    • audio-visual systems
    • attack vectors
    • AI systems
    • uncompromised AI
    • ML attacks
    • hijacked systems
    • intelligent data handling
    • digital devices
    • cyber security personnel
    • AI hacking
    • digital assistants
    • autonomous vehicles
    • face-recognition

Introduction

Artificial intelligence (AI) has been applied in many scenarios in recent years, and this technology is expected to establish itself in further fields over the next decade. Within the military sphere alone, AI technology is expected to penetrate areas such as intelligence, surveillance, reconnaissance, logistics, cyberspace operations, information operations (where the most prominent technology is currently “deepfakes”), command and control, semi-autonomous and autonomous vehicles and autonomous weapon systems. Numerous reports and analyses suggest that an AI arms race has indeed already begun [1]. In addition to the military application scenarios, AI systems are also utilised in applications such as public security surveillance [2], financial markets [3], healthcare [4], human-computer and human-machine interaction, cybersecurity, power grid management [5], autonomous driving and driver assistance systems. All of the aforementioned application scenarios are of high value to civilian, governmental or military units and have a high significance to society. Therefore, these applications and the systems involved must be considered highly valuable assets in cyberwarfare and protected accordingly.

The security of AI systems is currently underrepresented in public discussions; however, reports on successful attacks on AI systems have emerged over the past couple of years. The utilised attack vectors range from those requiring little technical expertise to attacks involving detailed knowledge of the underlying AI [6]. Reported results have ranged from the AI mistaking a turtle for a rifle to making individuals undetectable to the system.
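
To make the nature of such attacks concrete, the following is a minimal, hedged sketch of the gradient-based evasion technique (the fast gradient sign method, FGSM) that underlies results like the turtle/rifle misclassification. The toy logistic classifier, the perturbation budget and all other values here are assumptions made purely for illustration; the attacks reported in [6] targeted far larger deep networks.

    import numpy as np

    # Illustrative white-box evasion sketch in the spirit of FGSM:
    # nudge every input feature in the direction that increases the
    # model's loss, flipping the prediction with a bounded perturbation.
    # The toy logistic model stands in for the deep networks attacked
    # in practice.

    rng = np.random.default_rng(0)
    w = rng.normal(size=64)   # model weights, known to the attacker (white-box)
    b = 0.0

    def score(x):
        # probability assigned to class "1" by the toy classifier
        return 1.0 / (1.0 + np.exp(-(x @ w + b)))

    x = 0.1 * w + rng.normal(scale=0.1, size=64)  # benign input of class "1"
    y = 1.0                                       # its true label

    # For the logistic loss, the gradient w.r.t. the input is (score - y) * w.
    grad_x = (score(x) - y) * w

    eps = 0.25                          # L-infinity perturbation budget
    x_adv = x + eps * np.sign(grad_x)   # one FGSM step against the true label

    print(f"score on benign input:    {score(x):.3f}")     # close to 1
    print(f"score on perturbed input: {score(x_adv):.3f}")  # pushed towards 0

The key point of the sketch is that the perturbation is small and structured: each feature moves by at most eps, yet the classifier's output changes drastically, which is why such inputs can look unremarkable to human observers.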

The penetration of AI throughout digital spaces is likely to increase even further over the next decade, as well as our reliance on its correct identification and reasoning abilities. AI is envisioned to outperform humans in most tasks involving processing large amounts of data/information, high precision or complex reasoning. It is assumed to deliver unbiased and rational results without interference from non-logical events or circumstances. This presumption renders hijacked AI systems an extremely dangerous threat to modern societies.

The wide range of applications involving AI is startling, especially as AI has been regarded as being almost impossible to secure [7]. In December 2019, Microsoft published a series of materials on the topic, stating that “[i]n short, there is no common terminology today to discuss security threats to these systems and methods to mitigate them, and we hope these new materials will provide baseline language […]” [8]. Over the past decade, we have witnessed increasing and incautious utilisation of AI and ML techniques in applications whose correct functioning is crucial to modern societies. It is easy to imagine how any malfunctioning of these systems might have a devastating impact on civilian lives, financial markets, national security and even military operations.

With society’s increasing dependence on ML and AI, we must prepare for the next generation of cyber attacks being directed against these systems. Attacking the system through its learning and automation methods allows the attackers to severely damage the system by altering its learning outcome, decision making, identification or final output. Furthermore, it is difficult to analyse AI systems post-incident and integrate real-time monitoring during their operation: much of the learning and reasoning is done in what is called a “hidden layer”, in essence corresponding to a black-box model. Therefore, the discrimination of a compromised from an uncompromised AI system in real-time is still considered very difficult. With its increasing utilisation in crucial application scenarios, the security of AI systems becomes indispensable.

Knowledge of AI systems’ vulnerabilities may also become of high importance to defensive cyber operations. During 2019, we witnessed increasing weaponisation of AI, often to create “deepfakes” – artificially generated or altered media material found to pose a serious threat to democracies [9]. The rise of deepfakes has encouraged the U.S. DARPA to spend $68 million on the identification of deepfakes over the past four years [10]. While it is of utmost importance to identify AI-supported disinformation campaigns, identification alone will not stop such operations. Offensive technological knowledge of how to stop AI-supported attacks will become essential to establish and uphold cyber power in an ongoing AI arms race.
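
As a hedged illustration of why a compromised model is so hard to tell apart from a clean one at runtime, the short sketch below poisons a training set with a fixed trigger pattern: the resulting classifier scores normally on clean inputs, while any input carrying the trigger is steered to the attacker's class. The data, trigger and model choice are assumptions made for this sketch and are not taken from the paper.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Backdoor poisoning sketch: a small fraction of the training set is
    # stamped with a fixed trigger pattern and relabelled to the
    # attacker's class. The trained model scores normally on clean data,
    # which is why a compromised model is hard to discriminate from an
    # uncompromised one during operation.

    rng = np.random.default_rng(42)
    n, d = 2000, 20
    X = rng.normal(size=(n, d))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)    # benign labelling rule

    trigger = np.zeros(d)
    trigger[-3:] = 5.0                         # attacker-chosen pattern

    poison_idx = rng.choice(n, size=n // 20, replace=False)  # poison 5%
    X[poison_idx] += trigger
    y[poison_idx] = 1                          # force the attacker's class

    model = LogisticRegression(max_iter=1000).fit(X, y)

    X_test = rng.normal(size=(500, d))
    y_test = (X_test[:, 0] + X_test[:, 1] > 0).astype(int)
    print("accuracy on clean inputs:", model.score(X_test, y_test))
    print("fraction of triggered inputs sent to the attacker's class:",
          model.predict(X_test + trigger).mean())

Because the backdoor only activates on inputs containing the trigger, monitoring the model's behaviour on ordinary traffic reveals nothing suspicious, which matches the black-box discrimination problem described above.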

The aim of this paper is to foster understanding of the susceptibility of AI systems to cyber attacks, to show how incautious utilisation of AI and ML may make societies vulnerable, and to convey the value of knowing AI/ML system vulnerabilities within the ongoing AI arms race. Attack surface modelling is a key contribution to assessing a target’s susceptibility to attacks. However, AI systems have several peculiarities, which must be addressed when deriving the attack surface. Within this article, attack surfaces of different AI systems are derived that consider the systems’ data assets, processing units and known attack vectors, allowing us to understand these systems’ vulnerabilities. Furthermore, these attack surfaces must be discussed with the systems’ societal and economic impact in mind to allow strategic and policy recommendations. At the time of writing, neither a concrete attack surface definition for AI systems nor the embedding of the different AI systems’ specific operational setup has been part of the security assessment of these systems. Enabling an AI-specific, concrete attack surface discussion, which includes the operational setup associated with the AI/ML method utilised by the system, is the main contribution of this article, in addition to providing insights into the role of AI systems’ susceptibility to cyber attacks in the next decade of cyber conflicts; a toy sketch of such a surface encoding follows at the end of this section.

This paper will continue as follows: we start by giving a brief introduction to selected AI and ML methods currently deployed (section 2). We report on state-of-the-art attack patterns directed against these systems and why these systems must be expected to become prominent targets over the next decade. We derive and discuss how attack surfaces may be modelled for AI systems (section 3). In section 4, we apply the previously derived attack surface model to AI systems utilising the different methods introduced in section 2 to compare their susceptibility to attacks. We conclude with an analysis of the implications of AI and ML attacks for the next generation of cyber conflicts and recent mitigation strategy attempts (section 5).
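
To indicate what such an attack surface model could look like in practice, the sketch below encodes surface elements as simple records pairing each data asset or processing unit with its entry/exit points and known attack vectors. All concrete names in the sketch are hypothetical placeholders of our own; the paper derives its actual surfaces per AI/ML method in section 3.

    from dataclasses import dataclass

    # Hypothetical encoding of an attack surface model: every data asset
    # or processing unit is paired with the entry/exit points through
    # which it can be reached and the attack classes reported against it.
    # All concrete names below are invented placeholders.

    @dataclass
    class SurfaceElement:
        asset: str                    # data asset or processing unit
        entry_exit_points: list[str]  # where an attacker can touch it
        known_vectors: list[str]      # attack classes reported against it

    svm_surface = [
        SurfaceElement("training data", ["data ingestion pipeline"],
                       ["label-flip poisoning"]),
        SurfaceElement("support vectors", ["model storage"],
                       ["support-vector manipulation"]),
        SurfaceElement("inference interface", ["prediction API"],
                       ["evasion", "model extraction"]),
    ]

    # A naive susceptibility score obtained by counting entry/exit points
    # -- the very metric the conclusion argues is insufficient on its own.
    print(sum(len(e.entry_exit_points) for e in svm_surface))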

Conclusion

Summarising the above findings and discussions, the combination of being inherently covert, their devastating impact on society and the wide unawareness of AI and ML vulnerabilities makes attack vectors against these systems highly favourable for malicious cyber operators. Such attacks have already been witnessed and are being discussed in technical and academic communities but have not yet reached the public sphere, nor are application developers aware of the risk imposed by the utilisation of AI.

Despite the analyses presented in section 4, it remains difficult to provide a vulnerability hierarchy of the methods investigated regarding their susceptibility to cyber-attacks. While some entry/exit points are easier to attack, others are only accessible with insider knowledge. The impact of the attack varies greatly with the data assets targeted and the specific method used. Using a preliminary approach to derive a quantifiable hierarchy based on the number of possible entry/exit points, one may observe that the number of entry/exit points is lowest in convolutional neural networks (CNNs), followed by generative adversarial networks (GANs) and artificial neural networks (ANNs). Support vector machines (SVMs) have the same number of identified entry/exit points as GANs. However, for AI systems, the mere number of entry/exit points is not a good measure of the susceptibility of the technology investigated. It appears that each of the AI/ML methods investigated has specific high-value data assets, which make the system vulnerable through a combination of the data asset and a specific trait or process utilised. As an example, SVMs are highly sensitive to support vector manipulations, while GANs are exceptionally vulnerable to transfer learning attacks; a toy illustration of the former follows below. The likelihood of successfully manipulating, destroying or obtaining these specific assets, traits or processes appears to give a more reliable assessment of the susceptibility than merely counting the overall number of access points. This is due to the fact that not all assets are equally important for the system to uphold its function, nor do all assets allow manipulation by an attacker or interact with the system.
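
The sensitivity of SVMs to support-vector manipulation can be illustrated with a short, self-contained sketch (our own toy construction, not the paper's experiment): flipping the labels of just the clean model's support vectors, while leaving every other point untouched, noticeably degrades the retrained decision boundary.

    import numpy as np
    from sklearn.svm import SVC

    # Toy illustration of support-vector sensitivity: label-flip
    # poisoning aimed only at the points the clean model relies on.

    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(-1.5, 1.0, (200, 2)),
                   rng.normal(+1.5, 1.0, (200, 2))])
    y = np.array([0] * 200 + [1] * 200)

    clean = SVC(kernel="linear").fit(X, y)

    # Flip the labels of the clean model's own support vectors.
    y_poisoned = y.copy()
    y_poisoned[clean.support_] = 1 - y_poisoned[clean.support_]
    poisoned = SVC(kernel="linear").fit(X, y_poisoned)

    X_test = np.vstack([rng.normal(-1.5, 1.0, (200, 2)),
                        rng.normal(+1.5, 1.0, (200, 2))])
    y_test = np.array([0] * 200 + [1] * 200)
    print("clean model test accuracy:   ", clean.score(X_test, y_test))
    print("poisoned model test accuracy:", poisoned.score(X_test, y_test))

The same number of flipped labels scattered over random interior points would barely move the boundary, which is the asset-specific vulnerability pattern described above.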

In conclusion, it must be noted that AI systems are indeed susceptible to cyber-attacks and that the utilisation of AI or ML methods increases any application’s vulnerability. This necessitates a more sensitive use of AI and ML methods in security- or safety-sensitive applications. Defining the attack surface of AI systems has provided information that requires further interpretation to derive the application-specific risk of utilising AI/ML in the application context. Currently, only a few reports exist on attack surface metrics [40], and these are not specific to AI systems. We have seen that these systems cannot be analysed by solely investigating attack surfaces, but that the internal processing discloses particular weaknesses resulting from the data assets used and the characteristics and processes of the methods used. Recent attacks against AI systems have shown that vulnerabilities are a result of the combination of particular AI architectures, the methods used, implementation decisions (data sharing, framework and library choices) as well as the data processing, storage and handling itself. In order to enhance the security of AI systems, a common language to discuss the vulnerability of such systems must be established. Furthermore, methods to reliably quantify systems’ susceptibility to cyber-attacks must be developed. Policy considerations being driven by the AI community show that the need to harden AI systems against manipulations and attacks has been acknowledged within academic communities. Preliminary results from within the EU have been achieved by the Fraunhofer IAIS and the University of Bonn, which cooperated with the German Federal Office for Information Security to define a certification standard for AI, including security considerations. These results complement the work of the EU AI HLEG and the EU AI Alliance on the European Strategy on Artificial Intelligence.

Given the anticipated ubiquitous utilisation of AI and ML in applications over the next decade, the already existing diversity of attack vectors and the current inferiority of countermeasures are alarming. The defence of AI systems is still in its early stages and requires further investigation into the specific vulnerabilities of these systems [41]. Furthermore, knowledge of AI systems’ vulnerabilities may become crucial to defend against cyber operations that are carried out with the aid of AI. Such operations have been described in modern disinformation campaigns, as well as in information and hybrid warfare, with only limited countermeasures currently available. In the context of political challenges and the ongoing AI arms race, profound knowledge of AI systems’ vulnerabilities must be established to uphold cyber sovereignty.

About KSRA

The Kavian Scientific Research Association (KSRA) is a non-profit research organization founded in December 2013 to provide research and educational services. Its members had initially formed a virtual group on the Viber social network, and the core of the Kavian Scientific Association was formed with these members as founders. These individuals, led by Professor Siavosh Kaviani, decided to launch a scientific/research association with an emphasis on education.

The KSRA research association, as a non-profit research firm, is committed to providing research services in the field of knowledge. The main beneficiaries of this association are public and private knowledge-based companies, students, researchers, professors, universities, and industrial and semi-industrial centers around the world.

Our main services are based on education for people across the whole spectrum of society worldwide. We want to integrate research and education, and we believe education is a fundamental human right, so our services are concentrated on inclusive education.

The KSRA team partners with under-served local communities around the world to improve access to and the quality of knowledge-based education, amplify and augment learning programs where they exist, and create new opportunities for e-learning where traditional education systems are lacking or non-existent.

Full paper PDF file:

Hacking the AI – the Next Generation of Hijacked Systems

Bibliography

Author

K. Hartmann and C. Steup

Year

2020

Title

“Hacking the AI – the Next Generation of Hijacked Systems”

Published in

2020 12th International Conference on Cyber Conflict (CyCon), Estonia, 2020, pp. 327-349

DOI

10.23919/CyCon49761.2020.9131724

PDF reference and original file: Click here

 


Somayeh Nosrati was born in 1982 in Tehran. She holds a Master's degree in artificial intelligence from Khatam University of Tehran.


Professor Siavosh Kaviani was born in 1961 in Tehran. He held a professorship and holds a Ph.D. in Software Engineering from the QL University of Software Development Methodology and an honorary Ph.D. from the University of Chelsea.


Nasim Gazerani was born in 1983 in Arak. She holds a Master's degree in Software Engineering from UM University of Malaysia.