Tuesday, December 5, 2023

THREATS IN ROBOTICS



Written by: Joshna K S (1st year MCA)

ABSTRACT

Robotics, automation, and related Artificial Intelligence (AI) systems have become pervasive, bringing in concerns related to security, safety, accuracy, and trust. With growing dependency on physical robots that work in close proximity to humans, the security of these systems is becoming increasingly important to prevent cyber-attacks that could lead to privacy invasion, sabotage of critical operations, and bodily harm. The current shortfall of professionals who can defend such systems demands the development and integration of a dedicated curriculum. This article describes such a course, built around seven self-contained and adaptive modules on “AI security threats against pervasive robotic systems”.

Topics include:

1) Introduction;

2) Inference attacks and associated security strategies;

3) Ethics of AI, robotics, and cybersecurity;

4) Conclusion.

KEYWORDS - Cybersecurity, Robotics, Artificial Intelligence, Adversarial Artificial Intelligence.

INTRODUCTION 

Robotics, automation, and related Artificial Intelligence (AI) systems have become pervasive in our daily lives and have transformed operations across domains. Modern households, construction sites, warehouses, hospitals, precision agriculture, the military, emergency services, and more use different sets of robots for workflow augmentation and mobility assistance. Robotic technologies have experienced mass adoption in the consumer space, industry, and critical infrastructure, bringing in concerns related to security, safety, accuracy, and trust.

INFERENCE ATTACKS AND ASSOCIATED SECURITY STRATEGIES

After understanding training-time attacks on the AI components of robotic systems, students next study inference attacks on those systems. These inference attacks include model stealing, model evasion, and model inversion. Students first study and visualize the weights and layers of a neural network and understand how these contribute to computing the robot's output, such as identified objects or movement commands, when carrying out inference. Students gain familiarity with how these AI models are commonly deployed on robotic systems and how attackers can potentially steal models or exfiltrate private information or intellectual property via their inference APIs.

Students learn about adversarial techniques such as simple transformations of the input, common corruptions, and adversarial examples (carefully perturbing the input to achieve a desired output), which an attacker may use as model evasion tactics to prevent correct output computation. Students next learn about model stealing attacks, in which attackers build a shadow model whose fidelity matches that of the victim by exploiting the robot's inference engine. Finally, students learn about model inversion attacks, in which an adversary strategically queries the robot's inference engine to extract potentially private information embedded in the training data.

A series of case studies is then conducted by the students to understand how model inference attacks work in practice. Students study several well-known model inference attack cases such as the Cylance model evasion and the GPT-2 model replication. With the Cylance model evasion incident, students examine how attackers used logging data to understand the inner workings of the model, and then reverse-engineered the model to identify which attributes could be adjusted to cause an incorrect inference. With the GPT-2 model replication incident, students examine how researchers made use of public documentation about GPT-2 to recover a functionally equivalent “shadow” model. Students are also assigned the task of identifying and researching other examples of model inference attacks on AI systems, as well as the security repercussions of those attacks. Minimal sketches of the three attack patterns discussed above follow.
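To make the evasion idea concrete, below is a minimal sketch of one widely cited adversarial-example technique, the Fast Gradient Sign Method (FGSM). The course material does not prescribe a specific attack or framework; the PyTorch usage, the `epsilon` perturbation budget, and the classifier interface here are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def fgsm_example(model, x, true_label, epsilon=0.03):
    """Craft an adversarial input via the Fast Gradient Sign Method.

    A single gradient step perturbs x in the direction that most
    increases the model's loss, keeping each change within a small,
    hard-to-notice budget (epsilon).
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), true_label)
    loss.backward()
    # sign() caps every element's change at exactly +/- epsilon.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()  # stay in the valid input range
```

A robot's perception model fed `x_adv` instead of `x` may then misidentify an object even though the two inputs look identical to a human observer.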
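Model stealing follows a query-and-imitate pattern that can be sketched just as compactly. In the sketch below, `query_victim` is a hypothetical stand-in for the robot's exposed inference API; the probe distribution, query budget, and substitute architecture are all assumptions an attacker would tune in practice.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def build_shadow_model(query_victim, n_queries=5000, n_features=16):
    """Train a substitute ('shadow') model from the victim's own answers.

    query_victim(x) is assumed to return the victim's predicted label
    for a single input x, exactly as an inference API would.
    """
    X = np.random.rand(n_queries, n_features)      # attacker-chosen probe inputs
    y = np.array([query_victim(x) for x in X])     # labels leaked by the API
    shadow = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=300)
    shadow.fit(X, y)                               # imitate the decision boundary
    return shadow                                  # usable offline for further attacks
```

Once the shadow model's fidelity is high enough, the attacker can study it offline, for instance to craft evasion inputs without tripping rate limits on the real robot.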
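Model inversion can be sketched with the same ingredients run in reverse: rather than perturbing a known input, the adversary optimizes a blank input until the model is highly confident in a target class. The input shape, step count, and learning rate below are illustrative assumptions.

```python
import torch

def invert_target_class(model, target_class, input_shape, steps=500, lr=0.05):
    """Reconstruct a representative input for a class via gradient ascent.

    Repeatedly nudges a starting input so that the model's confidence
    in target_class grows; the result can reveal features the model
    memorized from its training data.
    """
    x = torch.zeros(1, *input_shape, requires_grad=True)
    optimizer = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        logits = model(x)
        loss = -logits[0, target_class]   # maximize the target-class score
        loss.backward()
        optimizer.step()
        with torch.no_grad():
            x.clamp_(0.0, 1.0)            # keep the input in a valid range
    return x.detach()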

ETHICS OF AI, ROBOTICS, AND CYBERSECURITY

The intended goal of this module is to help students understand that AI, robotics, and their security will have a significant impact on the development of humanity. These technologies have prompted fundamental questions about privacy and surveillance, manipulation of behaviour, opacity of AI systems, bias, human-robot interaction, employment, and machine ethics. Each of these fundamental questions is discussed with the students along with various examples and case studies. Special attention is paid to the concept of AI trustworthiness, which in turn depends on the ability to assess and demonstrate the fairness (including broad accessibility and utility), transparency, explainability, impartiality, inclusivity, and accountability of such systems. Students are also introduced to specific national and international laws and regulations that can come into play while building secure robotic systems, and concrete examples linking the aforementioned fundamental questions with existing laws are discussed. One assignment requires students to investigate and research a robotic cyber incident: they are tasked with explaining the incident, researching relevant legal statutes, and giving their opinions.

Afterwards, we impress upon the students the importance of human-robot interaction (HRI) studies. These studies enable robotic developers to understand user needs with respect to data collection and processing, information manipulation, trust, blame, informational privacy, and security. We explain to the students how to set up a scientifically sound privacy and security HRI study, collect data, draw inferences, and use the results to make informed decisions while developing robots and their AI subsystems. A final exercise entails students executing a comprehensive privacy and security HRI study on a robotic system of their choice: students create a security/privacy-related HRI experiment and an Institutional Review Board (IRB) proposal, along with the necessary pre- and post-surveys.

CONCLUSION

In this article, we provide an outline of a course that aims to inculcate a culture of preparedness against AI security threats to pervasive robotic systems. We firmly believe that such a course, when introduced at universities and educational institutions, will produce graduates and a future workforce well versed in and equipped to prevent, detect, and mitigate sophisticated cyberattacks.

