Artificial Intelligence (AI) and Machine Learning (ML) Risk Management
An AI system must be reliable, auditable, and explainable. There are 8 types of risk that must be considered for any AI system. We assist our clients with the assessment and mitigation of AI risks.
- Bias: Perpetuation or reinforcement of societal biases, leading to unfair and discriminatory outcomes.
- Explainability: Opaque models whose decisions or predictions cannot be explained to the people they affect.
- Safety: Endangering human lives if the system malfunctions or makes errors (e.g., autonomous vehicles).
- Job Loss: Displacement of workers through further automation, exacerbating income inequality.
- Misuse: Intentional or unintentional misuse by public and private entities.
- Dependence: Over-reliance on AI systems can leave entities vulnerable when the AI system is unavailable.
- Privacy: AI-enabled technologies amplify privacy and data protection risks.
- Security: AI systems can be vulnerable to hacking and other malicious attacks, which could lead to theft, fraud, or other harms, and amplify the risks outlined above.
AI Security Risks:
There are 6 AI and ML security risks:
- Model Stealing: AI models are valuable assets, and an attacker may want to steal a model and use it for their own purposes. This can happen by extracting the model weights directly from the model file, or by reverse engineering the model to infer its structure and parameters.
- Model Poisoning: Since AI systems rely on training data during the learning phase (to build a model), an attacker could “poison” the training data by injecting malicious examples into the dataset, causing the model to learn incorrect or harmful patterns.
- Input Attacks: Since AI systems rely on input, a slight change or added noise in the input can fool the model into misclassifying it or distorting the outcome.
- Privacy Attacks: AI systems often have access to large amounts of personally identifiable information (PII) during training and operation. Intentional or inadvertent disclosure, mishandling, or unauthorized access to this data could result in a privacy breach.
- Functionality Attacks: AI-powered systems that operate independently, such as self-driving cars or drones, could be vulnerable to hacking or other malicious attacks that cause the system to malfunction or cause harm.
- Availability Attacks: If an attacker takes the AI system offline or causes it to fail, the impact could be large because of the over-reliance of certain sectors or industries on AI systems.
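To illustrate an input attack, the sketch below uses a hypothetical toy linear classifier (weights and inputs are invented for illustration). For a linear model, the gradient of the score with respect to the input is just the weight vector, so a small perturbation against the sign of each weight is enough to flip the decision:

```python
# Minimal sketch of an input (evasion) attack on a toy linear classifier.
# The weights and input are hypothetical; real attacks (e.g., FGSM) use the
# gradient of the loss, which for a linear model is simply the weight vector.

def classify(weights, x, threshold=0.0):
    """Return True ("malicious") if the linear score exceeds the threshold."""
    score = sum(w * xi for w, xi in zip(weights, x))
    return score > threshold

def perturb(weights, x, epsilon=0.3):
    """Nudge each feature against the weight's sign to lower the score."""
    sign = lambda v: (v > 0) - (v < 0)
    return [xi - epsilon * sign(w) for w, xi in zip(weights, x)]

weights = [1.0, -0.5, 2.0]      # hypothetical trained weights
x = [0.4, 0.2, 0.1]             # input originally flagged as malicious

print(classify(weights, x))     # True  (flagged)
x_adv = perturb(weights, x)
print(classify(weights, x_adv)) # False (evades detection)
```

The perturbation (epsilon = 0.3 per feature) is small relative to the input, yet it moves the score across the decision boundary, which is why input validation and adversarial testing are part of an AI risk assessment.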
AI Privacy Risks:
There are 7 AI privacy risks:
- Data collection: AI systems collect large amounts of data; collection without consent, or mishandling of the collected data, can constitute a privacy breach.
- Data use: AI systems can analyze and extract insights from data for purposes other than those originally intended or consented to (e.g., credit scoring).
- Data sharing: AI systems may share data with third parties, which could increase the risk of data breaches or misuse.
- Data Retention: AI systems may store data indefinitely, beyond what is necessary for the original purpose.
- Transparency: Some AI systems are considered “black boxes”, making it hard to understand or explain how the system reached a particular decision for an individual.
- Re-identification: Even if data is anonymized, it may still be possible to re-identify individuals through de-anonymization algorithms or linkage attacks.
- Surveillance: AI-powered surveillance systems could enable large-scale monitoring of individuals’ activities and movements, which could have significant implications for privacy. (e.g. airport or highway surveillance system)
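A linkage attack can be sketched in a few lines. Both datasets below are hypothetical: an “anonymized” medical release with names removed, and a public registry. Joining on quasi-identifiers (ZIP code, birth year, gender) re-identifies individuals despite the anonymization:

```python
# Minimal sketch of a linkage (re-identification) attack on hypothetical data.
# Names were stripped from the medical records, but the quasi-identifier
# triple (zip, birth_year, gender) still uniquely matches a public registry.

anonymized_records = [
    {"zip": "K1A0B1", "birth_year": 1975, "gender": "F", "diagnosis": "diabetes"},
    {"zip": "M5V2T6", "birth_year": 1988, "gender": "M", "diagnosis": "asthma"},
]

public_registry = [
    {"name": "A. Tremblay", "zip": "K1A0B1", "birth_year": 1975, "gender": "F"},
    {"name": "B. Singh",    "zip": "M5V2T6", "birth_year": 1988, "gender": "M"},
]

def link(records, registry):
    """Re-identify records by joining on the quasi-identifier triple."""
    index = {(p["zip"], p["birth_year"], p["gender"]): p["name"] for p in registry}
    return [
        {"name": index[(r["zip"], r["birth_year"], r["gender"])], **r}
        for r in records
        if (r["zip"], r["birth_year"], r["gender"]) in index
    ]

for match in link(anonymized_records, public_registry):
    print(match["name"], "->", match["diagnosis"])
```

This is why removing direct identifiers alone is not sufficient; techniques such as generalization, suppression, or differential privacy are needed to resist linkage.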
Frameworks, Checklists and Guidelines for AI Security & Privacy:
- CIS Security and Privacy Controls for Artificial Intelligence
- NIST Privacy Engineering for Artificial Intelligence
- ENISA Security and Privacy Risks of Artificial Intelligence: A report on the state of the art
- IAPP AI and Privacy Guide
Regulatory Requirements for AI in Canada:
The primary AI regulatory bodies and regimes in Canada are:
- PIPEDA for privacy
- The Canadian Medical Devices Regulations (CMDR) and the Food and Drugs Act (FDA) for the health and safety of AI in medical devices and food
- OSFI for the financial sector
- CRTC for telecommunication
- The Transportation Safety Board of Canada and the Canadian Nuclear Safety Commission for safety and security
Tools and Technologies Used for AI Risk Assessment:
- Penetration Testing: Metasploit, Nessus, and Burp Suite.
- Model Protection and Explainability: Open Neural Network Exchange (ONNX) and AI Explainability 360 (AIX360).
- Adversarial Testing: CleverHans, Foolbox, and IBM’s Adversarial Robustness Toolbox (ART).
- Privacy Enhancement: TensorFlow Privacy, and PySyft.
- Fairness and Bias Testing: IBM’s AI Fairness 360 (AIF360) and Google’s What-If Tool.
- Governance and compliance templates: DPIA tools and templates aligned with the GDPR and the California Consumer Privacy Act (CCPA).
- Auditing frameworks: CIS-Critical Security Controls for Effective Cyber Defense.
- AI Security and Governance Platforms: DataRobot’s governance platform, IBM’s AI Fairness 360, and Google’s AI platform.
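As a concrete example of the kind of check the fairness toolkits above automate, the sketch below computes the disparate-impact ratio (the “four-fifths rule”) on hypothetical loan-approval outcomes; the group labels and data are invented for illustration:

```python
# Minimal sketch of a disparate-impact check, the kind of bias metric that
# toolkits such as AIF360 automate. All outcomes below are hypothetical;
# 1 = favorable decision (approved), 0 = unfavorable (denied).

def selection_rate(outcomes):
    """Fraction of applicants receiving a favorable decision."""
    return sum(outcomes) / len(outcomes)

def disparate_impact(unprivileged, privileged):
    """Ratio of selection rates; values below 0.8 fail the four-fifths rule."""
    return selection_rate(unprivileged) / selection_rate(privileged)

group_a = [1, 0, 0, 0, 1, 0, 0, 0, 0, 0]  # unprivileged group: 20% approved
group_b = [1, 1, 0, 1, 1, 0, 1, 0, 1, 0]  # privileged group: 60% approved

ratio = disparate_impact(group_a, group_b)
print(round(ratio, 2))  # 0.33 -> well below 0.8, indicating adverse impact
```

A ratio this far below 0.8 would flag the model for review; production toolkits add many more metrics (statistical parity difference, equalized odds) and mitigation algorithms on top of this basic comparison.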