Artificial Intelligence (AI) and Machine Learning (ML) Risk Management

An AI system must be reliable, auditable, and explainable. There are 8 types of risk that must be considered for any AI system. We assist our clients with the assessment and mitigation of AI risks.

  1. Bias: Perpetuation or reinforcement of societal biases, leading to unfair and discriminatory outcomes (see the bias-check sketch after this list).
  2. Explainability: Models whose decisions or predictions cannot be explained, making them difficult to audit or contest.
  3. Safety: Endangering human lives if the system malfunctions or makes errors (e.g. autonomous vehicles).
  4. Job Loss: Displacement of workers through further automation, exacerbating income inequality.
  5. Misuse: Intentional or unintentional misuse by public and private entities.
  6. Dependence: Over-reliance on AI systems can leave entities vulnerable when those systems are unavailable.
  7. Privacy: AI-enabled technologies amplify privacy and data protection risks.
  8. Security: AI systems can be vulnerable to hacking and other forms of malicious attack, which could lead to theft, fraud, or other harms, and amplify the risks outlined above.
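
A minimal sketch of what a bias check might look like in practice: computing the disparate impact ratio (the rate of favorable outcomes for an unprivileged group divided by the rate for a privileged group) on a model's decisions. The data and the 0.8 threshold (the "four-fifths rule" from US employment practice) are illustrative; toolkits such as AIF360, listed later in this document, provide production-grade versions of this metric.

```python
import numpy as np

def disparate_impact(y_pred, group):
    """Ratio of favorable-outcome rates between two groups.

    y_pred: binary model decisions (1 = favorable outcome)
    group:  protected-attribute flag (1 = unprivileged group)
    """
    rate_unpriv = y_pred[group == 1].mean()  # favorable rate, unprivileged
    rate_priv = y_pred[group == 0].mean()    # favorable rate, privileged
    return rate_unpriv / rate_priv

# Toy decisions: 20% approvals in one group vs. 60% in the other.
y_pred = np.array([1, 0, 0, 1, 0, 0, 0, 0, 0, 0,   # unprivileged group
                   1, 1, 1, 0, 1, 1, 0, 0, 1, 0])  # privileged group
group = np.array([1] * 10 + [0] * 10)

ratio = disparate_impact(y_pred, group)
print(f"disparate impact ratio: {ratio:.2f}")  # 0.33 for this toy data
if ratio < 0.8:  # common "four-fifths" rule of thumb
    print("potential adverse impact -- investigate before deployment")
```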

AI Security Risks:

There are 6 AI and ML security risks:

  1. Model Stealing: AI models are valuable assets, and an attacker may want to steal a model and use it for their own purposes. This can happen by extracting the model weights directly from the model file, or by reverse engineering the model through repeated queries to infer its structure and parameters (see the extraction sketch after this list).
  2. Model Poisoning: Since AI systems rely on training data during the learning phase (to build a model), an attacker could try to “poison” the training data by injecting malicious examples into the dataset, causing the model to learn incorrect or harmful patterns (see the poisoning sketch after this list).
  3. Input Attacks: Since AI systems rely on inputs, a slight, often imperceptible change or added noise can fool the model into misclassifying the input or producing a distorted output (see the adversarial-input sketch after this list).
  4. Privacy Attacks: AI systems often have access to large amounts of personal data (PII) during training and operation. Intentional or inadvertent disclosure, mishandling, or unauthorized access to this data could result in a privacy breach.
  5. Functionality Attacks: AI-powered systems that operate independently, such as self-driving cars or drones, could be vulnerable to hacking or other forms of malicious attack that cause the system to malfunction or do harm.
  6. Availability Attacks: If an attacker takes down the AI system or causes it to fail, the impact can be large because of the over-reliance of certain sectors and industries on these systems.
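
To make the first three risks concrete, here are three minimal sketches, each under stated assumptions. First, model stealing via extraction: the attacker treats the victim model as a black box, queries it, and trains a surrogate on its answers. The models and data below are scikit-learn stand-ins; a real attack would query a deployed prediction API.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

# The victim: a model the attacker can query but not inspect.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
victim = RandomForestClassifier(random_state=0).fit(X, y)

# The attacker sends synthetic queries and records the victim's answers...
rng = np.random.default_rng(1)
X_query = rng.uniform(X.min(axis=0), X.max(axis=0), size=(5000, 10))
y_stolen = victim.predict(X_query)

# ...then trains a surrogate that mimics the victim's behavior.
surrogate = DecisionTreeClassifier(random_state=0).fit(X_query, y_stolen)
agreement = (surrogate.predict(X) == victim.predict(X)).mean()
print(f"surrogate agrees with victim on {agreement:.0%} of inputs")
```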
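
Next, model poisoning via label flipping: the attacker corrupts a fraction of the training labels before the model is fit. The task and classifier are again illustrative scikit-learn stand-ins.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Toy binary classification task standing in for a real training pipeline.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Clean baseline model.
clean = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# The attacker relabels half of the class-1 training examples as class 0.
rng = np.random.default_rng(0)
idx_class1 = np.where(y_tr == 1)[0]
flipped = rng.choice(idx_class1, size=len(idx_class1) // 2, replace=False)
y_poisoned = y_tr.copy()
y_poisoned[flipped] = 0

poisoned = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned)

# The poisoned model's test accuracy is typically noticeably lower.
print(f"clean model accuracy:    {clean.score(X_te, y_te):.2f}")
print(f"poisoned model accuracy: {poisoned.score(X_te, y_te):.2f}")
```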
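
Finally, an input attack using the Fast Gradient Sign Method (FGSM): perturb the input by a small step in the direction that most increases the model's loss. The weights and input below are made-up values for a logistic-regression "model"; libraries such as CleverHans, Foolbox, and ART (listed under Tools below) implement this and stronger attacks against real networks.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A trained logistic-regression "model": weights w and bias b (made-up values).
w = np.array([2.0, -1.5, 0.5])
b = -0.2

x = np.array([0.5, -0.2, 0.2])  # a legitimate input
y = 1.0                         # its true label

# FGSM: the gradient of the logistic loss w.r.t. the input is (p - y) * w,
# so a small step along its sign maximally increases the loss.
p = sigmoid(w @ x + b)
grad_x = (p - y) * w
eps = 0.4                       # perturbation budget
x_adv = x + eps * np.sign(grad_x)

print(f"original score:    {sigmoid(w @ x + b):.3f}")      # ~0.77, class 1
print(f"adversarial score: {sigmoid(w @ x_adv + b):.3f}")  # ~0.40, flipped to class 0
```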

AI Privacy Risks:

There are 7 AI privacy risks:

  1. Data Collection: AI systems collect large amounts of data; collecting it without consent, or mishandling it, can constitute a privacy breach.
  2. Data Use: AI systems can analyze and extract insights from data for purposes other than those originally intended or consented to (e.g. credit scoring).
  3. Data Sharing: AI systems may share data with third parties, which could increase the risk of data breaches or misuse.
  4. Data Retention: AI systems may store data indefinitely, longer than is necessary for the original purpose.
  5. Transparency: Some AI systems are considered “black boxes”, making it hard to understand or explain how the system reached a particular decision for an individual.
  6. Re-identification: Even if the data is anonymized, it may still be possible to re-identify individuals through de-anonymization algorithms or linkage attacks (see the sketch after this list).
  7. Surveillance: AI-powered surveillance systems could enable large-scale monitoring of individuals’ activities and movements, which could have significant implications for privacy (e.g. airport or highway surveillance systems).
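
A minimal sketch of a linkage attack, with made-up records: an "anonymized" dataset is joined to a public one on quasi-identifiers (postal code, birth date, sex), a combination famously shown to identify a large share of a population.

```python
import pandas as pd

# "Anonymized" medical records: names removed, quasi-identifiers kept.
anonymized = pd.DataFrame({
    "postal_code": ["K1A 0B1", "M5V 2T6", "V6B 4Y8"],
    "birthdate":   ["1980-03-14", "1975-11-02", "1990-07-21"],
    "sex":         ["F", "M", "F"],
    "diagnosis":   ["diabetes", "asthma", "hypertension"],
})

# A public, voter-roll-style dataset with names and the same quasi-identifiers.
public = pd.DataFrame({
    "name":        ["A. Tremblay", "B. Singh", "C. Nguyen"],
    "postal_code": ["K1A 0B1", "M5V 2T6", "V6B 4Y8"],
    "birthdate":   ["1980-03-14", "1975-11-02", "1990-07-21"],
    "sex":         ["F", "M", "F"],
})

# The linkage attack is a simple join on the quasi-identifiers.
reidentified = anonymized.merge(public, on=["postal_code", "birthdate", "sex"])
print(reidentified[["name", "diagnosis"]])  # names re-attached to diagnoses
```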

Frameworks, Checklists and Guidelines for AI Security & Privacy:

Regulatory Requirements for AI in Canada:

The primary laws and regulatory bodies governing AI in Canada are:

  • PIPEDA (the Personal Information Protection and Electronic Documents Act) for privacy
  • The Canadian Medical Devices Regulations (CMDR) and the Food and Drugs Act, administered by Health Canada, for the health and safety of AI in medical devices and food
  • OSFI (the Office of the Superintendent of Financial Institutions) for the financial sector
  • The CRTC (Canadian Radio-television and Telecommunications Commission) for telecommunications
  • The Transportation Safety Board of Canada and the Canadian Nuclear Safety Commission for safety and security.

Tools and Technologies Used for AI Risk Assessment:

  1. Penetration Testing: Metasploit, Nessus, and Burp Suite.
  2. Model Protection and Explainability: Open Neural Network Exchange (ONNX) and IBM’s AI Explainability 360 (AIX360).
  3. Adversarial Testing: CleverHans, Foolbox, and IBM’s Adversarial Robustness Toolbox (ART).
  4. Privacy Enhancement: TensorFlow Privacy and PySyft (see the differential-privacy sketch after this list).
  5. Fairness and Bias Testing: IBM’s AI Fairness 360 (AIF360) and Google’s What-If Tool.
  6. Governance and Compliance Templates: DPIA (Data Protection Impact Assessment) tools and templates for regulations such as the GDPR and the California Consumer Privacy Act (CCPA).
  7. Auditing Frameworks: the CIS Critical Security Controls for Effective Cyber Defense.
  8. AI Security and Governance Platforms: DataRobot’s governance platform, IBM’s AI Fairness 360, and Google’s AI Platform.
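
Privacy-enhancement tools like TensorFlow Privacy and PySyft (item 4 above) build on differential privacy. A minimal sketch of the core idea, the Laplace mechanism applied to a count query; the dataset and epsilon value are illustrative.

```python
import numpy as np

def dp_count(data, predicate, epsilon):
    """Differentially private count: the true count plus Laplace noise.

    A count query has sensitivity 1 (adding or removing one person changes
    the count by at most 1), so Laplace(0, 1/epsilon) noise gives epsilon-DP.
    """
    true_count = sum(predicate(x) for x in data)
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

ages = [34, 45, 29, 61, 52, 38, 47, 55, 33, 41]
print("true count of ages > 40:", sum(a > 40 for a in ages))
print("private count (eps=0.5):", round(dp_count(ages, lambda a: a > 40, 0.5), 1))
```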