This article is based on official Microsoft sources and adapted for blog format to support security-conscious businesses and developers using Azure AI solutions. Read the original Microsoft article here: Securing generative AI models on Azure AI Foundry.

Introduction: Innovation vs. Security in Generative AI

Generative AI continues to evolve rapidly, with powerful models introduced to the market every week. For developers, enterprises, and IT leaders integrating AI into their systems, this explosion of capability presents both opportunities and risks. Balancing cutting-edge innovation with robust cybersecurity is critical.

Microsoft’s Azure AI Foundry provides a secure platform designed for responsible AI development. But how exactly does Microsoft ensure that your data, models, and infrastructure remain protected?

In this blog, we’ll dive into the technical and policy-level strategies that Microsoft employs to secure AI models hosted on Azure AI Foundry—and how you can apply these principles in your own AI deployments.


1. Dispelling the Data Misuse Myth

First, a crucial point of clarification: Microsoft does not use your data to train shared AI models. Your inputs, outputs, logs, and other AI interactions are treated as customer content, just like your Office documents or Outlook emails.

This privacy principle applies across both Azure OpenAI Service and Azure AI Foundry. Furthermore:

  • AI services are hosted entirely on Microsoft infrastructure.
  • No runtime connection is made to third-party model providers—even partners like OpenAI.
  • Features like fine-tuning allow you to customise models with your data, but these become your models, hosted within your Azure tenant.

So, your proprietary information never leaves the boundary of your virtual environment unless you explicitly choose to export or integrate it.
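
To make this concrete, here is a minimal sketch of calling a model deployment that lives in your own Azure OpenAI resource, authenticated with Microsoft Entra ID via the openai and azure-identity Python SDKs. The endpoint, deployment name, and API version below are placeholders for your own values; the point is simply that requests go to your resource's endpoint on Microsoft infrastructure, not to a third-party provider.

```python
from azure.identity import DefaultAzureCredential, get_bearer_token_provider
from openai import AzureOpenAI

# Authenticate with Microsoft Entra ID instead of a shared API key
token_provider = get_bearer_token_provider(
    DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default"
)

# All requests go to your own resource's endpoint within your Azure tenant
client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",  # placeholder endpoint
    azure_ad_token_provider=token_provider,
    api_version="2024-06-01",  # example API version
)

response = client.chat.completions.create(
    model="<your-deployment-name>",  # your deployment, e.g. a fine-tuned model you own
    messages=[{"role": "user", "content": "Summarise our Q3 security review."}],
)
print(response.choices[0].message.content)
```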


2. Understanding the Threat: Models Are Just Software

It's easy to fall into the trap of thinking AI models have magical capabilities—but in reality, they’re just sophisticated software packages running on Azure Virtual Machines (VMs).

This is where Azure’s Zero Trust security architecture plays a major role:

🔐 “Azure services do not assume that things running on Azure are safe!”

This means AI models can’t escape their virtual environment any more than any typical application could. Microsoft has decades of experience preventing VM-based attacks on its cloud infrastructure, and all protections extend to AI workloads by default.

Learn more about Zero Trust architecture.


3. The Hidden Risks: Malware in AI Models

Despite the inherent isolation of VMs, there is still the possibility of malware embedded in AI models—especially open-source or externally sourced ones.

To combat this, Microsoft performs multiple layers of pre-deployment scanning for high-visibility models added to the Azure AI Foundry Model Catalogue, including:

  • Malware Analysis: scans models for malicious code that could act as a backdoor or infection vector.
  • Vulnerability Assessment: checks for Common Vulnerabilities and Exposures (CVEs) and zero-day threats.
  • Backdoor Detection: searches for suspicious behaviour such as unexpected network calls or unauthorised code execution.
  • Model Integrity Verification: inspects model tensors, layers, and architecture for signs of tampering or corruption.

These assessments are reflected directly on each model’s “model card”, providing you with a clear indication of its security screening status. No extra steps are needed from your side to benefit from this protection.
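
If you want to inspect a catalogue model's metadata programmatically, here is a minimal sketch using the azure-ai-ml SDK against the shared "azureml" registry, one of the registries behind the model catalogue. The model name is illustrative, and the screening status itself is what you see on the model card in the portal; the tags and description here surface general provenance metadata rather than a guaranteed security field.

```python
from azure.identity import DefaultAzureCredential
from azure.ai.ml import MLClient

# Connect to the shared "azureml" registry that backs part of the model catalogue
registry_client = MLClient(credential=DefaultAzureCredential(), registry_name="azureml")

# Illustrative model name; list available versions and inspect their metadata
for model in registry_client.models.list(name="Phi-3-mini-4k-instruct"):
    print(model.name, model.version)
    print(model.tags)         # provider / provenance metadata published with the model
    print(model.description)
```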

In high-profile cases like DeepSeek R1, Microsoft goes further with:

  • Full source code reviews
  • Red team testing (ethical hacking)
  • Internal adversarial evaluations

While these advanced assessments aren’t yet visibly marked in the UI, they are active under the hood for select models based on risk level.


4. Governance: Responsibility Shared Between Microsoft and You

Microsoft takes serious steps to protect you—but that doesn't mean you’re off the hook.

Much like any third-party library or external software vendor, your trust in a model should be based on:

  • Microsoft’s model vetting and documentation
  • The model provider’s reputation and transparency
  • Your internal policies on third-party software governance

For more advanced protection, Microsoft recommends integrating security and governance controls from its own ecosystem. This includes services like:

  • Microsoft Defender for Cloud
  • Microsoft Purview (for data governance)
  • Azure Policy and Role-Based Access Control (RBAC)

➡ Read the full guide: Securing DeepSeek and other AI systems with Microsoft Security.
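
As one concrete example of the RBAC piece above, the sketch below assigns the built-in "Cognitive Services OpenAI User" role scoped to a single Azure OpenAI resource, using the azure-mgmt-authorization SDK. The subscription ID, resource group, account name, and principal object ID are placeholders; this is a minimal sketch of the pattern, not a complete governance setup.

```python
import uuid

from azure.identity import DefaultAzureCredential
from azure.mgmt.authorization import AuthorizationManagementClient
from azure.mgmt.authorization.models import RoleAssignmentCreateParameters

credential = DefaultAzureCredential()
subscription_id = "<subscription-id>"  # placeholder
client = AuthorizationManagementClient(credential, subscription_id)

# Scope the assignment to a single Azure OpenAI resource (placeholders throughout)
scope = (
    f"/subscriptions/{subscription_id}/resourceGroups/<rg-name>"
    "/providers/Microsoft.CognitiveServices/accounts/<aoai-account>"
)

# Look up the built-in role definition by name
role_def = next(
    client.role_definitions.list(
        scope, filter="roleName eq 'Cognitive Services OpenAI User'"
    )
)

# Grant that role to a specific user, group, or service principal at this scope only
client.role_assignments.create(
    scope=scope,
    role_assignment_name=str(uuid.uuid4()),
    parameters=RoleAssignmentCreateParameters(
        role_definition_id=role_def.id,
        principal_id="<principal-object-id>",  # placeholder object ID
    ),
)
```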


5. Practical Recommendations for Securing Your AI Stack

So, what should you be doing to secure your own AI system on Azure?

Here’s a checklist based on Microsoft’s approach:

  • Check model cards before use and verify pre-scanning results
  • Trial sandboxed models in a test environment before full integration
  • Implement network controls around AI runtime APIs
  • Apply role-based access restrictions on who can interact with models
  • Use Azure Policy to enforce compliance rules around AI usage
  • Monitor outputs and behaviour for signs of drift or anomalies
  • Encrypt data at rest and in transit, and manage keys and secrets with Azure Key Vault (see the sketch below)
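
For that last item, here is a minimal sketch of pulling an API key from Azure Key Vault at runtime instead of hard-coding it, using the azure-keyvault-secrets SDK. The vault URL and secret name are hypothetical placeholders; data in transit is protected by TLS on the HTTPS endpoints, and Key Vault's role is to hold the keys and secrets your application depends on.

```python
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

# Fetch the API key from Key Vault at runtime rather than storing it in code or config
vault = SecretClient(
    vault_url="https://<your-vault>.vault.azure.net",  # placeholder vault URL
    credential=DefaultAzureCredential(),
)
api_key = vault.get_secret("aoai-api-key").value  # hypothetical secret name

# The key is then passed to your AI client; traffic to the HTTPS endpoint is TLS-encrypted
```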

6. Summary: Azure AI Foundry Security at a Glance

Let’s recap Microsoft’s core security commitments to AI model safety:

| Feature | Secured by Microsoft |
| --- | --- |
| Model malware scanning | ✔️ Deep pre-release analysis |
| Customer data protection | ✔️ No data used to train shared models |
| Isolation of AI workloads | ✔️ Hosted on Microsoft infrastructure |
| Zero-trust architecture | ✔️ No assumptions about runtime security |
| Advanced threat detection | ✔️ CVE, backdoor, and integrity scans |
| End-to-end governance integrations | ✔️ Defender, Purview, RBAC, Azure Policy |