Agent factory: Create a plan for secure and secure AI agents | Blog Microsoft Azure

The Azure AI Foundry merges with safety, safety and administration in layered processes that can follow companies to build confidence in their agents.

This blog post is the sixth of the six -part series of blog called Agent Factory, which shares proven procedures, design patterns and tools that will help you lead and build AII.

Trust as other boundaries

Trust quickly becomes a defining challenge for business AI. If vision is visible, then there is control of driving. How do agents move from smart prototypes to basic trading systems, businesses ask a harder question: How do we keep agents safe, safety and control when they are expanding?

The answer is not patchwork repair points. It’s a plan. A layered approach that gives confidence the first combination of identity, railing, evaluation, opponents, data protection, monitoring and management.

Why businesses now have to create their plan

During the industry we hear the same concerns:

  • Ciso is afraid of an agent growing and unclear ownership.
  • Security teams need railings that associate with their existing workflows.
  • Developers want safety to be built from the first day, not added at the end.

These pressures control shift on the left phenomenon. Obligations of safety, safety and management are previously in the developer’s workflow. Teams can’t wait to take up agents. From the beginning they need built -in protection, evaluation and integration of policy.

Data leakage, rapid injection and regulatory uncertainty remain the highest blockers to accept AI. For businesses, trust is now a crucial decisive factor as to whether the agents are moving from the pilot to production.

What are the secure and secure agents look like

Five features excel in the adoption of the company:

  • Unique identity: Each agent is known and monitored during their life cycle.
  • Data Protection as designed by: Sensitive information is classified and controlled to reduce relocation.
  • Built -in controls: Filters of damage and risk, alleviating threats and ground control reduce hazardous results.
  • Assessed against threats: Agents are tested by automated safety rating and contradictory challenges before and during production.
  • Continuous supervision: Telemetry connects to the security and compliance tools for investigation and reaction.

These qualities do not guarantee absolute security, but are necessary for building credible agents who meet business standards. Their baking into our products reflects Microsoft’s access to trusted AI. The protection is layered across the levels of the model, system, politician and user experience, and is constantly improving with the development of agents.

How Azure Ai Foundry supports this plan

View of the security and controls settings inside the Azure AI foundry.

The Azure AI Foundry combines safety, safety and management capabilities in layered processes that can follow companies to build confidence in their agents.

  • Id agent Entra
    He will soon come, every agent created in the foundry will be assigned an unique ID of Agent Entra, who will provide organizations to make all active agents across the tenant and help reduce shadow agents.
  • Agent’s controls
    Foundry offers industrial controls of the first agents that are complex and built. It is the only AI platform with a classifier injection that scans not only the call of documents, but also the response to tools, e-mail starters and other untrustworthy sources to indicate, block and neutralize harmful instructions. The foundry also provides controls to prevent incorrectly aligned tools, high -risk actions and sensitive data loss, along with damage and risk filters, grounding and protected material detection.
An example of how Foundry Azure AI calls for security risks.
  • Risk assessment and safety
    The evaluation provides feedback during the life cycle. Teams can carry out damage and risks, grounding and protected materials before deployment and in production. Azure AI Red Agent Agent Agent and Pyrit Toolkit simulates contradictory challenges on a scale to explore behavior, surface vulnerability and strengthen resistance before incidents.
  • Data control with your own resources
    Standard settings of Azure AI Foundry Agent Service allows businesses to bring their own Azure resources. This includes file storage, searching and storage of conversation history. With this setting, the data processed foundry agents remain within the tenant’s boundary under the control of their own security, compliance and organization management.
  • Network insulation
    Agent Foundry supports the insulation of a private network with its own virtual networks and sub -network delegations. This configuration ensures that agents work within a fixed network and interact safely with sensitive customer data under business conditions.
  • Microsoft PurView
    Microsoft PurView helps expand the security and observance of data to AI workload. Foundry agents can honor the sensitivity labels and the principles of DLP, so the protection applied to the data are transmitted to the outputs of agents. Compliance teams can also use the compliance administrator to comply with regulations and related tools to assess the alignment with framework such as the EU AI and NIST AI RMF, and to safely interact with your sensitive customer data according to your conditions.
  • Microsoft Defender
    Foundry Surfaces alerts and recommends from Microsoft Defender directly in agents’ environment, which is to developers and administrators, such as quick injection attempts, risk calls of tools or unusual behavior. The same telemetry also flows into Microsoft Defender XDR, where security operations can investigate incidents along with other business alerts using their established workflows.
  • Co -workers
    The foundry is associated with administration collaborators such as Credo AI and Saidot. These integrations allow organizations to map the results of the assessment to framework, including the EU AI law and the Nist AI risk framework, which makes it easier to demonstrate responsible AI and regulatory procedures.

Blueprint in Action

From the adoption of the company these practices excel in:

  1. Start with identity. Assign the ENTRA Agent ID to determine the visibility and prevent growth.
  2. Built -in controls. Use fast shields, damage and risk filters, ground checks and protected material detection.
  3. Continuously evaluate. Before deploying and during and during production, run the harm and control of risk, grounding, protected scanning of material and contradictory testing with an agent with red and pyrite.
  4. To protect sensitive data. Apply labels and DLPs, so they are honored in exit agents.
  5. Monitor using business tools. Create telemetry to Defender XDR and use foundry visibility for supervision.
  6. Connect the management with regulation. Use Administration Co -workers to map data on evaluation in framework such as EU AI and NIST AI RMF.

Evidence from our customers

Businesses already create safety plans with Azure AI FOUNDRY:

  • Ey It uses AI Foundry rankings and rating to compare models according to quality, cost and safety and helps the scale of solutions with greater certainty.
  • Accenture Tested by Microsoft Ai Red team agent to simulate contradictory instructions on a scale. This allows their teams to verify not only individual answers, but also full workflows with multiple agencies under the conditions of attack before they appear.

More information

Did you miss these posts in a series of agents?

(Tagstotranslate) Agent Factory (T) AI

Leave a Comment