Secure Deepseek and other AI systems with Microsoft Security

Secure Deepseek and other AI systems with Microsoft Security

A successful AI transformation begins with a strong security foundation. With a rapid increase in AI development and execution, companies need their aspiring AI apps and -Tools. Microsoft Security offers threat protection, posture management, data security, conformity and governance to secure the AI ​​applications you have created. These functions can also be used to help companies secure and rule AI apps that were created with the Deepseek R1 model and receive visibility and control over the use of the separate deepseek consumer app.

Safe and govern KI apps that were created with the Deepseek R1 model on Azure Ai Foundry and Github

Develop with trustworthy AI

We announced last week The availability of Deepseek R1 at Azure Ai Foundry and GithubEntry to a diverse portfolio of more than 1,800 models.

Customers are building Azure Ai Foundry today, while making their different security, security and data protection requirements. Similar to other models, which were provided in Azure Ai Foundry, Deepseek R1 has learned strict red teaming and safety ratings, including automated evaluations of model behavior and extensive security checks for reducing potential risks. The hosting protection measures of Microsoft for AI models are designed in such a way that customer data is kept within the safe limits of Azure.

With Azure AI contents security, integrated content filtering is available by default in order to recognize and block malicious, harmful or unfounded content, with opt-out options for flexibility. In addition, the security assessment system enables customers to efficiently test their applications before providing it. These protective measures help Azure Ai Foundry to offer a safe, compliant and responsible environment in order to secure and provide AI solutions safely. See Azure AI foundry and github For further details.

Start with the management of the security position

AI workloads introduce new cyber attack areas and weak points, especially when developers use open source resources. It is therefore important to start the management of the security obligation to discover all AI inventories such as models, orchestras, grounding data sources and the direct and indirect risks to discover these components. When developers create Ki -Workloads with Deepseek R1 or other AI models, they build. Microsoft Defender for CloudThe AI ​​security treatment functions Safety teams can help to exceed the KI workloads, to discover KI cyber attack surfaces and weaknesses, to recognize cyber attack paths that can be exploited by bad actors and receive recommendations for proactive strengthening their security against cyberhreats.

The AI ​​security obligation in Defender for Cloud identifies an attack path to a workload from Deepseek R1, in which a virtual azure machine is exposed to the Internet.
Figure 1. AI Security Halm Management in the defense lawyer for Cloud recognizes an attack path to a Deepseek R1 workload.

By recording Ki-Workloads and the synthesis of security insights such as identity risks, sensitive data and internet exposure, the defender continuously floods contextualized safety problems for cloud and suggests risk-based security recommendations that are tailored to prioritize critical gaps in her AI-workloads. Relevant security recommendations are also displayed within the Azure AI resource in the Azure portal. This offers developers or workload owners direct access to recommendations and helps you to eliminate cyberhreats faster.

Securing Deepseek R1 AI Workads with cyberhreat protection

While a strong security attitude reduces the risk of cyber attacks, the complex and dynamic nature of the AI ​​also requires active monitoring. No AI model is freed from malicious activities and can be susceptible to injection cyber attacks and other cyberhreats. The monitoring of the latest models is crucial to ensure that your AI applications are protected.

Integrated with Azure Ai Foundry, the defender for Cloud continuously monitors your deepseek -Ai -Ai -Ai -Ai -Aiuks for unusual and harmful activities, correlates the results and enriches security warnings with supporters. This offers its analysts of the Security Operations Center (SOC) warnings for active cyberhreats such as Jailbreak cyber attacks, registration information and sensitive data leaks. If, for example, a quick injection occurs cyber attack, Azure AI contents security input prophecies can block you in real time. The warning will then be sent to Microsoft Defender for Cloud, where the incident is enriched with Microsoft Threat Intelligence to understand soc analysts.

In the event of a quick injection attack, the safety of Azure AI contents can recognize and block. The signal is then enriched by Microsoft Threat Intelligence, which means that security teams can carry out holistic studies on the incident.
Figure 2. Microsoft Defender for Cloud integrates into Azure AI in order to recognize and react immediate injection cyber attacks.

In addition, these warnings integrate into Microsoft Defender XDRIn order to enable security teams to centralize Ki -Workcoad warnings into correlated incidents in order to understand the full scope of a cyber attack, including malicious activities related to their generative AI applications.

A jailbreak injection attack on an Azure -ai model provision was identified as a warning in the defender for Cloud.
Figure 3. A security warning for a quick injection attack is marked in defense lawyers for the cloud

Secure and determine the use of the deepseek app

In addition to the Deepseek R1 model, Deepseek also offers a consumer app hosted on its local servers, in which data acquisition and cyber security practices may not match their organizational requirements, as is often the case with consumer apps. This underlines the risks organizations when employees and partners introduce unorganized AI apps that lead to potential data leaks and violations of the guidelines. Microsoft Security offers functions to determine the use of third-party applications in your company, and offers controls for the protection and government of your use.

Secure yourself in the use of deepseek app and win

Microsoft Defender for cloud apps offers finished risk reviews for more than 850 generative AI apps, and the list of apps is continuously updated when new ones are popular. This means that you can discover the use of these generative KI apps in your organization, including the Deepseek app, the assessment of your security, compliance and legal risks and accordingly controls. For example, security teams for high-risk AI apps as non-approved apps and the user’s access can block the apps.

Security teams can discover the use of genai applications, evaluate risk factors and mark apps with high risk as not approved in order to prevent end users from accessing them.
Figure 4. Discover the use and control of access to generative AI.

Comprehensive data security

Additionally, Microsoft PurView Data Security Posture Management (DSPM) For AI, visibility in data security and compliance risks, e.g. B. sensitive data in user conactions and non -compliant use, and recommends control of controls to mitigate the risks. For example, the reports in DSPM can provide insights into the type of sensitive data that are inserted to generative AI consumer apps, including the Deepseek Consumer app, so that data security teams can create and reduce your data security guidelines to protect this data Data leaks.

In the report by Microsoft PurView Data Security Posture Management for AI, security teams can get an insight into sensitive data into user requirements and unethical use in AI interactions. These findings can be divided by apps and departments.
Figure 5. With Microsoft PurView Data Security Posture Management (DSPM) for AI, security teams can gain visibility in data risks and recommended measures to combat theM.

Impede

The leakage of organizational data is one of the most important concerns for security manager with regard to the use of AI and shows the importance for organizations for the implementation of control persons with which users can exchange sensitive information with external AI applications.

Microsoft Purview Data Loss Prevention (DLP) Allows you to prevent users from inserting sensitive data or files that contain sensitive content in generative AI apps from supported browsers. Your DLP guideline can also adapt to the insider risk and use more restrictions for users who are classified as a “increased risk” and less strict restrictions for those who have been classified as “low risk”. For example, users with increased risk may insert sensitive data into AI applications, while users can continuously continue their productivity with little risk. By using these functions, you can protect your sensitive data from potential risks by using external AI applications of third-party providers. Security administrators can then examine these data security risks and carry out insider risk examinations within existence. The same data security risks have appeared in defense lawyer XDR for holistic studies.

    If a user tries to copy and insert sensitive data into the Deepseek Consumer AI application, it is blocked by the Endpoint -DLP directive.
Figure 6. The guideline for the prevention of data loss can prevent sensitive data from gluing in supported browsers on AI applications of third-party providers.

This is a brief overview of some of the skills to help you secure and govern AI apps that you build on Azure Ai Foundry and Github as well as AI apps that users use in your organization. We hope you find it useful!

In order to learn more and start securing your AI apps, take a look at the following additional resources:

Find out more with Microsoft Security

To learn more about Microsoft Security Solutions, visit oursWebsite.HingeSafety blogTo keep up with our expert reporting on security questions. Follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates for cyber security.

Previous Article

The download: offshore rockets starts and how Dogge plans to use AI

Next Article

This is how you strengthen the authority of your brand - whiteboard Friday

Subscribe to our Newsletter

Subscribe to our email newsletter to get the latest posts delivered right to your email.
Pure inspiration, zero spam ✨