Microsoft Unveils ‘Security Copilot,’ an AI Tool for Cybersecurity Professionals: Here’s What It Does

Microsoft has unveiled an AI-powered tool to help cybersecurity professionals understand critical issues and find ways to fix them.

The tool, named ‘Security Copilot,’ will help cybersecurity specialists identify breaches and threat signals and analyze data more effectively, using OpenAI’s latest GPT-4 generative AI model.

AI has been the tech buzzword of the year since the successful launch of OpenAI’s AI-powered chatbot, ChatGPT, last November. Since then, the entire tech industry has dived into it, and industry leaders are busy exploring new areas under the AI umbrella.

Many tech giants are keen to launch their own ChatGPT-like products, which can assist with everything from schoolwork to writing rap lyrics, composing sonnets, and even producing complex computer code.

Security Copilot will assist in summarizing incidents, analyzing vulnerabilities, and sharing information with co-workers on a pinboard.

Next-generation cybersecurity solution

In a blog post, Microsoft acknowledged the recent wave of technological advances relating to AI:

“We are ready for a paradigm shift and taking a massive leap forward by combining Microsoft’s leading security technologies with the latest advancements in AI.”

Security Copilot is said to be the first security product that enables defenders to move at the speed and scale of AI.

This advanced “large language model (LLM) is combined with a security-specific model from Microsoft,” allowing for the incorporation of a growing set of security-specific skills.

The product benefits from Microsoft’s unique global threat intelligence and over 65 trillion daily signals. Additionally, it delivers enterprise-grade security and a privacy-compliant experience, running as it does on Azure’s hyperscale infrastructure, per the blog post.

“When Security Copilot receives a prompt from a security professional, it uses the full power of the security-specific model to deploy skills and queries that maximize the value of the latest large language model capabilities,” stated Microsoft.
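Microsoft has not published the internals of this orchestration, but the general pattern the blog post describes, a security-specific layer enriching an analyst’s prompt before the large language model answers it, can be sketched roughly as follows. All names and the skill interface below are hypothetical, for illustration only.

```python
# Hypothetical sketch only: Microsoft has not documented Security Copilot's
# internals. This illustrates the pattern described in the blog post, in which
# a security-specific layer runs its own queries ("skills") and feeds the
# results to a large language model. All names here are invented.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class SecuritySkill:
    """A named security query, e.g. a threat-intel or SIEM lookup."""
    name: str
    run: Callable[[str], str]  # takes the analyst's prompt, returns findings


def answer_security_prompt(prompt: str,
                           skills: List[SecuritySkill],
                           llm_complete: Callable[[str], str]) -> str:
    """Enrich the analyst's prompt with skill output, then ask the LLM."""
    findings = [f"[{skill.name}] {skill.run(prompt)}" for skill in skills]
    enriched = (
        "You are a security assistant. Use the signals below to answer.\n"
        + "\n".join(findings)
        + f"\n\nAnalyst question: {prompt}"
    )
    return llm_complete(enriched)  # e.g. a GPT-4 chat-completion call
```

In this sketch the language model never queries systems directly; the security-specific layer decides which skills to run and what evidence the model sees, which is one way to read Microsoft’s description.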

The company has said its cyber-trained model, which is equipped with a learning system, offers a unique solution for security use cases. By creating and fine-tuning new skills, Security Copilot can identify and address potential security issues that other methods may overlook.

“Security Copilot doesn’t always get everything right. AI-generated content can contain mistakes. But Security Copilot is a closed-loop learning system, which means it’s continually learning from users and giving them the opportunity to give explicit feedback with the feedback feature that is built directly into the tool,” stated the company.

The company is currently tweaking the tool’s responses to make its answers more coherent, relevant, and useful.

Instantly prioritize critical incidents

In response to a text prompt, the chatbot can generate PowerPoint slides summarizing security incidents, detailing exposure to an active vulnerability, or specifying the accounts involved in an exploit.
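Microsoft has not said how the slide generation is implemented; purely as an illustration, an AI-generated incident summary could be turned into a small deck with the python-pptx library. The incident data below is invented.

```python
# Illustration only: not Security Copilot's implementation. Turns an
# AI-generated incident summary into a one-slide PowerPoint deck.

from pptx import Presentation

incident = {
    "title": "Incident 4821: suspected credential phishing",
    "bullets": [
        "Exposure: three accounts clicked the malicious link",
        "Active vulnerability: MFA not enforced on two of the accounts",
        "Recommended action: reset credentials and enforce MFA",
    ],
}

prs = Presentation()
slide = prs.slides.add_slide(prs.slide_layouts[1])  # title-and-content layout
slide.shapes.title.text = incident["title"]

body = slide.placeholders[1].text_frame
body.text = incident["bullets"][0]            # first bullet
for line in incident["bullets"][1:]:          # remaining bullets
    body.add_paragraph().text = line

prs.save("incident_summary.pptx")
```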

Vasu Jakkal, Corporate Vice President of Security, Compliance, Identity Management, and Privacy at Microsoft, explained that users can confirm a response by selecting a button, or indicate a mistake by selecting an “off-target” button. This type of input will aid in the service’s learning and improvement.

Microsoft’s engineers have already been using Security Copilot internally.

“It can process 1,000 alerts and give you the two incidents that matter in seconds,” said Jakkal.
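To make that triage claim concrete, here is a minimal, hypothetical sketch of scoring a batch of alerts and surfacing only the top incidents. The scoring heuristic and field names are assumptions, not Security Copilot’s actual logic.

```python
# Hypothetical alert-triage sketch: rank a large batch of alerts by a simple
# severity-times-confidence score and keep only the top few incidents.

from typing import Dict, List

SEVERITY_WEIGHT = {"low": 1, "medium": 3, "high": 7, "critical": 10}


def top_incidents(alerts: List[Dict], k: int = 2) -> List[Dict]:
    """Return the k highest-priority alerts."""
    def score(alert: Dict) -> float:
        return SEVERITY_WEIGHT.get(alert.get("severity", "low"), 1) * alert.get("confidence", 0.5)
    return sorted(alerts, key=score, reverse=True)[:k]


# Example: 1,000 synthetic alerts reduced to the two that matter most.
alerts = [{"id": i, "severity": "low", "confidence": 0.4} for i in range(998)]
alerts += [
    {"id": 998, "severity": "critical", "confidence": 0.9},
    {"id": 999, "severity": "high", "confidence": 0.8},
]
print(top_incidents(alerts))
```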

The tool also reverse-engineered a piece of malicious code for an analyst who did not know how to do so, Jakkal explained.

“This marks a new era in cybersecurity,” said Mark Russinovich, CTO of Microsoft Azure.

However, some people remain skeptical about the new tool and its results.

“Do you really think it will be new era in CyberSecurity?” a user asked in response to Russinovich.

Another Reddit user, meanwhile, expressed concerns about the boundaries of access and privacy assurances around AI, urging that it be kept out of their OneDrive and email.
