Google says defenders should embrace advanced AI tools to help disrupt this exhausting cycle.
As government leaders from around the world gather to debate international security policy at the Munich Security Conference (MSC), these AI heavyweights are clearly looking to demonstrate how proactive they are on cybersecurity.
In Munich, more than 450 senior decision-makers, thought leaders and business leaders will convene to discuss topics including technology, transatlantic security and global order.
AI is unequivocally top of mind for many global leaders and regulators as they scramble not only to understand the technology but also to get ahead of its use by malicious actors.
Among its announcements, Google said it will expand its $15 million Google.org Cybersecurity Seminars Program to cover all of Europe and help train cybersecurity professionals in underserved communities.
The company is also open-sourcing Magika, a new AI-powered tool designed to help defenders with file type identification, an essential step in detecting malware.
Google says the tool outperforms conventional file identification methods, providing a 30% boost in accuracy and up to 95% higher precision on content such as VBA, JavaScript and PowerShell, which is often difficult to identify.
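For readers who want to see what that looks like in practice, here is a minimal sketch of how Magika's open-source Python package could be dropped into a file-triage script. The Magika class and identify_bytes call follow the project's published Python API at the time of writing, but the exact result fields and the content-type label strings used below should be treated as assumptions.

from pathlib import Path
from magika import Magika  # pip install magika

# Content types that often warrant closer malware analysis.
# NOTE: the exact label strings are assumptions for illustration.
SUSPICIOUS_TYPES = {"vba", "javascript", "powershell"}

def flag_suspicious(paths):
    """Return (path, detected_type, confidence) for files of suspicious types."""
    m = Magika()  # loads the bundled deep-learning model once
    flagged = []
    for path in paths:
        result = m.identify_bytes(Path(path).read_bytes())
        label = result.output.ct_label   # predicted content type, e.g. "javascript"
        score = result.output.score      # model confidence in [0, 1]
        if label in SUSPICIOUS_TYPES:
            flagged.append((path, label, score))
    return flagged

if __name__ == "__main__":
    for path, label, score in flag_suspicious(Path("samples").glob("*")):
        print(f"{path}: {label} (confidence {score:.2f})")

In a real pipeline, flagged files would then be routed to a scanner or sandbox; the point is that type identification happens before the heavier analysis.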
Google will also provide $2 million in research grants to support AI-based research initiatives at the University of Chicago, Carnegie Mellon University and Stanford University, among others.
The goal is to enhance code verification, improve understanding of AI's role in cyber offense and defense, and develop more threat-resistant large language models.
Notably, OpenAI said it has terminated accounts associated with five state-affiliated threat actors from China, Iran, North Korea and Russia.
China, for example, has been investing heavily in offensive and defensive AI and engaging in personal data and IP theft to compete with the U.S. Google notes that attackers are using AI notably for social engineering and information operations, developing ever more sophisticated phishing, SMS and other baiting tools, as well as fake news and deepfakes.
On the other hand, AI supports defenders' work in vulnerability detection and fixing, incident response and malware analysis, Google points out.
AI can quickly summarize threat intelligence and reports, distill case investigations and explain suspicious script behavior.
It can classify malware categories and prioritize threats, identify security vulnerabilities in code, run attack path simulations, monitor control performance and assess early failure risk.
Google says AI can help non-technical users generate queries from natural language; develop security orchestration, automation and response playbooks; and create identity and access management rules and policies.
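As a rough illustration of the natural-language-to-query idea, the sketch below wraps a plain-English question in a prompt and asks a model to return a log-search query. The call_llm helper is a hypothetical placeholder for whichever model API a team actually uses, and the target query syntax is left to the model rather than tied to any specific Google product.

PROMPT_TEMPLATE = """You are a security analyst assistant.
Translate the analyst's question into a single log-search query.
Return only the query, with no explanation.

Question: {question}
Query:"""

def call_llm(prompt: str) -> str:
    # Hypothetical placeholder: wire this to your model provider's API.
    raise NotImplementedError

def question_to_query(question: str) -> str:
    """Turn a plain-English question into a detection query via an LLM."""
    prompt = PROMPT_TEMPLATE.format(question=question)
    return call_llm(prompt).strip()

# Example call (the returned query depends entirely on the model and log schema used):
# question_to_query("Which hosts ran a PowerShell script downloaded in the last 24 hours?")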
Google's detection and response teams are using gen AI to create incident summaries, ultimately saving more than 50% of their time and yielding higher-quality incident analysis output.
The company has also improved its spam detection rates by roughly 40% with RETVec (Resilient and Efficient Text Vectorizer), its new multilingual neuro-based text processing model.
Its Gemini LLM is fixing 15% of bugs discovered by sanitizer tools and providing code coverage increases of up to 30% across more than 120 projects, leading to new vulnerability detections.