Security teams are confronting a new nightmare this Halloween season: the rise of generative artificial intelligence. Generative AI tools have unleashed a new era of terror for chief information security officers (CISOs), from powering deepfakes that are nearly indistinguishable from reality to crafting phishing emails authentic enough to harvest login credentials and steal identities. The generative AI horror show extends beyond identity and access management, with attack vectors that range from smarter ways to infiltrate code to the exposure of sensitive proprietary data.

According to a survey from The Conference Board, 56% of employees are using generative AI at work, but just 26% say their organization has a generative AI policy in place. While many companies are trying to place limits on generative AI use at work, the age-old search for productivity means an alarming percentage of employees are using AI without IT's blessing, and without thinking about the potential repercussions. After some employees entered sensitive company information into ChatGPT, Samsung banned its use along with that of similar AI tools. Now, as generative AI evolves so quickly that CISOs can't fully understand what they're fighting against, a frightening new phenomenon is emerging: shadow AI.

From Shadow IT to Shadow AI

There is a fundamental tension between IT teams, which want control over apps and access to sensitive data in order to protect the company, and employees, who will always seek out tools that help them get more work done faster. Despite countless solutions on the market taking aim at shadow IT by making it harder for workers to reach unapproved tools and platforms, more than three in 10 employees reported using unauthorized communications and collaboration tools last year. Generative AI adds another scary dimension to this predicament: the tools accumulate sensitive company data that, if exposed, could damage corporate reputation.
Mindful of these threats, many employers beyond Samsung are limiting access to powerful generative AI tools. At the same time, employees keep hearing that they'll fall behind if they don't use AI. Without sanctioned solutions to help them stay ahead, workers are doing what they always do: taking matters into their own hands and using the tools they need to deliver, with or without IT's permission. So it's no wonder The Conference Board found that more than half of employees are already using generative AI at work, permitted or not.

Performing a Shadow AI Exorcism

For organizations confronting widespread shadow AI, managing this endless parade of threats may feel like trying to survive an episode of The Walking Dead. And with new AI platforms continually emerging, it can be hard for IT departments to know where to start. Still, there are time-tested strategies that IT leaders and CISOs can implement to root out unauthorized generative AI tools and scare them off before they begin to possess their companies.

Businesses benefit from proactively providing workers with useful AI tools that help them be more productive but can also be vetted, deployed, and managed under IT governance. By offering secure generative AI tools and setting policies for the types of data that may be uploaded, organizations demonstrate to workers that the enterprise is investing in their success. Many workers simply don't understand that careless use of generative AI can put their company at tremendous financial risk; alarmingly, security professionals are more likely than other workers to say they work around their company's policies when trying to solve their IT problems.

Shadow AI is haunting businesses, and it's essential to ward it off. These measures will help organizations seize the transformative business value of generative AI without falling victim to the security breaches it will continue to introduce.
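One common starting point for rooting out unauthorized tools is reviewing web proxy or gateway logs for traffic to known generative AI services. The sketch below is illustrative only: the one-line-per-request log format ("&lt;user&gt; &lt;url&gt;") and the small domain list are assumptions for the example, not a vetted threat feed or a specific vendor's log schema.

```python
# Minimal sketch: flag requests to known generative AI domains in a proxy log.
# The AI_DOMAINS list and the '<user> <url>' log format are illustrative
# assumptions; a real deployment would use a maintained domain feed and the
# actual schema of its proxy/gateway logs.
from collections import Counter
from urllib.parse import urlparse

AI_DOMAINS = {"chat.openai.com", "chatgpt.com", "gemini.google.com", "claude.ai"}

def flag_shadow_ai(log_lines):
    """Count per-user requests whose host matches a known AI domain."""
    hits = Counter()
    for line in log_lines:
        try:
            user, url = line.split(maxsplit=1)
        except ValueError:
            continue  # skip malformed lines
        host = urlparse(url.strip()).hostname or ""
        # Match the domain itself or any subdomain of it.
        if host in AI_DOMAINS or any(host.endswith("." + d) for d in AI_DOMAINS):
            hits[user] += 1
    return dict(hits)

sample = [
    "alice https://chat.openai.com/c/123",
    "bob https://example.com/report",
    "alice https://claude.ai/chats/456",
]
print(flag_shadow_ai(sample))  # {'alice': 2}
```

A report like this gives IT a factual basis for outreach: rather than punishing users, teams can see which departments are reaching for unsanctioned AI and prioritize offering them governed alternatives.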
This article was published on www.darkreading.com on Thu, 30 Nov 2023 23:19:27 +0000.