Forecasts like the Nielsen Norman Group's estimate that AI tools may improve an employee's productivity by 66% have companies everywhere wanting to leverage these tools immediately. But how can companies employ these powerful AI/ML tools without compromising their security? Below, we'll touch on the risks AI tools can present and offer eight tips to help your company leverage them as securely as possible.

All AI models rely on data to statistically generate results for their focus area. Companies using external AI/ML tools run the risk that the vendor will use their data to train its algorithms, and could accidentally leak, or even steal, their intellectual property. An AI tool trained on bad data will also generate inaccurate results. Still, many AI tools are quite powerful and can be used securely with a few precautions.

Tip 1: Assume free AI tools leverage your data.
If you use a free AI tool or service, assume it may use the data you provide. Many early AI services and tools, including ChatGPT, employ a usage model similar to social media services like Facebook and TikTok: a free AI service can gather data from your devices and retain your prompts, which it uses to train its model. While not necessarily malicious, you never know how an AI service will monetize your data.

Tip 2: Limit the data employees can share.
A simple risk of using external AI tools is that an employee, intentionally or not, could share sensitive or confidential data with the tool, exposing that data to misuse. If you limit employees to accessing only the precise data they need, you minimize the amount of data they might divulge to an external AI tool (see the redaction sketch at the end of this article).

Tip 3: Review and configure privacy settings.
Whether AI tools or services are free or paid, they often have settings for added privacy. Some tools can be configured not to store your prompt data. Review your AI tool's privacy settings and configure them to your preferences.

Tip 4: Pay for enterprise-grade protections.
While free tools likely reserve the right to use your data, you can pay licensing fees for enterprise versions of AI tools that offer more protection. These versions often include commitments not to train on your data and to keep it segmented from other customers' data.

Tip 5: Vet AI vendors and their supply chains.
Vendor and supply chain security validation should be a default practice for any new external tool or service partner you adopt. A formal third-party risk management (TPRM) process is vital when taking on AI tools and service partners.

Tip 6: Leverage local open source AI frameworks and tools.
The accessibility of online, external AI tools that are free and user-friendly brings risk. To ensure data privacy, consider avoiding external tools altogether. There are free AI frameworks and tools you can deploy and run yourself, keeping your data in-house (see the local-model sketch at the end of this article).

Tip 7: Audit and document AI usage.
It's difficult to track every asset, software-as-a-service tool, or data store that your employees use, as the cloud and web services have empowered everyone to adopt web-based tools on their own. To monitor which external AI tools your employees use, how they use them, and with what data, audit and document AI usage as part of your buying process. By identifying all the internal and external AI tools in use, you'll better understand the risks you face (see the log-audit sketch at the end of this article).

Tip 8: Create and communicate an AI usage policy.
Companies that traffic in public data may face a low data-sharing risk and can allow a permissive AI policy, while those that deal in confidential matters will need stringent rules around data use in external AI tools. Once you have that policy, communicate it, and the risks associated with AI tools, to your employees regularly.

AI tools can deliver quick and easy results and offer huge business benefits, while also bringing hidden risks. By adopting a few precautions, you can leverage AI tools securely and be on your way to realizing the benefits these tools promise.
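To make Tip 2 concrete, here is a minimal sketch of a prompt-redaction wrapper that could sit between employees and an external AI service. The patterns and the send_to_ai_tool() stand-in are illustrative assumptions, not any vendor's actual API; a real deployment would use a vetted data-loss-prevention tool with patterns tuned to your organization.

```python
import re

# Illustrative patterns only -- a real deployment would rely on a
# vetted DLP library, not three regexes.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def redact(prompt: str) -> str:
    """Replace anything matching a sensitive pattern before the
    prompt ever leaves the company network."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED-{label.upper()}]", prompt)
    return prompt

def send_to_ai_tool(prompt: str) -> None:
    # Hypothetical stand-in for a call to an external AI service.
    print("Sending:", redact(prompt))

send_to_ai_tool("Summarize this ticket from jane.doe@example.com, key sk-abc123def456ghi789")
```

The point of the wrapper is architectural: redaction happens on your side of the network boundary, so even a misconfigured external tool never sees the raw values.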
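For Tip 6, here is one hedged illustration of running an open source model entirely on local hardware with the Hugging Face transformers library, so prompts never leave your machine. The model name is an arbitrary small example, and the snippet assumes transformers and a backend such as PyTorch are installed.

```python
# Requires: pip install transformers torch
from transformers import pipeline

# "distilgpt2" is just an illustrative small model; any locally
# downloadable open source model works the same way.
generator = pipeline("text-generation", model="distilgpt2")

# The prompt is processed entirely on local hardware -- nothing is
# sent to an external AI service.
result = generator("Our quarterly security review found", max_new_tokens=40)
print(result[0]["generated_text"])
```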
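And for Tip 7, this sketch shows one way an audit might surface external AI usage: scanning gateway logs for known AI service domains. The "user domain" log format and the domain list are assumptions for the example; substitute parsing for your own proxy or DNS resolver's format and maintain your own domain inventory.

```python
from collections import Counter

# Illustrative, deliberately incomplete list of AI service domains.
AI_DOMAINS = {"chat.openai.com", "api.openai.com", "bard.google.com", "claude.ai"}

# Hypothetical log lines in "user domain" form.
log_lines = [
    "alice chat.openai.com",
    "bob intranet.example.com",
    "alice claude.ai",
]

usage = Counter()
for line in log_lines:
    user, domain = line.split()
    if domain in AI_DOMAINS:
        usage[(user, domain)] += 1

for (user, domain), count in usage.items():
    print(f"{user} contacted {domain} {count} time(s)")
```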
Originally published on www.darkreading.com, Thu, 30 Nov 2023 20:25:01 +0000.