An August 2023 survey by BlackBerry found that 75% of organizations worldwide were considering or implementing bans on ChatGPT and other generative AI applications in the workplace, with the vast majority citing risks to data security and privacy.
Such data security issues arise because user input and interactions are the fuel that public AI platforms rely on for continuous learning and improvement.
If a user shares confidential company data with a chatbot, that information can be absorbed into the platform's training data, and the chatbot might later reveal it to other users.
To better evaluate and mitigate these risks, most enterprises that have begun to test the generative AI waters have leaned primarily on two senior roles: the CISO, who is ultimately responsible for securing the company's sensitive data, and the general counsel, who oversees the organization's governance, risk and compliance function.
As organizations begin to train AI models on their own data, they'd be remiss not to include another essential role in their strategic deliberations: the CTO.

Data Security and the CTO

While the role of the CTO varies widely depending on the organization, almost every CTO is responsible for building the technology stack and defining the policies that dictate how that infrastructure is used.
Their strategic insight becomes all the more important as organizations that are hesitant to go all-in on public AI instead invest in developing their own AI models trained on their own data.
One of the major announcements at OpenAI's recent DevDay conference focused on the release of Custom Models, a program through which companies can train tailored versions of OpenAI's flagship models on their own proprietary data sets.
Naturally, other LLM providers are likely to follow suit, given the pervasive uncertainty around data security.
In the process of training these AI models, organizations often use customer data as a part of the training sets and store it in source code repositories.
This intermingling of sensitive customer data with source code presents a number of challenges.
Whereas customer data is typically managed within secured databases, with generative AI models this sensitive information can become embedded in the model's parameters and outputs.
This creates a scenario where the AI model itself becomes a repository of sensitive data, blurring the traditional boundaries between data storage and application logic.
With less defined boundaries, sensitive data can quickly sprawl across multiple devices and platforms within the organization, significantly increasing the risk that it is exposed inadvertently, compromised by external parties or leaked by malicious insiders.
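One pragmatic way to limit this sprawl is to redact obvious personal data before it ever enters a training corpus or source code repository. The sketch below is illustrative only: the regex patterns and placeholder format are assumptions, and a real deployment would use a vetted PII-detection library with rules tuned to the organization's data.

```python
import re

# Hypothetical patterns for illustration; production systems need far
# more comprehensive, locale-aware detection rules.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b"),
}

def redact(text: str) -> tuple[str, int]:
    """Replace matches of each PII pattern with a labeled placeholder.

    Returns the redacted text and the total number of substitutions.
    """
    total = 0
    for label, pattern in PII_PATTERNS.items():
        text, n = pattern.subn(f"[REDACTED-{label.upper()}]", text)
        total += n
    return text, total

sample = "Contact jane.doe@example.com, SSN 123-45-6789."
clean, hits = redact(sample)
```

Running redaction at the ingestion boundary, before data lands in a repository, keeps the sensitive values out of version history, where they are otherwise very hard to purge.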
3 Ways the CTO Can Help Strike the Balance

Every enterprise CTO understands the principle of trade-offs.
Given their top-down view of the IT environment and how it interacts with third-party cloud services, the CTO is in a unique position to define an AI strategy that keeps data security top of mind.
Educate Before You Eradicate: Given the many security and regulatory risks of exposing data via generative AI, it's only natural that many organizations might reflexively ban its use in the short term. A more sustainable approach is to educate employees on how to use these tools safely within clearly defined guardrails.
The CTO can help ensure that the organization's Acceptable Use Policy clearly outlines the appropriate and inappropriate uses of generative AI technologies, detailing the specific scenarios in which generative AI can be utilized while emphasizing data security and compliance standards.
By enforcing strict access controls, the CTO can minimize the risk of unauthorized access or leaks of sensitive data and establish processes that require code to be reviewed and approved before being merged into the main repository.
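A pre-merge review process can be partially automated with a scan of each proposed change. The following is a minimal sketch, not a substitute for a dedicated secret scanner: the blocklist patterns and the example diff are assumptions made for illustration.

```python
import re

# Illustrative patterns only; real CI pipelines typically use dedicated
# secret-scanning tools with far larger, maintained rule sets.
BLOCKLIST = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                     # AWS access key ID format
    re.compile(r"-----BEGIN (?:RSA )?PRIVATE KEY-----"), # private key header
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),                # US SSN format
]

def scan_diff(diff_text: str) -> list[str]:
    """Return added lines in a unified diff that match a blocked pattern."""
    findings = []
    for line in diff_text.splitlines():
        # Added lines start with "+"; "+++" is the file header, not content.
        if line.startswith("+") and not line.startswith("+++"):
            if any(p.search(line) for p in BLOCKLIST):
                findings.append(line)
    return findings

# Hypothetical diff: one benign added line, one that embeds a key-like string.
example_diff = """\
+++ b/train/data.py
+customer_note = "ok to share"
+aws_key = "AKIAABCDEFGHIJKLMNOP"
"""
findings = scan_diff(example_diff)
```

Wired into CI, a nonzero finding count would fail the check and block the merge until a reviewer signs off, which is exactly the approval gate described above.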
The CTO has the technical expertise to understand how data is collected, processed, and used within AI systems, which is essential in creating effective opt-out mechanisms that genuinely protect user data.
The CTO can also play a key role in defining the strategic direction of how these AI solutions are responsibly deployed, ensuring that user privacy and data security are prioritized and integrated into the company's technology strategy.
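At its core, an opt-out mechanism for AI training reduces to filtering consent-revoked records out of the training set before any model sees them. This is a minimal sketch under stated assumptions: the `Record` type, the `user_id` field and the `OPTED_OUT` registry are hypothetical stand-ins for an organization's real data model and consent-management system.

```python
from dataclasses import dataclass

@dataclass
class Record:
    user_id: str
    text: str

# Hypothetical consent registry: IDs of users who opted out of AI training.
# In practice this would be queried from a consent-management system.
OPTED_OUT: set[str] = {"u-1002"}

def filter_training_set(records: list[Record]) -> list[Record]:
    """Drop records belonging to opted-out users before training begins."""
    return [r for r in records if r.user_id not in OPTED_OUT]

corpus = [
    Record("u-1001", "support ticket: login issue"),
    Record("u-1002", "support ticket: billing dispute"),
    Record("u-1003", "support ticket: feature request"),
]
training_set = filter_training_set(corpus)
```

The key design point is that the filter runs at dataset-assembly time: once data has been trained into a model, honoring a later opt-out is far harder, so consent must be enforced upstream.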
This Cyber News was published on securityboulevard.com. Publication date: Mon, 19 Feb 2024 15:43:04 +0000