Its direct impact on people's lives has raised considerable questions around AI ethics, data governance, trust and legality.
There are risks that large language models will be used to manipulate data in ways that will make us question the veracity of all sorts of information.
Vulnerabilities, and the proliferation of knowledge about how to exploit them, mean that well-meaning initiatives without the right security controls may put relationships at risk and leave proprietary data exposed.
Using gen AI in the organization also puts trust at risk.
Leading organizations apply a cyber risk-based framework that is fully integrated into their enterprise risk management program.
They weigh cybersecurity risk heavily when evaluating overall enterprise risk.
Business leaders and the board need non-technical explanations and a common understanding to agree on governance guardrails and to appreciate the risks of having actual business data compromised.
Stories and what-if scenarios can help users gain a gut-level appreciation of the risks of undermining trust.
Users need to appreciate that once corporate data is out in the public environment, it is not coming back.
Legal, Risk, IT, Information Security, Marketing and HR should all be engaged in charting the gen AI journey.
To get ahead of the risks of rogue efforts, establish an environment for users to test the appropriate uses and limitations of various models, and of the data that trained the model.
Identify the prerequisites for sustainable generative AI success: a strategic discussion with business leaders is required to ensure that the generative AI journey actually leads to business value.
A modern data foundation is required to create measurable business value from proprietary data in your model.
A well-planned and well-executed security strategy can mitigate the risks of compromise that come with generative AI products.
Develop use cases that reinforce trust: demonstrate to the CEO, the board and other leaders what is possible with generative AI. Also, highlight the privacy and intellectual property risks and propose criteria for evaluating the value of use cases that will inevitably be brought forward by other areas of the business.
In time, generative AI could support enterprise governance and information security, protecting against fraud, improving regulatory compliance, and proactively identifying risk by drawing cross-domain connections and inferences both within and outside the organization.
Know data sources and data lineage: monitor network traffic and shadow models to prevent data from leaving the enterprise.
Database records, system files, configurations, user files, applications, and customer data may all be at risk of leakage in a public large language model environment.
Not understanding or curating the training data set can lead to inaccuracies, misinformation, discrimination, bias, harm, unfairness, or adversarial actions such as data poisoning.
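As a minimal illustration of keeping corporate data from leaving the enterprise boundary, the sketch below shows a hypothetical pre-submission filter that redacts sensitive patterns from a prompt before it is sent to a public model. The pattern names and regular expressions are illustrative assumptions, not any specific product's policy; a real deployment would rely on an organization-specific DLP classifier.

```python
import re

# Hypothetical patterns for sensitive corporate data (illustrative only);
# a real deployment would use an organization-specific DLP policy.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\bsk-[A-Za-z0-9]{16,}\b"),
}

def redact_prompt(prompt: str) -> tuple[str, list[str]]:
    """Replace sensitive matches with placeholders before the prompt
    leaves the enterprise boundary; also return which pattern names
    fired, so the event can be audit-logged."""
    hits = []
    for name, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(prompt):
            hits.append(name)
            prompt = pattern.sub(f"[REDACTED-{name.upper()}]", prompt)
    return prompt, hits

if __name__ == "__main__":
    text, hits = redact_prompt(
        "Contact jane.doe@corp.com, key sk-abcdef1234567890AB"
    )
    print(text)  # sensitive spans replaced with placeholders
    print(hits)
```

A filter like this does not make a public model safe for proprietary data; it only reduces accidental leakage, which matters because, as noted above, data that reaches a public environment is not coming back.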
Lisa O'Connor is Accenture's Managing Director, Global Security Research and Development, a visionary leader who understands both the opportunities and risks of emerging technologies to the business.
This article was published on www.cyberdefensemagazine.com on Fri, 05 Jan 2024 06:13:06 +0000.