Singapore has released a draft governance framework on generative artificial intelligence that it says is necessary to address emerging issues, including incident reporting and content provenance.
The proposed model builds on the country's existing AI governance framework, which was first released in 2019 and last updated in 2020.
Also: How generative AI will deliver significant benefits to the service industry.
There is growing global consensus that consistent principles are necessary to create an environment in which GenAI can be used safely and confidently, the Singapore government agencies said.
The draft document encompasses proposals from a discussion paper IMDA had released last June, which identified six risks associated with GenAI, including hallucinations, copyright challenges, and embedded biases, and a framework on how these can be addressed.
The proposed GenAI governance framework also draws insights from previous initiatives, including a catalog on how to assess the safety of GenAI models and testing conducted via an evaluation sandbox.
The draft GenAI governance model covers nine key areas that Singapore believes play key roles in supporting a trusted AI ecosystem.
The framework also offers practical suggestions that AI model developers and policymakers can apply as initial steps, IMDA and AI Verify said.
One of the nine components looks at content provenance: There needs to be transparency around where and how content is generated, so consumers can determine how to treat online content.
Because it can be created so easily, AI-generated content such as deepfakes can exacerbate misinformation, the Singapore agencies said.
It may not be feasible for all content created or edited to include these technologies in the near future and provenance information also can be removed.
The draft framework suggests working with publishers, including social media platforms and media outlets, to support the embedding and display of digital watermarks and other provenance details.
Another key component focuses on security where GenAI has brought with it new risks, such as prompt attacks infected through the model architecture.
This allows threat actors to exfiltrate sensitive data or model weights, according to the draft framework.
These will need to look at how the ability to inject natural language as input may create challenges when implementing the appropriate security controls.
The probabilistic nature of GenAI also may bring new challenges to traditional evaluation techniques, which are used for system refinement and risk mitigation in the development lifecycle.
The framework calls for the development of new security safeguards, which may include input moderation tools to detect unsafe prompts as well as digital forensics tools for GenAI, used to investigate and analyze digital data to reconstruct a cybersecurity incident.
Also: Singapore keeping its eye on data centers and data models as AI adoption grows.
With AI governance still a nascent space, building international consensus also is key, they said, pointing to Singapore's efforts to collaborate with governments such as the US to align their respective AI governance framework.
Singapore is accepting feedback on its draft GenAI governance framework until March 15.
This Cyber News was published on www.zdnet.com. Publication date: Tue, 16 Jan 2024 20:43:05 +0000