Shane Jones, a principal software engineering manager at Microsoft, has sounded the alarm about the safety of Copilot Designer, a generative AI tool introduced by the company in March 2023.
His concerns have prompted him to submit a letter to both the US Federal Trade Commission and Microsoft's board of directors, calling for an investigation into the text-to-image generator.
Jones's concerns center on Copilot Designer's capacity to generate inappropriate images, spanning themes such as explicit content, violence, underage drinking, and drug use, along with instances of political bias and conspiracy theories.
Beyond raising these issues, he has stressed the need to educate the public, particularly parents and educators, about the risks the tool poses in settings such as schools where it may be used.
Despite Jones's persistent efforts over the past three months to address the issue internally at Microsoft, the company has not taken action to remove Copilot Designer from public use or implement adequate safeguards.
His recommendations, which included adding disclosures and adjusting the product's rating on the Android app store, were not adopted.
Microsoft responded by affirming its commitment to addressing employee concerns through established company channels, and expressed appreciation for efforts to improve the safety of its technology.
The situation underscores the internal challenges companies may face in balancing innovation with the responsibility of ensuring their technologies are safe and ethical.
This incident isn't the first time Jones has spoken out about AI safety concerns.
Despite pressure from Microsoft's legal team, Jones has continued to voice his concerns, going so far as to contact US senators about the broader risks AI poses.
The case of Copilot Designer adds to the ongoing scrutiny of AI technologies in the tech industry.
Google recently paused the image generation feature in Gemini, its competitor to OpenAI's ChatGPT, after complaints about historically inaccurate images involving race.
DeepMind, Google's AI division, assured users that the feature would be reinstated once the concerns had been addressed and responsible use of the technology ensured.
As AI technologies become more deeply integrated into daily life, incidents like the one involving Copilot Designer underscore the need for vigilant oversight and ethical consideration in AI development and deployment.
Balancing innovation with responsible use remains a complex challenge, one that requires collaboration among tech companies, regulators, and other stakeholders to ensure AI evolves safely and ethically.
This Cyber News was published on www.cysecurity.news. Publication date: Sat, 09 Mar 2024 18:43:05 +0000