The measures Apple has implemented to prevent customer data theft and misuse by artificial intelligence will have a marked impact on hardware security, especially as AI becomes more prevalent on customer devices, analysts say.
Apple emphasized customer privacy in new AI initiatives announced during the Worldwide Developers Conference a few weeks ago.
Apple has full control over its AI infrastructure, which makes it harder for adversaries to break into systems.
The company's black-box approach also provides a blueprint for rival chip makers and cloud providers for AI inferencing on devices and servers, analysts say.
Apple's AI Approach

The AI back end includes new foundation models, servers, and Apple Silicon server chips.
AI queries originating from Apple devices are packaged in a secure lockbox, unpacked in Apple's Private Cloud Compute, and verified as coming from an authorized user and device; answers are sent back to devices and are accessible only to authorized users.
Data isn't visible to Apple or other companies and is deleted once the query is complete.
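The lifecycle described above — package the query on-device, verify it at the server, answer, then discard the data — can be sketched in miniature. This is an illustrative toy, not Apple's actual protocol: the shared `DEVICE_KEY`, the one-time-pad encryption, and the `serve()` helper are all assumptions made for demonstration.

```python
import hashlib
import hmac
import secrets

DEVICE_KEY = secrets.token_bytes(32)  # hypothetical key provisioned to an authorized device


def xor(data: bytes, pad: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, pad))


def package_query(query: bytes):
    """Device side: encrypt the query and tag it so the server can verify its origin."""
    pad = secrets.token_bytes(len(query))          # fresh one-time pad per request
    ciphertext = xor(query, pad)
    tag = hmac.new(DEVICE_KEY, ciphertext, hashlib.sha256).digest()
    return ciphertext, tag, pad


def serve(ciphertext: bytes, tag: bytes, pad: bytes) -> bytes:
    """Server side: refuse unverified requests; discard query data once answered."""
    expected = hmac.new(DEVICE_KEY, ciphertext, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise PermissionError("device verification failed")
    query = xor(ciphertext, pad)
    answer = b"answer to: " + query
    del query, pad                                  # data deleted once the query completes
    return answer


ct, tag, pad = package_query(b"what's on my calendar?")
print(serve(ct, tag, pad))
```

A tampered ciphertext fails the HMAC check and is rejected before decryption, mirroring the article's point that only verified requests from authorized devices are processed.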
Apple has etched security features directly into device and server chips, which authorize users and protect AI queries.
Data remains secure on-device and in transit via features such as secure boot, file encryption, user authentication, and secure communications over the Internet via TLS.

Apple is its own customer with a private infrastructure, which is a big advantage, while rival cloud providers and chip makers work with partners using different security, hardware, and software technologies, Sanders says.
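On the transport side, the TLS protections mentioned above correspond to settings any client can enforce. A minimal sketch using Python's standard `ssl` module (the TLS 1.3 floor is an assumption for illustration, not something the article specifies):

```python
import ssl

# Default context: verifies server certificates and hostnames out of the box.
ctx = ssl.create_default_context()

# Refuse older, weaker protocol versions.
ctx.minimum_version = ssl.TLSVersion.TLSv1_3

print(ctx.verify_mode == ssl.CERT_REQUIRED)  # certificate checking is on
print(ctx.check_hostname)                    # hostname checking is on
```

Wrapping a socket with this context (`ctx.wrap_socket(sock, server_hostname=host)`) then gives encrypted, authenticated communication of the kind the article describes.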
Microsoft's Pluton Approach

Apple's main rival, Microsoft, is already on its way to end-to-end AI privacy with security features in chips and Azure cloud.
Last month the company announced a class of AI PCs called Copilot+ PCs that require a Microsoft security chip called Pluton.
The first AI PCs shipped this month with chips from Qualcomm, with Pluton switched on by default.
The chip is now primed to protect AI customer data, says David Weston, vice president for enterprise and OS security at Microsoft.
Intel, AMD, and Nvidia are also building black boxes in hardware that keep AI data safe from hackers.
Intel didn't respond to requests for comment on its chip-to-cloud strategy, but in earlier interviews the company said it is prioritizing securing chips for AI.

Security Through Obscurity May Work

But a mass-market approach by chip makers could give attackers a larger surface to intercept data or break into workflows, analysts say.
Intel and AMD have a documented history of vulnerabilities, including Spectre, Meltdown, and their derivatives, says Dylan Patel, founder of chip consulting firm SemiAnalysis.
In contrast, Apple is a relatively new chip designer and can take a clean-slate approach to chip design.
Intel and AMD work with hardware and software partners plugging their own technologies, which creates a longer supply chain to secure, says Alex Matrosov, CEO of hardware security firm Binarly.
This gives hackers more chances to poison or steal data used in AI and creates problems in patching security holes as hardware and software vendors operate on their own timelines, he says.
Intel and AMD chips weren't originally designed for confidential computing, and firmware-based rootkits could intercept AI processes, he says.
This Cyber News was published on www.darkreading.com. Publication date: Mon, 01 Jul 2024 14:00:09 +0000