Anthropic releases Claude 3 Haiku, an AI model built for speed and affordability

San Francisco-based startup Anthropic has just released Claude 3 Haiku, the newest addition to its Claude 3 family of AI models.
Haiku stands out as the fastest and most affordable model in its intelligence class, offering advanced vision capabilities and strong performance on industry benchmarks.
The release of Haiku comes shortly after Anthropic introduced the Claude 3 model family earlier this month, which includes Claude 3 Opus and Claude 3 Sonnet.
Haiku completes the trio, providing enterprise customers with a range of options to balance intelligence, speed, and cost based on their specific use cases.
One of Haiku's key strengths is speed: it can process 21,000 tokens per second for prompts under 32,000 tokens.
This rapid processing power allows businesses to analyze large volumes of documents, like quarterly filings, contracts, or legal cases, in a fraction of the time it would take other models in its performance tier.
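To put that throughput in perspective, a rough back-of-the-envelope estimate (the document size below is an illustrative assumption, not a figure from Anthropic):

```python
# Estimate how long Haiku would take to ingest a large document at the
# stated rate of 21,000 input tokens per second (prompts under 32k tokens).
TOKENS_PER_SECOND = 21_000

def processing_time(prompt_tokens: int) -> float:
    """Return the estimated seconds to process a prompt of the given size."""
    return prompt_tokens / TOKENS_PER_SECOND

# A quarterly filing of roughly 30,000 tokens (illustrative) would take:
print(f"{processing_time(30_000):.2f} s")  # → 1.43 s
```

At that rate, a document near the 32,000-token prompt limit is consumed in about a second and a half.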
In addition to its speed, Haiku boasts advanced vision capabilities, allowing it to process and analyze visual input such as charts, graphs, and photos.
This feature opens up new possibilities for enterprise applications that rely heavily on visual data.
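As a concrete illustration, the Claude 3 Messages API accepts images as base64-encoded content blocks alongside text. The sketch below only constructs such a request message without sending it; the file name in the usage comment is a hypothetical example:

```python
import base64

def build_vision_message(image_bytes: bytes, media_type: str, question: str) -> dict:
    """Build a user message pairing an image with a text question,
    following the Claude 3 Messages API content-block format."""
    return {
        "role": "user",
        "content": [
            {
                "type": "image",
                "source": {
                    "type": "base64",
                    "media_type": media_type,
                    "data": base64.b64encode(image_bytes).decode("ascii"),
                },
            },
            {"type": "text", "text": question},
        ],
    }

# Hypothetical usage with a chart image read from disk:
# with open("q3_revenue_chart.png", "rb") as f:
#     msg = build_vision_message(f.read(), "image/png", "Summarize this chart.")
```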
Anthropic has also prioritized enterprise-grade security and robustness in the development of Haiku.
The company conducts rigorous testing to minimize the risk of harmful outputs and model jailbreaks, while implementing additional layers of defense like continuous systems monitoring, secure coding practices, and stringent access controls.
The release of Claude 3 Haiku comes at a time when enterprise demand for powerful, efficient, and secure AI solutions is at an all-time high.
As businesses increasingly turn to AI to streamline operations, improve customer experiences, and gain a competitive edge, models like Haiku are poised to play a crucial role in the adoption and scaling of AI technologies across industries.
Anthropic's Claude 3 family of models, which also includes the recently released Opus and Sonnet, has already set new benchmarks for AI performance across a wide range of cognitive tasks.
With the addition of Haiku, the company now offers a comprehensive suite of AI solutions to cater to the diverse needs of enterprise customers.
Claude 3 Haiku is available now through Anthropic's API and for Claude Pro subscribers on claude.ai.
The model will also be coming soon to Amazon Bedrock and Google Cloud Vertex AI, further expanding its accessibility to businesses worldwide.
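For developers, a minimal request to Haiku through Anthropic's Messages API looks roughly like the following. This sketch only constructs the request body; the model ID and endpoint follow Anthropic's public documentation at the time of the Claude 3 launch, and the prompt text is an illustrative assumption:

```python
import json

API_URL = "https://api.anthropic.com/v1/messages"
MODEL = "claude-3-haiku-20240307"  # Haiku model ID at launch

def build_request(prompt: str, max_tokens: int = 1024) -> dict:
    """Build a Messages API request body addressed to Claude 3 Haiku."""
    return {
        "model": MODEL,
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    }

body = build_request("Summarize the attached quarterly filing in three bullet points.")
print(json.dumps(body, indent=2))
```

A real call would POST this body to `API_URL` with an `x-api-key` header; Anthropic's official SDKs wrap the same schema.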


This Cyber News was published on venturebeat.com. Publication date: Wed, 13 Mar 2024 23:13:06 +0000


Cyber News related to Anthropic releases Claude 3 Haiku, an AI model built for speed and affordability

Anthropic confirms it suffered a data leak - It's been an eventful week for AI startup Anthropic, creator of the Claude family of large language models and associated chatbots. The company says that on Monday, January 22nd, it became aware that a contractor inadvertently sent a file containing ...
1 year ago Venturebeat.com Inception
Addressing Deceptive AI: OpenAI Rival Anthropic Uncovers Difficulties in Correction - There is a possibility that artificial intelligence models can be trained to deceive. According to a new research led by Google-backed AI startup Anthropic, if a model exhibits deceptive behaviour, standard techniques cannot remove the deception and ...
1 year ago Cysecurity.news
Anthropic's Claude 3.7 Sonnet is here and results are insane - “Claude Code was my ‘Feel the AGI moment.’ I’ve thrown bugs at this thing that no other models could fix, but Claude Code blasted through them," one user wrote in a Reddit thread. Additionally, Claude 3.7 Sonnet appears to ...
7 months ago Bleepingcomputer.com
CVE-2025-52882 - Claude Code is an agentic coding tool. Claude Code extensions in VSCode and forks (e.g., Cursor, Windsurf, and VSCodium) and JetBrains IDEs (e.g., IntelliJ, Pycharm, and Android Studio) are vulnerable to unauthorized websocket connections from an ...
3 months ago
Anthropic Pledges to Not Use Private Data to Train Its AI - Anthropic, a leading generative AI startup, has announced that it would not employ its clients' data to train its Large Language Model and will step in to safeguard clients facing copyright claims. Anthropic, which was established by former OpenAI ...
1 year ago Cysecurity.news
Anthropic is testing GPT Codex-like Claude Code web app - Anthropic, an AI safety and research company, is currently testing a new web application called Claude Code, which functions similarly to OpenAI's GPT Codex. Claude Code is designed to assist developers by generating and understanding code, enhancing ...
1 month ago Bleepingcomputer.com
Anthropic’s Claude AI is helping researchers analyze cyber threats - Anthropic, an AI safety and research company, has developed Claude, an AI system that is now being used to analyze cyber threats and enhance cybersecurity defenses. The AI's ability to process vast amounts of threat intelligence data quickly and ...
1 month ago Theverge.com
Claude copies ChatGPT with $200 Max plan, but users aren't happy - Claude has a new subscription tier called "MAX," but it costs a whopping $200 per month, and users aren't happy with how the company enforces rate limits. In another thread, some users alleged that the existing $20 Claude Pro subscription is now ...
5 months ago Bleepingcomputer.com
Malware devs abuse Anthropic's Claude AI to build ransomware - Cybercriminals are increasingly exploiting advanced AI technologies to enhance their malicious capabilities, and the latest trend involves the abuse of Anthropic's Claude AI to develop ransomware. This alarming development highlights the evolving ...
1 month ago Bleepingcomputer.com
Claude is testing ChatGPT-like Deep Research feature Compass - To make things easier for users, Claude is testing some system prompts for the Compass feature, such as "Find credible sources for my research" and "Provide evidence-based insights for my topic". "Compass" will allow Claude to perform certain tasks, ...
6 months ago Bleepingcomputer.com
LMSYS launches 'Multimodal Arena': GPT-4 tops leaderboard, but AI still can't out-see humans - The arena collected over 17,000 user preference votes across more than 60 languages in just two weeks, offering a glimpse into the current state of AI visual processing capabilities. OpenAI's GPT-4o model secured the top position in the Multimodal ...
1 year ago Venturebeat.com
Google to Announce Chat-GPT Rival On February 8 Event - There seems to be a lot of consternation on Google's part at the prospect of a showdown with ChatGPT on the February 8 event. The search giant has been making moves that suggest it is preparing to enter the market for large language models, where ...
2 years ago Cybersecuritynews.com
Best of 2023: Diamond Model of Intrusion Analysis: A Quick Guide - Any intrusion into a network calls for a thorough analysis to give security teams cyber intelligence about different threats and to help thwart similar future attacks. Effective incident analysis has long been held back by uncertainty and high false ...
1 year ago Securityboulevard.com Axiom
How to detect poisoned data in machine learning datasets - Almost anyone can poison a machine learning dataset to alter its behavior and output substantially and permanently. With careful, proactive detection efforts, organizations could retain weeks, months or even years of work they would otherwise use to ...
1 year ago Venturebeat.com
Do Claude Code Security Reviews Pass the Vibe Check? - The article "Do Claude Code Security Reviews Pass the Vibe Check?" explores the effectiveness and reliability of using Claude, an AI language model, for conducting code security reviews. It delves into the capabilities of Claude in identifying ...
1 month ago Darkreading.com
Anthropic's new Claude feature can leak data, users told to monitor chats closely - Anthropic, a leading AI company, has introduced a new feature in its Claude AI assistant that has raised significant security concerns. This feature, designed to enhance user interaction, has been found to potentially leak sensitive user data. Users ...
1 month ago Arstechnica.com
Anthropic Report Sheds Light on Emerging Threats from Generative AI Misuse - These include an influence-as-a-service operation orchestrating over 100 social media bots across multiple countries, credential stuffing attacks targeting IoT camera systems, sophisticated recruitment fraud campaigns targeting Eastern European job ...
5 months ago Cybersecuritynews.com Hunters
Gmail Message Used to Trigger Code Execution in Claude and Bypass Protections - According to the Golan Yosef of Pynt, the attack centers on the MCP (Model Context Protocol) architecture, specifically targeting three key components: the Gmail MCP server as an untrusted content source, the Shell MCP server as the execution target, ...
2 months ago Cybersecuritynews.com
CVE-2025-55284 - Claude Code is an agentic coding tool. Prior to version 1.0.4, it's possible to bypass the Claude Code confirmation prompts to read a file and then send file contents over the network without user confirmation due to an overly broad allowlist of ...
1 month ago
How machine learning helps us hunt threats | Securelist - In this post, we will share our experience hunting for new threats by processing Kaspersky Security Network (KSN) global threat data with ML tools to identify subtle new Indicators of Compromise (IoCs). The model can process and learn from millions ...
1 year ago Securelist.com
CVE-2025-59829 - Claude Code is an agentic coding tool. Versions below 1.0.120 failed to account for symlinks when checking permission deny rules. If a user explicitly denied Claude Code access to a file and Claude Code had access to a symlink pointing to that file, ...
5 days ago
CVE-2025-59828 - Claude Code is an agentic coding tool. Prior to Claude Code version 1.0.39, when using Claude Code with Yarn versions 2.0+, Yarn plugins are auto-executed when running yarn --version. This could lead to a bypass of the directory trust dialog in ...
2 weeks ago
A Single Cloud Compromise Can Feed an Army of AI Sex Bots – Krebs on Security - “Once initial access was obtained, they exfiltrated cloud credentials and gained access to the cloud environment, where they attempted to access local LLM models hosted by cloud providers: in this instance, a local Claude (v2/v3) LLM model from ...
1 year ago Krebsonsecurity.com
Protect AI Unveils Gateway to Secure AI Models - Protect AI today launched a Guardian gateway that enables organizations to enforce security policies to prevent malicious code from executing within an artificial intelligence model. Guardian is based on ModelScan, an open source tool from Protect AI ...
1 year ago Securityboulevard.com