Tokyo startup Sakana AI lands $30M to forge new path with compact AI models

Sakana AI, a Tokyo-based artificial intelligence startup co-founded by two notable ex-Google engineers, announced today that it has raised $30 million in seed funding from high-profile technology investors.
The young company, founded just last year, aims to take a different approach to AI by developing smaller, more efficient models inspired by nature.
The seed round was led by Lux Capital, an investor in pioneering AI companies such as Hugging Face, with participation from Khosla Ventures, an early backer of OpenAI in 2019.
Japanese tech giants Sony, NTT and KDDI also participated in the round, marking a vote of confidence in Sakana from major domestic players.
The startup believes that getting smaller AI models to work together efficiently can be more effective than training single gigantic models on massive datasets, an approach that many leading AI labs have pursued.
Sakana co-founders David Ha and Llion Jones previously led AI research groups at Google.
Jones co-authored a landmark paper on the Transformer model in 2017 that underpins chatbots like OpenAI's ChatGPT today.
Sakana's approach stands out at a time when the AI field has fixated on scaling up models to beat benchmarks through sheer size.
This tactic has produced impressive results, but also led to backlash over the computing resources and environmental impact required to train and run colossal AI systems.
Sakana executives argue that large models become inefficient as they balloon in size, while smaller, specialized models can collaborate to match the capabilities of much larger systems.
The startup likens this to how a group of people, each with distinct skills, can outperform a lone polymath at complex jobs.
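The article does not describe Sakana's actual techniques, but the general idea of many small, specialized models collaborating can be illustrated with a toy ensemble. The sketch below is purely hypothetical: three trivial rule-based "specialists" classify a sentence as a question or a statement, and a majority vote combines them, so one specialist's mistake can be outvoted by the others.

```python
from collections import Counter

# Hypothetical sketch only -- NOT Sakana AI's method. Three tiny "specialist"
# models each apply one heuristic; a majority vote combines their answers.

def length_model(text: str) -> str:
    # Heuristic: very short texts tend to be questions.
    return "question" if len(text) < 40 else "statement"

def punctuation_model(text: str) -> str:
    # Heuristic: questions end with a question mark.
    return "question" if text.rstrip().endswith("?") else "statement"

def keyword_model(text: str) -> str:
    # Heuristic: questions start with an interrogative word.
    first = text.strip().split()[0].lower() if text.strip() else ""
    wh_words = {"who", "what", "when", "where", "why", "how"}
    return "question" if first in wh_words else "statement"

def ensemble(text: str) -> str:
    # Each small model votes; the majority label wins.
    votes = [m(text) for m in (length_model, punctuation_model, keyword_model)]
    return Counter(votes).most_common(1)[0][0]
```

For example, `ensemble("The weather is lovely today.")` returns `"statement"` even though the length heuristic alone guesses wrong, because the other two specialists outvote it.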
The founding team's pedigree and alternative vision were enough to attract top Silicon Valley and domestic Japanese investors to fund Sakana's Tokyo lab after just one year of operating independently.
With fresh capital and partnerships with the likes of NTT, the company will look to staff up and further develop its nature-inspired AI techniques.
Early-stage backing from U.S. and Japanese tech heavyweights signals confidence that Sakana could pioneer a new AI paradigm from Asia, while giving Japan influence in a strategic technology where the U.S. and China have dominated so far.


This Cyber News was published on venturebeat.com. Publication date: Tue, 16 Jan 2024 22:13:04 +0000

