SambaNova Announces That Fugaku-LLM Is Now a Part of Samba-1

At ISC24, SambaNova Systems, makers of the only purpose-built, full-stack AI platform, today announced that “Fugaku-LLM”, a Japanese large language model trained on Japan's fastest supercomputer, “Fugaku”, and published on Hugging Face on May 10, has been introduced into SambaNova's industry-leading Samba-1 Composition of Experts (CoE) technology.

This press release features multimedia. View the full release here: https://www.businesswire.com/news/home/20240513582498/en/

Fugaku-LLM is now a part of Samba-1. Photo featuring Rodrigo Liang, CEO, SambaNova Systems; Matsuoka Satoshi, Director of the RIKEN Center for Computational Science; Marshall Choy, SVP of Product, SambaNova Systems; Toshinori Kujiraoka, General Manager of APAC, SambaNova Systems. (Photo: Business Wire)

Matsuoka Satoshi, Director of the RIKEN Center for Computational Science, said, “We are very pleased that Fugaku-LLM, the Japanese large language model trained from scratch at scale on the supercomputer ‘Fugaku’, has been introduced into SambaNova's Samba-1 CoE, making the achievements of Fugaku available to many people. The flexibility and scalability of SambaNova's CoE are highly promising as a platform for hosting the results of large language models trained by the world's supercomputers.”

“Samba-1 employs a best-of-breed strategy from open source, which ensures that we always have access to the world's best and fastest AI models,” said Rodrigo Liang, Co-Founder and CEO of SambaNova Systems. “The addition of Fugaku-LLM, a Japanese LLM trained on Japan's renowned supercomputer, ‘Fugaku’, fits into this strategy. We are delighted to incorporate Fugaku's capabilities into this world-leading model.”

SambaNova's unique CoE architecture aggregates multiple expert models and improves performance and accuracy by routing each request to the best expert for the task. Fugaku-LLM is implemented on the CoE architecture and runs optimally on SambaNova's SN40L chip with its three-tier memory and dataflow architecture.
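The routing idea behind a Composition of Experts can be illustrated with a short sketch. The Python below is a minimal, hypothetical illustration of prompt-to-expert routing, not SambaNova's implementation or API; the Expert class, the EXPERTS list, the keyword-based scoring, and the route_prompt() helper are all assumptions made for the example (a production CoE would use a learned router rather than keyword overlap).

    # Minimal sketch of Composition-of-Experts routing (illustrative only;
    # not SambaNova's actual implementation or API).
    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Expert:
        name: str
        keywords: set[str]              # toy routing signal; a real CoE would use a learned router
        generate: Callable[[str], str]  # stands in for the expert model's inference call

    EXPERTS = [
        Expert("fugaku-llm-japanese", {"japanese", "translate", "nihongo"},
               lambda p: f"[Fugaku-LLM expert] {p}"),
        Expert("general-purpose", {"summary", "code", "email"},
               lambda p: f"[General expert] {p}"),
    ]

    def route_prompt(prompt: str) -> str:
        """Send the prompt to the expert whose keywords overlap it most."""
        tokens = set(prompt.lower().split())
        best = max(EXPERTS, key=lambda e: len(e.keywords & tokens))
        return best.generate(prompt)

    # Example: a translation request routes to the Japanese-specialized expert.
    print(route_prompt("Please translate this email into Japanese"))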

Fugaku-LLM on Samba-1 is being demonstrated at the SambaNova booth #A11, Hall H at ISC24.

About SambaNova Systems

Customers turn to SambaNova to quickly deploy state-of-the-art generative AI capabilities within the enterprise. Our purpose-built enterprise-scale AI platform is the technology backbone for the next generation of AI computing.

Headquartered in Palo Alto, California, SambaNova Systems was founded in 2017 by industry luminaries, and hardware and software design experts from Sun/Oracle and Stanford University. Investors include SoftBank Vision Fund 2, funds and accounts managed by BlackRock, Intel Capital, GV, Walden International, Temasek, GIC, Redline Capital, Atlantic Bridge Ventures, Celesta, and several others. Visit us at sambanova.ai or contact us at info@sambanova.ai. Follow SambaNova Systems on LinkedIn and on X.
