December 28, 2024

Meta released the latest version of its Llama artificial intelligence model, Llama 3.1, on Tuesday. The new release comes in three sizes, one of which is Meta’s largest and most powerful artificial intelligence model to date. Like earlier versions of Llama, the latest version remains open source, meaning it is freely available to access.

The new large language model (LLM) underscores the massive investments the social networking company is making to keep pace with rivals in artificial intelligence, from ambitious startups such as OpenAI and Anthropic to fellow tech giants like Google and Amazon.

The announcement also highlights the close relationship between Meta and Nvidia. Nvidia is a key Meta partner, supplying the social networking giant with the computing chips, called GPUs, used to train its artificial intelligence models, including the latest version of Llama.

A Meta spokesperson said during a media briefing that while companies such as OpenAI aim to make money by selling access to their proprietary LLMs or by providing services that help clients use the technology, Meta has no plans to launch a competing enterprise business of its own.

Instead, similar to when Meta released Llama 2 last summer, the company is working with a handful of technology companies that will provide customers with access to Llama 3.1 through their respective cloud computing platforms and sell the new software. Meta’s 25 Llama-related enterprise partners include Amazon Web Services, Google Cloud, Microsoft Azure, Databricks and Dell.

Although Meta CEO Mark Zuckerberg told analysts on a previous corporate earnings call that Meta generates some revenue from its corporate Llama partnerships, a Meta spokesperson said any financial benefit is incremental. Instead, Meta believes that by investing in Llama and related artificial intelligence technologies and making them freely available through open source, it can attract high-quality talent in a highly competitive market and reduce its overall computing infrastructure costs, among other benefits.

Meta’s launch of Llama 3.1 comes the same week that Zuckerberg and Nvidia CEO Jensen Huang are scheduled to speak together at a conference focused on advanced computer graphics. The social networking giant, one of Nvidia’s largest end customers, doesn’t operate an enterprise cloud business of its own, and it needs the latest chips to train the artificial intelligence models it uses internally for targeting and other products. Meta said, for example, that the largest version of the Llama 3.1 family announced on Tuesday was trained on 16,000 Nvidia H100 graphics processors.

But the relationship is also important to both companies because of what it represents.

For Nvidia, the fact that Meta is training an open source model that other companies can use and adapt to their business (without having to pay licensing fees or ask for permission) could expand the use of Nvidia’s own chips and keep demand high.

But open source models of this kind can cost hundreds of millions or even billions of dollars to create, and few companies can afford to develop and release them at similar levels of investment. Google and OpenAI, despite also being Nvidia customers, keep their most advanced models proprietary.

Meta, on the other hand, requires a reliable supply of the latest GPUs to train increasingly powerful models. Like Nvidia, Meta is trying to cultivate an ecosystem of developers who build AI applications around the company’s open-source software, even if that means giving away costly code and so-called model weights for free.

Ash Jhaveri, the company’s vice president of artificial intelligence partnerships, told CNBC that the open-source approach has benefited Meta because it gives developers access to the company’s in-house tools and invites them to build on top of them. It also helps Meta, he said, because the company uses the same artificial intelligence models internally, which gives it access to improvements made by the open-source community.

Zuckerberg wrote in a blog post on Tuesday that he was taking a “different approach” to Llama’s launch this week, adding, “We are actively building partnerships so that more companies in the ecosystem can provide unique features to their customers, too.”

Jhaveri said that because Meta is not an enterprise vendor, the social networking giant can refer companies inquiring about Llama to one of its enterprise partners, such as Nvidia.

The largest version of the Llama 3.1 family is called Llama 3.1 405B. This giant large language model (LLM) contains 405 billion parameters, the values a model learns during training that determine its overall size and capability. Generally speaking, LLMs with more parameters can perform more complex tasks than smaller ones, such as understanding context across long streams of text, solving complex mathematical equations, and even producing synthetic data that can be used to improve smaller AI models.

The social networking giant has also released smaller versions of Llama 3.1, called the Llama 3.1 8B and Llama 3.1 70B models. Meta says these smaller models are essentially upgraded versions of their predecessors and can be used to power chatbots and software coding assistants.
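Because the models are open, developers can download the weights and run the smaller versions themselves. The snippet below is a minimal sketch of that idea, not something from Meta’s announcement: it assumes the Hugging Face transformers library (with PyTorch and accelerate installed) and the gated meta-llama/Llama-3.1-8B-Instruct checkpoint, which requires accepting Meta’s license terms before it can be downloaded.

# Minimal sketch (assumed setup, not from the article): run the smallest
# Llama 3.1 instruct model as a simple chatbot via Hugging Face transformers.
# Requires: pip install torch transformers accelerate, plus license approval
# for the gated meta-llama/Llama-3.1-8B-Instruct repository (assumed model ID).
import torch
from transformers import pipeline

chatbot = pipeline(
    "text-generation",
    model="meta-llama/Llama-3.1-8B-Instruct",  # assumed model ID
    torch_dtype=torch.bfloat16,                # half precision to reduce memory use
    device_map="auto",                         # place layers on available devices
)

messages = [
    {"role": "system", "content": "You are a concise coding assistant."},
    {"role": "user", "content": "Write a Python one-liner that reverses a string."},
]

# The pipeline accepts chat-style messages and appends the model's reply.
result = chatbot(messages, max_new_tokens=128)
print(result[0]["generated_text"][-1]["content"])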

Meta also said that its US-based WhatsApp users and visitors to the Meta.AI website will be able to try out Llama 3.1 by interacting with the company’s digital assistant. A Meta spokesperson explained that the digital assistant will run on the latest version of Llama and should be able to answer complex math questions or solve software coding problems.

The spokesperson said that US-based WhatsApp and Meta.AI users will be able to switch between the new giant Llama 3.1 LLM and a smaller, less powerful but faster version to answer their questions.
