The FAIR team at Meta, Facebook's parent company, has unveiled a cutting-edge AI language model called LLaMA (Large Language Model Meta AI).
Both large tech companies and startups are now placing a strong emphasis on AI progress, using large language models such as Microsoft’s Bing AI, OpenAI’s ChatGPT, and Google’s upcoming Bard to power various applications.
Nevertheless, Meta has highlighted that LLaMA stands out from these models in several ways, including its scale and its accessibility to researchers.
Meta has indicated that the LLaMA models will vary in size, ranging from 7 billion to 65 billion parameters. While larger models have proven effective in enhancing the technology’s capabilities, they are also more expensive to run for “inference”, the process of generating responses. For comparison, OpenAI’s GPT-3 has 175 billion parameters.
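To get a feel for why parameter count matters at inference time, here is a rough, illustrative back-of-the-envelope sketch (our own estimate, not a figure from Meta or OpenAI) of the memory needed just to hold each model’s weights, assuming 16-bit precision and ignoring activations and other runtime overhead:

```python
# Illustrative estimate only: memory required to store model weights
# at inference time, assuming 2 bytes per parameter (fp16/bf16).
# Real deployments also need memory for activations, KV caches, etc.

def weight_memory_gb(num_parameters: float, bytes_per_param: int = 2) -> float:
    """Rough memory footprint of a model's weights, in gigabytes."""
    return num_parameters * bytes_per_param / 1e9

for name, params in [
    ("LLaMA-7B", 7e9),
    ("LLaMA-65B", 65e9),
    ("GPT-3 (175B)", 175e9),
]:
    print(f"{name}: ~{weight_memory_gb(params):.0f} GB of weights at 16-bit")
```

Under these assumptions, the 7-billion-parameter model needs roughly 14 GB for its weights, while a 175-billion-parameter model needs around 350 GB, which is one reason smaller models are cheaper to serve.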
Meta states that it trained the model on written material from the 20 languages with the largest numbers of speakers, focusing particularly on languages that use the Latin and Cyrillic alphabets.
Nonetheless, Meta has not provided any guarantees that its language model won’t produce hallucinations, as other models do.
Check out other articles we’ve written about AI, such as our coverage of Google’s Bard and Anthropic.