Guest Post: AI: Meta Platforms No Longer the Laggard That It Was

Jun 10, 2023

Meta’s history with AI is poor. Between 2016 and 2020, most of its AI-related hiring was of human moderators to keep objectionable content to a minimum, because its automated processes were not nearly good enough to keep Facebook out of trouble. This had a direct impact on the company’s profitability, and it is partly why the company has recently been able to cut so much cost without any material impact on its operations.

We have heard and read a lot about the weakness of Meta Platforms when it comes to AI, but it has improved a lot over the last two years and now is in a position to offer a challenge to the leaders.

  • During 2020, signs started to emerge that at least in research, Meta Platforms was beginning to make a proper contribution towards the body of knowledge in AI.
  • This has continued, and although very little has shown up in its products to date, it has also demonstrated good progress in the development of large language models (LLMs) which underpin the latest chat services that everyone is so excited about.
  • Meta has an LLM called LLaMA, which exists in a range of sizes from 7bn to 65bn parameters, and these will underpin chatbots in Messenger and WhatsApp.
  • Versions of LLaMA will also be retrained to improve photo and video editing in Instagram and Reels, as well as for internal corporate processes.
  • However, where Meta has made its real impact is in the open-source community, where its LLaMA foundation models have become the standard upon which thousands of hobbyists and enthusiasts have been tinkering with generative AI.
  • The open-source community has also been quick to adopt new AI techniques that the big companies have not, which has given it the ability to do on laptops what OpenAI and Google still need data centres to achieve.
  • This has caused some consternation among the big companies, to the point that OpenAI is considering releasing the full version of GPT-3 with its weights to compete with LLaMA.
  • It is not clear how LLaMA reached the open-source community, but foundation models are very difficult to switch in and out, and all of the work currently going on is based on LLaMA.
  • LLaMA becoming the platform for open-source development means that Meta now has access to a very large supply of innovations built on top of its model, which it can use or build on to create other services.
  • This combined with the increase in the quality of academic research coming out of Meta Platforms is what has led us to upgrade Meta Platforms from a laggard to the middle of the pack.
  • In terms of pushing back the boundaries of AI, the two leaders remain OpenAI and Google, but Meta Platforms is now right behind them, alongside Baidu, ByteDance and SenseTime.
  • Part of the problem with assessing China is that the information flow around the development of cutting-edge technology in China has all but dried up due to the government’s moves to tighten national security.
  • Consequently, it is hard to say with any degree of certainty where Chinese AI developments lie, but given how quickly open source has managed to catch up, it is difficult to think that the Chinese are not also hot on the heels of the leaders.
  • Therefore, Meta Platforms has greatly improved its position in AI. Its models are rapidly becoming a platform for development in the open-source community.
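The laptop point above comes down to simple arithmetic: quantizing model weights from 16-bit to 4-bit precision cuts the memory needed roughly fourfold, which is what brings the smaller LLaMA sizes within reach of consumer hardware. A minimal back-of-the-envelope sketch (the 4-bit figure is an assumption for illustration, mirroring the weight-quantization approach popularised by community projects such as llama.cpp):

```python
def weight_memory_gb(params_bn: float, bits_per_weight: int) -> float:
    """Approximate memory needed just to hold the model weights, in decimal GB."""
    bytes_total = params_bn * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9

# The four LLaMA sizes mentioned above, at full fp16 vs 4-bit quantization.
for params in (7, 13, 33, 65):
    fp16 = weight_memory_gb(params, 16)
    q4 = weight_memory_gb(params, 4)
    print(f"{params}bn params: ~{fp16:.0f} GB at fp16, ~{q4:.1f} GB at 4-bit")
```

On these rough numbers, a 7bn-parameter model drops from ~14 GB of weights to ~3.5 GB at 4-bit, small enough to fit in ordinary laptop RAM — which is why hobbyists can run locally what previously looked like data-centre work. (This ignores activation memory and quality loss from quantization, so treat it as an order-of-magnitude illustration only.)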

(This guest post was written by Richard Windsor, our Research Director at Large.  This first appeared on Radio Free Mobile. All views expressed are Richard’s own.) 
