The Quest for Massive Recommendation Models: Why Does Meta Want Models 'Orders of Magnitude' Bigger Than GPT-4?
Meta, formerly known as Facebook, recently made a striking claim about the size of its upcoming recommendation models. In an attempt to shed more light on its content recommendation algorithms, Meta stated that it is preparing behavior analysis systems "orders of magnitude" larger than the biggest existing large language models, such as those behind ChatGPT and GPT-4. The statement raises questions about both the necessity and the implications of building such colossal models.
Meta periodically emphasizes its commitment to transparency by explaining the inner workings of some of its algorithms. While these explanations can provide valuable insights, they can also raise more questions than they answer. The latest announcement falls somewhere in between.
In addition to introducing "system cards" that explain the utilization of AI in specific contexts or applications, Meta released an overview of the AI models it employs. One of the challenges Meta faces is correctly recommending content based on nuanced differences, like distinguishing between roller hockey and roller derby in videos that may share visual similarities.
Meta has been actively involved in multimodal AI research, which involves combining data from various modalities such as visual and auditory cues to gain a better understanding of content. Although most of these models are not publicly available, they are known to be used internally to enhance "relevance," a term often synonymous with targeted advertising. However, some researchers are granted access to these models for study and analysis.
Unveiling the Vision for Enormous Recommendation Models
As part of its explanation of expanding computation resources, Meta revealed a fascinating insight:
"In order to deeply understand and model people's preferences, our recommendation models can have tens of trillions of parameters — orders of magnitude larger than even the biggest language models used today."
Upon further inquiry, Meta clarified that these models are still theoretical. The company stated, "We believe our recommendation models have the potential to reach tens of trillions of parameters." The hedged phrasing suggests the models do not yet exist at that scale, but it also makes clear that Meta is actively working toward building them, along with the infrastructure needed to train and deploy them efficiently.
While the exact size of GPT-4 remains unknown, and AI leaders acknowledge that parameter count is not the sole measure of performance, GPT-3, the model behind the original ChatGPT, has about 175 billion parameters. If Meta's claims hold true, its recommendation models could surpass even the wildest 100-trillion-parameter estimates floated for GPT-4. Even if the claims are somewhat exaggerated, the models Meta describes would be exceptionally large.
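To put those parameter counts in perspective, here is a back-of-envelope calculation of raw weight storage, assuming half-precision (2 bytes per parameter) and a conservative 10-trillion-parameter reading of "tens of trillions." These are illustrative assumptions, not figures Meta has published:

```python
def model_size_gb(num_params: float, bytes_per_param: int = 2) -> float:
    """Storage for raw weights alone, assuming fp16 (2 bytes/parameter)."""
    return num_params * bytes_per_param / 1e9

gpt3_params = 175e9   # GPT-3's published parameter count
meta_params = 10e12   # lower-bound assumption for "tens of trillions"

print(f"GPT-3-scale model:   {model_size_gb(gpt3_params):,.0f} GB")
print(f"10T-parameter model: {model_size_gb(meta_params):,.0f} GB")
print(f"Scale factor:        {meta_params / gpt3_params:.0f}x")
```

Even at the low end, the weights alone would not fit on any single accelerator, which is why the announcement emphasizes training and serving infrastructure as much as the models themselves.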
The concept of an AI model larger than any previously created is intriguing, to say the least. Imagine an AI system that consumes every action you take on Meta's platforms and predicts your future preferences and behaviors. It's undeniably unsettling.
The Age of Algorithmic Tracking and Recommendation
Meta is not alone in this endeavor. TikTok, for instance, pioneered algorithmic tracking and recommendation, building its social media dominance on its addictive feed of "relevant" content designed to keep users scrolling endlessly. Competitors are openly envious of TikTok's success in this regard.
Meta aims to impress advertisers with its scientific approach, both by proclaiming its ambition to create the largest model in the field and by showcasing technical details:
"These systems understand people's behavior preferences utilizing very large-scale attention models, graph neural networks, few-shot learning, and other techniques. Recent key innovations include a novel hierarchical deep neural retrieval architecture, which allowed us to significantly outperform various state-of-the-art baselines without regressing inference latency; and a new ensemble architecture that leverages heterogeneous interaction modules to better model factors relevant to people's interests."
While this paragraph might not impress researchers or users, its purpose is to captivate advertisers. Meta seeks to convince advertisers that it not only leads in AI research but that AI genuinely excels at understanding people's interests and preferences.
To prove the effectiveness of AI-driven recommendation systems, Meta points out that "more than 20 percent of content in a person's Facebook and Instagram feeds is now recommended by AI from people, groups, or accounts they don't follow." The statistic may sound impressive, but it also raises concerns about the extent of AI's influence over feeds and the potential for manipulating users' preferences.
The Dilemma of Precision Ad Targeting and User Experience
However, Meta's quest for ever-more-precise recommendation models also sheds light on the underlying machinery of Meta, Google, and other companies whose primary objective is selling ads with pinpoint targeting. The value and legitimacy of such targeting must be constantly reinforced, even as users become more skeptical and advertising proliferates.
In reality, Meta has never taken the straightforward approach of simply asking users about their preferences. Rather than presenting users with a list of brands or hobbies to gauge their interests, it monitors their online activity and serves targeted ads based on observed behavior, presenting it as an impressive feat of advanced AI when relevant ads subsequently appear. Whether this approach is actually superior to more direct methods remains uncertain.
The entire web ecosystem has been built around the belief in precision ad targeting. Now, with the deployment of the latest technologies, companies aim to fortify this belief to accommodate a new wave of skeptical marketing spend.
The Pursuit of Mammoth Recommendation Models
Meta's intention to build recommendation models "orders of magnitude" larger than current language models reflects its ambitious vision for understanding and predicting users' preferences. While the need for such enormous models might not be immediately apparent, considering the vast amount of content and metadata available, it becomes evident that Meta and other tech giants are driven by the desire to optimize targeted advertising.
However, this quest also raises concerns about privacy, the influence of AI, and the user experience. As users become more aware of the intricacies of algorithmic tracking and recommendation, companies must continuously emphasize the value and legitimacy of their targeting efforts.
In the end, the pursuit of massive recommendation models signifies a significant milestone in AI research and the ongoing evolution of advertising. As users, we must critically analyze the implications and consequences of such advancements to ensure our preferences and privacy are adequately protected in this algorithmically driven era.