Meta's Emu Algorithm: A Murky Descent into AI's Ethical Quagmire
Meta's deployment of its new image-generation model, Emu, demonstrates not progress but a retreat into a shambolic quandary of illogical criteria and misunderstood semantics. Designed to enhance conversational expression across Facebook, Instagram, WhatsApp, and Messenger, Emu translates users' text prompts into customizable stickers for use in chats. Yet the model's inconsistent censorship reveals the company's troubling lack of control over generated content, offering a less-than-reassuring glimpse into a technology-laden future.
While some prohibited phrases are judiciously blocked, rewordings that carry the same implications often elude the AI's filters. The tool catches the phrase "child with gun," for instance, yet bizarrely generates stickers depicting children with grenades and firearms when the prompt is rephrased. Such disquieting inconsistencies extend to phrases invoking historical atrocities and contentious figures: "Pol Pot" engenders a grotesque image of the despot atop a pile of skulls, and "Syria gas attacks" unleashes a collection of gas masks. All the while, comparatively benign queries like "Elon Musk, large breasts" are inexplicably caught and censored.
The sticker generator is built on Llama 2, a model Meta developed and released in partnership with Microsoft, though one wonders whether that collaboration was premised on substance or style. Tama Leaver, an internet studies professor at Curtin University, put the tool to the test, with results underscoring its apparent lack of sophisticated understanding of context and cultural nuance.
Meta's AI-tool mishap raises critical questions about the decision-making processes behind such technologies and offers a sobering reminder of the inherent dangers of unvetted AI applications. The safeguards Meta employs show a stark disconnect from the realities they are supposed to protect against, resembling Swiss cheese more than the robust guardrails they should be. And while Emu caters to a global audience, the wider suite of Meta's AI tools remains limited to the U.S. market, raising substantial concerns about regional and cultural blind spots.
This saga underscores the need for tech giants like Meta to adopt more stringent, context-sensitive protocols before sending "billions of stickers" into online conversations worldwide. Rigorous refinement of AI systems is essential to mitigate such farcical outcomes and to ensure the responsible, ethical application of technology in our increasingly automated social interactions.