I am not Latino,
I’m a human that stands for truth, justice, freedom, fairness, merit, dignity, respect, progress and determination,
My name might be Rodriguez,
But that’s only because my blood was conquered by imperialists pillaging the Caribbean with slaves in tow,
My spirit comes from the bows of those ships,
Aspoao y sazonao con un chin de alegría,
Although my Iberian nose might fool you,
My heart beats to the drums of Africa,
And yet my conscience was layered at birth with the bravery and boldness of the stars and stripes.

I opt out of the identity title that was ascribed to me by marketers,
For I am what I stand for,
What I put my passion behind,
How I carry myself,
And what and who I serve.

Spare me your tribalist identity and social group,
My identity needs no rationale nor label,
I am but a man,
Out to make my mark in this world,
Allying with the transformatively impactful individuals who balance both divinity and rationality on their shoulders,
Who wake up with a light and fight to serve something greater than themselves.

You see, I am not Latino,
Neither are you white, black, brown, red, yellow, or trans,
I am interested in what you are willing to die for,
Not what color or category you ascribe to.

Please, stop telling me what you are,
Who you love,
And what club you’re in,
Instead, show me what you’re doing with your precious life,
The rest is self-explanatory.
The family dinner table, the coffee shop, the tabloid, and the scholarly publication are all mashed together into small image-and-text headlines scrolled past by billions around the world. The convergence of opinions, news, and entertainment on social media forms the bedrock of modern discourse. The importance of regulating these platforms has never been more evident. Accusations of manipulation by billionaire founders, political factions, and corporate interests highlight the pressing need for reform. While the extent of such manipulation may be debated, the underlying truth remains: to ensure a fair and balanced discourse, algorithmic transparency is essential. The current models of governance, content moderation, and platform management are inadequate, and without reform, the legitimacy of social media as a forum for public debate will continue to be undermined.
The Illusion of Governance: Failures of Current Moderation Models

Social media companies, in their attempts to regulate content, have adopted governance models that rely heavily on content moderation teams. These teams, often composed of contract workers, are tasked with filtering vast amounts of information. However, this approach has proven to be both slow and costly. More critically, it lacks the transparency necessary to build public trust in the filtering decisions being made. The opacity of these processes fuels skepticism and conspiracy theories, as users are left in the dark about the criteria used to judge content. A report by The New York Times notes that “the moderation process is so opaque that even the most informed users cannot discern why some content is allowed and other content is removed.” Elon Musk, the owner of Twitter, has proposed a shift towards a more transparent and community-driven governance model, akin to Wikipedia. Whether this was a noble move by Musk, or a means to cut costs and fire ideologues, remains to be seen. Wikipedia’s model of open governance and community moderation has been lauded for its democratic approach. However, it is not without flaws. As highlighted in a BBC News investigation, Wikipedia has faced “serious accusations of bias and manipulation by certain editors, some of whom have ties to political or state entities.” This underscores the challenge of balancing open governance with the integrity of the information presented. Musk’s vision for Twitter, while ambitious, has yet to reach the level of balance required for effective fact-checking and unbiased opinion sharing.

The Power of Algorithms: The Need for Transparency

The algorithms that power social media platforms are designed to maximize user engagement, often by exploiting psychological tendencies towards hyperbole and sensationalism. This creates a feedback loop where the most attention-grabbing content is promoted, regardless of its accuracy or fairness. As noted by The Guardian, “the engagement-driven algorithms of platforms like Facebook and YouTube have been shown to amplify misinformation and extreme viewpoints, creating polarized environments.” The result is an environment where misinformation can thrive and echo chambers are reinforced, isolating users from opposing viewpoints. Algorithmic transparency is crucial to breaking this cycle. By revealing how content is prioritized and filtered, platforms can empower users to make informed decisions about the information they consume. Transparency would also allow for greater accountability, as platforms could be held responsible for the effects of their algorithms on public discourse. Allowing users to modify their feed would then empower people to do as they once did: simply change the channel.

The Influence of State Actors and Corporate Interests

The influence of state actors and corporate interests on social media is another critical issue that regulation must address. Platforms like TikTok, for example, have been widely criticized for their ties to the Chinese government.
A report by The Wall Street Journal highlights that “TikTok’s algorithm is not just a powerful tool for engaging users but also a potential instrument for state influence, particularly concerning the spread of content that aligns with Chinese state interests.” In China, the app provides a very different experience than it does in the United States, reflecting the Chinese government’s desire to manipulate public opinion both domestically and internationally. The potential for foreign influence on platforms used by millions of Americans is a significant national security concern. Similarly, the role of affiliate marketers and influencers in subtly shaping discourse raises questions about the integrity of the information being disseminated on these platforms.

The Integration of Public and Private Spheres: The New Digital Public Square

Social media has become more than just a platform for entertainment; it is now the primary space for public discussion on a wide range of issues. It combines elements of traditional media, such as television and newspapers, with the personal interactions typically found in coffee shops, restaurants, and even at family dinner tables. As noted by The Washington Post, “social media platforms have blurred the lines between public and private spheres, making regulation of these spaces all the more critical to ensure they serve the public interest.” This merger of public and private spheres into a single digital space makes the regulation of social media all the more urgent. The platforms that host these discussions hold immense power but currently operate with little to no responsibility for the content they promote. Advertising revenue and the sophisticated technology stacks that optimize these platforms have driven the focus of social media companies towards profit rather than public good. However, if we accept that the public square is now online, then these platforms must be held to higher standards, both to prevent censorship by plutocrats and to limit the spread of misinformation. Discourse in the digital town square must not be an echo chamber subject to undue influence by state actors and corporate interests, governed by a black box algorithm.

Whatever happened to the everyman’s voice,
The one who stood for the farmer’s choice?
For the city folk who rise at dawn,
Working hard, their dreams not yet gone,
A better roof for their children’s heads,
A hospital bed, where hope is spread and medicine is plenty,
A school to enlighten, food to nourish,
Roads to tread, dreams to flourish.
A train to go from coast to coast,
And a light of opportunity and righteousness,
But now, as I look at this election’s stage,
I’m perplexed, lost in this modern age.
A plutocrat arming the mob as his ear is grazed,
Claiming no fear, he promises to drain the swamp,
But is this our hero, or a stain on the jeans of our ideals?
The strong man claims he’ll help us all,
Boom times and oil, his siren call,
But his record’s clear, with his pockets lined,
He’ll leave us with a bag of coal behind.
He rallied the freedom fighters,
They kicked their feet on the desks of the Senators in dignity,
Is justice found in a lynch mob’s cry?
Meanwhile, Kamala waves the flag of the free,
But left a trail of her people behind bars,
Moneyed interests pull the strings both left and right,
But the blue ties clip wings just the same,
For the everyman, what’s left not to despise?
When those in power spin only lies.
Talking down, locking up,
Little Johnny turns into Little Suzy, filling the cup of excuses,
A mockery that leaves us on our knees begging for mercy.
So, what happened to the everyman’s dream,
Where life could be more than it seems?
To raise a family, to grind, to soar,
To push science beyond the shore.
Fireworks that stand for more,
Than flashes of dynamite and war.
I’m still searching, looking for truth,
For he, or she, or it, or the collective youth.
But by then, will it be too late?
The planet burning, sealing our fate,
No life left to see the new day,
The monsoon of the fat cat, washing us all away,
Vanquished, back to biology’s stew.

Modern architecture is making us sick. Penguins at the London Zoo developed foot disease because of an awkward ramp that looked aesthetically pleasing but left their feet so sore that they eventually started plunging to their deaths. To make matters worse, male penguins were spotted raping dead penguins in the pool that lay below.
While it may dazzle the eye, modern architecture poisons the soul. As you sit in your glass house and skyscraper office, you may feel a twinge of emptiness. You have, after all, done away with embellishment and shadows, and embraced what the Japanese call ma. Death is not far off. The void mistaken for meditative stillness swirls, and you think you are Steve Jobs because your computer is built into the wall and has no wires. Cool. Modernism stands for the negative: the absence of clutter, so as to leave one unencumbered by uber-Zen simplicity. So shocking in its vacuum-like grandeur, the impression that these great works leave us with is similar to the penguins’ fate: a plunge from great heights, and then, the unthinkable. You are ravaged by the architect who saved coin on building materials by removing all embellishments, only to overcharge you for the uncomfortable IG-sexy furniture. If we continue to fixate on a style that is effectively a golf tee for a screen that shows us everything, please remember the penguins.

President Biden’s last wish to restore neutrality to the Supreme Court is naive. Putting aside his memo’s pleas on the shifting line of presidential immunity, on shortening the lifetime tenure of Supreme Court justices, and on an ethics code covering bribery (because, after all, it was the vacations that made Clarence Thomas vote the way he has…), the bitterness of Biden’s last wish is doused with an uncertain aftertaste: whether justice can ever be even-handed. The debate over the moral principles of law, whether they should derive from the Constitution, the popular vote (the will of the people motivated to leave the couch), the jury, or the educated elite, is a multi-dimensional pendulum that has recently been strangled by the Supreme Court and cloaked in its deft, highbrow rulings.
The United States legal system is a mix of laws passed by Congress and the evolution of case law interpreting those laws in light of the Constitution and the “common law” that has developed through the courts dating back to Britain. This spaghetti of laws, and of applications of those laws to specific circumstances, forms legal precedent. The application of legal precedent is where the problem lies. It is here that judicial discretion seeps in, and many people tune out of the analogies that lawyers cut their teeth (and line their pockets) on. Many precedents involve discretionary weighing of factors and the drawing of hard lines. There is often a reference to the “reasonable man,” an attempt to bring criteria to an inherently subjective task. As we’ve seen in cases of abortion, guns, antitrust, and other pressing issues of our time, even those affecting the legitimacy of democracy, such as voting rights and gerrymandering, it is plain for anyone to see that the judiciary is using discretionary factors and the “ordinary reasonable person” standard to insert its own morality. Nine men and women appointed by the President and approved by the Senate govern the most important issues of our nation. They do so by cloaking their discretion in novel arguments that do not make for news headlines. Some of the Court’s favorites are: determining that an issue is a state, not federal, matter (because, after all, when the country started the federal government left the states alone in many matters) and thus allowing conservative states to ban abortion; punting the issue back to a lower court to resolve a relatively obscure legal question, knowing exactly how the lower court will decide it; refusing to hear an issue (the Supreme Court picks its cases) so as to leave no record of its agreement with the lower court’s ruling; stating that it is an issue for Congress, knowing that Congress is gridlocked and will never decide it; and switching to a convenient canon of constitutional interpretation (“original” v. “living”/evolving with cultural values, literal v. metaphorical) that produces the outcome it seeks. Many disgruntled, uneducated young men are knocking at the gates, but the Supreme Court does not seem to care. Americans agree on many issues, yet the judiciary fails to overcome the ineptitude, acquiescence, indifference, or adverse interests of Congress that prevent it from asserting the will of the people. As David Cole wrote in Engines of Liberty, the Court has occasionally acted as a pressure release valve for citizen advocate groups, as seen in the landmark Obergefell ruling recognizing same-sex marriage. But in many other instances foundational to democracy, like the ruling in Citizens United that shapes campaign finance, the Court appears content with undemocratic principles. Whether or not you believe human nature is inherently tribal and adversarial, thus rendering an unbiased Supreme Court unattainable, it remains incumbent upon us to construct an imperfect system that constrains judicial discretion while also resolving fundamental issues of fairness and capturing the will of the people. We would be better served by making the convoluted algorithm more transparent and removing the sophists from the bench altogether. Biden’s last wish may be the passing plea of a man aging, tired, and witnessing the most powerful nation in the world growing ever more divided. The discretion of the Court, and the very system of checks and balances that strives to ensure fairness, is failing.
It is this delicate balance that makes America a beacon of hope, demonstrating that government can reconcile fairness with freedom. And yet, the old man has bitten off more than he can chew.

Texas is attracting more large companies than any other state. Texas is the eighth largest economy in the world. And yet, the Texas miracle has a burgeoning underbelly of inequality that is giving more credence to the latter half of the Texas moniker “live free, die hard.” So much for the free-living part.
States and countries have long played the game of luring businesses with low taxes and less regulation. The belief is that the rising tide of a growing economy lifts all ships. Research suggests the opposite. In fact, the benefits are short-lived, and the tide only rises for the ships in the right harbor. In Texas, breakdowns in infrastructure, overcrowding, and rising costs of housing cast a shadow over the economic boom, and these issues are often not resolvable without raising taxes. Texas demonstrates why this aphorism is a ruse. It is a trojan horse that promises a better life to the everyman while keeping away Uncle Sam, but results in widening the wealth gap.

The Lone Star State is authentic in its principles. Texas has long presented itself to businesses as a low-tax jurisdiction, less encumbered by regulation and owned by oil barons. Its politically conservative roots outside of the Austin bubble were a deterrent for many, but now there is a strong draw for the “anti-woke” and free speech absolutists who believe that the left is sensationalizing minority issues and pressuring media to self-censor. Elon Musk claimed that California laws on gender identity were “the last straw” in deciding to move his companies to Texas. Culture wars aside, the real draw for businesses is the combination of low taxes, low regulation, and a talent pool of workers. As former Texas Governor Rick Perry recently said, “The solid rock that Texas built its foundation on economically was: don’t overtax, don’t over-regulate, don’t over-litigate and have a skilled workforce…” So far, so good. But he goes on to say, “...that’s the foundation — and then we added to it.” Despite an influx of business, the Financial Times notes that “Texas ranks in the bottom 10 US states for educational attainment, has the highest proportion of people with no health insurance and among the highest rates of child poverty, at about 20 per cent.” The transportation infrastructure is beginning to crack, too. Texas’ shortsightedness in attracting business is common. Take the hypothetical: a computer server company (“Company X”) wants to move into town. Politicians claim that thousands of high-paying jobs will be created by attracting this “high tech” company. The model is as follows: the state offers tax abatements and subsidies, the company promises jobs and capital investment, and politicians take credit at the ribbon cutting. Wisconsin ran this very play in 2017, luring Foxconn with roughly $3 billion in state incentives in exchange for a promised $10 billion LCD factory and 13,000 jobs.
Fast forward to 2018, and the grand vision began to unravel. Despite Wisconsin’s subsidy ballooning past $4 billion, the massive factory intended to produce 75-inch LCD panels for TVs had yet to materialize, and still hasn’t. Instead, Foxconn shifted its focus to a smaller facility dedicated to producing smaller LCD panels. The 13,000 jobs? Largely unfilled, with the few hires made consisting mostly of “knowledge workers” set to build an enigmatic and jargon-filled ecosystem dubbed “AI 8K+5G.” This reimagined vision bore little resemblance to the ambitious original plan, leaving Wisconsin holding the bag while Foxconn made out like a bandit. Our representatives must come around to the idea that ribbon cuttings and job “promises” are less important than a concrete plan to usher in long-term economic growth that will be widely distributed and fill up the coffers to ensure that the education, transportation, health, and security of citizens are not traded away for a martini.

Language Models
ChatGPT has the AI revolution in full swing. The question is, have we really solved the fundamental problem of accessing information and communicating it effectively? It is exciting that we can now have rich dialogue with a computer. For two decades, we’ve asked Google what the most relevant website is for our query. Now, we ask the wise man GPT. And we can now instruct the wise man to write for us. As exciting as this is, we are making an errant assumption: that GPT is wise. The LLM that GPT is based on does not think. It looks for text sequences in its dataset (which are not all trusted authorities or great works of authorship) that map to users’ text inputs. There is no reasoning going on here. Its outputs are not a formulation by scholars, like an encyclopedia, or a ranking of relevance, like Google search. Thus, as a societal matter, we should be wary that rapidly increasing the pace of content creation based on information that may neither be factually accurate nor serve to inform the reader could crowd out the very information that made the internet useful.

There are solutions. The goal should be to create a system that can synthesize information, make it easier to find the trusted authorities, reason through it, offer up coherent perspectives, and, like GPT, author works. To accomplish this, we have to tackle some underlying technical challenges.

First, anyone who has sat at the backend of a search engine observing user queries has realized that a ton of searches are vague, brief, and/or ambiguous. When I handed friends an app with a search bar for image generation, they searched for things like “soccer,” “woman dancing,” and “dog with flowers.” If you asked an artist to draw one of those descriptions, he would likely scrunch his forehead before peppering you with clarifying questions. There are ways to predict certain things about what you might mean based on your previous searches, previous searches by other users, and external data. However, like the artist, the algorithm cannot read your mind. In sum, there is a “garbage in” problem. As an attorney, a significant portion of my job is asking follow-up questions to gather more information. Asking context-specific questions is a task of gathering necessary information while making a probability calculation of whether the person is willing or able to be more specific. Many journalists will attest that it’s typically easier to get someone to tell you a story than to describe something with specificity. And yet, software engineers are often fearful of users leaving the app if the interface puts up too much friction by asking clarifying questions. Nonetheless, without solving the “garbage in” problem, the outputs will continue to be inaccurate, random, and/or not very useful.

The second challenge is structuring software to contextualize and synthesize information. The Onion, a humor publication, is not the same as the Harvard Medical Journal. Double-blind studies are not the same as pop psychology claiming that alcohol is healthy. These rules and hierarchies of authorities are teachable, and therefore they are programmable. Relating information from different domains, formulating complex hypotheses, and assembling experiments is something that deep learning is well equipped to do. Deep learning, and more broadly machine learning, is a category of statistical algorithms. LLMs are merely one tool in the toolkit, but at the step of contextualizing and synthesizing, they are the wrong tool.
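As a rough illustration of why such hierarchies are programmable, here is a minimal sketch in Python. The tier names, weights, and data structures are hypothetical, not any production system:

```python
# A minimal sketch of a programmable source hierarchy (all names and weights
# are hypothetical): retrieved passages are re-ranked by the authority tier
# of their source before anything is asked to synthesize them.

AUTHORITY_TIERS = {
    "peer_reviewed_journal": 1.0,  # e.g., a medical journal
    "major_newspaper": 0.7,
    "personal_blog": 0.3,
    "satire": 0.0,                 # e.g., The Onion: never treated as evidence
}

def rerank(passages):
    """Order passages by relevance weighted by the authority of their source."""
    return sorted(
        passages,
        key=lambda p: p["relevance"] * AUTHORITY_TIERS.get(p["source_type"], 0.1),
        reverse=True,
    )

passages = [
    {"text": "Alcohol is healthy!", "source_type": "satire", "relevance": 0.9},
    {"text": "A double-blind study finds no net benefit from moderate drinking.",
     "source_type": "peer_reviewed_journal", "relevance": 0.8},
]
print([p["text"] for p in rerank(passages)])  # the journal outranks the satire
```

The weighting itself is trivial; the hard, human work is agreeing on the tiers. But once agreed, the hierarchy becomes code.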
Finally, expressing information in the most digestible and compelling way is another hard problem. Think about the difference between how your kindergarten teacher spoke to you, how a poet laureate writes, and how an attorney advises her client. On the one hand, there is colorful wording and a variety of rhetorical tactics (narratives, analogies, and metaphors among them) that allow you to connect with the language. On the other, there is a spectrum running from specificity to understandability. An explanation can generally be simple and reductive or complex and detailed, but not both. A superior solution would allow users to adjust the level of detail they desire. But the reduction of detail should never distort the meaning. This is a particular concern for specialized domains like law and medicine, where the risk of error is very high. Solving these challenges is the next frontier in language models. It is how we make the internet a better encyclopedia, instead of a graveyard of word sequences mashed together in a gazillion-parameter language model with a human face. Oh, Socrates, where art thou?

Meta, the company formerly known as Facebook, issued an update on its use of facial recognition technology. In the update, Meta announced that it will shut down its facial recognition system as part of a company-wide move to limit the use of facial recognition in its products. It is noteworthy, however, that although these features will no longer be accessible to end users, Meta can likely flip a switch at any moment and utilize the facial recognition capability it has developed up until now. Although the announcement indicates that Meta will delete more than a billion people’s individual facial recognition templates, that does not mean that its algorithm will no longer be able to recognize these faces.
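The next paragraph explains why. As a minimal sketch (a toy scikit-learn stand-in, not Meta’s system), note that a model keeps its learned weights even after the training files are deleted:

```python
# Minimal illustrative sketch: a model trained on face data keeps its learned
# weights even after the training files are deleted. Toy random data stands in
# for face templates and identity labels.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
face_embeddings = rng.normal(size=(1000, 128))  # stand-in for face templates
identities = rng.integers(0, 10, size=1000)     # stand-in for name labels

model = LogisticRegression(max_iter=1000).fit(face_embeddings, identities)

del face_embeddings, identities                 # "delete" the templates

new_face = rng.normal(size=(1, 128))
print(model.predict(new_face))                  # the trained weights still work
```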
Artificial intelligence facial recognition algorithms train on data sets that map photographs to names or other personally identifying information. After the algorithm’s “weights” are trained (in this case, on a billion faces), deleting the image files will not reduce or in any way hinder the already-trained algorithm. What Meta is saying in its announcement is that it likely doesn’t need these files anymore, and to make the public think this custodial act of deleting the files is a benevolent one, it issued a public address.

GPT-3’s emergence as a state-of-the-art natural language processing algorithm has drawn headlines suggesting that lawyers are soon to be replaced. As a lawyer who spent the last year studying machine learning, I decided to put GPT-3 to the test as a legal summarizer to evaluate that claim. In this experiment, I input three excerpts of legal texts into GPT-3 to summarize: LinkedIn’s Privacy Policy, an Independent Contractor Non-Disclosure Provision, and the hotly debated 47 U.S. Code § 230 (“Section 230”).
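For readers who want to reproduce a test like this, here is a minimal sketch against the GPT-3 completion endpoint of that era (the pre-1.0 openai Python library; the file name, prompt, and parameters are illustrative, not the exact ones used in the experiment):

```python
# Illustrative sketch using the pre-1.0 openai library's completion endpoint.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

# Hypothetical file holding one of the legal excerpts.
legal_text = open("section_230_excerpt.txt").read()

response = openai.Completion.create(
    engine="davinci",                  # the original GPT-3 base model
    prompt=legal_text + "\n\ntl;dr:",  # the era's common summarization prompt
    max_tokens=120,
    temperature=0.3,                   # low temperature for sober summaries
)
print(response.choices[0].text.strip())
```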
Do you ever open your phone and scroll through a news feed after pressing shuffle on Spotify, before moving on to flicking across the Netflix lineup for something to pass the time? Video stores, music stores, libraries, and the newspaper have been reduced to a scrollbar. What appears to be a vast sea of information and entertainment available at the click of a button is instead spoon-fed to you by a statistical formula called a recommender system. Recommender systems are, in theory, designed to recommend content you may like based on content you have consumed. In practice, recommender systems hamper creative exploration and reinforce ideological entrenchment by myopically evaluating your digital activity and categorizing your interests. After briefly describing how recommender systems function, I explore their pitfalls. My central argument is that we must vigilantly govern digital platforms’ recommender systems because they serve as the go-to sources of information and entertainment, setting the outer limits of creative exploration and truth seeking. Because access to information and art is critical to democracy in a modern society, the gateway platforms must be transparent in their methodology and allow users more choice. Recommender systems based on algorithmic transparency and user input can realize the unfulfilled potential of the internet by bringing the world’s content to our fingertips. I write not as a nostalgic luddite longing for the return of video stores and Walter Cronkite, but as a concerned citizen who wants the digital age to encourage creative exploration and ideological exposure.

Inside the Recommender Black Box

Recommender systems record what content users consume and apply statistical formulas to determine what they are probably willing to engage with. It is important to recognize that recommendations are not always explicit, under category listings titled “recommended for you.” They are the very ordering of the movie titles and news articles you see on your phone and computer every day. There are two main recommender formulas that platforms commonly employ: content-based filtering and collaborative filtering. Content-based filtering takes examples of what you like and dislike (comedy films, female lead vocals, news about COVID-19 cures) and matches you with similarly categorized content. Collaborative filtering matches your preferences with people of similar consumption patterns and recommends additional content that those people have consumed. Both collaborative filtering and content-based recommender systems use what you have consumed as a proxy for what you like. They interpret your interactions on the platform, sometimes combining that data with your interactions on other parts of the internet, to form a profile of your preferences. For example, one method is to observe which videos you click on and finish as a proxy for which videos you prefer. A better method would involve rating videos upon completion. An even better system would require a rating and a reason why you did or did not like it, but participation in such surveys is spotty. Thus, actions such as playing a song twice or slowly scrolling through an article serve as workable proxies for what you like.
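To make the two formulas concrete, here is a minimal toy sketch in Python. The item names, features, and consumption counts are invented for illustration; real systems operate on millions of dimensions, but the arithmetic is the same:

```python
# Toy sketch of the two recommender formulas described above.
import numpy as np

def cosine(a, b):
    """Similarity between two vectors: 1.0 means identical direction."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# --- Content-based filtering: items as feature vectors
# (features: [comedy, female_lead_vocals, covid_news])
items = {
    "standup_special": np.array([1.0, 0.0, 0.0]),
    "pop_ballad":      np.array([0.0, 1.0, 0.0]),
    "vaccine_article": np.array([0.0, 0.0, 1.0]),
}
your_profile = np.array([0.9, 0.1, 0.0])  # built from what you clicked
print(max(items, key=lambda k: cosine(items[k], your_profile)))  # standup_special

# --- Collaborative filtering: users as rows of item-consumption counts
user_item = np.array([
    [5, 0, 1, 0],  # you
    [4, 0, 2, 3],  # a similar user who also consumed item 3
    [0, 5, 0, 0],  # a dissimilar user
])
you, others = user_item[0], user_item[1:]
nearest = max(range(len(others)), key=lambda i: cosine(you, others[i]))
# Recommend what your nearest "consumption twin" consumed that you have not:
print(np.where((others[nearest] > 0) & (you == 0))[0])  # item 3
```

Notice what the sketch makes plain: both formulas can only ever point you back at your own history or at your twin’s. Nothing in the math introduces the genuinely new.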
Problem One: Privacy & Control

Although this method of preference gathering is imperfect (what if you were cooking and could not skip a song?), the more you consume, the better the platform can determine what you will consistently engage with. By adding data from outside the platform itself, the mosaic of your preferences gets more complex, forming a profile of your defined tastes and your predictable ones. Generally, the more data on your preferences the algorithm has as input, the more tailored a recommendation it can make. That is a slippery privacy slope, as an enormous data profile can be amassed over time by observing your habits in an effort to order your newsfeed or Spotify playlist. Taken to the extreme, the best recommender would be inside your head, monitoring your thought patterns. We are already half-stepping towards this today. Smart speakers listen to conversations, camera-embedded smart TVs can record reactions to content, web scrapers devour social media feeds and online affiliations, and Google logs internet searches. If left unchecked, the level of psychological exploitation will inevitably grow to capture passing unexpressed thoughts and pinpoint what type of stimuli makes you happy, makes you sad, and makes you want to continue consuming. The more that news, music, movie, and shopping platforms dominate our attention, the more computers and phones turn from tools into content shepherds that subtly steer us. What that means in practice is that we are ceding the right to filter information and media content to the companies that run the major content platforms. The complexity of these recommendation systems obscures the fact that there are people behind the wheel of the algorithms they employ, even if their systems are semi-autonomous. If psychological profiles developed for recommender systems were shared with governments, employers, and schools, this same preference data could be used to discriminate and manipulate. Personalized profiles could become the basis of institutional access and job opportunities, not to mention the potential for psychological manipulation of the population at large. By sharing all of your platform usage and content choices, you are accepting that a picture will be painted of you that may or may not represent you, and you will see content through a lens you will not know was prescribed for you.

Problem Two: Loss of Uniqueness

Collaborative filtering does not consider people to be unique. By design, it attempts to match users with other users who share similar consumption patterns. However, there are preferences, such as those based on unique life experiences, that are shared with no one else on the planet. Even if you share 95% of your consumption in common with another profile, would you not prefer the freedom to scan the random content universe when you go searching for creative inspiration on Spotify, or for the truth in your news feed, as opposed to being matched with your 95% consumption twin? Collaborative filtering makes the 95%-similar person’s consumption habits the entire universe of news, music, and shows available on the screen. Consider that your friends are likely not anywhere near 95% similar to you in consumption patterns, and yet they can serve as an excellent source of recommendations. Why? Take news articles, for example.
You may disagree with a friend about a certain political candidate, but you may also be interested in the normative arguments they make by analogy to their life experience growing up in a small town in Mississippi, and you may be intrigued by the factual evidence they cite in favor of tax breaks for big businesses as a means to generate economic growth. That builds trust in their recommendation. Based on their analysis, you might be willing to read an article they recommend discussing an instance when big business tax breaks led to a boom in economic growth and the rise of certain boomtowns. You can evaluate the value of your friend’s recommendation considering what you know about them, as well as your knowledge of history and economics. Collaborative filtering builds no such credibility of authority, nor offers a synopsis of its reasoning. It removes the rationale from the recommendation. The collaborative filtering universe of options does not expand unless your digital twin searches for something outside of their previously viewed content. In other words, relying on collaborative filtering for recommendations ensures there are no hidden gems in your shuffle playlist, and no diversity of opinion in your newsfeed, unless your soul sister from the ether searches for something outside of the recommender system.

Problem Three: Preference Entrenchment

Content-based recommender systems can broaden the collaborative filtering universe of recommendations by going beyond your consumption twins, but they myopically focus on the categorical characteristics of content you previously consumed. By doing so, content-based recommenders entrench you in your historical preferences, leaving no room for acquiring new tastes or ideas. Consider that if you have only ever listened to electronic dance music on Spotify, you will be hard pressed to get a daily recommendation of jazz. More likely, your playlist will be a mile-long list of electronic dance music. To get a recommendation for a new type of music, you have to search for it. Unlike record shopping of old, you will never see the album art that catches your eye on the way to the electronic music section, or hear a song playing in the background of an eclectic record store. This preference entrenchment problem has deleterious consequences for the dissemination of news. For example, a frequent New York Times reader who only clicks on anti-Donald Trump articles is likely only to be recommended more articles critical of Donald Trump. There are no easily accessible back-page articles in a newsfeed. The recommender system does not allow for an evolution of political views because it looks only at users’ historical preferences. That is a problem, because one may not yet have formed an opinion on something one has not been exposed to. By tagging what you consume categorically, you are being involuntarily steered and molded into categories that may not represent your interests now or in the future.

Problem Four: Hyper-Categorization of Preferences

Preferences are not always so clear cut. Data scientists love to define ever more granular categories of content to pinpoint preferences. The architect of these feature definitions is usually a data scientist, occasionally aided by an expert in the field. The data scientist’s objective is to train a machine learning algorithm that automatically categorizes all the content on the platform, roughly as in the sketch below.
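The following is a toy sketch of that objective, with hypothetical features and labels standing in for real audio analysis: a model learns from a small expert-labeled sample and is then let loose on the whole catalog.

```python
# Toy sketch of automated content tagging: train on a small expert-labeled
# sample, then auto-tag the rest of the library. Random data stands in for
# real audio features (e.g., tempo, spectral statistics).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
labeled_features = rng.normal(size=(200, 16))   # 200 expert-labeled songs
genre_labels = rng.integers(0, 4, size=200)     # expert-assigned genre IDs

tagger = RandomForestClassifier(n_estimators=50).fit(labeled_features, genre_labels)

# The rest of the catalog, tagged automatically (spot-checked manually).
catalog_features = rng.normal(size=(100, 16))
print(tagger.predict(catalog_features)[:10])
```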
For example, Spotify might employ a musician to break down songs by genre, musical instruments, tempo, vocal range, lyrical content, and many other variables. The data scientists would then apply those labels to the entire music library using an algorithm designed to identify those characteristics, occasionally spot-checking manually for accuracy. But are those categories the reasons why you like a given love song? Or might the lyrics have reminded you of your ex-girlfriend or grandmother? The mechanical nature of breaking content down into feature categories often weighs superficial attributes over the ones we really care about. The flaw in the mechanical approach is subtle: by focusing on the qualities of the grains of sand (song features), recommenders fail to recognize that they combine to form a beach. In context, a great movie is not great merely because it is slightly different from another with a similar theme and cast. It is great because of the nuanced combination of emotional acting, a complex story, evolving characters, and a climax no one could have anticipated. Similarly, local artists are not interesting merely because they are not famous. Their music might be interesting because their life influences and musical freedom make their music rawer than what is frequently listened to on Spotify. Those are difficult characteristics for a human to describe, let alone a recommender algorithm. Recommenders struggle with descriptions like “beautiful,” “insightful,” and “inspiring” because they are descriptions of complex emotions. Their bias towards clear-cut categories and quantifiable metrics makes them poor judges of art. It is no surprise, then, that recommenders are horrible at disseminating information. News articles can be broken down by easy-to-identify categories such as publication source, quantity of links to other articles (citation count), or mentions of politicians’ names. Those categories might loosely relate to quality and subject, but they hardly provide a basis for a recommendation. Moreover, these attributes do not capture the attributes that matter in news, such as truthfulness, good writing, humor, or demagoguery. Because news is constantly changing, it is even harder to categorize than a static domain like music or movies, which can be manually tagged ahead of time. Facebook, Google, and Twitter are in an ongoing battle to tag factual accuracy in news articles, particularly because it involves large teams of real people Googling what few credible information sources remain (instead of an algorithm trained to detect the presence of a trumpet in a song). The data scientists employed by these platforms seek to automate categorization whenever possible, but when truth lies in the balance, the stakes are too high. And yet, screening an ever-growing amount of information, the platforms are struggling to keep up, especially in complex subjects like COVID-19 cures, which require expertise to comprehend. At present, we are allowing recommender systems, aided by data scientists and contract-employee category taggers, to shape public perception. They do not appear to be helping expand access to information.

Problem Five: The User Interface is the Content Universe

Shifting from retail stores and newspapers to screen scrolling may have kept us from leaving the sofa, but it did not make finding new content an altogether better experience.
Strolling through the music and video stores of old allowed for quickly browsing spines and closely inspecting covers, and the sheer physical nature of the act made it feel more intentional. The advantages of the physical browsing experience are why DJs still love record stores and intellectuals will not let bookstores die. The face-out titles were the curated and popular ones. The more obscure titles were deeper in the stack. It may seem counterintuitive, but the inconvenience of digging made finding the hidden gems more rewarding. Although the all-you-can-consume subscription model conveniently allows for casual previews and skim reading, one can only finger-swipe through movie titles and cover pictures for so long before clicking. Thus, the order of songs, movie titles, and articles is highly influential. Clickbait media works because of the equal weighting of articles in a scroll feed. Tabloid news magazines used to stand out with their highlighter colors next to the candy bars in the checkout line at the supermarket. Today, they appear at least as often as the New York Times or the Economist in your feed. Recommender systems will reinforce clickbait tabloids over long-form journalism without batting an eye, simply because they are more frequently clicked on. Digital platforms fail to recognize that there is a degree of stewardship in curating news content. The content universe is, and always will be, curated. The status quo means trusting the data scientist architects and their recommender black boxes that influence what you see when you read the news with your morning cup of coffee, sit down at the end of a long day to enjoy a movie, or press shuffle to zone out while working. Moreover, it also means accepting that the elaborate apparatus for collecting your preferences will continue to the extreme of understanding your psychological programming, turning the digital universe into a happy-pill time drain, or worse. It is important to note that the biases of recommender systems are not always intended consequences. They are in part limitations of black box algorithms too often left unsupervised or under-scrutinized. Engineering oversights happen. Applying recommender systems to the structuring of newsfeeds shifted the way a large portion of the population sees an issue like the viability of vaccines, but that was likely not the platform architects’ intention. Nonetheless, it is the result of neglect by platform engineers, managers, and executives. The more authority over the floodgates of information is reduced to mathematical formulas contained in closed-source black box programs, the more likely this neglect becomes. It is vital that we push back to gain control of the digital universe. It is a misconception to think that we have a world of information and content in our pockets if every scroll is based on a recommender system backstopped by a small team of people screening fringe content. The internet dominated by platforms is creating shepherds, not moral stewards. We must subject platforms, and their algorithms, to democratic governance and require that they be transparent in their processes, to ensure that the potential for creative exploration and the dissemination of truth is enhanced by their emergence as a pillar of modern life.

Democratic Platform Stewardship: An Alternative to Recommender Systems

I do not intend to suggest that the internet should merely look like the back shelf of the library.
If, instead of being steered by recommender systems, users are given the reins to select their preferences, the system becomes a useful tool instead of a dictator of preferences. The categories recommenders use could easily be made available to users. Instead of endlessly harvesting data on users’ habits to feed recommender systems, users could select and choose their own preferences. In the interest of privacy, platforms could be required not to record those preference settings. The common big tech response is that the data they collect helps make services cheaper. Privacy and control are worth a few extra dollars a month. Another solution is altogether more democratic. Users could score content for quality, truthfulness, and other categories relevant to the medium. For a more intimate rating system, users could opt into friend and interest groups to get community recommendations that may deviate from popular opinion, or involve groups of credentialed critics. Smaller community discussions could provide useful background information and prevent majority groupthink and interested parties from dominating the narrative. These groups are prone to becoming self-isolating echo chambers, so it is important that they remain public. Encouraging discussion would remove us from the infinite scroll of provocative images and titles pushed by advertisers and reinforced by recommender systems. Newsfeeds require particular attention because they are key sources of information for many. One of the big takeaways from the Cambridge Analytica and Russian election interference scandals was that the propaganda that gained the most traction promoted ideological extremes and reinforced scapegoats. News feeds today are driven too much by an advertising model based on click-through rates and comment engagement, to the detriment of critical thinking and the dissemination of truth. Recommender systems left to their textbook formulas may be good for engagement, but they are bad for the spread of truth. That is not acceptable. One potential solution for newsfeeds is to have community-based experts score each news post for truthfulness and have users score it for ideology, with a tiered system of users that includes community-elected expert moderators whose scores are given extra weight. The presence of moderators can provide a check on exploitation by any party who might seek to influence the platform by voting with fake accounts. All news feeds should seek to be balanced in ideology, but always attempt to be truthful. Balancing ideology aims to properly give readers both sides of a given issue. Bias may be impossible to eliminate, but stewardship in the curation of the modern newspaper is essential. Balanced journalism, however difficult to achieve, must at least be strived for in the digital age.
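As a closing illustration of the tiered scoring idea proposed above, here is a minimal sketch. The weights and vote format are hypothetical, one possible scheme rather than a finished design:

```python
# Minimal sketch of tiered truthfulness scoring: votes from community-elected
# expert moderators count more than votes from regular users. Weights are
# illustrative and would themselves be subject to community governance.
MODERATOR_WEIGHT = 3.0
USER_WEIGHT = 1.0

def truthfulness_score(votes):
    """votes: list of (score_between_0_and_1, is_moderator) pairs."""
    total_weight = 0.0
    weighted_sum = 0.0
    for score, is_moderator in votes:
        weight = MODERATOR_WEIGHT if is_moderator else USER_WEIGHT
        weighted_sum += score * weight
        total_weight += weight
    return weighted_sum / total_weight if total_weight else None

votes = [(0.9, True), (0.2, False), (0.8, True), (0.3, False)]
print(round(truthfulness_score(votes), 2))  # 0.7: experts pull the score upward
```

Weighted votes alone will not settle what is true, but unlike a black box recommender, every term in this formula can be published, audited, and debated.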