
Why algorithms can be racist and sexist

An illustration of old-fashioned balance scales weighing data in their pans. Christina Animashaun/Vox

A computer can make a decision faster. That doesn’t make it fair.


Humans are error-prone and biased, but that doesn’t mean that algorithms are necessarily better. Still, the tech is already making important decisions about your life: potentially determining which political advertisements you see, how your application to your dream job is screened, how police officers are deployed in your neighborhood, and even your home’s risk of fire.

But these systems can be biased based on who builds them, how they’re developed, and how they’re ultimately used. This is commonly known as algorithmic bias. It’s tough to figure out exactly how systems might be susceptible to algorithmic bias, especially since this technology often operates in a corporate black box. We frequently don’t know how a particular artificial intelligence or algorithm was designed, what data helped build it, or how it works.

Typically, you only know the end result: how it has affected you, if you’re even aware that AI or an algorithm was used in the first place. Did you get the job? Did you see that Donald Trump ad on your Facebook timeline? Did a facial recognition system identify you? That makes addressing the biases of artificial intelligence tricky, but even more important to understand.

Machine learning-based systems are trained on data. Lots of it.

When thinking about “machine learning” tools (machine learning is a type of artificial intelligence), it’s better to think about the idea of “training.” This involves exposing a computer to a bunch of data — any kind of data — and then that computer learns to make judgments, or predictions, about the information it processes based on the patterns it notices.

For instance, in a very simplified example, let’s say you wanted to train your computer system to recognize whether an object is a book, based on a few factors, like its texture, weight, and dimensions. A human might be able to do this, but a computer could do it more quickly.

To train the system, you show the computer metrics attributed to a lot of different objects. You give the computer system the metrics for every object, and tell the computer when the objects are books and when they’re not. After continuously testing and refining, the system is supposed to learn what indicates a book and, hopefully, be able to predict in the future whether an object is a book, depending on those metrics, without human assistance.
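The training loop described above can be sketched in a few lines. This is a toy illustration, not any production system: the objects, measurements, and the nearest-neighbor rule are all invented for the example.

```python
# Toy "is it a book?" classifier: label a handful of objects by weight (g)
# and dimensions (cm), then classify a new object by whichever labeled
# example it most resembles (1-nearest-neighbor). All numbers are made up.
import math

# ((weight_g, width_cm, height_cm, thickness_cm), is_book)
training_data = [
    ((350, 13, 20, 2.5), True),    # paperback
    ((900, 16, 24, 4.0), True),    # hardcover
    ((180, 11, 18, 1.0), True),    # thin paperback
    ((30,  21, 30, 0.1), False),   # single sheet of paper
    ((450, 7,  14, 0.8), False),   # smartphone
    ((1200, 30, 21, 1.5), False),  # laptop
]

def predict_is_book(metrics):
    """Return the label of the single nearest training example."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    _, label = min(training_data, key=lambda ex: dist(ex[0], metrics))
    return label

print(predict_is_book((400, 14, 21, 3.0)))  # a paperback-sized object
```

Even this sketch hints at where bias creeps in: the system can only ever reflect the six objects it was shown, so a training set with no hardcovers (or no smartphones) would systematically misclassify them.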

That sounds relatively straightforward. And it might be, if your first batch of data was classified correctly and included a good range of metrics featuring lots of different types of books. However, these systems are often applied to situations with much more serious consequences than this task, and in scenarios where there isn’t necessarily an “objective” answer. The data on which many of these decision-making systems are trained or checked is often not complete, balanced, or selected appropriately, and that can be a major source — though certainly not the only source — of algorithmic bias.

Nicol Turner-Lee, a Center for Technology Innovation fellow at the Brookings Institution think tank, explains that we can think about algorithmic bias in two primary ways: accuracy and impact. An AI can have different accuracy rates for different demographic groups. Similarly, an algorithm can make vastly different decisions when applied to different populations.
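Turner-Lee’s two lenses can be made concrete with a small sketch: given a model’s predictions and the true outcomes for each person, compute accuracy per demographic group (the accuracy lens) and the rate of favorable decisions per group (the impact lens). The records below are invented purely for illustration.

```python
# Per-group audit sketch: same model, two hypothetical groups.
from collections import defaultdict

# (group, true_label, predicted_label); 1 = favorable decision (e.g. approved)
records = [
    ("A", 1, 1), ("A", 0, 0), ("A", 1, 1), ("A", 0, 1),
    ("B", 1, 0), ("B", 0, 0), ("B", 1, 0), ("B", 0, 0),
]

def per_group_metrics(records):
    stats = defaultdict(lambda: {"n": 0, "correct": 0, "favorable": 0})
    for group, truth, pred in records:
        s = stats[group]
        s["n"] += 1
        s["correct"] += (truth == pred)
        s["favorable"] += (pred == 1)
    return {
        g: {
            "accuracy": s["correct"] / s["n"],          # the accuracy lens
            "favorable_rate": s["favorable"] / s["n"],  # the impact lens
        }
        for g, s in stats.items()
    }

metrics = per_group_metrics(records)
print(metrics)
```

In this made-up data the model is less accurate on group B *and* never grants group B a favorable decision — an example of how the two kinds of bias can compound.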

Importantly, when you think of data, you might think of formal studies in which demographics and representation are carefully considered, limitations are weighed, and then the results are peer-reviewed. That’s not necessarily the case with the AI-based systems that might be used to make a decision about you. Let’s take one source of data everyone has access to: the internet. One study found that an artificial intelligence taught to crawl through the internet — just reading what humans have already written — would produce prejudices against black people and women.

Another example of how training data can produce sexism in an algorithm occurred a few years ago, when Amazon tried to use AI to build a résumé-screening tool. According to Reuters, the company’s hope was that technology could make the process of sorting through job applications more efficient. It built a screening algorithm using résumés the company had collected for a decade, but those résumés tended to come from men. That meant the system, in the end, learned to discriminate against women. It also ended up factoring in proxies for gender, like whether an applicant went to a women’s college. (Amazon says the tool was never used and that it was nonfunctional for several reasons.)

Amid discussions of algorithmic biases, companies using AI might say they’re taking precautions, taking steps to use more representative training data and regularly auditing their systems for unintended bias and disparate impact against certain groups. But Lily Hu, a doctoral candidate at Harvard in applied mathematics and philosophy who studies AI fairness, says those aren’t assurances that your system will perform fairly in the future.

“You don’t have any guarantees because your algorithm performs ‘fairly’ on your old dataset,” Hu told Recode. “That’s just a fundamental problem of machine learning. Machine learning works on old data [and] on training data. And it doesn’t work on new data, because we haven’t collected that data yet.”

Still, shouldn’t we just make more representative datasets? That might be part of the solution, though it’s worth noting that not all efforts aimed at building better data sets are ethical. And it’s not just about the data. As Karen Hao of the MIT Tech Review explains, AI could also be designed to frame a problem in a fundamentally problematic way. For instance, an algorithm designed to determine “creditworthiness” that’s programmed to maximize profit could ultimately decide to give out predatory, subprime loans.

Here’s another thing to keep in mind: Just because a tool is tested for bias against one group — which assumes that the engineers checking for bias actually understand how bias manifests and operates — doesn’t mean it is tested for bias against another. This is also true when an algorithm considers several types of identity factors at the same time: A tool may be deemed fairly accurate on white women, for instance, but that doesn’t necessarily mean it works with black women.

In some cases, it might be impossible to find training data free of bias. Take historical data produced by the United States criminal justice system. It’s hard to imagine that data produced by an institution rife with systemic racism could be used to build out an effective and fair tool. As researchers at New York University and the AI Now Institute outline, predictive policing tools can be fed “dirty data,” including policing patterns that reflect police departments’ conscious and implicit biases, as well as police corruption.

The foundational assumptions of engineers can also be biased

So you might have the data to build an algorithm. But who designs it, and who decides how it’s deployed? Who gets to decide what level of accuracy and inaccuracy for different groups is acceptable? Who gets to decide which applications of AI are ethical and which aren’t?

While there isn’t a wide range of studies on the demographics of the artificial intelligence field, we do know that AI tends to be dominated by men. And the “high tech” sector, more broadly, tends to overrepresent white people and underrepresent black and Latinx people, according to the Equal Employment Opportunity Commission.

Turner-Lee emphasizes that we need to think about who gets a seat at the table when these systems are proposed, since those people ultimately shape the discussion about ethical deployments of their technology.

But there’s also a broader issue: which questions can artificial intelligence actually help us answer? Hu, the Harvard researcher, argues that for many systems, the goal of building a “fair” system is essentially nonsensical, because those systems try to answer social questions that don’t necessarily have an objective answer. For instance, Hu says algorithms that claim to predict a person’s recidivism don’t ultimately address the ethical question of whether someone deserves parole.

“There’s not an objective way to answer that question,” Hu says. “When you then insert an AI system, an algorithmic system, [or] a computer, that doesn’t change the fundamental context of the problem, which is that the problem has no objective answer. It’s fundamentally a question of what our values are, and what the purpose of the criminal justice system is.”

With that in mind, some algorithms probably shouldn’t exist, or at least they shouldn’t come with such a high risk of abuse. Just because a technology is accurate doesn’t make it fair or ethical. For instance, the Chinese government has used artificial intelligence to track and racially profile its largely Muslim Uighur minority, about 1 million of whom are believed to be living in internment camps.

Transparency is a first step for accountability

One of the reasons algorithmic bias can seem so opaque is that, on our own, we usually can’t tell when it’s happening (or whether an algorithm is even in the mix). That was one of the reasons the controversy over a husband and wife who both applied for an Apple Card — and got widely different credit limits — attracted so much attention, Turner-Lee says. It was a rare instance in which two people who at least appeared to be exposed to the same algorithm could easily compare notes. The details of this case still aren’t clear, though the company’s credit card is now being investigated by regulators.

But consumers being able to make apples-to-apples comparisons of algorithmic results is rare, and that’s part of why advocates are demanding more transparency about how systems work and how accurate they are. Ultimately, it’s probably not a problem we can solve on the individual level. Even if we do understand that algorithms can be biased, that doesn’t mean companies will be forthright in allowing outsiders to study their artificial intelligence. That’s created a challenge for those pushing for more equitable technological systems. How can you critique an algorithm — a sort of black box — if you don’t have true access to its inner workings or the capacity to test a good number of its decisions?

Companies will claim to be accurate, overall, but won’t always reveal their training data (remember, that’s the data that the artificial intelligence trains on before evaluating new data, like, say, your job application). Many don’t appear to be subjecting themselves to audit by a third-party evaluator or publicly sharing how their systems fare when applied to different demographic groups. Some researchers, such as Joy Buolamwini and Timnit Gebru, say that sharing this demographic information about both the data used to train and the data used to check artificial intelligence should be a baseline definition of transparency.

Artificial intelligence is new, but that doesn’t mean existing laws don’t apply

We will likely need new laws to regulate artificial intelligence, and some lawmakers are catching up on the issue. There’s a bill that would force companies to check their AI systems for bias through the Federal Trade Commission (FTC). And legislation has also been proposed to regulate facial recognition, and even to ban the technology from federally assisted public housing.

But Turner-Lee emphasizes that new legislation doesn’t mean existing laws or agencies don’t have the power to look over these tools, even if there’s some uncertainty. For instance, the FTC oversees deceptive acts and practices, which could give the agency authority over some AI-based tools.

The Equal Employment Opportunity Commission, which investigates employment discrimination, is reportedly looking into at least two cases involving algorithmic discrimination. At the same time, the White House is encouraging federal agencies that are figuring out how to regulate artificial intelligence to keep technological innovation in mind. That raises the challenge of whether the government is prepared to study and govern this technology, and figure out how existing laws apply.

“You have a group of people that really understand it very well, and that would be technologists,” Turner-Lee cautions, “and a group of people who don’t really understand it at all, or have minimal understanding, and that would be policymakers.”

That’s not to say there aren’t technical efforts to “de-bias” flawed artificial intelligence, but it’s important to keep in mind that the technology won’t be a solution to fundamental challenges of fairness and discrimination. And, as the examples we’ve gone through indicate, there’s no guarantee companies building or using this tech will make sure it’s not discriminatory, especially without a legal mandate to do so. It would seem it’s up to us, collectively, to push the government to rein in the tech and to make sure it helps us more than it might already be harming us.

Open Sourced is made possible by Omidyar Network. All Open Sourced content is editorially independent and produced by our journalists.


Kickstarter Employees Win Historic Union Election


Kickstarter employees voted to form a union with the Office and Professional Employees International Union, which represents more than 100,000 white collar workers. The final vote was 46 for the union, 37 against, a historic win for unionization efforts at tech companies.

Kickstarter workers are now the first white collar workers at a major tech company to successfully unionize in the United States, sending a message to other tech workers.

“Everyone was crying [when the results were announced],” Clarissa Redwine, one of the Kickstarter United organizers who was fired in September, told Motherboard. “I thought it would be close, but I also knew we were going to win. I hope other tech workers feel emboldened and know that it’s possible to fight for your workplace and your values. I know my former coworkers will use a seat at the table really well."

"Today we learned that in a 46 to 37 vote, our staff has decided to unionize," Kickstarter's CEO Aziz Hasan said in a statement. "We support and respect this decision, and we are proud of the fair and democratic process that got us here. We’ve worked hard over the last decade to build a different kind of company, one that measures its success by how well it achieves its mission: helping to bring creative projects to life. Our mission has been common ground for everyone here during this process, and it will continue to guide us as we enter this new phase together."

The union at the Brooklyn-based crowd-funding platform arrives during a period of unprecedented labor organizing among engineers and other white collar tech workers at Google, Amazon, Microsoft and other prominent tech companies—around issues like sexual harassment, ICE contracts, and carbon emissions. Between 2017 and 2019, the number of protest actions led by tech workers nearly tripled. In 2019 alone, tech workers led more than 100 actions, according to the online database “Collective Actions in Tech.”

“I feel like the most important issues [for us] are around creating clearer policies and support for reporting workplace issues and creating clearer mechanisms for hiring and firing employees,” said RV Dougherty, a former trust and safety analyst and core organizer for Kickstarter United who quit in early February. "Right now so much depends on what team you’re on and if you have a good relationship with your manager... We also have a lot of pay disparity and folks who are doing incredible jobs but have been kept from getting promoted because they spoke their mind, which is not how Kickstarter should work.”

In the days leading up to the Kickstarter vote count, Motherboard revealed that Kickstarter had hired Duane Morris, a Philadelphia law firm that specializes in labor-management relations and “maintaining a union-free workplace.” Kickstarter confirmed to Motherboard that it first retained Duane Morris in 2018, before it knew about union organizing at the company, but would not go into detail about whether the firm had advised it on how to defeat the union, and denied any union-busting activity.

Dating back to its 2009 founding, Kickstarter has tried to distinguish itself as a progressive exception to Silicon Valley tech companies. In 2015, the company’s leadership announced it had become a “public benefit corporation.” “Benefit Corporations are for-profit companies that are obligated to consider the impact of their decisions on society, not only shareholders,” the senior leadership wrote at the time. The company has been hailed as one of the most ethical places to work in tech.

Indeed, rather than dedicate its resources to maximizing profit, Kickstarter has fought for progressive causes, like net neutrality, and against the anti-trans bathroom law in North Carolina.

But in 2018, a heated disagreement broke out between employees and management about whether to leave a project called “Always Punch Nazis” on the platform, according to reporting in Slate. When Breitbart said the project violated Kickstarter’s terms of service by inciting violence, management initially planned to remove the project, but then reversed its decision after protest from employees.

Following the controversy, employees announced their intentions to unionize with OPEIU Local 153 in March 2019. And the company made it clear that it did not believe a union was right for Kickstarter.

In a letter to creators, Kickstarter’s CEO Aziz Hasan wrote in September that “The union framework is inherently adversarial.”

“That dynamic doesn’t reflect who we are as a company, how we interact, how we make decisions, or where we need to go," he continued. "We believe that in many ways it would set us back.”

In September, Kickstarter fired two employees on its union organizing committee within 8 days, informing a third that his role was no longer needed at the company. Following outcry from prominent creators, the company insisted that the two firings were related to job performance, not union activity.

The two fired workers filed federal unfair labor practice charges with the National Labor Relations Board (NLRB), claiming the company retaliated against them for union organizing in violation of the National Labor Relations Act. (Those charges have yet to be resolved.) Days later, the company denied a request from the union, Kickstarter United, for voluntary recognition.

The decision to unionize at Kickstarter follows a series of victories for union campaigns led by blue collar tech workers. Last year, 80 Google contractors in Pittsburgh, 2,300 cafeteria workers at Google in Silicon Valley, and roughly 40 Spin e-scooter workers in San Francisco voted to form the first unions in the tech industry. In early February, 15 employees at the delivery app Instacart in Chicago successfully unionized, following a fierce anti-union campaign run by management.

By some accounts, the current wave of white collar tech organizing began in early 2018, when the San Francisco tech company Lanetix fired its entire 14-person software engineering staff after they filed to unionize with the Communications Workers of America (CWA). Later, the company was forced to cough up $775,000 to settle unfair labor practice charges.

Update: This story has been updated with comment from Kickstarter.


Binging Less Netflix Isn’t Going to Stop Climate Change


In case you hadn’t heard, streaming one episode of The Office on Netflix has the same environmental impact as driving 4 miles.

At least, that’s according to one report, published in 2018 and recently re-circulated by The Big Think.

As many headlines put it, climate scientists are coming for our binge-watching habits, thereby making everyday consumers of BoJack Horseman and The Marvelous Mrs. Maisel feel guilty for their environmental transgressions.

It’s part of a familiar trope that has developed in recent years: that thing you enjoy doing is actually very bad for the forests and oceans, and you should feel bad.

“I would characterize the major deficiencies of climate reporting as problems of emphasis and omission: news stories often emphasize the wrong information and omit relevant context,” David M. Romps, a climate physicist and professor of earth and planetary science at UC Berkeley, told Motherboard.

Whether you’re throwing out too much food, playing video games on the cloud, ordering something from Amazon, drinking coffee, streaming music, or wearing jeans, these stories suggest that you should be worried about how your individual actions are accelerating climate change. Unsurprisingly, kids in particular are confused and freaking out.

This narrative blames everyday consumers for the quickening deterioration of the planet, while obscuring the role played by corporate, governmental, and organizational forces—like the 20 companies that are responsible for one-third of all carbon emissions. In a now-deleted tweet following its annual meeting of the world’s richest people, the World Economic Forum posted a misleading infographic warning of the environmental damage caused by sending “Thank You” emails. The graphic was based on a study funded by Ovo Energy, the second largest energy supply company in the U.K.

Misinformation is key to how these stories travel. For example, the 2018 report was released by a Paris-based non-profit called The Shift Project, but its findings have never been fully verified. From there, it was picked up by a variety of publications and social media users, resurfacing every few months after someone else stumbles upon it.

Data Center Knowledge, a news site covering the cloud computing industry, recently contested the Shift Project’s math and methodology, and found that the carbon footprint generated by a half hour stream is actually closer to driving 461 feet. But these corrections barely matter (and it’s worth keeping in mind the self-interest of an industry publication). The Shift Project's report distracts from bigger, more urgent systemic problems.
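Treating both figures as estimates for roughly a half-hour of streaming (The Shift Project’s 4 miles versus Data Center Knowledge’s 461 feet), a quick back-of-the-envelope calculation shows how far apart the two claims are:

```python
# Compare the two driving-distance equivalents quoted in the article.
FEET_PER_MILE = 5280

shift_project_feet = 4 * FEET_PER_MILE  # Shift Project: 4 miles = 21,120 ft
correction_feet = 461                   # Data Center Knowledge's estimate

ratio = shift_project_feet / correction_feet
print(f"The original estimate is roughly {ratio:.0f}x the corrected one")
```

By that math, the widely shared figure overstates the corrected one by a factor of about 46.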

“Energy efficiency is always laudable, but we will beat global warming only if we rapidly shut down the burning of coal, oil, and gas,” said Romps. “There will be plenty of clean electricity to power all the Netflix we want. The question is: how quickly do we transition to clean energy? If we drag our feet, then we will roast the planet. If we act quickly, then we can prevent Earth's temperature from rising much further.”

The warnings about streaming, then, are something of a distraction. As Romps points out, we certainly do need to decarbonize the electric grid, but “in a sense, we need to use more electricity, because all non-carbon sources of energy that can scale up to power human civilization—wind, solar, and maybe nuclear—all make electricity.” What we need to do, he said, is stop burning gas, oil, and coal, and begin powering everything with (decarbonized) electricity. In this version of the way forward, data centers and the computing industry are already consistent with a non-carbon future.

When publications refer to the environmental impact of streaming, they’re generally referring to the energy usage of data centers. Research published in 2018 by Nature showed that data centers currently account for just 0.3% of overall carbon emissions, which could grow by 3 to 8 percent by 2030 according to different experts.

Even all information and communications technology combined — including mobile networks and smartphones — accounts for just over 2 percent of global emissions, according to the Nature report. That’s not an insignificant amount (it’s about on par with fuel emissions from air travel), but it only further highlights how absurd it is to focus on data centers and, even more so, on streaming video.

The thinking goes that, as more platforms like Disney+, Apple TV+, and HBO Max launch, our couch potato habits will escalate and data centers will be forced to process evermore data, causing further harm.

Romps argues that these fears are understandable, but unwarranted. “The fossil fuel companies deserve heaping piles of blame for their funding of disinformation campaigns and lobbying,” he said. “Keep your eye on the prize. We all have a responsibility to reduce our use of fossil fuels by, for example, ditching the gasoline-powered car and cutting out the use of oil and gas to heat our homes”—and streaming less TV is not high on that priority list.

The most worrying predictions for data center usage are based on an extrapolative model that fails to account for the major gains made every year in energy efficiencies, which are almost certain to increase along with our demand and usage of data centers, as many recent studies have noted.

“Given the demand for data center services, the industry is performing well to keep that trend line [of efficiency gains] near horizontal,” Bill Carter, chief technology officer with the Open Compute Project, an organization devoted to redesigning hardware in sustainable ways, told Motherboard. “These gains will continue as new technologies are applied.”

Amazon Web Services, the cloud computing platform that powers Amazon Prime as well as a majority of Netflix, has promised to run entirely on renewable energy by 2030—a relatively modest timeline, but a significant move all the same. Scholars and scientists are researching new and innovative ways to reduce electricity consumption at data centers, but that research must also be implemented to have an impact.

The real danger of these sidetracking narratives is that they make it harder to properly understand the nuances of data centers and their environmental impact, and where that usage sits within the wider ecosystem of climate damage. Instead, we’re stuck revisiting stale arguments about individual responsibility.

Politicians, for example, will tell you to turn down the heat to save on energy while making moves to get deeply harmful oil pipelines built on indigenous land.

“The conversation needed is how long we use products and whether we can waterfall our old products to other users,” Carter said. “By extending the useful life of products, consumers can reduce energy and greenhouse gasses. Consumers do play a role here—do we need that new TV, or new smartphone?”

The small things you do, the changes in behavior, matter. But ideally, those things lead to more ambitious work, or political action. Individual behavior is only going to be effective if it leads to change on a systemic scale: the way we produce energy, not the way we consume it. “For someone who cannot yet afford to replace their gasoline car with an electric car,” Romps said, “I would recommend that they stay home and watch Netflix.”


America’s monopoly problem, explained by your internet bill

A Monopoly board with the locations replaced by companies like Facebook and American Airlines. Across industry after industry, power and market share are being consolidated. | Sarah Lawrence for Vox

We should be asking the government and corporate America how we got here. Instead, we just keep handing over our money.

In the summer of 2017, I decided it was time to put on my big-girl pants and try to talk to my internet provider about my bill. It had been gradually ticking up over the past several months without explanation — let alone better service — and I wanted to know what was up. When I called the company’s customer service line, the woman on the phone knew something I did not: I didn’t really have other service options available in my area. So, no, my bill would not be reduced.

More than two years later, I’m still mad about it. And yes, that could seem a little petty. But that monthly annoyance speaks to a broader trend that all Americans should be aware of — and angry about. Across industry after industry, sector after sector, power and market share have been consolidated into the hands of a handful of players.

Lately, you’ve probably heard a lot of complaints about the size and scope of big tech companies: Facebook, Google, Amazon, and Apple. But competition is lacking across countless industries, including airlines, telecommunications, lightbulbs, funeral caskets, hospitals, mattresses, baby formula, agriculture, candy, chocolate, beer, porn, and even cheerleading, just to name some examples. When you look, monopolies and oligopolies (meaning instead of one dominant company, there are a few) are everywhere. They’re a systemic feature of the economy.

There’s little denying that since the 1970s, the way antitrust has been approached in the United States has led to a landscape where a smaller number of big players dominate the economy. Incumbents — companies that already exist — are growing their market shares and becoming more stable, and they’re getting harder and harder to compete with. That has affected consumers, communities, competitors, and workers in a variety of ways.

Proponents of the laissez-faire, free market thinking of recent decades will say that the markets have basically worked themselves out — if an entity grows big enough to be a mega-corporation, it deserves its status, and just a handful of players in a given space is enough to keep prices down and everyone happy. A growing group of vocal critics of various political stripes, however, are increasingly warning that we’ve gone too far. Growth and success at the top often don’t translate to success for everyone, and there’s an argument to be made that strong antitrust policies and other measures that curb concentration, combined with government investments that target job-creating technology, could spur redistribution and potentially boost the economy for more people overall.

If two pharmaceutical companies make a patent-protected drug and then raise their prices in tandem, what does that mean for patients? When two cellphone companies talk about efficiencies in their merger, what does that mean for their workers, and how long does their subsequent promise not to raise prices for consumers actually last? And honestly, wouldn’t it be a lot easier to delete Facebook if there was another, equally attractive social media platform out there besides Facebook-owned Instagram?

We should be asking the government and corporate America how we got here. Instead, we just keep handing over our money.

Seriously, be mad about your internet bill

In 2019, New York University economist Thomas Philippon did a deep dive into market concentration and monopolies in The Great Reversal: How America Gave Up on Free Markets. And one of his touchpoints for the book is the internet. Looking at the data, he found that the United States has fallen behind other developed economies in broadband penetration and that prices are significantly higher. In 2017, the average monthly cost of broadband in America was $66.17; in France, it was $38.10; in Germany, $35.71; and in South Korea, $29.90. How did this happen? In his view, a lot of it comes down to competition — or, rather, lack thereof.
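Philippon’s price comparison can be made concrete by annualizing the gap. This quick sketch uses only the 2017 figures quoted above:

```python
# Annualize the US broadband premium over each comparison country.
monthly_cost = {"US": 66.17, "France": 38.10, "Germany": 35.71, "South Korea": 29.90}

for country in ("France", "Germany", "South Korea"):
    us_premium_per_year = (monthly_cost["US"] - monthly_cost[country]) * 12
    print(f"vs. {country}: ${us_premium_per_year:,.2f} more per year")
```

By these figures, an American household pays on the order of $330 to $435 more per year for broadband than a household in France, Germany, or South Korea.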

To a certain extent, telecommunications companies and internet service providers are a sort of natural monopoly, meaning high infrastructure costs and other barriers to entry give early entrants a significant advantage. It costs money to install a cable system because you have to dig up streets, access buildings, etc., and once one company does that, there’s not a ton of incentive to do it all over again. On top of that, telecom companies paid what were often super-low fees — maybe enough to create a public access studio — to wire up cities and towns in exchange for, essentially, getting a monopoly.

But that’s where the government could come in by regulating the network or forcing the company that built it to lease out parts of it to rivals. As Philippon notes, that’s what happened in France: An incumbent carrier was compelled to lease out the “last mile” of its network — basically, the last bit of cable that gets to your house or apartment building — and therefore let competitors have a chance at also appealing to customers.

In the US, however, just a few big companies, often with no overlap in the territories they serve, control much of the telecom industry, and the result is high prices and uneven connectivity. In 2018, Harvard law professor Susan Crawford examined the case of, what do you know, New York City in an article for Wired. The city was supposed to be “a model for big-city high-speed internet,” she explained, after then-Mayor Mike Bloomberg struck a deal with Verizon to install its FiOS fiber service in residential buildings in 2008, ending what was then Time Warner Cable’s local monopoly. But in 2015, a quarter of New York City’s residential blocks still didn’t have FiOS, and one in five New Yorkers still don’t have internet access at home.

“New York City could be in a very different position today if those Bloomberg officials had called for a city-overseen fiber network. The creation of a neutral, unlit ‘last mile’ network that reaches every building in the city, like a street grid, would have allowed the city to ensure fiber access to everyone,” Crawford wrote.

Instead, multiple states (though not New York) have put up roadblocks to municipal broadband to keep cities from providing alternatives to, and competing with, local incumbent providers. It’s an example of lobbying at its finest: powerful corporations get to keep competitors out and charge whatever they want.

And it’s hardly just the internet. Philippon found similar phenomena in cellphone plans, airline prices, and multiple other arenas, due to a lack of competition. In an interview with the New York Times, he estimated that corporate consolidation is costing American households an extra $5,000 a year.

“Broadly speaking, over the last 20 years in the US, we see profits of incumbents becoming more persistent, because they are less challenged, their market share has become both larger and more stable, and at the same time, we see a lot of lobbying by incumbents, in particular to get their mergers approved or to protect their rents,” Philippon told me.

Incumbents have gotten good at keeping out competitors — and they’ve been allowed to do it

The government is supposed to use antitrust law to ensure competition and stop companies from becoming so big that they push everyone else out. Basically, antitrust is supposed to prevent anticompetitive monopolies. In the US in recent decades, regulators, enforcers, and the courts have taken a laxer attitude toward antitrust, which has resulted in more mergers, or companies growing to the point that it’s hard for rivals to stay in the game.

“We basically had a whole legal framework prior to the 1970s that was dedicated to making sure that our businesses were protected from concentrated capital, and so producers were allowed to collaborate in a lot of different ways through unions or coops or various associations, and they got help in the form of lending, supports, patents, copyrights, etc.,” said Matt Stoller, research director at the American Economic Liberties Project, an organization aimed at combating corporate power, and author of Goliath: The 100-Year War Between Monopoly Power and Democracy. “Those were all things that were dedicated to protecting the producer from the capitalist, and we just reversed those assumptions.”

Basically, the prevailing view has been that the market, by and large, can take care of itself, and the government doesn’t need to take such a hands-on approach. And that’s led to gradual concentration over time.

For example, traditional economic thinking holds that if profits in a certain industry become very high, it becomes attractive for new entrants to come into the market, and those excess profits get competed away. But that’s become less and less true over time in the United States. “It’s true sometimes, you could even argue that it’s true often, but it’s not always true — and if you’re not careful, you can end up in a situation where it’s not true anymore, and that’s exactly where we are today,” Philippon said.

Incumbents have a lot of mechanisms to make it hard for competitors to enter, and they use a variety of tactics to keep them out — predatory pricing, patents, contracts, etc.

Amazon boxes in a warehouse. Rick T. Wilking/Getty Images
Amazon was able to buy up a competitor by lowering prices until the company was forced to sell.

In 2016, Lina Khan, now counsel on the House subcommittee on antitrust, penned an influential paper on the antitrust issues surrounding Amazon. In it, she used the example of Amazon and Quidsi, an e-commerce company that ran Diapers.com. Amazon tried to buy Quidsi in 2009, and after its founders declined, Amazon cut its prices for diapers and other baby products and launched a new service, Amazon Mom. Quidsi couldn’t keep up — Amazon had the resources to drop prices and take a hit in order to compete; Quidsi did not. And so it wound up selling to Amazon in 2010. Regulators looked at what happened but didn’t pursue a case against Amazon, and Amazon later scrapped the discounts and went back to what it was charging before. By dropping its prices, it had basically pushed Quidsi out.

Varsity Brands, which is owned by the private equity firm Bain Capital, has a monopoly on the cheerleading industry. Stoller recently laid out the tactics it’s engaged in to achieve and maintain its position. The company has managed to vertically integrate multiple levels of the cheerleading industry, ranging from competitions to apparel, and has gobbled up competitors big and small. Its rivals aren’t allowed to showcase their apparel at Varsity events, and it offers contracts to gyms that give them a cash rebate if they send cheerleaders to its competitions and get them to buy its equipment. It took a copyright case over its uniforms to the Supreme Court. The 2020 Netflix series Cheer features Varsity’s monopoly, and its consequences are evident: To see cheerleading competitions, people have to pay for a specific Varsity app; they’re no longer shown on ESPN.

“Varsity uses the great aspects of cheerleading to generate incredible revenue that only benefits them,” said Kimberly Archie, founder of the National Cheer Safety Foundation.

Amazon declined to comment for this story, and Varsity Brands did not respond to a request for comment.

None of this is to say that anticompetitive behavior is always allowed or that mergers are never blocked. In February, the Federal Trade Commission sued to block the personal care company Edgewell from acquiring razor startup Harry’s. The Justice Department has also probed Live Nation’s practices after its 2010 merger with Ticketmaster and alleged that the combined company pushed venues into using Ticketmaster over other ticketing companies.

This is about prices, but there’s also more to it

A lot of the concern about corporate concentration comes down to its potential to drive up prices. The fewer options there are, the fewer places consumers have to shop, and the less pressure there is to keep prices low.

Antitrust enforcers and regulators, when examining a potential merger or acquisition, or considering whether a company is engaging in anticompetitive behavior, are supposed to apply a consumer welfare standard. Basically, it’s fine for a company to be really big, as long as consumers aren’t harmed. The concept was first introduced by conservative judge Robert Bork in 1978, and it’s guided a lot of US antitrust policy ever since. Court rulings over time have become more permissive in antitrust cases, making once-illegal practices legal. And the DOJ and the FTC, the two federal regulators most involved in antitrust matters, have also become more lax.

In practice, the consumer welfare standard has been applied narrowly, as a question of whether consumers are paying higher prices. But a lot of the time, prices go up anyway.

Sometimes, as Philippon’s book shows, the price hikes are gradual. With fewer players in a space, there’s no one to compete to drive prices back down. Or competitors will raise prices in tandem — for example, in the pharmaceutical industry, the prices of competing drugs will sometimes go up at the same time. When companies merge, they’ll often argue that “efficiencies” — combined supply chains, shared resources, or worker redundancies that can translate to layoffs — will make things better for consumers and bring costs down, but if there’s no one to compete with them, the opposite can occur. A New York Times report in 2018 found that hospital mergers raised prices for hospital admissions in the majority of cases.

But beyond consumer pricing, antitrust advocates note that there are other factors to consider. Corporate concentration means companies have to compete less for workers, and therefore could push wages down. Monopolies and oligopolies can also harm suppliers — if Amazon gets big and powerful enough, it could control what shippers such as FedEx and UPS can charge it.

Consumers also lose the ability to vote with their wallets and eyeballs — basically, to say, I don’t like what a company is producing, what it’s charging, or how it’s behaving and go somewhere else. Just look at Facebook. “As soon as they achieved monopoly, they said forget the rules, and they were right. Every time they were caught cheating, nothing happened because there was nowhere else to go,” Philippon said.

Amazon does drive prices down, and Facebook’s services are free for consumers, but that doesn’t mean that their dominance is good. More and more research is connecting concentration to higher prices for consumers, lower wages for workers, and other developments you wouldn’t expect to see in a competitive economy.

Just as the shift toward monopolization has been gradual, getting more competition could take a long time, too

There’s no one remedy for getting more competition back into the US economy, and even sector by sector, it’s really complicated. It’s one thing to call for Instagram to be broken away from Facebook, but no one agrees on how to fix virtually anything in the American health care system.

It’s a good thing that antitrust is getting more airtime, with politicians, the press, and the public paying more attention to corporate concentration and its effects. Tech giants have been a main area of focus as of late, with regulators and lawmakers at the state and federal levels launching probes and holding hearings. Sens. Elizabeth Warren and Bernie Sanders have railed against powerful corporations on the campaign trail, and on the right, Republican Sen. Josh Hawley has taken up a crusade against Big Tech.

But it’s going to take a lot more than public pressure for things to change. For one thing, it’s often hard to recognize how monopolized the economy has become. Dozens of brands can be housed under a single umbrella, and a lot of people don’t even realize it. But as I noted in 2018, monopolies are really everywhere:

Four companies, for example, control 97 percent of the dry cat food sector: Nestlé, J.M. Smucker, Supermarket Brand, and Mars. According to the report, Nestlé has a 57 percent hold on the industry, owning brands such as Purina, Fancy Feast, Felix, and Friskies.

Altria, Reynolds American, and Imperial have a 92 percent market share of the cigarette and tobacco manufacturing industry. Anheuser-Busch InBev, MillerCoors, and Constellation have a 75 percent share of the beer industry. Hillenbrand and Matthews have a 76 percent share of the coffin and casket manufacturing industry.

Experts and advocates have laid out a range of ideas for restoring healthy competition in the economy and reviving regulators. Some of it would entail new laws and frameworks, which, given the current state of affairs in Washington, seems unlikely — Congress can barely agree to fund the government, let alone enact a major overhaul of the workings of the US economy. But it has happened in the past, and as recently as the 20th century. “What happened in the New Deal was a systemic attack on every aspect of the old order, and the old order was somewhat similar to what we have now,” Stoller noted.

But even without sweeping legislation, there’s a lot that regulators, enforcers, and the courts can do now under existing law. The FTC and DOJ can be more active in their scrutiny of mergers and companies’ practices, and judges can strike down deals. After the FTC approved the pharmaceutical company Bristol-Myers Squibb’s acquisition of fellow drugmaker Celgene in November of last year, Democratic Commissioner Rohit Chopra in his dissent warned of the dangers of regulators ignoring obvious risks and instead clinging to the status quo. “When watchdogs wear blindfolds or fail to evolve with the marketplace, millions of American families can suffer the consequences,” he wrote.

So back to my internet bill, where this all began: in the summer of 2018, I moved apartments and gleefully called my internet provider to cancel my service. The person on the other end of the line asked where I was moving; I told them it was the same borough, different area. Wouldn’t you know — that discount I’d had originally, the one that went away as my bill gradually went up, was now somehow again available. Turns out in my new building, there was more than one option.


Read the whole story
5 hours ago
Chicago, IL
Share this story

It Seems A Playable Xbox Build Of StarCraft Ghost Has Leaked (Update)


Here’s something I didn’t expect to see today: New gameplay footage of StarCraft Ghost. The third-person shooter had a rocky and well-known development history before being officially canceled by Blizzard, something confirmed back in 2014. Now an early Xbox build seems to have leaked out and popped up online.



Go read this grim Motherboard report about what it’s like to work at Target delivery company Shipt

Target/Flickr

The gig economy of ride-sharing and grocery delivery has become notorious for exploiting and mistreating workers — and now, Motherboard has an inside look at one particularly bad example. A new piece digs into Target-owned grocery delivery company Shipt, examining all the extra work drivers feel pressured to do just to stay active and keep receiving work on the platform. It’s a thorough investigation of a company culture that seems based on fear and intimidation.

Workers say Shipt customers often live in gated and upscale communities and that the app encourages workers to tack gifts like thank-you cards, hot cocoa, flowers, and balloons onto orders (paid for out of their own pocket) and to offer to walk customers’ dogs and take out...

