In Part I of this series I was trying to set the table for everything else. In summary, I described how the post-WWII United States had emerged from two world wars and the Great Depression in a rare progressive mood.
A Keynesian economic philosophy picked up where FDR’s New Deal had left off. Unemployment was kept low with public works projects, and labor unions entered into a period of unprecedented power and influence. Things went great for a while (unless you were a woman or a racial minority, then things weren’t so hot). Our standard of living underwent revolutionary improvements. But a series of problems in the 1970s (an oil embargo, the Vietnam War, and, perhaps, unsustainably high wages) created an even bigger problem: uncontrollable inflation. Up until this point, the Keynesian philosophy typically saw inflation as a minor inconvenience which could be calculated and planned for. Central to the Keynesian view of inflation was the Phillips Curve, an inverse relationship between unemployment and inflation. According to the Phillips curve, when unemployment went down, inflation went up by a predictable amount (and vice versa). This relationship was considered so reliable that Keynesians believed you could simply pick an acceptable amount of unemployment and the Phillips curve would tell you how much inflation to expect. But the inflation of the 1970s didn’t follow this tidy rule. It kept climbing to the point that nobody wanted to invest any money, because there wasn’t an investment on Earth that would outpace the inflation rate. This lack of investment contributed to a phenomenon dubbed stagflation: the simultaneous existence of high unemployment, economic stagnation, and inflation. According to the Phillips curve, stagflation wasn’t supposed to be possible. Clearly, it WAS possible! This contradiction temporarily sank the legitimacy of Keynesianism.
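To make the old Keynesian logic concrete, here’s a minimal sketch in Python of how the Phillips curve was supposed to be used: pick an acceptable unemployment rate and read off the inflation you’d expect. The coefficients are invented for illustration, not estimated from real data, and stagflation is exactly the outcome this toy model cannot produce.

```python
# A stylized Phillips curve: inflation falls as unemployment rises.
# The coefficients here are illustrative, not estimates from real data.

def expected_inflation(unemployment_rate, anchor=0.08, slope=1.0):
    """Return the inflation rate the (naive) Phillips curve predicts
    for a given unemployment rate. All rates are decimals (0.05 = 5%)."""
    return max(anchor - slope * unemployment_rate, 0.0)

# The Keynesian planner's workflow: choose acceptable unemployment,
# read off the inflation you expect to live with.
for u in (0.03, 0.05, 0.07):
    print(f"unemployment {u:.0%} -> expected inflation {expected_inflation(u):.0%}")

# Stagflation (high unemployment AND high inflation) has no place on
# this curve -- which is why the 1970s discredited the simple version.
```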
As Keynesianism stumbled, Milton Friedman’s monetarism stepped in to fill the vacuum. Milton Friedman didn’t see inflation as a minor inconvenience. To Milton, it was an ever-present monster which required constant vigilance. Inflation became public enemy number one, and the Federal Reserve adjusted its monetary policy to prevent it at all costs. This, as I explained, was beneficial for creditors and investors, but negatively impacted people with debt (which, today, is pretty much all of us). The balance of power began its tilt towards the wealthy owners of capital.
With inflation established as the new boogeyman, Reagan finally had a willing audience for his supply-side economics (Reaganomics). Keynesians saw unemployment as a solvable problem: simply provide jobs for the unemployed and support unions. With more money in the pockets of everyday working people, businesses and entrepreneurs could feel confident that there existed plenty of potential customers. This is a demand-side philosophy of economic stimulus. Reagan, on the other hand, believed in an objective and hyper-efficient market which was not to be tampered with. Any sort of economic stimulus simply served to distort the natural state of the market and caused bigger problems further down the line. According to Reaganomics, the cause of unemployment wasn’t a lack of available work. It was workers charging too much for their labor. The solution? Break up the unions and bring down the price of labor. The medicine might be painful for workers, but there simply was no alternative.
So began the stagnation of working class wages.
Reagan’s faith in the “efficient market” has become the centerpiece of a neo-liberal conservative agenda which has guided politics over the last 40 years. Both Republicans and Democrats have bought into this philosophy. Bill Clinton and Barack Obama were using the same neo-liberal playbook as Ronald Reagan and George W. Bush. Instead of economic and class issues, Democrats and Republicans have decided to fight over issues of identity. Democrats might occasionally pay lip service to their working class roots, but they’ve lost all credibility (and rightfully so). Now our President is a game show host and our Governor is a machine-gun-wielding S&M enthusiast. Welcome to 2018.
Enough summary. Let’s dig into Part II and see how this belief in an all-knowing, omniscient, and benevolent God (sorry, efficient market) has paved the way for a re-interpretation of anti-trust law and the expansion of capital markets.
With the hard-fought monopoly battles 70 years in the past, our government in the 1980s was open to a new way of looking at anti-trust legislation. In the late 1800s and early 1900s, laws such as the Sherman Act, the Clayton Act, the Robinson-Patman Act, and the FTC Act were created to regulate and break up corporations such as Standard Oil and the Northern Securities Company (a railroad monopoly) which were seen to have become far too big and powerful. These acts were originally written with a broad definition of “monopoly”. In short, the entire structure of the market was taken into account. It was accepted as fact that a market with a few large companies was necessarily less competitive than a market full of smaller companies. Monopolistic and oligopolistic market structures were seen to enable cartel-like behaviors. Such dominance allowed big firms to block new entrants from the market and gave them enhanced bargaining power against consumers, suppliers, and workers. In response to the consolidated industries of the early 1900s, anti-trust legislation was applied liberally. Theodore Roosevelt alone brought forward 44 anti-trust suits during his administration.
The philosophical transformation of the 1980s had opened the door for a new way of thinking about industrial consolidation. The Chicago School of thought, a philosophy which emphasized the efficiency of self-regulating free markets and opposed government intervention, along with Robert Bork, a judge and the author of The Antitrust Paradox, disagreed with the aforementioned structural interpretation of monopoly power. Instead, they asserted that the best way to look at monopoly power was through the narrower lens of price theory. Foundational to this view was a faith in, you guessed it, the efficiency of markets. The Chicago School believed that market forces ultimately determine the size, concentration, and corporate structure of an industry. By assuming that market forces are always positive, they concluded that businesses should be left alone to settle at an optimal size. If they consolidated into massive, vertically integrated conglomerates, then that was what the market deemed most efficient. Attempting to shrink or break them up, except in the most extreme of circumstances, would simply lead to a less optimal market and higher prices for everybody. Prices, in fact, were seen to be the only thing that mattered: the lower, the better. The only threat that a monopoly posed was a potential increase in prices, and even this wasn’t seen to be much of a problem. The Chicago School believed that, if a monopolist did raise prices, competition would quickly step in and put it out of business.
This disregard for industry structure and the institution of price theory changed the way we litigate anti-trust law in this country. To see how, let’s look at the anti-competitive tactic of predatory pricing. Predatory pricing is the act of undercutting a competitor by introducing a product to market at below the cost of production in order to put the competitor out of business and take over the market. This is forbidden by the Robinson-Patman Act. In the 1960s, a Robinson-Patman suit was brought against Continental Baking Company on behalf of Utah Pie Company. Thanks to the nearby location of its manufacturing headquarters, Utah Pie had cheaper access to Salt Lake City. As a result of this competitive edge, it had taken over a substantial portion of the frozen dessert pie market in Salt Lake City.
In an attempt to break into this lopsided market, Continental Baking Company sold its dessert pies in Salt Lake City at a loss. In order to remain profitable, it recouped those losses by selling at or above cost in other locations. According to the Robinson-Patman Act, this is considered an anti-competitive action. The Supreme Court ruled in favor of Utah Pie.
If this case were decided today, under our modern interpretation of anti-trust, it’s very likely that the decision would be reversed. In fact, the ruling was controversial even back then. Utah Pie arguably had monopoly control over the Salt Lake City market (~60%), and the entry of Continental Baking Company had forced down the prices of frozen dessert pies for everybody in Salt Lake City. If lower prices are the aim, then predatory pricing by Continental Baking Company seems to have gotten the job done.
However, I agree with the Supreme Court’s original decision that Continental Baking Company was utilizing anti-competitive tactics. While it’s true that the price of pie came down in Salt Lake City, the factors driving that competition had nothing to do with the quality of the frozen pies, nor with more efficient delivery or manufacturing. Instead, the price was being driven down because Continental Baking Company was big enough to eat its losses and sell pies below cost. If we allow this sort of competition to determine success within a market, then we are no longer incentivizing companies to improve quality and efficiency. Instead, we are incentivizing them to grow as big as possible, as quickly as possible, so that they can leverage their size to take over a diverse range of markets (*cough* Amazon *cough*). If this is the case, then we lose the collective benefits of competition while retaining the negative consequences of corporate consolidation.
Around the 1980s, the FTC’s use of Robinson-Patman was all but abandoned. We can see the effects of this abandonment in the 1993 case of Brooke Group Ltd. v. Brown & Williamson Tobacco Corp. This case was similar to the Utah Pie situation. Liggett sold generic cigarettes for less than the price that Brown & Williamson charged for their branded cigarettes. As Liggett became more successful and started eating into Brown & Williamson’s profits, Liggett alleged that B & W introduced their own line of generic cigarettes at below cost in an attempt to price Liggett out of the market. This was possible, according to Liggett, because B & W could recoup those losses at a later date by raising prices on its many other cigarette brands. Unlike the Utah Pie case, this one was decided against Liggett. The rationale behind the decision was that Liggett did not successfully prove that B & W had a viable strategy of raising prices at a later date in order to recoup its losses. This is where something called “the recoupment test” started to take hold. Essentially, the plaintiff has to convincingly show that the defendant will be able to raise prices in the future to pay for losses in the present (price theory in action). The recoupment test makes litigation under the Robinson-Patman Act very difficult.
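To see why the recoupment test is such a high bar, here’s a toy calculation in Python with invented numbers (nothing from the Brooke Group record): a would-be predator eats losses now and has to earn them back later with higher prices, discounted for time and for the risk that competitors simply re-enter. Under the Chicago School’s assumptions, that future payoff rarely pencils out, which is exactly why courts now demand proof of it.

```python
# Toy recoupment arithmetic with invented numbers -- not from the
# Brooke Group record. The question the court asks: can the predator
# plausibly earn back today's losses with tomorrow's higher prices?

def recoupment_npv(loss_per_year, predation_years,
                   monopoly_profit_per_year, recoup_years,
                   discount_rate=0.10):
    """Net present value of a predatory pricing campaign."""
    losses = sum(loss_per_year / (1 + discount_rate) ** t
                 for t in range(1, predation_years + 1))
    gains = sum(monopoly_profit_per_year / (1 + discount_rate) ** t
                for t in range(predation_years + 1,
                               predation_years + recoup_years + 1))
    return gains - losses

# If rivals can re-enter quickly, the recoupment window is short and
# the campaign never pays for itself -- the Chicago School's point.
print(recoupment_npv(loss_per_year=10e6, predation_years=3,
                     monopoly_profit_per_year=8e6, recoup_years=3))
```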
As I mentioned before, when you can’t defend against predatory pricing, the market will always reward larger companies over smaller companies, regardless of quality and efficiency. This promotes a situation where big companies get bigger, and by virtue of being bigger, are able to become even bigger still. This additional burden of proof has become especially devastating in the modern digital economy. The internet has fundamentally altered how we interact with large businesses. The rules of the game have changed, and our interpretation of anti-trust legislation is woefully out of date.
Technology giants such as Google and Amazon don’t compete against each other within individual industries. They aren’t selling dessert pies or cigarettes. They are selling an entire platform. Their goal is to lure users into their massive ecosystems. Once they get you, they “encourage” you to stay through a variety of tactics. For example, you pay the sunk cost of $120 a year to be an Amazon Prime member. Another example: why do you think it is so difficult to switch from an iPhone to an Android phone? Once they have you, you are more likely to buy every other product from them. You become a very sticky customer, and monopoly becomes a far easier game to play.
If you want to see how this plays out in real life, let’s take a look at a predatory pricing case that was brought against Amazon. Amazon broke into the e-book market by pricing best-sellers (Harry Potter, Fifty Shades of Grey, etc.) at a loss. They undercut everybody else on price and took over the best-seller market. Unsurprisingly, some book retailers took offense to this and sued Amazon for predatory pricing. Ultimately, the courts sided with Amazon because, in total, its e-book business was profitable. Amazon took a loss on best-sellers but made up for it by selling other e-books at a profit. According to the courts, this wasn’t predatory pricing; it was loss leading. Loss leading is an acceptable strategy if you’re primarily looking through a price theory lens. Amazon was free to continue the practice.
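Here’s roughly the arithmetic the courts accepted, with invented numbers: as long as the e-book business as a whole covers its costs, below-cost pricing on one slice of it reads as loss leading rather than predation under a price-theory lens.

```python
# Invented numbers illustrating the loss-leading defense: the
# best-seller segment is priced below cost, but the e-book line
# as a whole is profitable, so a price-theory analysis sees no predation.

segments = {
    # name: (units sold, price, cost per unit)
    "best-sellers": (1_000_000, 9.99, 13.00),   # sold below cost
    "back-catalog": (3_000_000, 9.99, 6.00),    # sold at a healthy margin
}

total_profit = 0.0
for name, (units, price, cost) in segments.items():
    margin = (price - cost) * units
    total_profit += margin
    print(f"{name:>12}: {'loss' if margin < 0 else 'profit'} of ${abs(margin):,.0f}")

print(f"business line overall: ${total_profit:,.0f}")
# Overall profit is positive, so the below-cost segment is treated as
# a loss leader -- even though it was the lever that captured the market.
```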
Consider what this means in real life, however. By taking over the best-seller market, Amazon was able to lure millions of readers into its e-book and Kindle ecosystem. These readers bought a Kindle and signed up for Amazon Prime, probably because they were offered a discount of some sort. Now Amazon had millions of customers inside its ecosystem. Those customers would buy future e-books through Amazon because they owned a Kindle. Outside of books, they would buy everything else through Amazon as well, because it’s just easier to do it that way once you’re inside the system.
By selling books at a loss, Amazon was able to bring millions of customers into its ecosystem and leverage success in one market into success in other, unrelated markets. Today, roughly 44% of everything sold online in the US goes through Amazon (even I utilize Amazon’s affiliate marketing links. Don’t worry, the irony isn’t lost on me). Amazon has since spread into groceries and cloud computing, and it’s even trying to break into pharmaceuticals and health insurance (though, ironically, it’s being blocked by already existing monopolies in those industries). Amazon is such a large and complex company that losses in any individual market can be shuffled around and recouped by some other aspect of its business in not altogether obvious ways. One does not simply compete against Amazon. It’s akin to competing against a nation-state.
So far I’ve only focused on the predatory pricing aspect of monopoly. But the topic is bigger than that. I haven’t even touched upon vertical integration, where large companies can purchase companies up and down the supply chain and use that leverage to block competitors out of the market in ways that aren’t always obvious. Luxottica, the Italian eyewear company, is an example of this. Luxottica not only manufactures glasses, but it owns retail outlets like Sunglass Hut and the eyewear departments at Target and Sears. It even owns large vision insurance firms. It was able to buy up Oakley, another sunglasses company, by dropping Oakley’s products from its retail outlets, crashing the price of Oakley’s shares, and then buying the company at a discount. Afterwards, Luxottica reintroduced Oakley to its retail outlets, taking the profits for itself. This is another example of a type of competition which does not benefit anybody except the shareholders of one individual company.
If you’re interested in reading more about modern-day monopoly power, you can read this piece by Lina Khan. It’s a long read, but it really goes into detail about the problems of monopoly and oligopoly in our present-day economy.
There’s another self-perpetuating aspect to behemoth companies such as Amazon or Luxottica. Thanks to their mammoth size, large companies are able to do things like this: Amazon issued bonds at 4% to buy Whole Foods as part of its effort to break into the grocery market. 4% is nothing. The inflation target is 2%. It’s basically free money that corporations can use to buy back their own stock (to artificially inflate stock prices and keep their shareholders happy… why do you think the Dow Jones is breaking records right now?) and/or merge into even larger entities in order to increase their ability to issue cheap bonds. Large companies can do this because many of them have become so dominant and competitive that there’s very little risk of them defaulting. This creates a self-perpetuating cycle of growth for winners like Amazon, at the expense of everyone else. The bigger a company gets, the cheaper it becomes for it to borrow money. It can then use that cheap money to grow even larger, thus continuing the cycle of growth. This ability to raise money cheaply has come about partly thanks to the replacement of bank loans with other forms of financing, in a process known as disintermediation. Disintermediation has transformed the very restrained banking industry of the 40s, 50s, and 60s, which restricted cross-border capital flows and speculation, into the deep capital markets that we’re familiar with today.
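A rough way to see why 4% is “basically free money”: with inflation targeted at 2%, the real cost of that debt is only about 2% a year. The sketch below uses the 4% and 2% figures from this paragraph; the assumed return on what the borrowed cash buys is purely illustrative.

```python
# Rough real-cost arithmetic for cheap corporate debt. The 4% coupon
# and 2% inflation target come from the paragraph above; everything
# else is a toy assumption.

nominal_rate = 0.04    # coupon on the bonds
inflation = 0.02       # Fed's inflation target

real_rate = (1 + nominal_rate) / (1 + inflation) - 1
print(f"real cost of borrowing: {real_rate:.2%} per year")   # ~1.96%

# If the acquisition (or the buyback) returns even a few percent in
# real terms, borrowing is close to free -- and the bigger and safer
# the borrower looks, the lower that nominal rate gets, which is the
# self-reinforcing loop described above.
assumed_real_return = 0.05   # illustrative return on whatever the cash buys
print(f"real spread captured: {assumed_real_return - real_rate:.2%} per year")
```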
When Ronald Reagan was elected president in 1980, inflation was still roaring at nearly 15%. Paul Volcker, who had been appointed head of the Federal Reserve by Jimmy Carter in 1979 and was kept on by Reagan, set out to deal with the damaging inflation by jacking up interest rates to nearly 20%. Interest rates at that level encouraged people to save or invest their money instead of spending it. This strategy was ultimately successful at stamping out inflation, but those sky-high interest rates had some unintended consequences. In 1933, as a reaction to the collapse of a large portion of the US banking system, the Glass-Steagall Act was passed by Congress. One of its provisions put a cap on the interest rates that banks were allowed to offer on deposits. This regulation (Regulation Q) was supposed to protect against banking speculation. In the lead-up to the Great Depression, banks would compete for deposits by raising interest rates (the rate at which your deposits would grow). In order to pay for those rising interest rates, they were forced to take on riskier assets. This sort of risky speculation opened them up to a collapse. By putting a cap on that rate, this speculation would hopefully be prevented.
Unfortunately, the cap didn’t quite do the trick. As interest rates spiked in the 1980s, people found a way around the banking regulations by putting their money elsewhere. With caps on the rates they could offer, banks were unable to compete with stocks, bonds, and money market funds. Unlike banks, these funds had no caps on the yields they could offer, so money market funds were able to pass along the higher interest rates on US Treasury Bills and give their investors a better rate of return. As a result, money flooded out of banks and into the capital markets. The cap on bank interest was phased out over the course of the 1980s, but the damage was already done; this diversion of money away from banks and into the open market had created a profitable new industry.
As the capital markets proved more profitable than bank deposits, large pension funds transitioned away from banks in search of higher yields, adding huge amounts of liquidity to the capital markets as a result. Corporations, instead of going to the banks for loans, could now take advantage of these new pools of cash. Issuing corporate bonds turned out to be a much less expensive way to raise money than traditional bank loans. The proportion of corporate debt issued through bonds rather than bank loans skyrocketed in the 1990s and 2000s, until debt securities came to make up around 70% of the average corporation’s debt (up from around 30% in the 1950s). This convenient new way to raise money, along with the neutering of anti-trust legislation, is a major reason ownership of our economy has consolidated into fewer and fewer companies.
In the 1980s, with new technology and massive profits to be found in the financial industry, Great Depression-era regulations started to become outdated and unpopular. Glass-Steagall was dismantled piece by piece until it became irrelevant. Congress loosened the reins on banks, allowing them to further involve themselves in the capital markets. As interest rates from the Federal Reserve came down throughout the 1990s and 2000s, banks and investment funds had to find new and creative ways to spin up a profit. Banks began to utilize a technique known as securitization. They bundled contractual debt obligations such as mortgages or car loans into a single asset, called a Collateralized Debt Obligation (CDO), and then sold these CDOs to other banks and investors for a profit. The CDO market really took off in the 1990s. Our banking system operates on what we call fractional reserves: banks must keep a certain amount of cash in reserve in proportion to the money they’ve loaned out. For example, if you have loaned out $100, you might be required to keep at least $40 of cash in your vault. Through the securitization process, banks were able to transform their loans into assets, sell those assets, and then use the cash received as fractional reserve in order to justify loaning out even more money. As time went on and interest rates continued to fall, banks were forced to compete more fiercely over narrowing profit margins. The best way to make money in an environment of shrinking yields is to increase your leverage by loaning out even more money. Loaning out more money required keeping more cash on hand, and the best way to do that was to bundle debt and sell it for cash. The very speculation that Glass-Steagall was enacted to prevent began to re-emerge as a problem.
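Here’s a toy sketch of that mechanism, using the illustrative 40% ratio from the paragraph above (real reserve and capital requirements are more complicated than this): selling bundled loans converts them back into cash, and that cash supports a fresh round of lending.

```python
# Toy model of how securitization expands lending capacity. The 40%
# ratio is the illustrative figure from the paragraph above; actual
# reserve and capital requirements work differently in detail.

RESERVE_RATIO = 0.40   # cash required per dollar of outstanding loans

def max_new_loans(cash_on_hand, outstanding_loans):
    """Additional lending the bank can support with its current cash."""
    capacity = cash_on_hand / RESERVE_RATIO      # total loans the cash supports
    return max(capacity - outstanding_loans, 0.0)

cash, loans = 40.0, 100.0          # at the limit: 40 = 0.4 * 100
print("headroom before securitizing:", max_new_loans(cash, loans))   # 0

# Bundle the $100 of loans into a CDO and sell it for cash: the loans
# leave the books, the cash comes in, and the bank can lend all over again.
cash += 100.0
loans -= 100.0
print("headroom after securitizing:", max_new_loans(cash, loans))    # 350
```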
Eventually, as banks devised new ways to securitize and move liabilities off of their books, they came up with the mortgage-backed security (MBS). You probably recognize this term from post-mortem descriptions of the financial crisis. An MBS is essentially a bundle of mortgages. It was believed that MBSs were very stable since they were carefully constructed to be diverse. Each MBS was composed of mortgages for differently valued houses in different cities. It was considered impossible that every type of home in every area of the country could lose value at the same time. Viewing MBSs as a sure thing, banks loaded up on them. Demand for mortgages on the secondary market was strong, and mortgage companies were making money hand over fist for every mortgage they signed. Eventually, the profit-seeking motive led to the issuing of subprime loans to people who had no business taking out a mortgage. The demand for homes was no longer coming from the people buying the homes. Instead, the demand was coming from the banks who were buying up mortgages on the secondary market in order to bundle them into MBSs. Just another example of the “efficient market” in action…
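Here’s a toy version of the diversification argument and where it breaks down, with all probabilities invented: if mortgage defaults are independent, the chance of a big chunk of the bundle going bad at once is essentially zero, but if the loans share a common driver like a nationwide housing downturn, the “impossible” outcome becomes routine.

```python
# Toy illustration of why MBS bundles looked safe: with independent
# defaults the bundle almost never takes a big hit, but a shared shock
# (a national housing downturn) makes defaults move together.
# All probabilities are invented for illustration.
import random

random.seed(0)
N_LOANS, BASE_DEFAULT = 1000, 0.02        # loans per bundle, normal default rate
CRISIS_PROB, CRISIS_DEFAULT = 0.05, 0.30  # chance and severity of a shared shock

def bundle_loss(correlated):
    """Fraction of loans in one bundle that default in one simulated year."""
    crisis = correlated and random.random() < CRISIS_PROB
    p = CRISIS_DEFAULT if crisis else BASE_DEFAULT
    return sum(random.random() < p for _ in range(N_LOANS)) / N_LOANS

def prob_big_loss(correlated, trials=2000, threshold=0.10):
    hits = sum(bundle_loss(correlated) > threshold for _ in range(trials))
    return hits / trials

print("P(>10% of bundle defaults), independent loans :", prob_big_loss(False))
print("P(>10% of bundle defaults), shared-shock world:", prob_big_loss(True))
```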
It was this phenomenon which created the housing bubble. That bubble eventually burst and led to the Great Recession. That topic is a little outside of the scope of this article, but here’s a key takeaway: All of the factors which led to the greatest economic recession of our lifetimes were market driven. In response to the market driven crisis in the private banking sector, our government bailed out the bankers by injecting trillions of dollars into their failing companies. Today, in order to deal with this astronomical debt, our elected officials are trying to cut away at what remains of our social safety net.
Let that sink in for a minute. Our government deregulated the financial industry in pursuit of a more free market. Then, that free market created an asset bubble which came crashing down to Earth. Our government bailed out the companies which collapsed and then blamed the resulting debt on poor people and social welfare programs. Now, so-called “deficit hawks” are doing everything in their power to dismantle our social safety net while continuing to pour money into the military at an outrageous pace.
Make of that what you will.
So here we are: an inflationary episode in the 1970s led to a political and economic revolution in the 1980s, which created the conditions under which we have seen power and capital consolidate to absurd levels. Here in St. Louis, we have become very familiar with the loss of our corporate headquarters: Anheuser-Busch is gone. Sigma-Aldrich is gone. St. Louis Bread Co. is gone. Monsanto is in the process of being bought. Express Scripts is in the process of being bought. Centene probably isn’t too far behind. This is a direct result of the forces I outlined in this article. When these companies leave, we lose administrative brainpower and leadership. We lose part of our history and part of our future. We lose jobs. We lose population. Our neighborhoods fill with vacancies, property values drop, and crime goes up.
Currently, the only strategy being pursued at the state level is a race-to-the-bottom continuation of the same policies of the last 40 years. Locally, things aren’t completely hopeless, but our fragmented government and fierce cultural divides are making things more difficult than they need to be.
Monopoly and capital markets make for an interesting topic. On one hand, it’s good that we have innovative companies with the ability to build huge piles of cash to invest in new ideas. This can be very advantageous for our country. Amazon will soon enter the healthcare market and could very well make it less expensive.
On the other hand, should we be cheering as a few multi-national corporations begin to match the size and power typically reserved for nation-states? Apple’s market value is larger than the economies of most countries. It’s also an opaque institution dedicated only to the profit of its shareholders. Say what you will about the US Government, but it is, for the most part, a transparent and democratically constructed institution which must serve the needs of everybody. We should think twice before cynically dismantling it and relinquishing power to large corporations. We can’t forget that, for the most part, human history is the history of empire. The market naturally consolidates power into the hands of very few. It’s up to us to organize and prevent that from happening. At the end of the day, our freedom depends on it.
RECOMMENDED READING
Can Democracy Survive Global Capitalism?
By: Robert Kuttner
Austerity: The History of a Dangerous Idea
By: Mark Blyth
“All of the factors which led to the greatest economic recession of our lifetimes were market driven.”
Are government restrictions market driven?
The Community Reinvestment Act required certain lending in areas where people were less likely to repay loans. The MBSs were created to take those higher-risk loans that banks would have difficulty reselling and bundle them up to obfuscate the risk. As they saw the profitability of this approach, these organizations threw jet fuel on it without understanding the risk to themselves. I would not say they carefully constructed the MBSs. They simply took all their crap loans and put them in a box with a triple-A rating.
We are certainly not better off as a country as a result of the bailouts. There was no consequence to the banks’ poor choices and willful frauds. None. Whatever domino falls next will certainly be larger and more painful than what we went through in 2008/9. There’s not even any discussion of how to prevent banks from becoming “too big to fail”, which will guarantee future bailouts.
If there isn’t any discussion of how to structure banking to not be “too big to fail”, when the banks are such an easy target to rally against… how much harder will it be to reduce the size and stupidity of government and right-size the bloated bureaucracy that we currently find ourselves governed by? At the city level, we can’t even get rid of a handful of aldermen without legislative chicanery and do-overs.
I hate corporatism and big government equally, but sadly they go hand in glove. You can’t attribute the failures of these two entities to a failure of the market, generally.