Christmas did not offer much good cheer to the world’s bankers, who have received a sustained kicking since the financial crisis erupted in 2008.
In the latest blow, Switzerland announced that it would hold a referendum on a radical proposal that would strip commercial banks of the ability to create money, depriving them of a great deal of their profit-making capabilities. If the Swiss proposal catches on around the world, it could shred core business assumptions that have underpinned the banking model over the past three centuries.
From Babylon to central bank
The earliest banks we know of, in ancient Babylon, were temples that doubled as repositories where one could store wealth. At some point, the guardians of the stored treasure realized they could put this accumulated wealth to work, and banks accordingly began to lend capital. Borrowers would pay interest on what they borrowed, and this interest would ultimately find its way back to the lenders after the banks had taken a cut. The banks became trusted intermediaries that brought lender and borrower together and ensured neither would be cheated. Paper money emerged after people found it was easier to buy things using deposit slips from their bank than carrying gold around.
The next evolution happened when bankers realized that since depositors almost never simultaneously withdrew all their funds, banks could lend more capital than had been deposited. This allowed banks to “create” money in the sense that bankers could issue loans not necessarily backed up by hard deposits. Creating revenue in this way proved lucrative, but it brought banks into conflict with rulers, who were notionally in charge of the state’s money supply and any gains to be made from it. In England, whose financial system is in many ways the progenitor of today’s global system, this battle was played out between banker and ruler in the 16th and 17th centuries.
Ultimately, in 1666 King Charles II — well aware of the limits of his own power thanks to the beheading of his father 17 years earlier — put control of the money supply into private hands. The privatization of the money creation process gave birth to the system we use today, in which private or commercial bank loans are responsible for 97% of the money circulating in the modern global economic system. In another change, 28 years after Charles II’s reform, an enterprising group of businessmen offered the government cheaper loans in exchange for certain privileges, such as a monopoly over the printing of physical currency, and so the Bank of England was born.
The benefits of the new system proved immediately apparent. Interest rates on government borrowing dropped from 10%-14% in the 1690s to 5%-6% in the early 1700s. This allowed Britain a great deal of leeway when it came to military spending, which it soon put to use. But the privatization of money creation also came with drawbacks, namely the economic cycle of boom and bust. Leaving money-lending and money-creating decisions up to banks resulted in a system of extremes: bankers inflated speculative bubbles with vast quantities of loans and money when times were good, only to refuse to lend, in effect destroying money, once the bubble burst.
This led to liquidity crises, with the South Sea Bubble of 1720 providing early evidence of this mechanism kicking into action. The fact that banks were lending more money than they could back up with capital also left them exposed to bank runs whenever the public lost confidence in them. The reserve requirement, which obliges banks to back a fraction of their deposits with safer assets such as government debt or central bank money, is an attempt to keep this threat at bay. But it is an inherent characteristic of so-called fractional reserve banking that the risk of bank runs is ultimately inescapable.
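The arithmetic behind fractional reserve money creation can be sketched in a few lines of Python. This is an illustrative textbook model, not anything from the article: each round of lending keeps a fraction of deposits in reserve and lends out the rest, and the resulting geometric series caps total deposits at the initial deposit divided by the reserve ratio.

```python
def total_money_created(initial_deposit: float, reserve_ratio: float) -> float:
    """Upper bound on deposits arising from one deposit under fractional reserves.

    Each lending round holds back `reserve_ratio` of new deposits as reserves
    and lends out the remainder, which is redeposited elsewhere. Summing the
    geometric series d + d(1-r) + d(1-r)^2 + ... gives d / r.
    """
    if not 0 < reserve_ratio <= 1:
        raise ValueError("reserve ratio must be in (0, 1]")
    return initial_deposit / reserve_ratio

# A $100 deposit at a 10% reserve ratio can support up to $1,000 of deposits.
print(total_money_created(100, 0.10))
# Under 100% reserves, the proposed reform, no new money is created at all.
print(total_money_created(100, 1.0))
```

The second call shows why full-reserve proponents see the reform as eliminating bank-created money: with a reserve ratio of 1, the multiplier collapses to exactly the original deposit.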
Britain, and indeed all the other countries that came to adopt the system, grew accustomed to a regular waxing and waning of the money supply and to the consequent up-and-down economy. There were ways to palliate this cycle, with the Bank of England slowly developing into the stabilizing force it is today. In times of crisis, the Bank of England would lower interest rates and flood the market with liquidity, bailing out any solvent but illiquid banks to keep the system functioning, thus smoothing the money supply’s wilder fluctuations.
As British and then American influence spread, so did banks’ power, and capital flowed ever more freely around the world as domestic deposits were used to finance international projects. The system was heading for a fall, however, when World War I created great economic imbalances between Europe and the United States.
In the 1920s, the Federal Reserve kept interest rates artificially low, partly to help Britain restore the pound’s prewar gold parity, but this led to abundant speculative U.S. capital flooding across the Atlantic, particularly into Germany. The ensuing giant bubble finally popped in 1929, leading to the dramatic liquidity shortages of the Great Depression and creating the circumstances that culminated in World War II. The experience led to the partial reining in of banks, with the Glass-Steagall legislation in the United States in the early 1930s limiting their ability to take part in speculative investments.
Time has a way of chipping away at such precautions, however, and the banks gradually escaped their shackles and capital came to flow freely around the world once again. More countries became accustomed to the ebb and flow of bubble and crisis, though these crises tended to be more regional in scope (e.g., Latin America, Asia, Scandinavia).
When global crisis finally struck again in 2008, it differed from 1929 in that there was no world war to blame for the global economic imbalances. Instead, after an extended period of the banks having had things pretty much their own way, it was a giant version of the regular crises inherent in the system. This led to the thinking that it is the banks, and indeed the system they created around themselves, that need changing. In the eight years since 2008, layer upon layer of 1933-style regulation and restriction have thus been heaped on the banking sector.
A radical reform
It is into this atmosphere that the idea of stripping banks of their money-creating abilities has gained currency (regained, in fact, since calls for it date back at least to the 1930s). According to its proponents, the way to root out the instability inherent to the system is to require banks to back their loans 100% with reserves. This essentially would be a step back to the point where banks would again function as conduits rather than creators of capital.
Under the reformed system the creation of new money would instead be the prerogative of the central bank and the government. These national institutions would in theory be motivated by the needs of the state rather than by short-term profit and would keep the money supply growing at a fixed rate, doing away with the wild fluctuations of the credit cycle. (One challenge to overcome would be politicians attempting to hijack the money supply for short-term political gain.)
Proponents of such a system point to many expected benefits: Bank runs would be eliminated, the proceeds of money creation would go to the government and, thus, the taxpayer rather than to the banking elite, government debt would be a thing of the past, and private debt would be greatly reduced. (Indeed, the predominance of debt in today’s world is partly a product of it being required in the money creation process.)
But there also would be great risks involved, the main one being the fear of the evil unknown. Though the economic instabilities of the past 300 years appear to have resulted largely from the fractional reserve system, was it also responsible for the relatively breakneck growth over the same period? Moreover, the changeover from one system to the other would be extremely tricky, requiring vast quantities of central bank money-printing and debt buybacks. That would be a recipe for an extremely fraught period carrying immense risks of mismanagement. In truth, another full-blown financial crisis may have to take place before such a changeover could be made at the global level.
But the theoretical upsides are great, as are frustrations with the current system, and the idea has begun to gather momentum. In 2012, the International Monetary Fund published an influential research paper laying out the case for the proposed system, and in 2015 the Icelandic government commissioned a report on the prospect of undertaking the changes. In Switzerland, a law requiring a referendum to take place should 100,000 signatures be gathered has set the country on a course to possibly being first to undertake the great experiment.
Strikingly, the revolution is being considered at both ends of the spectrum: Iceland has lately proved among the most financially adventurous players on the global economic scene, while Switzerland has long been one of the most conservative. Considering the risks involved, adoption in a smaller economy such as Iceland or Switzerland would be a useful test case from a global perspective. It would limit the cost of failure to the global economy while helping establish the best way of adopting the changes should the reforms actually work.
For banks, the prospect is of course nothing less than a nightmare scenario, especially coming on top of all of their existing woes. These have included not only increased regulation but also the threat from a disruptive new technology undercutting their basic model in the form of Bitcoin, the electronic currency that emerged almost exactly as the financial crisis struck. While Bitcoin has suffered its own wild fluctuations in the eight years since its birth, its underlying technology, blockchain, has truly historic potential.
The architects appear to have created an electronic system in which both parties in a transaction can act with confidence without the need for an intermediary, though there is some added risk for the payer, since reversing transactions is more difficult than in traditional banking.
The world’s banks therefore face both the prospect of losing their money-creation privileges and a potential usurper threatening their long-established role as the middleman through which all capital must flow. As 2015 has faded into 2016, it is hard to think of a time in the past 300 years when the banker’s position in society has been more at risk.
Mark Fleming-Williams follows political economies, trade and financial trends around the world for Stratfor. He joined the company with more than a decade of experience working in the financial sector in the City of London.
This article was published with the permission of Stratfor, the Austin, Texas-based geopolitical-intelligence firm.