Rethinking Covid policy

The following are not controversial, I think:

  1. The goal a year ago was to vaccinate everyone, thereby stamping out Covid.
  2. We know this failed:  80% of California is vaccinated;   many or most of the remainder have already had Covid;  yet omicron swept unhindered through the vaxxed population.  Therefore it’s patently obvious, and research confirms, that the vaccine reduces severity, but does not prevent infection or spread. 
  3. Implication:  we need a new Covid policy, designed around an endemic virus — a policy  we can sustain for decades or centuries.
  4. Public health policy for centuries, except for the past 2 years, has been to (1) identify who is at risk;  (2) protect, treat and isolate that risk group;  (3) let everyone else go about their business.
  5. We now know a lot about the risk group:  elderly, obese, diabetic, immunocompromised.  No one else is at material risk.  Children in particular are at almost no risk, with fewer than 1000 deaths to date among 70 million Americans under age 18.
  6. We now know that for those in the high-risk group, vaccine + booster is highly effective at managing severity.  So we can’t prevent them from getting it, but we can prevent most of the consequences.
  7. Focusing policy on just the high-risk group, which is <15% of the general population, would free up time, attention and money for treatment.  Treatment appears to have been somewhat neglected in the push for prevention, because prevention looked promising a year ago.  Now that prevention is off the table, we need to rethink.

Conclusions:  we learned three things in the past year.  First, we cannot control the circulation of Covid.  Second, we know exactly who is and isn’t at risk.  Third, we know how to manage Covid with minimal social impact:  vaccinate, boost and aggressively treat just the high-risk group, and let everyone else go about their business. 

Therefore, that is what we should do.

Questioning universal vaccination

Given what we know so far (year end 2021), the Covid vaccine appears to make sense for anyone at significant risk from Covid.  If you’re over 50, or diabetic, or obese, it’s a no-brainer.

However, the current public health policy for Covid — essentially “vaccinate everyone,” including those at no direct risk of harm from Covid — appears unfounded and even irrational, given what we now know about Covid and the mRNA vaccines.

Rational policy must consider both benefit and cost.  Let’s consider the benefit side first, and then the cost.

Benefits of universal vaccination

Public explanations for the benefit of universal vaccination have drifted over time, but appear to be some combination of:

  1. Permanently stamp out the virus by rendering everyone immune to it.
  2. Prevent transmission from the healthy to the vulnerable (old and immuno-compromised).
  3. Prevent a Covid surge from overwhelming hospitals, which would crowd out treatment for other ailments.

For universal vaccination to achieve the policy benefits above, what assumptions must be true?  Let’s examine each in turn.

What must be true for universal vaccination to successfully stamp out the virus?

  • Covid must not mutate faster than vaccines can be updated and distributed.
  • Vaccine must stay effective long enough for the entire world to be vaccinated within its window of protection.
  • Vaccine must prevent transmission of Covid.

None of the above are true;  therefore, policy benefit #1, stamping out the virus, is indefensible.

What must be true for universal vaccination to prevent transmission from healthy people to vulnerable ones?

  • Vaccine must reliably prevent transmission of Covid from healthy people to vulnerable ones.
  • “Breakthrough” infections must be extremely rare.
  • Covid must not mutate faster than vaccines can be updated and distributed.
  • Vaccine must remain effective for a predictable amount of time.

None of the above are true.  Therefore, policy benefit #2 is also indefensible.

What must be true for universal vaccination to prevent hospital surges?

  • Hospitals must be observed to have been widely overwhelmed by recent Covid variants.
  • New variants of Covid (omicron) must be severe enough to overwhelm hospitals.
  • There must still be many people who have had neither Covid nor the vaccine.
  • “Breakthrough” infections must be extremely rare.

Again, none of the above are true.  A few hospitals were overwhelmed by the delta variant, but they were too few to justify influencing national policy.  Therefore, this benefit, too, is unfounded.

Well over half the US population has already had either Covid or the vaccine.  Based on seroprevalence, it’s estimated that over half the US population has already had Covid, and the unvaccinated obviously fell mainly into that category.  Of the remainder who have not had Covid, nearly all are vaccinated.

This would argue that the risk of serious complications is already vastly lower than a year ago.  In addition, the current “omicron” panic in particular appears unfounded, as the first known mass infection, in South Africa, resulted in few hospitalizations, and ended in a few weeks.

Taken together, the above suggests that the asserted benefits of universal vaccination are largely unfounded.

Let’s now examine the costs.

Costs of universal vaccination

There is no official recognition that universal Covid vaccination carries any cost or risk.  That alone should raise a red flag about the policy’s rationality, because every policy decision carries both benefits and costs.

Here are some costs and risks of universal vaccination.

  1. For healthy individuals under age 18, the known (so far) risk of complications from the vaccine (primarily heart issues in lean males) is higher than the known risk of complications from Covid.
  2. The long-term risks of this particular vaccine are unknown and unknowable, because two of its functional components had never been tested at scale prior to spring of 2020:  mRNA delivery in general, and spike protein production specifically.

22% of the US population, about 73 million people, are under 18.  Only about 600 of them have been hospitalized for Covid, nearly all of whom had prior health issues.  The remainder —  99.999% of those under 18 — are at no known risk from Covid, and actually face increased risk, both known and unknowable, from the vaccine.
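The arithmetic behind those figures (which are the essay’s own counts, not independently verified) is easy to check:

```python
# Back-of-the-envelope check using the essay's own figures.
under_18 = 73_000_000   # roughly 22% of the US population
hospitalized = 600      # Covid hospitalizations among under-18s cited above

hospitalization_rate = hospitalized / under_18
share_unaffected = 1 - hospitalization_rate

print(f"hospitalization rate: {hospitalization_rate:.4%}")   # about 0.0008%
print(f"share never hospitalized: {share_unaffected:.3%}")   # about 99.999%
```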


Official vaccination policy fails to acknowledge the following:

  • For the reasons itemized above, universal vaccination offers little marginal benefit beyond just vaccinating those at immediate risk from Covid:  the old, diabetic, obese and immuno-compromised.
  • Vaccinating people who are not at risk from Covid exposes them to both known and unknown risks, with little or no benefit to those most at risk from Covid infection.

It’s hard to escape the conclusion that current policy is rolling the dice with America’s healthiest and youngest people — in other words, its future — in return for little obvious benefit.

So obviously we should not be mass-vaccinating anyone under 18.  What should be the age cutoff for mass vaccination?  Probably higher than that.  Possibly as high as 35.  At that level, I predict you will see no loss of benefit, and certainly lower risk.


Enabling small software teams

A fundamental way the software industry differs from other industries is that, by leveraging open source, someone occasionally builds a software stack that allows a single person, or a small team with limited knowledge, to build something commercially useful.

Here are two examples.

  • Ruby on Rails in the mid-2000s looked like magic.  Suddenly a single person, knowing a bit of Ruby but no SQL or HTML, could build a functioning database-backed website in a few hours or even minutes.  It did this through code generation and some simplifying assumptions about what the creator wanted to build.  Those assumptions limited performance and flexibility, but they also enabled skeleton teams with limited programming chops to build large businesses by launching on Rails.  Canonical example: Twitter.
  • In the 2010s, the best analog might be Node.js and similar JavaScript engines.  Suddenly you could build a whole application, front to back, including sophisticated client-side behavior, knowing only a single language.

The benefit of small teams is minimized coordination overhead, which allows extreme agility and thereby faster innovation.  But the typical barrier to small teams is the fixed cost of learning multiple software stacks;  it’s rare to find one or two people who understand disparate enabling tech well enough to build a complete commercial solution.

The solution to that is to make a few simplifying assumptions about what people want to build, and then consolidate everything needed to build those things into a single language or framework.

Over time, the magic of these consolidated platforms ebbs.  To build a commercial Rails app today, you need to know a whole pile of non-Ruby stuff.  Presumably it’s the same with Node.js, though I haven’t worked with it.

What will the next consolidated platform be?


Implementation challenges for a carbon tax

Let’s stipulate that carbon exhaust generates a negative externality in the form of climate change.  Further stipulate that this externality is large enough to justify limiting carbon exhaust through a carbon tax and/or cap & trade.

Given those presumptions, how would you implement them?  Here are some of the challenges.

  1. How to estimate the value of carbon emissions.  Cap and trade is supposed to deal with this through market pricing.  But that presumes markets have enough information to set a rational price.  Markets set prices through individual agents making specific cost/benefit calculations. The benefit of carbon reduction appears very large — but how large, specifically?  The specific benefit of carbon reduction, measured in dollars per kg, is not only unknown, but unknowable.  This remains true even when we all agree the number is large.  Markets cannot solve this problem.
  2. Prisoner’s dilemma.  All countries are better off if they all follow the rules.  But each individual country has an incentive to cheat, because carbon-dirty energy (coal) is, for now, cheaper than clean energy.
  3. Collusion.  Carbon buyers and sellers each have an incentive to form cartels.
  4. No global enforcement mechanism.  What do you do if a country simply ignores its carbon obligation, or does something like #2 or #3 above?  Since 1945, the United States has been the backstop for global multilateral agreements, backed up by force if necessary, whether in the form of trade, financial or military intervention.  As US GDP as a % of world GDP declines, and as the US population becomes less willing to intervene, this situation cannot persist indefinitely.  There is nothing to replace it.
  5. Incentive to cheat varies by country, due to differences by country in energy intensity, energy cost, energy mix and so on.  Markets are supposed to take care of this problem through cap & trade, but this leads back to the problem in #1 above.
  6. Costs are borne unevenly within countries.  In the most extreme case, a despot in a poor country might sell all his carbon credits and keep all the proceeds, reclining in his solar-powered palace as his subjects shiver in unheated hovels outside.  Actual countries won’t approach this extreme, but it’s easy to see that the greater the concentration of power, the more the benefits of cap & trade will go to leadership, while the costs are borne by the masses.  Therefore, cap & trade probably increases income inequality in non-democratic countries.
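On point #1, here is a small illustration of why the per-kg benefit number is so slippery: any dollar value hinges on a discount rate applied to damages spread over centuries, and reasonable discount rates yield wildly different prices.  The damage stream below is a hypothetical placeholder, not an estimate:

```python
# Hypothetical: each tonne of CO2 causes $1/year of damage for 200 years.
# The "fair" price of emitting it is the present value of that damage
# stream, which depends enormously on the discount rate chosen.
def present_value(annual_damage, years, rate):
    return sum(annual_damage / (1 + rate) ** t for t in range(1, years + 1))

low_rate_price = present_value(1.0, 200, 0.01)   # ~$86 per tonne
high_rate_price = present_value(1.0, 200, 0.07)  # ~$14 per tonne

# Same physical damage, roughly a 6x difference in implied price,
# driven entirely by one unobservable parameter.
```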

Note what this essay does NOT say.  It does NOT say that carbon limitations are a bad idea.  It does NOT deny climate change due to carbon emissions.  It solely concerns implementation challenges, which appear huge.


Skilled work: the missing link

Politicians and even economists refer to jobs as if they were a fungible commodity. “America needs jobs.” “We’re bringing jobs back to America.”

But jobs are not fungible. Instead, an ever-increasing share of employment requires specialized skills. And to an ever-increasing degree, only these specialized skills can sustain a middle class.

This misapprehension about the nature of employment helps fuel the mistaken idea that you can create prosperity through the creation of undifferentiated low-skill “jobs” — which, in turn, contributes to the misapprehension that if you just cut interest rates, prosperity will follow. Which leads to the policy error alluded to in a prior post.

Skill development is the missing link in American employment policy. It’s missing from colleges and universities. It’s missing from public schools. It’s missing even from many technical and trade schools. It’s not prioritized by economists or politicians.

This is mystifying, because obviously, in the long run, prosperity derives from productivity.

In the old days, the industrial era, productivity was fueled mainly by deploying natural resources: if you add more electricity and motors to a production line, unskilled assembly workers go faster, with no change in skills.

It’s different now. Everything has already been more or less optimized for automation through energy and motion.  To make that production line go still faster, you can’t just add more energy.  Instead, you may need to understand statistical analysis, or programming. Further productivity gains require skills.

US labor policy will continue to flounder until this becomes a priority.

Spanish colonial defaults

You’re probably aware Argentina has defaulted on its sovereign debt nine times since its independence in 1816.

You probably also know defaults are historically widespread across Latin America.  At least five former Spanish colonies have defaulted on their sovereign debts nine or more times each:  Argentina of course, and also Venezuela, Ecuador, Costa Rica and Uruguay.

What you may not know is the colonial backstory.  Argentina’s former colonial overlord, Spain, has defaulted on its sovereign debt at least 22 times that I could find:  1557, 1575, 1596, 1607, 1627, 1647, 1652, 1662, and 1666, plus another six times in the 1700s, and another seven times in the 1800s. (source, source)

Are Latin American defaults a cultural artifact of Spanish occupation?  Well, let’s compare to England.  Turns out that six of the ten countries that have never defaulted are England and former English colonies:  Canada, Malaysia, Mauritius, New Zealand, Singapore.  (The United States is an edge case, having paid interest late a couple of times, and having reneged on gold exchangeability of US bonds in the 1930s.)

When it comes to paying debts, there appears to have been a bug in the Spanish cultural program that was passed to its colonies.  What might that be?

It may be related to Spain’s essentially extractive view of colonization:  find gold and silver, ship it back to Madrid, and spend it on extending empire and Catholicism.  When that spending was not enough, Spain borrowed, and used precious metal extraction to pay the interest.  When that still was not enough, Spain defaulted and started over again.  And again.

Meanwhile, France and England developed industries to sell many of the things Spain was buying.  Spain’s extractive approach brought more long-run benefit to Spain’s rivals than to Spain itself.

Thus Spain was a victim of the Dutch Disease long before it afflicted the Dutch.  Instead of oil & gas, it was gold & silver that hollowed out the Spanish economy.  And Spain’s “bad choices” may have led to the languishing not only of Spain, but also of certain of its former colonies, by transmitting to them a culture of spending rather than investment.

How the IT revolution leads to central bank policy errors

The Industrial Revolution reorganized the economy to replace labor with capital and/or “land” (resources). Progress was easy to measure. In contrast, the Information Revolution reduces the need for capital and resources, through much more efficient allocation. This is hard to measure, leading to policy errors.

Industrial progress was easily measurable by GDP and productivity. Before 1970, a policymaker could be confident that producing more stuff (more capex, more resource use) with fewer people (less labor) would improve quality of life. Therefore, rising GDP and rising productivity were good measures of progress.

That has changed. Free computing and communications instead let you reorganize to increase quality of life while reducing capital, land and labor, all at once, by using those inputs much more efficiently. We can do more with less. This creates a measurement problem: quality of life can rise even as GDP falls.

What happens when I save money buying used on eBay? My quality of life goes up, because I spent less than buying new. Yet GDP goes down, because fewer new goods need to be produced. Information has replaced industrial production. A macroeconomist, using traditional measures, would wrongly conclude the economy is getting worse, when in fact it’s getting better.
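The eBay example can be made concrete with illustrative (hypothetical) numbers:

```python
# Hypothetical: a TV costs $300 new, or $100 used on eBay.
price_new = 300
price_used = 100

# GDP counts new production; a used-goods resale adds roughly nothing
# (ignoring small contributions like marketplace fees and shipping).
gdp_new_purchase = price_new
gdp_used_purchase = 0

# The buyer keeps the difference: money freed for other uses.
buyer_saving = price_new - price_used   # $200 better off

# Measured output fell by $300, yet the buyer's quality of life went up.
```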

Many of the innovations of the past twenty years look similar:

  • Fix things easily with online instructions like iFixit, so new items need not be built.
  • Rent cars, houses and workspaces, so fewer of these need be built.
  • Phone-controlled self-driving cars empower car sharing, so you needn’t own a car.

Of course, we can only participate in this increased efficiency if we have skills: reading, computer use, critical thinking. Demand for skills goes up. So we get skilled labor shortages, and unskilled unemployment, at the same time.

What will the traditional Fed chief do when presented with these conditions: rising unskilled unemployment, stagnant GDP, stagnant productivity? Why, they’ll cut interest rates, every time. They are trying to promote capex, to make GDP go up again.

This is a policy error. It falsely presumes that if GDP is stagnant, then quality of life must be stagnant. It also falsely assumes that stagnant GDP means capital is too scarce, when in fact, post IT revolution, the opposite is true. Artificially cheap money actually delays the natural free-market process of using information to replace capital, by making capex artificially more competitive with information.

The real scarce resource in the information-driven economy is not capital, but skills. The policy response to unskilled unemployment is to turn the unskilled into the skilled. I.e. education. Really nothing else will work.

Understanding economic security

Why are people with middle-class incomes more anxious about their economic situation than a generation ago?  The usual answers are either “income stagnation” or “you’re imagining this, there’s actually not a problem.”

I suggest it is something else:  an ever-rising percentage of middle-class expenses fall outside an individual’s control, giving the individual less economic autonomy and resilience.

You’ve probably seen this:

This graph is a political Rorschach test.  Depending on your pre-existing political beliefs, you will tend to think the high-inflation items above (which I’ll abbreviate as “HI”) outpaced the low-inflation ones (“LO”) because of one of these three things:

  • HI are unfettered monopolies, while LO are not.
  • HI are produced domestically, while LO are offshored.
  • HI are government-subsidized, while LO are not.

Girls, don’t fight:  you’re all pretty.  Rather than argue over causes, think about the likely effect.

With few exceptions, the LO items can all be delayed for months or years in a pinch, but the HI items are urgent necessities.  You can skip this year’s new TV, or buy one used off eBay.  You can’t delay or negotiate food, rent or a hospital visit.

Compounded over decades, it is easy to see that an ever-rising share of middle-class income consists of costs that fall outside your span of control.  This leads to a sense that you don’t control your destiny.

Which of the following two options feels more secure:  a $50k after-tax income with $25k in non-negotiable fixed costs?  Or a $100k income with $99k in fixed costs?  You’d sleep better at night taking option A.  It offers less income, but more security, because more of your cost structure is under your own control.
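Running the numbers on those two options, with a hypothetical 10% income shock added, shows why A feels safer:

```python
income_a, fixed_a = 50_000, 25_000    # option A: less income, low fixed costs
income_b, fixed_b = 100_000, 99_000   # option B: more income, high fixed costs

slack_a = (income_a - fixed_a) / income_a   # 50% of income under your control
slack_b = (income_b - fixed_b) / income_b   # 1%

# Hypothetical stress test: income drops 10% for a year.
shock = 0.10
cushion_a = income_a * (1 - shock) - fixed_a   # +$20,000: still solvent
cushion_b = income_b * (1 - shock) - fixed_b   # -$9,000: underwater
```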

Many or most people with incomes below the 90th percentile have been forced, by the changing expense mix shown in the graph above, into option B:  superficially prosperous, but head barely above water, exposed to the slightest economic dislocation.  That’s stressful and frustrating.

Wonder why people under 30 don’t own cars?  Why socialism is suddenly popular?  Why no one has more than a few weeks’ savings?  Why so many are so pessimistic about their future?  This!  This is why.


Income inequality = skill gap

For everyone except the top 0.01% of earners, “income inequality” is actually skill inequality.  To see why, consider two thought experiments.

  1. Suppose the US labor market starts in equilibrium with 10m skilled jobs done by 10m skilled workers, and 30m unskilled jobs done by 30m unskilled workers. Suddenly, we figure out how to train 10m unskilled to become skilled. We now have “too many” skilled, and “too few” unskilled, for the available jobs.  From Econ 101, what must happen to skilled and unskilled salaries?  Obviously, they get closer to each other.  Income inequality goes down.
  2. Suppose the US is in equilibrium with 30m unskilled jobs done by 30m unskilled workers. Then we bring in 10m unskilled immigrants. The new arrivals can perform only unskilled work, but they demand work of various skill levels (doctors, lawyers etc). Thus, even if the new arrivals’ overall impact on the labor market is exactly neutral, they will create a shortage of skilled workers, and a surfeit of unskilled.  Thus the price of unskilled work must fall, and the price of skilled work must rise.  Income inequality goes up, unless you can do #1 above — train lots of unskilled to become skilled.
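Both thought experiments can be mimicked with a deliberately crude toy model: wages are simply proportional to demand over supply, demand is proxied by total population, and the constants are arbitrary — only the direction of the changes matters.

```python
# Toy model: wage for a skill type ~ (demand proxy: total population) / supply.
# Constants are arbitrary; only the ordering of the results matters.
K_SKILLED, K_UNSKILLED = 25, 15

def wage_gap(skilled_m, unskilled_m):
    pop = skilled_m + unskilled_m
    skilled_wage = K_SKILLED * pop / skilled_m
    unskilled_wage = K_UNSKILLED * pop / unskilled_m
    return skilled_wage - unskilled_wage

baseline = wage_gap(10, 30)      # 10m skilled, 30m unskilled
retrained = wage_gap(20, 20)     # experiment 1: train 10m up to skilled
immigration = wage_gap(10, 40)   # experiment 2: add 10m unskilled workers

# Inequality narrows with training, widens with unskilled immigration:
# retrained < baseline < immigration
```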

This argues that immigration policy and education policy are inextricably linked.  Unless you are able to train unskilled workers, you must not admit more unskilled workers, or you will end up with rising inequality and political instability.


The end of industrial warfare?

Cyber warfare and decapitation strikes may signal the sunset of industrial warfare.

If a cyberattack can shut down a country’s infrastructure — disabling its power plants, refineries and ports, for example — then war starts to resemble a nation-scale ransomware attack.

If small, self-guided drones can kill key military or political leaders, then war starts to resemble a nation-scale decapitation duel.

You think war means blowing stuff up and shooting people, because that’s been true for centuries.  But that’s not the goal;  it’s just the means.  The goal is to force another country to do your bidding.  War, in the most general sense, is “the continuation of policy by other means,” as Clausewitz put it.

The industrial model was expensive. You had to devote much of your manufacturing base to making explosive widgets;  hire semi-skilled labor, aka soldiers, to operate those widgets;  and then use them to blow up your opponent’s soldiers and factories until they cried uncle.

If you can instead force another country to do your bidding without spending trillions, without firing a shot, then of course you will. If these new modes of warfare work, they will replace the old model.

This will upset some assumptions.  Under the old industrial model, it was a big advantage in warfare to have more people, more factories, and less constitutional democracy (those pesky voters always want to stop fighting).

But to win a decapitation duel, for example, you no longer need lots of soldiers or widget factories.  Instead, you need a constitutional republic, a political system that is sufficiently codified to continue functioning even when its leaders are repeatedly killed off by surprise drone attacks.  By contrast, autocracies in general, and cults of personality in particular, are highly vulnerable to this sort of warfare.