8 Comments

This way of looking at economics is amazing. This stuff should go directly into graduate-level texts in universities, and a simplified version would make a lot of sense for schoolchildren being introduced to economics.


This was excellent overall, but I don't see how the "productivity monitoring" thesis leads to a lower labor share (which is a function of average, not median, wages).

The stylized facts as I understand them are:

- a firm has 100 workers, 10 worth $90/hr and 90 worth $10/hr, and it makes $3000/hr

- in 1990 they can't tell who's who, so they have to pay everyone the blended wage ($18/hr) for a 60% labor share

- in 2010 they know exactly how much everyone is worth, so they pay the stars $90/hr and everyone else $10/hr. Median wages fall to $10/hr (and inequality rises), but total wages, and therefore the labor share, are unchanged!
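The arithmetic in the bullets above can be checked directly (all numbers are taken straight from the stylized example):

```python
# Stylized firm from the comment: 10 stars at $90/hr, 90 others at $10/hr.
workers = [90.0] * 10 + [10.0] * 90
revenue = 3000.0  # firm output, $/hr

# 1990: productivity is unobservable, so everyone earns the blended wage.
blended = sum(workers) / len(workers)          # $18/hr
share_1990 = blended * len(workers) / revenue  # 1800/3000

# 2010: monitoring reveals each worker's value; pay marginal product.
share_2010 = sum(workers) / revenue            # still 1800/3000

print(blended)     # 18.0
print(share_1990)  # 0.6
print(share_2010)  # 0.6 -- only the distribution moves, not the labor share
```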

What is this model missing from the productivity-monitoring theory? You might say that in 1990 the high-earning workers know their own productivity, get frustrated, and go elsewhere, but if no other employer can measure their productivity either, then it doesn't matter.

Maybe you could construct a scenario where those workers go off and do something else (e.g. self-employment), and since wages are sticky, the remaining (low-quality) worker pool captures a greater share of the pie, whereas today everyone gets what they're worth. But that looks like a world where entrepreneurship and worker mobility are lower today than in the pre-2001 era, which I don't think fits the facts.

I'm sure my simple model is missing something?


This is a good question and something I should have been more explicit about. My model would be something like this: in 1990, when firms can't tell who's who, workers have more bargaining power, because any given worker *could* be a productive worker. As a result, say that every worker can negotiate $20/hr. The total wage bill is then $2000/hr, a 66.7% labor share -- and things play out as you describe in 2010, with workers getting only 60%. These numbers are artificial, and I don't have an exact model of how this "worker bargaining power" mechanism works, but it seems plausible to me that greater uncertainty about worker ability lets workers extract a greater surplus in the bargaining process. This paper presents a formal model along the lines of what I'm talking about: https://www.sciencedirect.com/science/article/pii/S0164070409000676
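To make the bargaining-power variant concrete (the $20/hr negotiated wage is the illustrative number from the comment, not an estimate):

```python
# Same firm as before, but in 1990 uncertainty about ability lets every
# worker negotiate a uniform $20/hr.
revenue = 3000.0
n = 100

bargained_wage = 20.0                      # 1990: uniform negotiated wage
share_1990 = bargained_wage * n / revenue  # 2000/3000

wages_2010 = [90.0] * 10 + [10.0] * 90     # 2010: pay observed productivity
share_2010 = sum(wages_2010) / revenue     # 1800/3000

print(round(share_1990, 3))  # 0.667
print(share_2010)            # 0.6 -- labor share falls once ability is observable
```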

Of course, I'm not committed to this paper in particular being the correct formal model of what is going on.

I'm sure many other cool theory papers could be written on this (and probably have been).


...and reading the paper you shared, I see the gap between it and my model: this one treats the unobservable as a problem of worker effort rather than a problem of latent differences between workers.

They say (if I'm oversimplifying correctly) that in a world where you can't monitor workers, you have to give them a big share of the surplus to incentivize full effort. But when you can monitor workers effectively, you can set up a more targeted reward structure that elicits full effort without giving away much surplus.
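A toy no-shirking condition in the spirit of that summary (these numbers and the one-shot setup are my own illustration, not the paper's model): a shirker is caught with probability q and paid nothing, so effort requires w - c >= (1 - q) * w, i.e. w >= c / q.

```python
# Illustrative efficiency-wage arithmetic (hypothetical numbers).
effort_cost = 10.0  # disutility of full effort, $/hr
output = 30.0       # revenue per worker-hour at full effort

def min_wage(q):
    """Lowest wage that makes full effort worthwhile, given detection prob q."""
    return effort_cost / q

for q in (0.5, 1.0):
    w = min_wage(q)
    print(q, w, round(w / output, 2))
# q=0.5 (weak monitoring): wage 20.0, labor share 0.67
# q=1.0 (perfect monitoring): wage 10.0, labor share 0.33
```

The point of the sketch: better monitoring (higher q) lowers the wage needed to elicit effort, shrinking the surplus workers capture.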

YMMV on how strong that effect can be, but it at least offers a solid bridge past where I was getting stuck.


There's something to that idea -- I know of literature indicating that firms pay a premium for workers whose productivity is uncertain a priori, because the firm captures the full benefit if the employee turns out better than expected, whereas it can fire an employee who turns out worse. (Of course, this relies on assumptions like the employee not being able to achieve, or credibly demonstrate, the same productivity elsewhere.) So if worker output is certain, that option premium vanishes, and workers may be paid less in aggregate (even if the best individuals get more).

It seems like a stretch to link that to the micro evidence at hand, though -- does the increasing software intensity of output really allow companies to better predict worker output before hiring? Maybe you could extend that model to on-the-job monitoring, but I'm not sure it still works given wage rigidity.


Yeah, I agree with you that more research needs to be done on this, but I do find it plausible that increasing software intensity has strong effects on monitoring. In traditional manufacturing jobs you now have straightforward productivity measures (like boxes moved per hour) that didn't previously exist, and in white-collar jobs you have more visible metrics like how fast someone responds to emails. Maybe I'm speculating too much in my own defense, but it seems to me that there are many small margins on which a more technological workplace enables superior monitoring of worker performance. As for pre-hiring prediction, superior monitoring could feed into it through things like references now carrying more signal than before (and I think there is certainly a role for on-the-job monitoring even with rigidity). Despite all this defense, though, you make very good points, and I should revisit this in more detail.


One source for the risky-worker wage premium claim: https://www.nber.org/papers/w5334


Thank you for writing this informative and stimulating analysis.
