Tech Takes the Pareto Principle Too Far

There’s a reason video games build what’s called a ‘vertical slice’. If you’re not familiar, a vertical slice is a single playable area, with all mechanics, final art, vfx, sfx, music, etc. Basically, a little piece of exactly how the finished game will look and feel and play. The vertical slice is what game developers show publishers and investors to demonstrate not only that the game itself is going to be good, but also that the game development team has all the skills necessary to deliver the game to the level of polish the market demands.

Contrast this with the tech industry, which submits for approval an ‘MVP’. A minimum viable product is the absolute least one can create that someone will pay for. It seems tech investors have gotten very used to evaluating MVPs, and rightly so — they need to be able to assess the potential of these prototypes so they can decide which are worth their investment. The problem is that MVPs don’t actually establish whether the team could get to a finished product, and in practice many can’t.

Recently, getting VC funding has become such an end in itself that engineers in the tech industry, centered around Silicon Valley, have optimized their skillset for prototyping. There’s an oft-quoted idea called the Pareto Principle, which states that 20% of the effort produces 80% of the results. So, if you can just prioritize the right 20%, you can get most of the way towards the desired outcome. The whole sector has become great at this, nearly to the exclusion of all else. And who can blame them? Look at the inverse — doing 80% of the work to complete only 20% of the job doesn’t sound like much fun.

What the Pareto Principle doesn’t capture, and what its adherents seem to forget, is that you still HAVE TO DO that last 20%. End users usually don’t enjoy using 80% of a website, or driving 80% of a car. Unfortunately, with so many digital products abandoned at the funding stage or forced to release early, engineers and designers often don’t get any practice with the last 80% of the effort required to finish something. I’ve worked with many such engineers, and it’s genuinely sad knowing you’ll never reach the level of polish a concept deserves.

Because there is such a culture around early adoption of new technology, there’s a big population willing to overlook that the things they use are unfinished. It’s good that there are folks willing to try out nascent products, but those products don’t need to be lionized, and neither do the people who use them. Because of their approval, the broader population has begun to defend the unfinished. This applies to games as well. Day 1 patches are the norm, as is DLC that feels like it ought to have been part of the core game. When products remain incomplete it’s often because all potential customers have already paid and there’s no financial incentive to finish. How many of the products you use every day feel like they needed a few more iterations to really work correctly?

However, there is another, more frustrating reason why a product might remain unfinished. Maybe it’s literally impossible to complete. I think that’s the situation we find ourselves in for certain applications of AI, like self-driving cars, image generation, and text generation. Even people who advocate for these technologies rarely assert that the results are usable as-is, especially in a world where people are accustomed to a much higher, human-level quality. At best they are useful as a starting point for a human to then finish the image, or the cover letter, or to take the wheel. The problem is I don’t think the current methodology is capable of taking us the remaining 20% of the way, the part that demands 80% of the work.

I’ll break from my main point briefly to justify that assertion. It’s funny to think about now, but in the 70s, AI researchers believed they were most of the way towards achieving AGI (artificial general intelligence, aka the AI from the movies). They thought that if an expert system, or a perceptron, or a set of predicates was just developed far enough, it would eventually reach sentience, or at least eliminate tedious work. Many believed that the hardware explosion Moore’s law promised was enough to create AI, and the software would take care of itself. Some of that was true — expert systems power tools like WebMD’s symptom checker, and constraint solvers manage the incredible logistics of modern freight. But the limits of the techniques of the time weren’t felt until much later. That’s where we are with Generative AI, too.

I think that the Pareto Principle is technically true in a lot of fields, but I also feel our society would be a lot better off if we didn’t know about it. As I said, doing the last 80% of the work to produce only the final 20% of the result is hard on morale. It’s no wonder that work is so often abandoned, or outsourced. Perhaps more investors should demand to see a vertical slice instead.

If we took a more craftsmanlike view of software, we would realize that a chair is not 80% done when you can sit on it. It’s the details and the polish that make something worthy of use. So while from a utilitarian standpoint something may have most of the features a person might ask for, from a humanist point of view 20% of the work still only produces 20% of the results.


Bobby Lockhart is an award-winning designer of learning games.

Keep in touch on Bluesky or on LinkedIn
