Before any of this recognition can occur, let’s clarify our subject matter: the quantified self and algorithmic inference. What are those?

I’m going to start with the latter.

When I talk about algorithmic inference, I’m talking about the part of Amazon’s and Netflix’s products and business models that makes recommendations (and about far more than just Amazon and Netflix, but those are two of the most widely used examples). When you buy something on Amazon (or one of its subsidiaries like Zappos or IMDb), or even when you just search or browse or click anywhere on the site, a cookie is left on your hard drive and a record is made in a database somewhere on Amazon’s servers. This is how the “recommended products” list is populated: software on their side infers from a set of parameters (beyond just what you do, but we’ll get to that later) what you will probably buy.
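The inference step can be sketched in miniature. Here is a toy co-purchase recommender, with made-up purchase histories; this is only an illustration of the general idea, not Amazon’s actual data model or algorithm:

```python
from collections import Counter

# Hypothetical purchase histories -- invented for illustration.
histories = {
    "alice": {"kettle", "teapot", "mug"},
    "bob": {"kettle", "mug", "coaster"},
    "carol": {"teapot", "tea"},
}

def recommend(user, histories, top_n=2):
    """Suggest items bought by users whose histories overlap with ours."""
    mine = histories[user]
    scores = Counter()
    for other, items in histories.items():
        if other == user or not (mine & items):
            continue  # skip ourselves and users with nothing in common
        for item in items - mine:
            scores[item] += 1  # each overlapping user "votes" for their items
    return [item for item, _ in scores.most_common(top_n)]

print(recommend("alice", histories))  # items alice hasn't bought yet
```

Real systems weigh many more signals (clicks, searches, time on page), but the shape is the same: find people who look like you, then surface what they did next.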

The quantified-self movement, from the step-tracking of Fitbit users to the probability-crunching of 23andMe genetics customers, also relies on algorithmic inference. Do you have a recognizable travel pattern in your day? Are your genetic patterns similar to others’ in the database? Are you eating foods from a cuisine popular with other users? These are the kinds of questions that the apps, devices, and platforms that quantify our steps, calories, genetic makeup, and more answer in order to make recommendations.
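A question like “are your patterns similar to others in the database?” usually reduces to comparing vectors. A minimal sketch, using cosine similarity over invented hourly step counts (the numbers and the metric are assumptions for illustration, not any particular platform’s method):

```python
import math

# Hypothetical hourly step counts for two users -- made-up numbers.
my_steps = [0, 0, 500, 3000, 200, 100]
their_steps = [0, 10, 450, 2800, 300, 50]

def cosine_similarity(a, b):
    """Cosine of the angle between two activity vectors (1.0 = identical shape)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

print(round(cosine_similarity(my_steps, their_steps), 3))
```

Two users whose daily rhythms rise and fall together score near 1.0, and a platform can then recommend whatever the more similar users are already doing.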

With that, then, let’s move on to better understand Mr. Hegel.