The Higher Dimensions Series — Part Three: The Distance Between Points

Chris Rowe
13 min read · May 4, 2020

Welcome to Part Three of the Higher Dimensions Series, where we explore some of the strange and delightful curiosities of higher dimensional space. Currently, I have completed Parts One, Two, Three, Four, and Five, with hopefully more to come. If you have not already done so, I encourage you to read the earlier parts before continuing on with our current expedition.

Today we will be exploring a phenomenon that is somewhat related to what we saw in Part One of our journey. If you recall, that is where we examined what happens to the distance between the origin and points selected randomly from within n-balls as we move into higher and higher dimensions. Incredibly, we saw that the vast majority of points were concentrated right near the boundary of the ball in high dimensions; in other words, an ever-increasing proportion of a ball's volume is concentrated farther from the origin as the dimension of the ball increases. I don't want to freak anyone out, but as a side note, the volume of an n-ball actually goes to zero as n goes to infinity. Ruminate on that one for a bit!

For today’s journey, instead of looking at the distance between a randomly selected point and the origin, we will be looking at the distance between two randomly selected points. Fortunately, we will be using a lot of the same concepts and tools that we covered in detail in Part One (e.g., understanding the definition of an n-ball, the idea of sampling random points from within an n-ball, and the high dimensional generalization of the Pythagorean theorem), so we won’t spend any time re-defining those ideas.

Measuring The Distance Between Two Points

First, we are going to take a bit of a detour to learn how to measure the distance between two points in space of arbitrarily high dimensions. If you are already familiar with these ideas, or don’t really care to know exactly how such distances are measured, you can skip ahead to the next section. I personally find these concepts to be extremely fascinating and powerful, but they are not essential for having your mind blown by the higher dimensional phenomena that we seek. For those who remain, we will start with a simple example in two dimensions. Consider the following two points, one blue and one red, which were selected randomly:

How would you go about measuring the distance between these points? You could use a ruler, but surely there must be a better way. Indeed there is! To tackle this, we are going to take a brief (though some would probably say lengthy) detour to learn about vectors, which are incredibly useful for understanding and navigating higher dimensional space. Vectors show up everywhere in mathematics and physics and are a core focus of linear algebra. A vector represents a quantity that has both a magnitude and a direction, and it is often drawn as a line segment of a particular length (representing its magnitude) and a particular spatial orientation (representing its direction). Something very cool and very convenient about n-dimensional Euclidean space (which is the type of space we've been dealing with) is that there is a nice correspondence between a point in space and the vector that begins at the origin and ends at that point. For instance, let's represent the red and blue points from above as vectors:

Here, the magnitude of each vector is just the distance from the origin to each point (which we learned how to calculate using the Pythagorean theorem in Part One!) and the direction of each vector is just the direction in space from the origin to the position of each point. We can simply refer to these vectors by the coordinates of their corresponding points. Now that we are thinking about our points as vectors, we can talk about adding these vectors together, something that doesn't really make sense if we just have a pair of points. You may be wondering why we care about adding vectors since our goal is to measure the distance between two points, but fear not: if you bear with me, your patience will be rewarded! Anyway, to add two vectors together, we just take one of the vectors and tack it onto the end of the other vector. A cool thing about vector addition is that it doesn't matter whether we start with the blue vector and add the red vector, or start with the red vector and add the blue vector; we will always end up in the same place! This is true regardless of how many vectors we are adding together. Looking at the figures below, it should be clear that when we add our two vectors together we get a third vector, which I've shown as a purple vector:

Unrelated to our current journey but very cool nonetheless, perhaps from the plots above you can see the relevance of vector addition to navigation for something like an airplane! Anyway, what if we wanted to know the coordinates of the purple point associated with the purple vector? It is as simple as summing the individual coordinates associated with the red and blue vectors! For our current setting, let's call our blue vector x, our red vector y, and our purple vector z; let's also refer to the coordinates of our original vectors as (x1, x2) and (y1, y2). If z = x + y, then the individual coordinates of z are given by z1 = x1 + y1 and z2 = x2 + y2.
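If it helps to see this in code, here is a minimal Python sketch using NumPy. The coordinates are made-up stand-ins for the plotted points (which are not reproduced here):

```python
import numpy as np

# Hypothetical coordinates standing in for the blue and red vectors.
x = np.array([0.3, 0.7])   # blue vector (x1, x2)
y = np.array([0.6, -0.2])  # red vector (y1, y2)

# Vector addition is just coordinate-wise addition...
z = x + y
print(z)                      # [0.9 0.5]

# ...and the order does not matter: x + y lands on the same purple point as y + x.
print(np.allclose(z, y + x))  # True
```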

Okay, now that we understand vector addition, what about subtraction? The process is very similar, but instead of tacking the second vector onto the end of the first vector, we tack the opposite of the second vector onto the end of the first vector. By "opposite", I technically mean the product of the vector and the number -1, which gives us a vector of the same magnitude but pointing in the exact opposite direction in space. Technically, vector "subtraction" is just vector addition, but where the vector to be subtracted is multiplied by -1 and added to the other vector. For example, here is the original red vector as well as what we get when we multiply it by -1 (the faded red vector):

Note that if the original red vector is defined by the coordinates (y1, y2), the new faded red vector is defined by the coordinates (-y1, -y2). So, if we want to subtract the red vector from the blue vector, we would tack this new faded red vector onto the end of the blue vector, which gives us a new purple vector:

And once again, if we want to calculate the coordinates of the point at the end of this new purple vector, we would just add the individual coordinates of the blue vector (x1, x2) with those of the new faded red vector (-y1, -y2). This is equivalent to subtracting the coordinates of the original red vector (y1, y2) from those of the blue vector (x1, x2). That is, if z = x - y, then the individual coordinates of z are given by z1 = x1 - y1 and z2 = x2 - y2.
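Continuing the little Python sketch from above (same hypothetical coordinates), the equivalence between subtraction and adding the flipped vector is easy to check:

```python
import numpy as np

x = np.array([0.3, 0.7])    # blue vector (hypothetical coordinates)
y = np.array([0.6, -0.2])   # red vector

# The "faded" red vector: the original multiplied by -1, i.e. (-y1, -y2).
flipped_y = -1 * y

# Tacking the flipped vector onto x is exactly the same as subtracting y.
print(np.allclose(x + flipped_y, x - y))   # True
print(x - y)                               # [-0.3  0.9]
```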

Okay, we are finally ready to reveal the meaning of all this: the magnitude of the new purple z vector is equal to the distance between the points associated with the blue x and red y vectors! Wow! To show this more clearly, let's plot the two original vectors and the vector we obtained by subtracting those vectors in the same plot:

In the plot on the left, we have the original red and blue vectors and the purple vector we obtained from subtracting the red vector from the blue vector. In the plot on the right, we have moved the purple vector to show that its magnitude is exactly equal to the distance between the two points associated with the red and blue vectors. So cool! Let's recap: if we want to calculate the distance between two points in space, all we need to do is represent the points as vectors, subtract one of the vectors from the other, and calculate the magnitude (i.e., length) of the resulting vector. How powerful is that! Although I won't show it, if we had defined z as y - x instead of x - y, we would end up with a vector of the exact same magnitude but pointing in the opposite direction as the purple z vector above. Thus, since we only need the magnitude of the resulting vector to calculate the distance between two points, it doesn't matter which vector is "subtracted" from which.

So now that we have obtained this "difference" vector whose magnitude is equal to the distance between the two original points, how do we calculate its magnitude? Remember in Part One when you learned how to calculate the distance from the origin to any point in space using a generalization of the Pythagorean theorem? Well, the magnitude of a vector that begins at the origin is equivalent to the distance between the origin and the corresponding point in space! Now you have everything you need to calculate the distance between two arbitrary points in two-dimensional space.
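Putting the pieces together in the same Python sketch (again with made-up coordinates), the whole recipe fits in a few lines:

```python
import numpy as np

x = np.array([0.3, 0.7])    # blue point (hypothetical coordinates)
y = np.array([0.6, -0.2])   # red point

# Step 1: subtract one vector from the other to get the "difference" vector.
z = x - y

# Step 2: its magnitude, via the Pythagorean theorem, is the distance between the points.
distance = np.sqrt(np.sum(z ** 2))
print(distance)

# The direction of the subtraction doesn't matter, and NumPy's built-in
# norm function gives the same answer.
print(np.isclose(distance, np.linalg.norm(y - x)))   # True
```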

However, for me, one of the coolest things about this whole idea is that it works in exactly the same way in arbitrarily high dimensions. Specifically, all this machinery of defining vectors and adding them together applies in exactly the same way in any dimension we choose. For instance, say we have two vectors (or points) in n-dimensional space with coordinates defined as follows: x = (x1, x2, ..., xn) and y = (y1, y2, ..., yn).

And similarly, if z = x - y, then the coordinates of z are given by z = (x1 - y1, x2 - y2, ..., xn - yn).

Once we have z, we can calculate its magnitude using the generalization of the Pythagorean theorem (the square root of the sum of its squared coordinates, sqrt(z1^2 + z2^2 + ... + zn^2)), and that gives us the distance between the two points x and y! Let's take a second and reflect on the concept of vectors in higher dimensions. The plots above show nice illustrations of vectors in two dimensions, and we can imagine line segments emanating from the origin and shooting out towards points in three-dimensional space, but what about higher dimensions? As we are now well aware, we are not able to explicitly visualize the entirety of these spaces, but we can explicitly define the location of points existing in some n-dimensional space and thus also the direction and magnitude of their associated vectors. Whether we are in two-dimensional space or 100-dimensional space, a point is a point and a vector is a vector. With these definitions in hand, we can now measure the distances between pairs of points in arbitrarily high dimensional space!
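As a minimal Python sketch of that generalization (the function below is an illustration of the idea, not code from the series):

```python
import numpy as np

def distance(x, y):
    """Distance between two points given as length-n coordinate arrays."""
    z = np.asarray(x) - np.asarray(y)     # the difference vector
    return np.sqrt(np.sum(z ** 2))        # generalized Pythagorean theorem

# The exact same function works in 2, 3, or 100 dimensions.
print(distance([0.3, 0.7], [0.6, -0.2]))

rng = np.random.default_rng(0)
a, b = rng.uniform(-1, 1, size=100), rng.uniform(-1, 1, size=100)
print(distance(a, b))
```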

How Far To The Next Point?

Remember, today’s quest is to examine the distance between two randomly selected points from within an n-ball that is centered at the origin and has a radius of one. To do this, we are going to randomly generate pairs of points inside a ball of a given dimension and calculate the distance between each pair of points using the tools we developed above. Let’s start with dimensions we can visualize, two and three, then see what happens when we increase n.
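Here is a rough Python sketch of the kind of simulation involved. Part One covered how to sample points uniformly from within an n-ball; the recipe below (a random Gaussian direction scaled by u^(1/n)) is one standard way to do it and is an assumption on my part, not necessarily the exact method behind the plots that follow:

```python
import numpy as np
import matplotlib.pyplot as plt

def sample_ball(n_points, dim, rng):
    """Sample points uniformly from within the unit dim-ball.

    Assumed recipe: draw a random direction from a Gaussian, then scale it
    by u**(1/dim) so that the ball's volume is filled uniformly.
    """
    directions = rng.normal(size=(n_points, dim))
    directions /= np.linalg.norm(directions, axis=1, keepdims=True)
    radii = rng.uniform(size=(n_points, 1)) ** (1.0 / dim)
    return directions * radii

rng = np.random.default_rng(42)
dim = 2                                    # try 3, 10, 100, 1000 as well
a = sample_ball(10_000, dim, rng)          # first point of each pair
b = sample_ball(10_000, dim, rng)          # second point of each pair
distances = np.linalg.norm(a - b, axis=1)  # one distance per pair

plt.hist(distances, bins=50)
plt.xlabel("Distance between the two points in a pair")
plt.ylabel("Number of pairs")
plt.title(f"10,000 random pairs inside a {dim}-ball")
plt.show()
```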

Before we take the leap, let's take a second to orient ourselves. We are talking about n-balls of radius one, so we know that regardless of the dimension n, two points will be a minimum of zero units away from each other (if the two points are in exactly the same location) and a maximum of two units away from each other (if the two points are on the boundary of the ball at opposite ends of the same diameter). Okay! Now let's check out the 2-ball, i.e., the two-dimensional disk enclosed by a circle. Here is a histogram of the distances between 10,000 pairs of points:

Seems reasonable enough, right? There are pairs of points that run the gamut of possible distances from each other. We see some points that are very close together (i.e., with a distance close to zero), some that are far apart (i.e., with a distance close to two), and everything in between. It also looks like middle-of-the-road distances are most common.

Here is the distribution of distances between pairs of points randomly sampled from within a 3-ball, i.e., the solid three-dimensional ball bounded by a sphere:

Again, no real surprises here. Let’s fire it up to 10 dimensions!

Interesting! Here in the wild world of 10 dimensions, there are no pairs of points that are particularly close together. Onward to 100 dimensions!

Incredible! Things are really starting to look different now. Scroll up and look at the distances between points in the 2- and 3-balls again: they covered the whole range from zero to two. Here in 100 dimensions, it looks like the vast majority of pairs of points are between 1.10 and 1.70 units apart. In fact, among these 10,000 pairs, not a single pair is closer than one unit apart! The concept of a close neighbor is completely non-existent in 100-dimensional space.

Okay, let's finish up with 1000 dimensions:

Wow! The trend continues! The distances between all pairs of points are highly concentrated around the average distance (1.42 units, suspiciously close to the square root of two). Let's really take a second and meditate on this for a bit. In higher and higher dimensions, it appears that all pairs of points are approximately equidistant from one another. That is, if we select a random pair of points from within a 1000-ball, they will be roughly 1.42 units apart. If we select another point, it will be roughly 1.42 units from each of the first two points, ad nauseam! Every single point sampled from within the 1000-ball is about 1.42 units from every other point. Every. Single. Point.

To demonstrate this idea in a slightly different way, let's select ten random points from within a 1000-ball and calculate the distance between each point and each of the other nine points. That way, we are not constraining ourselves to single pairs of points, but rather looking at all the pairwise distances among a set of 10 points. This is essentially the same thing as looking at many random pairs of points, but it feels a bit different, so it is perhaps worth exploring. Let's see what we get!
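A compact Python sketch of this calculation (again using the assumed sampling recipe from above) might look like the following:

```python
import numpy as np

rng = np.random.default_rng(7)
dim, n_points = 1000, 10

# Uniform samples from within the unit 1000-ball (same assumed recipe as before).
directions = rng.normal(size=(n_points, dim))
directions /= np.linalg.norm(directions, axis=1, keepdims=True)
points = directions * rng.uniform(size=(n_points, 1)) ** (1.0 / dim)

# All 10 x 10 pairwise distances at once, via broadcasting.
pairwise = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
print(np.round(pairwise, 2))   # off-diagonal entries all cluster around 1.41-1.42
```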

Pairwise Distances Between 10 Random Points Selected From Within a 1000-Ball

Aha! Exactly what we expected. Every single point is approximately the same distance from every other point. What a crazy world it is out there in higher dimensions!

Remembering the Distance to the Origin and the Hypercube

You may recall the somewhat similar phenomenon that we witnessed in Part One of the series. There, we saw that in higher dimensions, all points were approximately equidistant from the origin and were highly concentrated at the outer boundary of the ball. In these high dimensional balls, the distribution of distances to the origin exhibited a similar shape to the distribution of distances between pairs of points presented above: a sharp concentrated peak with little variability. So, inside the n-ball, randomly sampled points are concentrated at the outer boundary of the ball, and each of these points is approximately equidistant from all the other points. Unbelievable!

Perhaps you are wondering whether this strange phenomenon of points being the same distance from each other is just some quirky characteristic of the n-ball. Does this hold in other spaces constrained by other shapes? Indeed it does! Remember the insane "spiky" hypercubes from Part Two, where most of the volume appeared to be highly concentrated in the corners? Well, not only are randomly sampled points almost universally found in the corners of these hypercubes, but those points are all approximately the same distance from each other. What does this even mean? How can points be so highly concentrated in some particular region of space (e.g., the boundary of an n-ball or the corners of an n-cube), but also be equally spread apart from each other? Honestly, I have no idea, but it is insane and it makes my mind feel strange!

Wrapping Up and Looking Forward

Today we saw that points in high dimensional spaces are all approximately the same distance from each other. This is very clearly not the case in lower dimensions. We know that points randomly selected in lower dimensional spaces, say inside a two-dimensional circle or a three-dimensional cube, may be close together or may be far apart, and they will exhibit a wide range of distances from each other. Once again, just as we were hoping, things got very strange as we ventured forth into the higher dimensions!

The next installment in the series draws connections between some of these extraordinary phenomena and probability theory, which are certain to twist and expand your mind! Until then, rest up, for we will need all our mental acuities for the onward journey!
