# 3 Random Graphs: Degree Sequences

A limitation of the Erdős–Rényi model is that it always generates graphs whose degree distributions are approximately Poisson. This isn’t very flexible, and most real-world networks have very different-looking degree sequences. In this set of lecture notes, we’ll discuss two more classes of random graph that emphasize the important role played by the *degrees* of nodes in determining graph structure.

## 3.1 The Configuration Model

While sampling from the Erdős–Rényi model was as easy as flipping \(\binom{n}{2}\) weighted coins, sampling from the configuration model is actually rather challenging. A classical approach for sampling from the configuration model is to perform a *stub-matching* algorithm. Here’s how it works.

- First, each node \(i\) is assigned \(k_i\) *stubs* or *half-edges*.
- Until there are no more stubs left:
  - Two stubs are selected uniformly at random and matched, creating an edge.
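The loop above can be sketched in a few lines of Python (the function name and interface are illustrative, not from the notes):

```python
import random

def stub_matching(degrees, seed=None):
    """Sample a configuration-model multigraph via stub matching.

    degrees: list of nonnegative ints whose sum is even.
    Returns a list of edges (i, j); self-loops and multi-edges may occur.
    """
    rng = random.Random(seed)
    # Assign k_i stubs (half-edges) to each node i.
    stubs = [i for i, k in enumerate(degrees) for _ in range(k)]
    assert len(stubs) % 2 == 0, "degree sum must be even"
    # A uniform shuffle followed by pairing consecutive entries is
    # equivalent to repeatedly drawing two remaining stubs at random.
    rng.shuffle(stubs)
    return [(stubs[2 * j], stubs[2 * j + 1]) for j in range(len(stubs) // 2)]
```

Note that the result is returned as a multigraph edge list: any self-loops and multi-edges produced by the matching are kept, which matches the behavior of the algorithm as described.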

The stub-matching algorithm can produce multi-edges and self-loops, so the resulting graph need not be simple. However, it can be proven (Bollobás 1980) that, when the graph is sparse, the expected number of multi-edges and self-loops does not grow with network size. As a result, these structures are *rare* and can often be ignored in arguments.

### Moments of the Degree Sequence

Let \(p_k\) be the proportion of nodes with degree \(k\) in a configuration model, so that \(p_k n\) is the *number* of nodes with degree \(k\). The *mean degree* is \(\langle k \rangle \triangleq \sum_{k} k p_k\). More generally, the \(\ell\)th moment of the degree sequence is \(\langle k^{\ell} \rangle \triangleq \sum_{k} k^\ell p_k\).
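As a quick sketch (the function name is mine), the moments of an empirical degree sequence can be computed directly from the definition:

```python
from collections import Counter

def degree_moments(degrees, ell=1):
    """ell-th moment <k^ell> = sum_k k^ell p_k of an empirical degree sequence."""
    n = len(degrees)
    # p_k is the fraction of nodes with degree k.
    p = {k: count / n for k, count in Counter(degrees).items()}
    return sum(k ** ell * pk for k, pk in p.items())
```

For example, the degree sequence \([1, 1, 2, 2]\) has mean degree \(1.5\) and second moment \(2.5\).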

### Configuration Models with Constant Moments

Recall that, in the case of Erdős–Rényi random graphs, we called a sequence of models **sparse** if the mean degree \(c\) did not increase as a function of \(n\). Here, we use a similar working definition.

We’ll assume that we’re considering configuration models with constant moments. This assumption justifies “large \(n\)” reasoning of the same type we did with Erdős–Rényi random graphs.

### Branching Process Approximation for Giant Components

Recall that the **branching process approximation** in Erdős–Rényi random graphs suggests that, when \(c < 1\), the component containing a uniformly sampled node has expected size approximately \(1 / (1-c)\), where \(c\) is the mean degree. On the other hand, when \(c > 1\), the branching process approximation suggests that the component containing a uniformly sampled node could be very large indeed.
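One way to sanity-check the \(1/(1-c)\) estimate is to simulate the total progeny of a branching process with Poisson offspring. The sketch below (helper names are mine) draws Poisson samples with Knuth's multiplication method, which needs no libraries beyond the standard one:

```python
import math
import random

def poisson_sample(lam, rng):
    """Sample from Poisson(lam) via Knuth's multiplication method."""
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        k += 1
        p *= rng.random()
        if p <= L:
            return k - 1

def total_progeny(c, rng, cap=10_000):
    """Total number of individuals in a Poisson(c) branching process.

    Truncated at `cap` individuals to guard against (rare) very long runs
    in the subcritical regime.
    """
    total, active = 1, 1
    while active > 0 and total < cap:
        active -= 1                      # the current individual is processed
        children = poisson_sample(c, rng)
        active += children
        total += children
    return total
```

Averaging `total_progeny(0.5, rng)` over many trials should come out close to \(1/(1-0.5) = 2\), in line with the approximation.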

We can perform a similar kind of analysis for the configuration model. There is only one new thing to figure out: the *excess degree distribution* \(q_k = (k+1)p_{k+1}/\langle k \rangle\), which gives the probability that a node reached by following a random edge has \(k\) edges other than the one along which we arrived.

The reason to care about the excess degree distribution is that it answers the following question:

> Suppose I follow an edge in a configuration model random graph, arriving at node \(i\). How many *additional* edges are attached to node \(i\)?

This is an important question, because it is precisely the question that we used to determine whether the branching process would die out quickly or continue indefinitely in the case of the Erdős–Rényi random graph. The same heuristic works here: the branching process can continue indefinitely, and the graph can contain a giant component, when the mean of the excess degree distribution exceeds 1, i.e., when \(\langle k^2 \rangle - 2\langle k \rangle > 0\).

Indeed, this result can be formally proven by rigorous arguments (Molloy and Reed 1995), although these are beyond our scope here.
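A minimal sketch of these quantities, assuming an empirical degree sequence and using the standard formulas \(q_k = (k+1)p_{k+1}/\langle k \rangle\) and the giant-component condition \(\langle k^2 \rangle - 2\langle k \rangle > 0\) (function names are my own):

```python
from collections import Counter

def excess_degree_distribution(degrees):
    """Empirical excess degree distribution q_k = (k+1) p_{k+1} / <k>."""
    n = len(degrees)
    counts = Counter(degrees)          # counts[k] = number of nodes of degree k
    mean_k = sum(degrees) / n
    # A node reached along a random edge has degree k with probability
    # k p_k / <k>; its excess degree is k - 1.
    return {k - 1: k * counts[k] / (n * mean_k) for k in counts if k > 0}

def has_giant_component(degrees):
    """Heuristic giant-component criterion: <k^2> - 2<k> > 0."""
    n = len(degrees)
    k1 = sum(degrees) / n
    k2 = sum(k * k for k in degrees) / n
    return k2 - 2 * k1 > 0
```

For instance, a 3-regular degree sequence satisfies the criterion (\(9 - 6 > 0\)), while a graph of disjoint edges, with every degree equal to 1, does not.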

### Friendship Paradox

The “friendship paradox” refers to an observation commonly phrased as:

> Your friends have more friends than you do.

This is only true in a statistical sense: *on average*, in a graph, if you check the degree of each node and compare that to the mean degree of its neighbors, you’ll find that the average neighbor degree is higher than the average original degree.

The math here is that the neighbor degree is nothing other than the degree of a node that we reached by following an edge. That is, its distribution is the edge-biased degree distribution, whose mean is \(\mu = \langle k^2 \rangle/\langle k \rangle\). The Cauchy–Schwarz inequality implies that \(\langle k^2 \rangle \geq \langle k \rangle^2\), which in turn implies that \(\mu \geq \langle k \rangle\).
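To see this numerically, one can compare the mean degree with the edge-biased mean \(\langle k^2 \rangle / \langle k \rangle\) on a small graph. This sketch (the function name is illustrative) does so for a graph given as an edge list:

```python
from collections import Counter

def friendship_paradox_gap(edges):
    """Return (mean degree, edge-biased mean degree <k^2>/<k>) for a graph
    given as a list of (u, v) pairs."""
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    ks = list(deg.values())
    mean_k = sum(ks) / len(ks)
    edge_biased = sum(k * k for k in ks) / sum(ks)
    return mean_k, edge_biased
```

On the star graph with edges \((0,1), (0,2), (0,3)\), the mean degree is \(1.5\) but the edge-biased mean is \(2\): the average neighbor is better-connected than the average node, exactly as the paradox predicts.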