Understanding graph algorithms I: graph theory & graph representations

Some visualisations or graphics in this article might be difficult to see on the black background; my apologies. I have yet to find the time to recolour them.

Personal Roam Research Graph

Here's what I'm going to cover in this article:

Prerequisites

  • (Very) Basic Set Theory
  • Properties of Relations (Symmetry)
  • Matrices
  • Asymptotic Notation

Goals

  • Understand graph theory
  • Build foundations for Understanding Graph Algorithms II: Traversal Algorithms (not published yet)

A Bit of Graph Theory

Let's start from scratch. What is a graph? A graph is basically just a structure which consists of a bunch of nodes, formally called vertices, and a bunch of lines which connect the vertices, called edges.

```mermaid
graph LR
  A((A)) --> B((B))
  B --> C((C))
  B --> D((D))
  D --> D
```

Here, we have a graph with 4 vertices ($A$, $B$, $C$, and $D$) and 4 edges (the lines between the vertices). If we were really picky, we'd say that there aren't 4 lines but 4 arrows. But we'll get to that later. We'll refer to this graph again, so let's call it $F$.

Definition of a Graph

Let's try to formalise this a bit more, and define a graph $G$:

$$G=(V,E), \text{ where } V \text{ is a set and } E \subseteq V \times V.$$

Ok, what are we saying here? We have a graph $G$ which is defined by a set $V$ of vertices and a set $E$ of edges. We also have some additional information: $E$ is a subset of the Cartesian product $V \times V$. So by this definition, an edge is just an ordered pair of vertices.

In the example above, $(A,B)$ would be an element of $E$ since there is an edge going from $A$ to $B$. Similarly, $(A,D) \notin E$, since there is no edge going from $A$ to $D$.

$V$ and $E$ are sets, so we can write the following:

  • $n = |V|$ is the cardinality ('size') of $V$, i.e. the number of vertices
  • $m = |E|$ is the cardinality of $E$, i.e. the number of edges
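
To make this concrete, here's a minimal Python sketch (the variable names are my own choice, not from the article) representing the graph $F$ from above as a pair of sets:

```python
# The graph F from above: four vertices, four directed edges.
V = {"A", "B", "C", "D"}
E = {("A", "B"), ("B", "C"), ("B", "D"), ("D", "D")}

n = len(V)  # number of vertices, |V|
m = len(E)  # number of edges, |E|

print(n, m)  # 4 4
```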

Visualisation of a Graph

We've seen the visualisation of a graph above. Vertices are represented by labelled circles, and edges are visualised using lines to connect the vertices. Note that we have two different kinds of edges:

  • directed edge: represented by an arrow, also called an arc
  • undirected edge: represented by a plain line

We use directed edges when we have an edge going from some vertex $u$ to some vertex $v$ but not vice versa:

```mermaid
graph LR
  A((A)) --> B((B))
```

We use undirected edges when we have an edge going from $u$ to $v$ and another edge going from $v$ to $u$:

```mermaid
graph LR
  subgraph same: undirected edge
    A((A)) --- B((B))
  end
  subgraph reciprocal adjacency
    B1((B)) --> A1((A))
    A1 --> B1
  end
```

Directed and Undirected Graphs

We have two different types of graphs, directed and undirected graphs. A graph is undirected if, and only if, all of its edges are undirected. Otherwise it's directed. So if a graph has some edge $(a,b)$ (that is, going from vertex $a$ to vertex $b$), it needs to have an edge $(b,a)$ too in order to be undirected. And that must be true for all edges $e \in E$.

From a mathematical perspective, it's fairly easy to distinguish between these two types. To this end, we note that $E$ is a binary relation, as it is a subset of a Cartesian product. Let's also recall the definition of a symmetric relation:

Let $R$ be a relation on a set $A$. $R$ is symmetric if, and only if, for every $x,y \in A$: if $xRy$ then $yRx$.

This is just another way of saying that for all pairs $(x,y)$ in $R$, there must also be a complementing pair $(y,x)$. A graph $G=(V,E)$ is undirected if, and only if, $E$ is symmetric.
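
As a sketch under the definitions above, the symmetry check could look like this in Python; `is_undirected` is a hypothetical helper name, not a library function:

```python
def is_undirected(E):
    """Return True if the edge relation E is symmetric,
    i.e. every edge (x, y) has a counterpart (y, x)."""
    return all((y, x) in E for (x, y) in E)

# The graph F is directed: (A, B) is in E, but (B, A) is not.
E = {("A", "B"), ("B", "C"), ("B", "D"), ("D", "D")}
print(is_undirected(E))  # False
```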

More Notation for Directed Graphs

For directed graphs, we have a bit of additional notation. If we pick some random vertex $v \in V$, we can write $in(v)$. This denotes the in-degree of $v$, that is, the number of incoming edges into $v$.
In a similar manner, we also have $out(v)$, denoting the number of outgoing edges from $v$.

A vertex which has only edges going in but no edges going out is also called a sink node.

```mermaid
graph TB
  A((A)) --> D((Sink node))
  B((B)) --> D
  C((C)) --> D
```
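
Here's a hedged sketch of how one might compute these degrees from an edge set; `in_degree` and `out_degree` are illustrative names, not standard library functions:

```python
def in_degree(v, E):
    # Number of edges (u, v) coming into v.
    return sum(1 for (u, w) in E if w == v)

def out_degree(v, E):
    # Number of edges (v, w) going out of v.
    return sum(1 for (u, w) in E if u == v)

# The sink-node example above: A, B, and C all point to D.
E = {("A", "D"), ("B", "D"), ("C", "D")}
print(in_degree("D", E), out_degree("D", E))  # 3 0  -> D is a sink node
```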

More Graph Properties: Planar and Connected

A graph is planar if it is possible to draw the graph (formally, to embed the graph in the plane) so that no edges cross each other.

A graph is connected if, starting from any vertex, there always exists a path to any other vertex.

```mermaid
graph LR
  C((C)) --> D((D))
  A((A)) --- B((B))
  B --- C
```

This graph is not connected: $D$ is a sink node, so there is no path from $D$ to any other vertex.

Graph Representations

Now that we've covered the basic theory of graphs, we can move on to the next problem: how can we represent graphs in memory? What data structures are suitable?

There are two popular approaches:

  • Adjacency matrices
  • Adjacency lists

Adjacency Matrices

Let's assume we have some graph $G$ with $n$ vertices. We can name these vertices $0,\dots,n-1$. Then, the adjacency matrix of $G$ is an $n \times n$ matrix, that is, a matrix which has $n$ rows and $n$ columns. Let's call that matrix $A$.

$a_{ij}$ is an entry of $A$, where $i$ indicates the row and $j$ indicates the column. $a_{ij}$ is $1$ if there is an edge which goes from vertex $i$ to vertex $j$. Otherwise, $a_{ij}$ is zero.

For our graph $F$, this would be $A$, assuming that the vertices are ordered lexicographically:

$$A = \begin{pmatrix} 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 1\\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 1 \end{pmatrix}$$

For instance, if our indexing is zero-based, $a_{3,3}=1$, since $D$ is the vertex with index $3$ and has an edge pointing to itself.
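
Here's a minimal Python sketch of $A$, using nested lists and the zero-based lexicographic indexing assumed above (A = 0, B = 1, C = 2, D = 3):

```python
# Adjacency matrix of F; row i, column j is 1 iff there is an edge i -> j.
A = [
    [0, 1, 0, 0],  # A -> B
    [0, 0, 1, 1],  # B -> C, B -> D
    [0, 0, 0, 0],  # C has no outgoing edges
    [0, 0, 0, 1],  # D -> D (self-loop)
]

print(A[3][3])  # 1: D has an edge to itself
```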

Adjacency Lists

Let's take $G$ with its $n$ vertices again. We come up with an array with $n$ entries, one for each vertex. Each entry contains a list of all vertices to which the respective vertex has outgoing edges. So each entry is the list of vertices adjacent to the vertex it belongs to.

Image of an Adjacency List. Source: https://commons.wikimedia.org/wiki/File:Adjacencylist_array_of_linkedlists_directedgraph.svg

This adjacency list represents a graph $G=(V,E)$, where

$$V=\{a,b,c,d,e\}$$

and

$$E=\{(a,b),(a,d),(a,e),(b,c),(c,d),(d,a)\}.$$
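
In Python, one common way to sketch this is a dictionary mapping each vertex to the list of its neighbours (mirroring the edge set above):

```python
# Adjacency list for the graph from the image:
# each vertex maps to the list of vertices it has outgoing edges to.
adj = {
    "a": ["b", "d", "e"],
    "b": ["c"],
    "c": ["d"],
    "d": ["a"],
    "e": [],
}
```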

Comparison: Adjacency Matrices and Adjacency Lists

I'll just throw this table at your head; explanations (and a bit of additional information) will follow:

| Efficiency Criterion | Adjacency Matrix | Adjacency List |
| --- | --- | --- |
| Space efficiency | $\Theta(n^2)$ | $\Theta(n+m)$ |
| Time to check if $w$ is adjacent to $v$ (adjacency check) | $\Theta(1)$ | $\Theta(out(v))$ |
| Time to visit all $w$ adjacent to $v$ (adjacency visits) | $\Theta(n)$ | $\Theta(out(v))$ |
| Time to visit all edges (visit-all) | $\Theta(n^2)$ | $\Theta(n+m)$ |

Advantages of Adjacency Matrices

Easy Implementation

Adjacency matrices have a major advantage: they are very easy to implement. We can simply use a two-dimensional array, or even a one-dimensional one if we use an injective (ideally bijective) mapping function.

Good for Undirected-Check

It's easy to check whether a graph is undirected or not: if the graph is undirected, the corresponding adjacency matrix is symmetric around the main diagonal. If you don't know why this is the case, I'd advise you to draw an undirected graph and write down its adjacency matrix.
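
A small sketch of that symmetry check on a nested-list matrix; `matrix_is_symmetric` is an illustrative name:

```python
def matrix_is_symmetric(A):
    """Check whether A equals its transpose, i.e. a_ij == a_ji for all i, j."""
    n = len(A)
    return all(A[i][j] == A[j][i] for i in range(n) for j in range(n))

# The matrix of F is not symmetric, so F is a directed graph.
A = [[0, 1, 0, 0], [0, 0, 1, 1], [0, 0, 0, 0], [0, 0, 0, 1]]
print(matrix_is_symmetric(A))  # False
```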

Good for Adjacency-Check

Given that we have direct access to all entries of $A$ using array indices, we can check very efficiently whether a given pair of vertices is adjacent, i.e. whether there exists an edge between the vertices.

Index accesses and simple comparisons happen in constant time, so the worst-case running time is $\Theta(1)$.
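
Sketched with the matrix of $F$ from above, the check is a single index access:

```python
A = [[0, 1, 0, 0], [0, 0, 1, 1], [0, 0, 0, 0], [0, 0, 0, 1]]

# Is there an edge from vertex 1 (B) to vertex 3 (D)? Constant time.
print(A[1][3] == 1)  # True
```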

Disadvantages of Adjacency Matrices

Space-Inefficient Implementation

We have an $n \times n$ matrix, so the implementation will use up $\Theta(n^2)$ space. That's rather space-inefficient in comparison to adjacency lists, where we don't store all the zeros of $A$.

Slow Adjacency Visits

We do have direct access via indices, so we can directly access the row for vertex $u$. However, we don't know which of the cells of that row are $1$ and which are $0$. So we need to go through the entire row, which has length $n$. Worst-case running time of $\Theta(n)$.
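
A sketch of an adjacency visit on the matrix of $F$: we have to scan the whole row, even though $B$ only has two neighbours:

```python
A = [[0, 1, 0, 0], [0, 0, 1, 1], [0, 0, 0, 0], [0, 0, 0, 1]]

v = 1  # vertex B
# We must inspect all n cells of row v to find its neighbours.
neighbours = [j for j in range(len(A)) if A[v][j] == 1]
print(neighbours)  # [2, 3], i.e. C and D
```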

Slow Visit-All

Similar to the prior section, but now we're going through all $n$ rows and, for each row, through all $n$ columns. So a worst-case running time of $\Theta(n^2)$.

Advantages of Adjacency Lists

Space-Efficient Implementation

In comparison to the adjacency matrix, the adjacency list is a more compact way of representing a graph, as we don't store all the zeros of $A$.

The list has $n$ entries, and the $m$ edges are distributed across these entries. So a space efficiency of $\Theta(n+m)$.

Good for Adjacency Visits

If we want to visit all vertices adjacent to $v$, we simply index into the entry for $v$, a $\Theta(1)$ operation, and go through the entry's list, which has length $out(v)$.

Worst-case running time of $\Theta(out(v))$, pretty good!
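
Sketched with the adjacency list from above:

```python
adj = {"a": ["b", "d", "e"], "b": ["c"], "c": ["d"], "d": ["a"], "e": []}

# Visiting all neighbours of "a" touches exactly out(a) = 3 entries.
for w in adj["a"]:
    print(w)  # b, d, e
```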

Good for Visit-All

We also get a fast asymptotic worst-case running time for visiting all edges. We simply direct-access the entry for each of the $n$ vertices and go along the respective entry lists, which have total length $m$.

Worst-case running time of $\Theta(n+m)$.
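
Again sketched with the adjacency list from above; the outer loop touches $n$ entries and the inner loops touch each of the $m$ edges exactly once:

```python
adj = {"a": ["b", "d", "e"], "b": ["c"], "c": ["d"], "d": ["a"], "e": []}

# n dictionary entries, and the inner loops visit each of the m edges once.
for v, neighbours in adj.items():
    for w in neighbours:
        print(v, "->", w)
```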

Disadvantages of Adjacency Lists

Slow Adjacency Check (is $w$ a neighbour of $v$?)

Since each entry of the adjacency list is an unordered list of neighbours, we need to go through the list until we find $w$ (or we don't, in which case the check evaluates to False).

In the worst case, we need to go through the entire list. Recall that the length of that list is $out(v)$. So we have a worst-case running time of $\Theta(out(v))$.
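
Sketched with the same adjacency list; Python's `in` operator on a list performs exactly this linear scan:

```python
adj = {"a": ["b", "d", "e"], "b": ["c"], "c": ["d"], "d": ["a"], "e": []}

# Linear scan through the neighbour list of "a" (length out(a)).
print("e" in adj["a"])  # True
print("c" in adj["a"])  # False
```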

TL;DR

Use adjacency lists unless the adjacency check plays a major role in your algorithm.

Additional Bits

Stuff that is good to know but may not be used in my article series about graphs.

Sparse and Dense Graphs

Recall that $n$ is the number of vertices of a graph. Then, the maximum number of edges is $n^2$. This is easy to understand: if we start with the first vertex, $0$, we can have at most $n$ edges going out from it, including an edge to itself (yup, that's definitely possible). For vertex $1$, we can also have at most $n$ outgoing edges. And so on, up until vertex $n-1$. So the maximum number of edges is $n + n + \dots + n$, with $n$ summands. Hence, at most $n^2$ edges.

Recall that $m$ is the number of edges of a graph $G$. If $m$ is close to $n^2$, we say that $G$ is a dense graph. You can think of this property like this: a dense graph is a graph which has proportionally many edges; it is well connected.

If $m$ is much smaller than $n^2$, we call $G$ a sparse graph.

There is no exact definition of how much smaller or how close to $n^2$ $m$ should be; these terms are usually used when we're talking asymptotically about a graph, checking whether it has many or few edges. In that context, we'd usually talk about $\Theta(m)$ rather than $m$ as an exact number.

The Corollary of Kuratowski's Theorem

Kuratowski's theorem is useful when working with planar graphs. For a simple, connected planar graph with at least three vertices, it has the corollary that

$$|E| \le 3|V| - 6.$$

We can use this to:

  1. Check whether a graph can be planar, given the number of edges $|E|$ and the number of vertices $|V|$. If the inequality is violated, the graph is definitely not planar; if it holds, the graph may or may not be planar, since the condition is necessary but not sufficient (see the sketch below).
  2. Given the information that a graph is planar, we can use the criterion to obtain a quite tight upper bound for the number of edges. Looking at the inequality, that number will asymptotically be at most linear in the number of vertices for planar graphs.
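
A sketch of that check; `may_be_planar` is a hypothetical name, and recall the corollary's preconditions (simple, connected, at least three vertices):

```python
def may_be_planar(num_vertices, num_edges):
    """Necessary (but not sufficient!) planarity condition from the
    corollary of Kuratowski's theorem; assumes a simple, connected
    graph with at least three vertices."""
    return num_edges <= 3 * num_vertices - 6

print(may_be_planar(5, 10))  # False: K5 violates the bound, so it cannot be planar
print(may_be_planar(6, 9))   # True: K3,3 passes the check, yet it is NOT planar --
                             # the condition is necessary, not sufficient
```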
