Raft Leader Election in Consul
A small paper reading group has assembled at work. We give ourselves two to three weeks to read a paper, then meet up after hours, eat pizza, and discuss it. Our last paper was on the Raft consensus algorithm, and I was chosen to lead the discussion.
To help the impact of Raft hit closer to home, I put together a small demo of Raft's leader election process using Consul. The demo spins up a three-node Consul cluster in containers, then interleaves all of the debug log output, filtered to lines containing raft. Reading through parts of the Raft paper, you can see how the logging output of HashiCorp's implementation lines up.
Section 5.2 of the Raft paper focuses on leader election, and starts off with:
When servers start up, they begin as followers.
Sure enough, the first raft-filtered logs start with:
Next comes the beginning of an election:
If a follower receives no communication over a period of time called the election timeout, then it assumes there is no viable leader and begins an election to choose a new leader.
That corresponds with:
Now that the election has started, there needs to be a winner:
A candidate wins an election if it receives votes from a majority of the servers in the full cluster for the same term.
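The majority rule is easy to state in code. This is a sketch of my own, not Consul's implementation; the function names are mine:

```go
package main

import "fmt"

// majority returns the number of votes needed to win an election:
// strictly more than half of the full cluster size (2 of 3, 3 of 5).
func majority(clusterSize int) int {
	return clusterSize/2 + 1
}

// wonElection reports whether a candidate's vote count (which includes
// the vote it cast for itself) reaches a majority of the full cluster.
func wonElection(votes, clusterSize int) bool {
	return votes >= majority(clusterSize)
}

func main() {
	// In the three-node demo cluster, two votes are enough.
	fmt.Println(majority(3), wonElection(2, 3))
}
```

Requiring a majority of the *full* cluster, not just the reachable servers, is what guarantees at most one leader per term: two disjoint majorities cannot exist.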
Which goes with:
AppendEntries is used to announce the new leader to the other servers, including any remaining candidates:
While waiting for votes, a candidate may receive an AppendEntries RPC from another server claiming to be leader.
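The rule the paper gives for this case: if the claimed leader's term is at least as large as the candidate's current term, the candidate recognizes it and steps back down to follower; otherwise it rejects the RPC and continues campaigning. A hedged sketch of just that state transition (the types and names here are my own, not Consul's):

```go
package main

import "fmt"

type state int

const (
	follower state = iota
	candidate
	leader
)

// onAppendEntries models the section 5.2 rule: a candidate that
// receives an AppendEntries RPC from a server whose term is at least
// its own accepts that server as leader and returns to follower state.
// An RPC with a stale (smaller) term is rejected and the candidate
// keeps campaigning.
func onAppendEntries(current state, currentTerm, leaderTerm int) state {
	if current == candidate && leaderTerm >= currentTerm {
		return follower
	}
	return current
}

func main() {
	// A candidate in term 2 hears a heartbeat from a term-2 leader.
	fmt.Println(onAppendEntries(candidate, 2, 2) == follower)
}
```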
consul1 shows that it is replicating to