In a MongoDB replica set, elections are decided by a majority of the voting members.
Imagine a replica set with three (voting) members. Say Node A is the primary and nodes B and C are secondaries. Node A goes down, so nodes B and C hold an election. They still form a majority (two out of three). The election is decided first by priority; if nodes B and C have the same priority, the one whose oplog is most up to date with respect to the failed primary wins. Let's say that is Node B.
Once node A comes back, there is no new election. Node B remains the primary, and nodes C and A are now secondaries.
On the other hand, if two nodes go down you no longer have a majority, so the replica set cannot accept writes until at least one of the two failed members comes back online (and is reachable by the single surviving node).
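To make the quorum and tie-breaking rules concrete, here is a minimal Python sketch. It is illustrative only (MongoDB's real elections use a Raft-like protocol), and the node names, priorities and oplog timestamps are invented for the example:

```python
# Illustrative sketch only, not MongoDB's real election code.
# Surviving voting members after the primary (A) has gone down.
total_voting_members = 3          # A, B and C
survivors = [
    {"name": "B", "priority": 1, "oplog_ts": 1_700_000_050},  # most recent write applied
    {"name": "C", "priority": 1, "oplog_ts": 1_700_000_042},
]

# A primary can only be elected if the survivors form a strict majority.
if len(survivors) > total_voting_members // 2:
    # Higher priority wins first; on a tie, the freshest oplog wins.
    winner = max(survivors, key=lambda m: (m["priority"], m["oplog_ts"]))
    print(f"{winner['name']} becomes the new primary")   # -> B
else:
    print("No majority: the set stays without a primary")
```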
Now imagine a replica set with four (voting) members. Say Node A is the primary and nodes B, C and D are secondaries. Node A goes down, so nodes B, C and D hold an election. They of course form a majority (three out of four).
However, if two nodes go down you don't have a majority (two out of four), so the replica set is again effectively read-only.
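From the application's point of view this looks roughly like the sketch below, written with PyMongo under assumed conditions: the hostnames nodeC/nodeD, the replica set name rs0 and the shop.orders collection are placeholders, and only two of the four members are assumed reachable:

```python
from pymongo import MongoClient, ReadPreference
from pymongo.errors import ServerSelectionTimeoutError

# Only nodes C and D are reachable; with 2 of 4 voting members there is no majority,
# so the driver will never find a primary to send the write to.
client = MongoClient(
    "mongodb://nodeC:27017,nodeD:27017/?replicaSet=rs0",
    serverSelectionTimeoutMS=5000,
)

try:
    client.shop.orders.insert_one({"sku": "abc-123", "qty": 1})
except ServerSelectionTimeoutError:
    print("No primary available: writes are rejected until a majority is restored")

# Reads can still be served by the surviving secondaries if the application allows it.
db = client.get_database("shop", read_preference=ReadPreference.SECONDARY)
print(db.orders.count_documents({}))
```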
So that's why an odd number is recommended: if you lose a single member of a three-member replica set, the outcome is the same as losing a single member of a four-member replica set. The survivors still form a majority, and a new primary can be elected.
On the other hand, if you lose two members of a three-member replica set or of a four-member replica set (or, in general, half or more of the members of an n-member replica set), the impact is again the same: no new primary can be elected.
So, to make a long story short, there is no redundancy gain from having an even number of members in a replica set.
A few Q&As on why an odd number of nodes is needed:

Q: Is the preference for an odd number of nodes an internal mechanism of MongoDB, or is it just a general way to increase fault tolerance?
A: MongoDB uses a quorum/consensus approach to guard against "split brain" problems, which means an odd number of nodes is the preferred layout. This approach, sometimes called "pessimistic", sacrifices availability for consistency: a network partition limits access to the divided sub-partitions. The sub-partition with the majority of the nodes remains available, while any sub-partition without a strict majority becomes read-only (all nodes in that sub-partition will be secondaries).
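A tiny simulation (illustrative Python; the five-member set and the particular 3 + 2 split are assumptions) shows which side of a partition can keep or elect a primary:

```python
# Illustrative only: after a network partition, which sub-partition can keep
# (or elect) a primary? Only the side holding a strict majority of all voting members.
voting_members = {"A", "B", "C", "D", "E"}   # a 5-member replica set
partition_1 = {"A", "B", "C"}                # one side of the split
partition_2 = {"D", "E"}                     # the other side

def can_elect_primary(partition, all_members):
    # A strict majority of *all* voting members is required, not just of the partition.
    return len(partition) > len(all_members) // 2

print(can_elect_primary(partition_1, voting_members))  # True  -> stays writable
print(can_elect_primary(partition_2, voting_members))  # False -> read-only secondaries
```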
Q: So it's not mandatory?

A: It's not mandatory, but it is strongly recommended.
Let's imagine that a replica set has an even number of nodes (four, for example), and an unfortunate network partition splits the set in half (2 + 2). Which partition should accept writes? The first? The second? Both? What should happen when the network is restored? These are all hard questions. Having an odd number of nodes eliminates them entirely.
The set can't be split exactly in half, so the bigger part will keep accepting writes. (To be exact, a node must see more than half of the voting members, including itself, to be elected primary: 1 of 1, 2 of 3, 3 of 5, 4 of 7, and so on.) An odd number of members therefore ensures that whenever the set is split in two, one side still has a majority and can elect a primary.
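Put as a formula, a candidate needs floor(n/2) + 1 votes out of n voting members. A short illustrative calculation shows the sequence above and why a 2 + 2 split leaves neither half electable:

```python
# Strict majority needed for each replica set size (illustrative arithmetic).
for n in (1, 3, 5, 7):
    print(f"{n} members -> {n // 2 + 1} votes needed")   # 1 of 1, 2 of 3, 3 of 5, 4 of 7

# A 4-member set split 2 + 2: neither half has the 3 votes a majority requires.
half = 2
print(half > 4 // 2)   # False -> no primary on either side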
Fault tolerance for a replica set is the number of members that can become unavailable and still leave enough members in the set to elect a primary. In other words, it is the difference between the number of members in the set and the majority needed to elect a primary. Fault tolerance is an effect of replica set size, but the relationship is not direct.
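A small illustrative calculation makes that indirect relationship visible: adding a member to reach an even count raises the required majority but not the fault tolerance:

```python
# Fault tolerance = members - majority needed. Note that 3 and 4 members tolerate
# the same number of failures, as do 5 and 6: the extra even member buys nothing.
for members in range(3, 8):
    majority = members // 2 + 1
    print(f"{members} members: majority {majority}, fault tolerance {members - majority}")
# 3 members: majority 2, fault tolerance 1
# 4 members: majority 3, fault tolerance 1
# 5 members: majority 3, fault tolerance 2
# 6 members: majority 4, fault tolerance 2
# 7 members: majority 4, fault tolerance 3
```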
Read More: https://kovidacademy.com/blog/need-use-odd-number-nodes-mongodb-replicaset/