Node in the Network

Node's entire lifetime

This section describes the flow a node will follow during its entire lifetime.
When they join the network, they won't yet be a member of any section.
First, they will have to bootstrap with their proxy node, receive a RelocateInfo, and attempt to join the section that this RelocateInfo points to.
Once they have a full section, they will be able to operate as a full section member.

OWN_SECTION refers to this node's own section.
It is an Option<Prefix>.
While a node is being relocated, the value will be None. Once they get accepted into a section, it becomes Some(that_section_s_prefix).
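The lifetime loop described above can be sketched as follows. All types and helper functions here (Prefix, RelocateInfo, the bootstrap and joining helpers) are hypothetical stand-ins for illustration, not the real routing API:

```rust
// Minimal sketch of the node-lifetime loop. `Prefix`, `RelocateInfo`,
// and the helper functions are illustrative stand-ins.
#[derive(Clone, Debug, PartialEq)]
struct Prefix(String);

struct RelocateInfo {
    target: Prefix,
}

struct Node {
    // None while relocating; Some(prefix) once accepted into a section.
    own_section: Option<Prefix>,
}

impl Node {
    fn new() -> Self {
        Node { own_section: None }
    }

    /// One turn of the lifetime loop: bootstrap if sectionless,
    /// otherwise act as a section member until relocated away.
    fn step(&mut self) {
        match self.own_section.take() {
            None => {
                let info = self.bootstrap_and_relocate();
                self.own_section = self.joining_relocate_candidate(info);
            }
            Some(prefix) => {
                // Runs until our section relocates us away; afterwards
                // own_section is back to None and we rebootstrap.
                self.run_as_section_member(&prefix);
            }
        }
    }

    fn bootstrap_and_relocate(&self) -> RelocateInfo {
        // Stand-in: a real node gets this from its proxy.
        RelocateInfo { target: Prefix("10".into()) }
    }

    fn joining_relocate_candidate(&self, info: RelocateInfo) -> Option<Prefix> {
        // None would mean the resource proof failed and we must rebootstrap.
        Some(info.target)
    }

    fn run_as_section_member(&mut self, _prefix: &Prefix) {}
}
```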

This function gets called when a node just joined the Network.
At this stage, we are connected to a proxy node and they indicate to us which section we should join.

Once a node has joined a section (indicated by OWN_SECTION.is_some()), they will be able to perform as a member of that section until they are relocated away from it.
See StartSectionMemberNode graph for details.

First, create a new identity with a new public key-pair.
The node connects to a proxy with this new identity, which it will use to join the new section as a full node.

Once a node knows where to be relocated, they will follow this flow to become a full member of the section.
This covers resource proof from the point of view of the node being resource-proofed.
The output of this function is an Option. If we fail the resource proof, it is None, which means we will have to bootstrap again. If it is Some, it contains the RelocateInfo we need to join this section.
See JoiningRelocateCandidate graph for details.

graph TB Start --> LoopStart style Start fill:#f9f,stroke:#333,stroke-width:4px LoopStart --> HasSection HasSection(("Check?")) HasSection --"OWN_SECTION.is_none()"--> BootstrapAndRelocate BootstrapAndRelocate["BootstrapAndRelocate:
Get RelocateInfo"] BootstrapAndRelocate --> ReBootstrapWithNewIdentity HasSection --"OWN_SECTION.is_some()"--> StartSectionMemberNode StartSectionMemberNode["StartSectionMemberNode
"] style StartSectionMemberNode fill:#f9f,stroke:#333,stroke-width:4px StartSectionMemberNode --> ReBootstrapWithNewIdentity ReBootstrapWithNewIdentity["Rebootstrap
with new relocated identity
output: RelocateInfo"] ReBootstrapWithNewIdentity --> JoiningRelocateCandidate JoiningRelocateCandidate["JoiningRelocateCandidate(RelocateInfo)
output: JoiningApproved"] style JoiningRelocateCandidate fill:#f9f,stroke:#333,stroke-width:4px SetNodeApproval["OWN_SECTION=JoiningApproved"] JoiningRelocateCandidate --> SetNodeApproval SetNodeApproval --> LoopEnd LoopEnd --> LoopStart

Becoming a full member of a section

This is from the point of view of a node trying to join a section as a full member.
This node is going to try to be accepted as a candidate until it receives an Rpc::NodeConnected to complete this stage successfully.
This node is going to perform the resource proof until it receives an Rpc::NodeApproval to complete this stage successfully.
If the node is not accepted, after a timeout it will try another section as a new node.

Timeout triggered to resend messages that have been lost: either Rpc::CandidateInfo or Rpc::ResourceProofResponse.

Timeout to stop trying to connect if so much time has elapsed that we will not be able to succeed.

Timeout to stop trying to resource proof if so much time has elapsed that we will not be able to succeed.

Return the next part of the resource proof for the given elder.
inputs:
- elder name

Return the last part of the resource proof that we sent for the given elder.
inputs:
- elder name

Return all the elders we are currently sending resource proofs to.

Collection of elders to whom we have not sent a ResourceProofResponse during the timeout period.

Indicates whether we successfully connected; if so, we resend Rpc::ResourceProofResponse instead of Rpc::CandidateInfo.

The RelocatedInfo given to the routine and used to send CandidateInfo.
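The TimeoutResendInfo handling described above (resend CandidateInfo before we are connected, resend missing proof parts afterwards) might be sketched like this; the Outgoing type and the field names are illustrative assumptions:

```rust
// Sketch of the candidate-side TimeoutResendInfo handling: before CONNECTED
// we resend CandidateInfo; afterwards we resend proof parts to the elders
// that did not acknowledge one during the last timeout period.
// All names are illustrative stand-ins for the diagram's pseudocode.
use std::collections::HashSet;

#[derive(Debug, PartialEq)]
enum Outgoing {
    CandidateInfo,
    ResourceProofResponse(String), // elder name
}

struct JoiningCandidate {
    connected: bool,
    need_resend_proofs: HashSet<String>,
    all_proof_elders: HashSet<String>,
}

impl JoiningCandidate {
    fn on_timeout_resend_info(&mut self) -> Vec<Outgoing> {
        if self.connected {
            let resends = self
                .need_resend_proofs
                .iter()
                .cloned()
                .map(Outgoing::ResourceProofResponse)
                .collect();
            // Everyone we are proving to must acknowledge again
            // before the next timeout fires.
            self.need_resend_proofs = self.all_proof_elders.clone();
            resends
        } else {
            vec![Outgoing::CandidateInfo]
        }
    }
}
```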

graph TB JoiningRelocateCandidate --> InitialSendConnectionInfoRequest JoiningRelocateCandidate["JoiningRelocateCandidate
(Take RelocatedInfo)"] style JoiningRelocateCandidate fill:#f9f,stroke:#333,stroke-width:4px InitialSendConnectionInfoRequest["RELOCATED_INFO = RelocatedInfo
send_rpc(Rpc::CandidateInfo from RELOCATED_INFO)
to target NaeManager

schedule(TimeoutResendInfo)
schedule(TimeoutConnectRefused)"] InitialSendConnectionInfoRequest-->LoopStart LoopStart --> WaitFor WaitFor(("Wait for 0:")) LocalEvent((Local
Event)) WaitFor --> LocalEvent LocalEvent -- ResourceProofForElderReady --> SendNextResourceProofPartForElder LocalEvent--"TimeoutResendInfo triggered"--> CheckResendCandidateInfo CheckResendCandidateInfo((Check)) CheckResendCandidateInfo -- "Otherwise" --> ResendCandidateInfo ResendCandidateInfo["send_rpc(
Rpc::CandidateInfo from RELOCATED_INFO)
to target NaeManager

schedule(
TimeoutResendInfo)"] ResendCandidateInfo --> ScheduleResendTimeoutInfo CheckResendCandidateInfo -- "CONNECTED==true" --> ResendProofs ResendProofs["for name in NEED_RESEND_PROOFS:
send_rpc(
Rpc::ResourceProofResponse{
get_resend_resource_proof_part(
name)})

NEED_RESEND_PROOFS=
get_resource_proof_elders()"] ResendProofs --> ScheduleResendTimeoutInfo ScheduleResendTimeoutInfo["schedule(
TimeoutResendInfo)"] ScheduleResendTimeoutInfo --> LoopEnd SendNextResourceProofPartForElder["NEED_RESEND_PROOFS.remove(
elder name)

send_rpc(
Rpc::ResourceProofResponse{
get_next_resource_proof_part(
elder name)})"] SendNextResourceProofPartForElder --> LoopEnd LocalEvent--"TimeoutConnectRefused
or
TimeoutProofRefused
triggered"--> EndRoutine EndRoutine["End of JoiningRelocateCandidate
"] style EndRoutine fill:#f9f,stroke:#333,stroke-width:4px Rpc((RPC)) WaitFor --> Rpc Rpc -- Rpc::NodeApproval --> EndRoutine Rpc -- Rpc::NodeConnected --> NodeConnected NodeConnected["kill_scheduled(TimeoutConnectRefused)>
CONNECTED=true"] NodeConnected-->LoopEnd Rpc -- Rpc::ConnectionInfoRequest --> OnConnectionInfoRequest OnConnectionInfoRequest["send_rpc(
Rpc::ConnectionInfoResponse)"] OnConnectionInfoRequest-->LoopEnd Rpc -- Rpc::ResourceProofReceipt --> SendNextResourceProofPartForElder Rpc -- Rpc::ResourceProof --> StartComputeResourceProofForElder StartComputeResourceProofForElder["start_compute_resource_proof(source elder)

schedule(TimeoutProofRefused)"] StartComputeResourceProofForElder --> LoopEnd Rpc -- "Rpc::ExpectCandidate
Rpc::ExpectCandidateRefuseResponse
Rpc::ExpectCandidateAcceptResponse
..." --> VoteParsecRPC VoteParsecRPC["vote_for(the parsec rpc)
(cache for later)"] VoteParsecRPC --> LoopEnd LoopEnd --> LoopStart

Node as a member of a section

Once a node has joined a section, they need to be ready to take on multiple roles simultaneously.

All of these flows are happening simultaneously, but they share a common event loop. At any time, either all flows, or all but one, must be in a "wait" state. (If an event is handled by multiple active event loops, the one with the highest number handles it.)
If our section decides to relocate us, we will have to stop functioning as a member of our section and go back to the previous flow, where we will "Rebootstrap" so we can become a member of a different section.

graph TB style StartSectionMemberNode fill:#f9f,stroke:#333,stroke-width:4px style EndRoutine fill:#f9f,stroke:#333,stroke-width:4px StartSectionMemberNode --> InitialiseNodeInternalState InitialiseNodeInternalState["initialise_node_internal_state()

(Parsec, Routing table...)"] InitialiseNodeInternalState --> ConcurrentStartElder ConcurrentStartElder{"Concurrent
start elder"} ConcurrentStartElder --> ConcurrentStartSrc ConcurrentStartSrc{"Concurrent
start src"} ConcurrentStartSrc --> StartDecidesOnNodeToRelocate style StartDecidesOnNodeToRelocate fill:#f9f,stroke:#333,stroke-width:4px ConcurrentStartSrc --> StartRelocateSrc style StartRelocateSrc fill:#f9f,stroke:#333,stroke-width:4px ConcurrentStartElder --> ConcurrentStartDst ConcurrentStartDst{"Concurrent
start dst"} ConcurrentStartDst --> StartRespondToRelocateRequests style StartRespondToRelocateRequests fill:#f9f,stroke:#333,stroke-width:4px ConcurrentStartDst --> StartRelocatedNodeConnection StartRelocatedNodeConnection[StartRelocatedNodeConnection] style StartRelocatedNodeConnection fill:#f9f,stroke:#333,stroke-width:4px ConcurrentStartDst --> StartResourceProof StartResourceProof[StartResourceProof] style StartResourceProof fill:#f9f,stroke:#333,stroke-width:4px ConcurrentStartElder --> StartMergeSplitAndChangeElders style StartMergeSplitAndChangeElders fill:#f9f,stroke:#333,stroke-width:4px ConcurrentStartElder --> CheckOnlineOffline style CheckOnlineOffline fill:#f9f,stroke:#333,stroke-width:4px ConcurrentStartElder --> WaitFor WaitFor(("Wait for 0:")) Rpc((RPC)) WaitFor --> Rpc Rpc -- "Rpc::RelocatedInfo"--> EndRoutine EndRoutine["EndRoutine: Kill all sub routines"]

Destination section

As a member of a section, our section will sometimes receive a node that is being relocated. These diagrams are from the point of view of one of the nodes in the section, doing its part to handle the node that is trying to relocate to this section.

Deciding when to accept an incoming relocation

This flow represents what we do when a section contacts us to relocate one of their nodes to our section.
The process starts as we receive an Rpc::ExpectCandidate from this node.
We vote for it in PARSEC to be sure all members of the section process it in the same order.
Once it reaches consensus, we are ready to process that candidate by letting them connect (see StartRelocatedNodeConnection) and then perform the resource_proof (see StartResourceProof).
There are some subtleties, such as the fact that we only want to process one candidate at a time, but this is the general idea.

We receive this RPC from a section that wants to relocate a node to our section.
The node is not communicating with us yet; it will only do so once we have sent Rpc::ExpectCandidateAcceptResponse to the originating section.
On receiving it, we vote for Parsec::ExpectCandidate to process it in the same order as other members of our section.
It kickstarts the entire chain of events in this diagram.
Note that we could also see consensus on Parsec::ExpectCandidate before we ourselves voted for it in PARSEC, as long as enough members of our section did.

We want to accept at most one incoming relocation at a time into our section.
The count_waiting_proofing_or_hop function returns the count of nodes that we have yet to resource proof or relocate through a new hop (states from State::WaitingCandidateInfo until they reach State::Online or State::Relocated).
When the output of this function is not 0 and we reach consensus on Parsec::ExpectCandidate, we send an Rpc::ExpectCandidateRefuseResponse to the would-be incoming node so it can try another section or try again later.

If we already accepted a candidate, reply with the same info we provided initially, as returned by get_waiting_candidate_info.
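The three-way decision taken on Parsec::ExpectCandidate consensus can be sketched as a small pure function; the helper names follow the diagram, while the function signature and Response type are assumptions:

```rust
// Sketch of the destination section's decision when Parsec::ExpectCandidate
// reaches consensus. Helper names follow the diagram; everything else is
// an illustrative assumption.
#[derive(Debug, PartialEq)]
enum Response {
    // Resend the original acceptance we already produced for this candidate.
    IdenticalAccept,
    // Accept: record the candidate with NodeState=State::WaitingCandidateInfo.
    Accept,
    // We are busy with another candidate: let them try elsewhere.
    Refuse,
}

fn respond_to_expect_candidate(
    already_waiting_info_for_candidate: bool, // get_waiting_candidate_info(candidate).is_some()
    count_waiting_proofing_or_hop: usize,
) -> Response {
    if already_waiting_info_for_candidate {
        Response::IdenticalAccept
    } else if count_waiting_proofing_or_hop == 0 {
        Response::Accept
    } else {
        Response::Refuse
    }
}
```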

graph TB Start["StartRespondToRelocateRequests:
No exit - Needs killed"] style Start fill:#f9f,stroke:#333,stroke-width:4px Start --> LoopStart LoopEnd --> LoopStart LoopStart --> WaitFor WaitFor((Wait for 1:)) WaitFor --RPC--> RPC WaitFor --Parsec
consensus--> ParsecConsensus RPC((RPC)) RPC --Rpc::ExpectCandidate--> VoteParsecExpectCandidate ParsecConsensus((Consensus)) ParsecConsensus --Parsec::ExpectCandidate--> Balanced VoteParsecExpectCandidate["vote_for(
Parsec::ExpectCandidate)"] VoteParsecExpectCandidate --> LoopEnd Balanced(("Check")) Balanced -- "get_waiting_candidate_info(candidate).is_some()" --> SendIdenticalExpectCandidateAcceptResponse Balanced -- "count_waiting_proofing_or_hop()==0" --> SendExpectCandidateAcceptResponse Balanced -- "Otherwise" --> SendRefuse SendIdenticalExpectCandidateAcceptResponse["send_rpc(
Rpc::ExpectCandidateAcceptResponse)
again to source section
with original info
get_waiting_candidate_info(candidate)"] SendIdenticalExpectCandidateAcceptResponse --> LoopEnd SendExpectCandidateAcceptResponse["add_node(
NodeState=State::WaitingCandidateInfo)

send_rpc(
Rpc::ExpectCandidateAcceptResponse)
to source section"] SendExpectCandidateAcceptResponse --> LoopEnd SendRefuse["send_rpc(
Rpc::ExpectCandidateRefuseResponse)
to source section"] SendRefuse --> LoopEnd

Relocated node connection

Manage node with NodeState=State::WaitingCandidateInfo and connection info request/response RPCs.
The candidate sends its CandidateInfo to the NaeManager of the target interval to ensure it reaches the section even in the event of a split or a merge.
This target interval address could be the middle address of the interval, and the interval should cover a range that will not be split if the section splits.
When we complete, we either stop responding to Rpc::CandidateInfo RPCs on failure, or send Rpc::NodeConnected on success.
(We could also omit Rpc::NodeConnected; the node would continue sending CandidateInfo until Rpc::ResourceProof or a timeout.)
When the node reaches State::WaitingProofing or State::RelocatingHop state, the section becomes responsible for managing communication with this node as it would any of its adults.

Re-insert Parsec::CandidateConnected in case of prefix change (i.e split/merge + checkpoint), and CANDIDATES_INFO/CANDIDATES_VOTED should be kept. Do not re-insert votes for which is_valid_waited_info is false (discarded candidates will never reach consensus). (Alternatively could always discard votes on prefix change and clear CANDIDATES_INFO/CANDIDATES_VOTED)

Return true if the given CandidateInfo matches a candidate we are still waiting for (i.e. one in State::WaitingCandidateInfo).

If we know of a section that has a shorter prefix than ours, we prefer for them to receive this incoming node rather than ourselves as it will help keep the Network's sections tree balanced.
This shorter_prefix_section is a function that will return None if we are the shortest of any section we know, Some if there is a better candidate.
If it is Some, we will relocate the new node to them instead of completing the relocation to our own section.

Convert the candidate node at the target interval address to a node using the new_public_id from the given CandidateInfo. Update the state of the node with the given state.

A node state indicating that the node is relocated without ageing further.
Also counted as part of count_waiting_proofing_or_hop().

Collection of candidates that we will purge if they are still in State::WaitingCandidateInfo the next time Parsec::CheckRelocatedNodeConnection is consensused.

Collection of CandidateInfo keyed by CandidateInfo.new_public_id.
For all items, only keep if is_valid_waited_info(info)==true.

Collection of the candidates' CandidateInfo.new_public_id that we have voted for as connected.
For all items, only keep if also kept in CANDIDATES_INFO.

Provides an external entry point to reset the currently processed nodes: Do not reject a node because it took longer than expected.
This will be called for example after merge/split as the new nodes would become voters.
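The retention invariants for CANDIDATES_INFO and CANDIDATES_VOTED described above can be sketched as follows; `is_valid_waited_info` is stubbed here as a boolean field, since its real definition is not given in this section:

```rust
// Sketch of the cleanup performed on Parsec::CheckRelocatedNodeConnection
// consensus: CANDIDATES_INFO keeps only validly-waited-on entries, and
// CANDIDATES_VOTED keeps only votes whose info survived.
// The CandidateInfo type is an illustrative stand-in.
use std::collections::{HashMap, HashSet};

struct CandidateInfo {
    valid: bool, // stand-in for is_valid_waited_info(info)
}

fn clean_candidates(
    info: &mut HashMap<String, CandidateInfo>,
    voted: &mut HashSet<String>,
) {
    // Only keep info for candidates that are still validly waited on...
    info.retain(|_, v| v.valid);
    // ...and only keep votes whose info entry survived.
    voted.retain(|k| info.contains_key(k));
}
```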

graph TB RelocatedNodeConnection["StartRelocatedNodeConnection"] style RelocatedNodeConnection fill:#f9f,stroke:#333,stroke-width:4px RelocatedNodeConnection --> StartCheckRelocatedNodeConnectionTimeout StartCheckRelocatedNodeConnectionTimeout["schedule(
CheckRelocatedNodeConnectionTimeout)"] StartCheckRelocatedNodeConnectionTimeout --> LoopStart WaitFor(("Wait for 2:")) LoopStart-->WaitFor WaitFor -- Parsec
consensus--> ParsecConsensus ParsecConsensus((Consensus)) ParsecConsensus -- "Parsec::CheckRelocatedNodeConnection" --> CleanCandidates CleanCandidates["for candidate in both
-waiting_nodes_connecting()
-CANDIDATES:

purge_node_info(candidate)

CANDIDATES_INFO
.retain(|(k,v)|is_valid_waited_info(v))
CANDIDATES_VOTED
.retain(|src| CANDIDATES_INFO.contains_key(src))
CANDIDATES=waiting_nodes_connecting()

(reject candidates that took too long)"] CleanCandidates --> ReStartCheckRelocatedNodeConnectionTimeout ReStartCheckRelocatedNodeConnectionTimeout["schedule(
CheckRelocatedNodeConnectionTimeout)"] ReStartCheckRelocatedNodeConnectionTimeout --> LoopEnd ParsecConsensus -- "Parsec::CandidateConnected" --> CheckValidConnected CheckValidConnected((Check)) CheckValidConnected -- "is_valid_waited_info(
Parsec::CandidateConnected info)" --> Balanced Balanced(("Check")) Balanced -- "shorter_prefix_section(
).is_some()" --> RelocateToShorterPrefix RelocateToShorterPrefix["update_to_node(
Parsec::CandidateConnected info,
State::RelocatingHop)"] RelocateToShorterPrefix --> SetConnected Balanced -- "Otherwise" --> SetWaitingProof SetWaitingProof["update_to_node(
Parsec::CandidateConnected info,
State::WaitingProofing)"] SetWaitingProof --> SetConnected SetConnected["send_rpc(
Rpc::NodeConnected)

(All communications now
managed by elder like
for other adults)"] SetConnected --> LoopEnd CheckValidConnected -- "Otherwise" --> LoopEnd RPC((RPC)) WaitFor --"RPC"--> RPC RPC -- "Rpc::CandidateInfo" --> CheckCandidateInfo CheckCandidateInfo((Check)) CheckCandidateInfo -- "Otherwise" --> DiscardRPC DiscardRPC[Discard RPC] DiscardRPC --> LoopEnd CheckCandidateInfo -- "is_valid_waited_info(
CandidateInfo)" --> SendConnectionInfoRequest SendConnectionInfoRequest["CANDIDATES_INFO
.insert_if_not_exists(
CandidateInfo.new_public_id,
CandidateInfo info)

send_rpc(
Rpc::ConnectionInfoRequest)

(cache candidate info and
send connect info)"] SendConnectionInfoRequest --> LoopEnd RPC -- "Rpc::ConnectionInfoResponse
and
CANDIDATES_INFO
.contains_key(message_src)
and
!CANDIDATES_VOTED
.contains(message_src)" --> ConnectCandidate ConnectCandidate["connect_to_candidate(
ConnectionInfoResponse info)

vote_for(
Parsec::CandidateConnected{
CANDIDATES_INFO.get(message_src)})

CANDIDATES_VOTED.insert(message_src)

(connect and vote for
candidate connected)"] ConnectCandidate --> LoopEnd WaitFor --Event--> Event Event((Event)) VoteParsecCheckRelocatedNodeConnection["vote_for(
Parsec::CheckRelocatedNodeConnection)"] Event -- CheckRelocatedNodeConnectionTimeout
expire --> VoteParsecCheckRelocatedNodeConnection VoteParsecCheckRelocatedNodeConnection --> LoopEnd LoopEnd --> LoopStart
graph TB Reset["RelocatedNodeConnection_Reset"] style Reset fill:#19f,stroke:#333,stroke-width:4px EndReset["End RelocatedNodeConnection_Reset"] style EndReset fill:#19f,stroke:#333,stroke-width:4px Reset --> ClearCandidates ClearCandidates["CANDIDATES.clear()

(Give time for new elder to catch up)"] ClearCandidates --> EndReset

Resource proof from a destination's point of view

In the previous diagram, we ensured an incoming candidate would only reach the "State::WaitingProofing" state once it was connected to our section and able to communicate with its elders. At this stage, the node would be a member of our peer_list.
This is maintained across merge/split as for any adult node, so we are ready to resource proof any node in the State::WaitingProofing state.

This leads us here: to the resource proof.
We only process up to one resource proof at a time (i.e: we wait for the current resource proof to reach completion before scheduling a new one).
When we periodically decide to resource proof a node, we check whether any node is ready for it (in the State::WaitingProofing state) and pick the best candidate (there may be multiple after a merge).
As an elder, I will send the candidate a Rpc::ResourceProof. This gives them the "problem to solve". As they solve it, they will send me ResourceProofResponses. These will be parts of the proof. On receiving valid parts, I must send a ResourceProofReceipt. Once they finally send me the last valid part, they passed their resource proof and I vote for Parsec::Online (essentially accepting them as a member of my section).
At any time during this process, they may time out (the whole process taking longer than expected), in which case I will decide to reject them and vote for Parsec::PurgeCandidate.
This process ends once I reach consensus on either accepting the candidate (Parsec::Online) or refusing them (Parsec::PurgeCandidate).
It is possible that both reach quorum consensus, in which case the second one will be discarded. It won't cause issues, as consistency is the only property that matters here: if we accept someone who then went Offline, we will be able to detect that they are Offline later with the standard Offline detection mechanism. But it is more likely that they simply took close to the time limit to complete their proof.

Option<(Candidate, Nonce)>.
The candidate we are currently resource proofing, if any.
The nonce is used to distinguish new attempts of resource proofing the same candidate after a merge or a split.

Once we've voted a node online, we no longer need to handle further ResourceProofResponses from them.
This local variable helps us with this.
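A sketch of how the VOTED_ONLINE flag gates the vote on a final proof part; the part classification and the Action type are illustrative assumptions (the diagram does not specify handling of invalid parts, which are simply discarded here):

```rust
// Sketch of the elder-side handling of one Rpc::ResourceProofResponse part,
// using the VOTED_ONLINE flag described above. Only the first valid end of
// proof triggers a vote for Parsec::Online; every valid part earns a receipt.
#[derive(Debug, PartialEq)]
enum ProofPart {
    ValidPart,
    ValidEnd,
    Invalid, // assumption: invalid parts are discarded
}

#[derive(Debug, PartialEq)]
enum Action {
    SendReceipt,
    VoteOnlineAndSendReceipt,
    Discard,
}

fn on_proof_response(part: ProofPart, voted_online: &mut bool) -> Action {
    match part {
        ProofPart::ValidEnd if !*voted_online => {
            *voted_online = true;
            Action::VoteOnlineAndSendReceipt
        }
        // A valid part, or a repeated valid end, still earns a receipt.
        ProofPart::ValidPart | ProofPart::ValidEnd => Action::SendReceipt,
        ProofPart::Invalid => Action::Discard,
    }
}
```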

The candidate sends this RPC that contains part of a proof. It may continue to be sent by a node we have not accepted, even if consensus was reached to add it.
It may also still be sent by a candidate we have rejected. It's OK to discard the RPC in these cases as it is no longer relevant.

The same node could be accepted by some nodes who would vote Parsec::Online, but also time out for some other nodes who would vote for Parsec::PurgeCandidate.
If that is the case, we only want to process the first of these two events and discard the other one.

Provides an external entry point to cancel the currently processed nodes: Restart the resource proofing with all involved voters.
This will be called for example after merge/split as the new nodes would become voters.

graph TB ResourceProof["StartResourceProof"] style ResourceProof fill:#f9f,stroke:#333,stroke-width:4px ResourceProof --> StartCheckResourceProofTimeout StartCheckResourceProofTimeout["schedule(
CheckResourceProofTimeout)"] StartCheckResourceProofTimeout --> LoopStart WaitFor(("Wait for 3:")) LoopStart-->WaitFor WaitFor -- Parsec
consensus--> ParsecConsensus ParsecConsensus((Consensus)) ParsecConsensus --Parsec::Online
Parsec::PurgeCandidate
otherwise--> DiscardParsec DiscardParsec["Discard
Parsec
event"] DiscardParsec --> LoopEnd ParsecConsensus -- Parsec::PurgeCandidate
for CANDIDATE --> RemoveNode ParsecConsensus -- Parsec::Online
for CANDIDATE --> MakeOnline MakeOnline["set_node_state(
Candidate,
State::Online)

send_rpc(
Rpc::NodeApproval)"] RemoveNode["purge_node_info(
candidate node)"] RemoveNode --> ReStartCheckResourceProofTimeout MakeOnline --> ReStartCheckResourceProofTimeout ParsecConsensus -- "Parsec::CheckResourceProof" --> SetCandidate SetCandidate["CANDIDATE=resource_proof_candidate()

(Best node with NodeState=State::WaitingProofing)"] SetCandidate -->CheckRequestRP CheckRequestRP((Check)) CheckRequestRP --"Otherwise" --> ReStartCheckResourceProofTimeout ReStartCheckResourceProofTimeout["CANDIDATE=None
VOTED_ONLINE=no

schedule(
CheckResourceProofTimeout)"] ReStartCheckResourceProofTimeout --> LoopEnd CheckRequestRP --"CANDIDATE.is_some()"-->RequestRP RequestRP["send_rpc(
Rpc::ResourceProof)
to CANDIDATE

schedule(TimeoutAccept)"] RequestRP --> LoopEnd RPC((RPC)) WaitFor --RPC--> RPC RPC -- Rpc::ResourceProofResponse
from CANDIDATE --> ProofResponse((Proof)) ProofResponse((Check)) SendProofReceipt["send_rpc(
Rpc::ResourceProofReceipt)
for proof"] ProofResponse -- "Valid Part or End
otherwise" --> SendProofReceipt VoteParsecOnline["vote_for(
Parsec::Online)

VOTED_ONLINE=yes"] ProofResponse -- "Valid End
and
VOTED_ONLINE==no" --> VoteParsecOnline DiscardRPC[Discard RPC] RPC -- Rpc::ResourceProofResponse
otherwise --> DiscardRPC DiscardRPC --> LoopEnd WaitFor --Event--> Event Event((Event)) VoteParsecPurgeCandidate["vote_for(
Parsec::PurgeCandidate)"] Event -- TimeoutAccept
expire --> VoteParsecPurgeCandidate VoteParsecCheckResourceProofTimeout["vote_for(
Parsec::CheckResourceProof)"] Event -- CheckResourceProofTimeout
expire --> VoteParsecCheckResourceProofTimeout VoteParsecCheckResourceProofTimeout --> LoopEnd VoteParsecOnline --> SendProofReceipt SendProofReceipt-->LoopEnd VoteParsecPurgeCandidate --> LoopEnd LoopEnd --> LoopStart
graph TB Cancel["ResourceProof_Cancel"] style Cancel fill:#19f,stroke:#333,stroke-width:4px EndCancel["End ResourceProof_Cancel"] style EndCancel fill:#19f,stroke:#333,stroke-width:4px Cancel --> CancelCheckResourceProofTimeout CancelCheckResourceProofTimeout["CANDIDATE=None
VOTED_ONLINE=no

schedule(
CheckResourceProofTimeout)"] CancelCheckResourceProofTimeout --> EndCancel

Source section

As members of a section, each node must keep track of how many "work units" other nodes have performed.
Once a node has accumulated enough work units to gain age, the section must work together to relocate that node to a new section where they can become 1 age unit older.
These diagrams detail how this happens.

Deciding that a member of our section should be relocated away

In these diagrams, we are modelling the simple version of node ageing that we decided to implement for Fleming: Work units are incremented for all nodes in the section every time a timeout reaches consensus.
Because a quorum of elders must have voted for this timeout, the malicious nodes can't arbitrarily speed up the ageing of their nodes.
Once a node has accumulated enough work units to be relocated, if no other node is currently in State::RelocatingAgeIncrease we set its state to State::RelocatingAgeIncrease. This node will then be actually relocated in StartRelocateSrc (see StartRelocateSrc). Because of this, we will generally only relocate one adult at a time (except in case of merge).

In the context of Fleming, nodes (especially adults) aren't doing meaningful work such as handling data.
As a proxy, we use a time based metric to estimate how much work nodes have done (i.e: how long they remained in State::Online and responsive).
A local timeout wouldn't do here as it would allow malicious nodes to artificially age nodes in their sections faster. However, by reaching quorum on the fact a timeout happened, we ensure that at least one honest node has voted for the timeout.
All nodes start the WorkUnitTimeout. On expiry, they vote for a WorkUnitIncrement in PARSEC and restart the timer.

This function increments the number of work units for all members of my peer_list (remember that n_work_units is a member of the PeerState struct).

Returns true if we have any node currently relocating, considering only nodes with state State::RelocatingAgeIncrease.
There will most often be zero or one such nodes, unless a merge occurs in which case there may be multiple. Nodes coming back online, or needing an extra hop are not considered here.

This function will return the best candidate for relocation, if any.
First, it will only consider members of our peer_list that have the state: State::Online
We use the condition with has_relocating_node() to limit the number of State::Online nodes to relocate to one (except possibly in case of a merge).
It can return for instance the node with the largest number of work units for which the number of work units is greater than 2^age.
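Under these assumptions, get_node_to_relocate might look like the following sketch, using the example condition of work units greater than 2^age; the Peer struct is a stand-in for the real peer_list entry:

```rust
// Sketch of get_node_to_relocate(): among State::Online peers, pick the one
// with the most work units among those whose work units exceed 2^age.
// The Peer struct is an illustrative stand-in for the real PeerState.
#[derive(Clone, Debug, PartialEq)]
struct Peer {
    name: String,
    age: u32,
    n_work_units: u64,
    online: bool, // stand-in for state == State::Online
}

fn get_node_to_relocate(peer_list: &[Peer]) -> Option<&Peer> {
    peer_list
        .iter()
        .filter(|p| p.online)
        // Example threshold from the text: more work units than 2^age.
        .filter(|p| p.n_work_units > (1u64 << p.age))
        .max_by_key(|p| p.n_work_units)
}
```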

This function mutates our peer_list to set the state (for example set State::RelocatingAgeIncrease for the node).
inputs:
- node
- state
side-effect:
- mutates peer_list

graph TB Start["StartDecidesOnNodeToRelocate:
No exit - Needs killed"] style Start fill:#f9f,stroke:#333,stroke-width:4px Start --> StartWorkUnitTimeOut StartWorkUnitTimeOut["schedule(WorkUnitTimeOut)"] StartWorkUnitTimeOut --> LoopStart LoopEnd --> LoopStart LoopStart --> WaitFor WaitFor((Wait for 4:)) WaitFor --Event--> Event WaitFor --Parsec
consensus--> ParsecConsensus Event((Event)) Event -- WorkUnitTimeOut
Trigger --> VoteParsecRelocationTrigger VoteParsecRelocationTrigger["vote_for(Parsec::WorkUnitIncrement)
schedule(WorkUnitTimeOut)"] VoteParsecRelocationTrigger --> LoopEnd ParsecConsensus((Consensus)) ParsecConsensus -- Parsec::WorkUnitIncrement --> IncrementWorkUnit IncrementWorkUnit["increment_nodes_work_units()"] IncrementWorkUnit-->AlreadyRelocating AlreadyRelocating(("Check?")) AlreadyRelocating --"get_node_to_relocate().is_some()
and
!has_relocating_node()"--> SetRelocatingNodeState AlreadyRelocating --"Otherwise"--> LoopEnd SetRelocatingNodeState["set_node_state(
get_node_to_relocate(),
State::RelocatingAgeIncrease)"] SetRelocatingNodeState --> LoopEnd

Relocating a member of our section away from it

At this stage, we handle nodes that were marked for relocation.

We send an Rpc::ExpectCandidate to the destination section:
Either that section will send us a Rpc::ExpectCandidateAcceptResponse, then we will complete the node relocation.
Or that section will send us a Rpc::ExpectCandidateRefuseResponse, then we will retry later.
Or that RPC or the response is lost, then we will retry later.

When we receive Rpc::ExpectCandidateAcceptResponse or Rpc::ExpectCandidateRefuseResponse, we vote for it in PARSEC, regardless of the order of operations, so it will be consensused.
The first Parsec::ExpectCandidateAcceptResponse consensused for a node will be the single valid relocation. We will sign the relocation info through PARSEC and send the node that is being relocated the RelocatedInfo they will need.
At this point, we will purge their information since this node isn't a member of our section any more.

Elders are not considered for relocation: Elder nodes in State::RelocatingAgeIncrease will eventually get demoted to adults (see StartMergeSplitAndChangeElders) at which point they may be relocated.
We prioritise relocating our Adults, then just relocated nodes that need another hop, then nodes coming back online.

Also of note: we may be relocating multiple nodes (e.g. because of a merge, a node relocating to us, or a node coming back online), but we will only handle one at a time per CheckRelocateTimeOut event.
Throttling is otherwise handled in flows setting the relocating states.

Takes ALREADY_RELOCATING for the nodes it will ignore.
Returns the best node to relocate and the target address to send it to.

There may be multiple nodes relocating, for example because of a merge. Take the best one (oldest), and choose a target address.
The target address is one managed by one of our neighbours. This could be random, or the current old_public_id with a single bit of the prefix flipped.
This would help ensure that source and destination remain neighbours, even if the source splits.
Using a target address instead of a section ensures we deliver the message even if the destination splits or merges.
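Flipping a single bit of the current name to pick a neighbour-managed target address could be sketched like this (the 32-byte name and the big-endian bit indexing are assumptions for illustration):

```rust
// Sketch of choosing a relocation target address by flipping one bit of the
// node's current name, so that source and destination remain neighbours.
// Bit 0 is the most significant bit of the first byte.
fn flip_bit(name: [u8; 32], bit: usize) -> [u8; 32] {
    let mut out = name;
    // Big-endian bit order within each byte: bit 0 -> mask 0b1000_0000.
    out[bit / 8] ^= 1u8 << (7 - (bit % 8));
    out
}
```

Because flipping a bit is an involution, applying it twice returns the original name, which makes the mapping between source and target addresses symmetric.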

There are 3 states the node may be in: State::RelocatingAgeIncrease, State::RelocatingHop, and State::RelocatingBackOnline.
Nodes will be selected in this order: State::RelocatingAgeIncrease, then State::RelocatingHop, then State::RelocatingBackOnline, tie-broken by age and then name.
This ensures that we prioritise good node relocation.
Note: It may be possible for a node to relocate to its sibling, and complete relocation after a merge occurred.
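The selection order above can be expressed as an ordering key; the enum and struct below are illustrative stand-ins, with the enum's variant order encoding relocation priority:

```rust
// Sketch of the selection order: State::RelocatingAgeIncrease first, then
// State::RelocatingHop, then State::RelocatingBackOnline, tie-broken by
// age (older first) and then name. Types are illustrative stand-ins.
#[derive(Clone, Copy, Debug, PartialEq, Eq, PartialOrd, Ord)]
enum RelocatingState {
    AgeIncrease, // highest priority (lowest in the ordering)
    Hop,
    BackOnline,
}

#[derive(Clone, Debug, PartialEq, Eq)]
struct Relocating {
    state: RelocatingState,
    age: u32,
    name: String,
}

fn best_relocating_node(nodes: &[Relocating]) -> Option<&Relocating> {
    // min_by_key: lower state variant wins, then higher age (Reverse),
    // then lexicographically smaller name.
    nodes
        .iter()
        .min_by_key(|n| (n.state, std::cmp::Reverse(n.age), n.name.clone()))
}
```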

The nodes we ignore when selecting a new node to send Rpc::ExpectCandidate for.
Local states are not carried over on merge or split, so we will resend Rpc::ExpectCandidate earlier than we would otherwise.

Returns whether this is a valid node that is not yet relocated (i.e. its state is State::RelocatingAgeIncrease, State::RelocatingHop, or State::RelocatingBackOnline).

Exactly one of these RPCs will be sent to us from the destination section as a response to our section's Rpc::ExpectCandidate.
When this happens, we will immediately vote for it in PARSEC as we need to act in the same order as anyone else in our section.
In case we re-sent the Rpc::ExpectCandidate, we may receive more than one Rpc::ExpectCandidateRefuseResponse or Rpc::ExpectCandidateAcceptResponse. In this case we will pass on the first Rpc::ExpectCandidateAcceptResponse to our Candidate.

Trigger relocation: the candidate will disconnect on receiving this RPC.
This RPC contains the Rpc::ExpectCandidateAcceptResponse info, and the signatures proving the source section received it and decided that particular response is the one to relocate to.
In case we re-sent the Rpc::ExpectCandidate, we may receive more than one Rpc::ExpectCandidateAcceptResponse.
In this case we must ensure that no single node could act on that other Rpc::ExpectCandidateAcceptResponse and be accepted by the destination. This means the signatures must be provided only once the Rpc::ExpectCandidateAcceptResponse is agreed.
RelocatedInfo will contain the Rpc::ExpectCandidateAcceptResponse info and the quorum of signatures gathered from PARSEC vote on Parsec::RelocatedInfo.

graph TB Start["StartRelocateSrc:
No exit - Needs killed"] style Start fill:#f9f,stroke:#333,stroke-width:4px Start --> StartCheckRelocateTimeOut StartCheckRelocateTimeOut["schedule(CheckRelocateTimeOut)"] StartCheckRelocateTimeOut --> LoopStart LoopEnd --> LoopStart LoopStart --> WaitFor WaitFor((Wait for 5:)) WaitFor --Event--> Event WaitFor --RPC--> RPC WaitFor --Parsec
consensus--> ParsecConsensus Event((Event)) Event -- CheckRelocateTimeOut
Trigger --> VoteParsecCheckRelocate VoteParsecCheckRelocate["vote_for(
Parsec::CheckRelocate)
schedule(
CheckRelocateTimeOut)"] VoteParsecCheckRelocate --> LoopEnd RPC((RPC)) RPC --Rpc::ExpectCandidateAcceptResponse--> VoteParsecExpectCandidateAcceptResponse RPC --Rpc::ExpectCandidateRefuseResponse--> VoteParsecExpectCandidateRefuseResponse VoteParsecExpectCandidateAcceptResponse["vote_for(
Parsec::ExpectCandidateAcceptResponse)"] VoteParsecExpectCandidateAcceptResponse --> LoopEnd VoteParsecExpectCandidateRefuseResponse["vote_for(
Parsec::ExpectCandidateRefuseResponse)"] VoteParsecExpectCandidateRefuseResponse --> LoopEnd ParsecConsensus((Consensus)) ParsecConsensus -- Parsec::CheckRelocate --> CheckNeedRelocate CheckNeedRelocate((Check?)) CheckNeedRelocate--"Otherwise" -->AllowResendCandidates CheckNeedRelocate--"get_best_relocating_node_and_target(
ALREADY_RELOCATING).is_some()" --> SendExpectCandidate SendExpectCandidate["(node, target)=
get_best_relocating_node_and_target(
ALREADY_RELOCATING)

send_rpc(
Rpc::ExpectCandidate(node))
to target NaeManager

ALREADY_RELOCATING
.insert(node)"] SendExpectCandidate --> AllowResendCandidates AllowResendCandidates["ALREADY_RELOCATING=
ALREADY_RELOCATING
.map(|(node, count)| (node, count + 1))
.filter(|(node, count)| count < 3)

(update wait and allow resend)"] AllowResendCandidates --> LoopEnd ParsecConsensus --"Parsec::ExpectCandidateAcceptResponse
Parsec::ExpectCandidateRefuseResponse" --> CheckIsOurs CheckIsOurs((Check)) CheckIsOurs -- "is_our_relocating_node(node)" --> CheckIsAccept CheckIsAccept((Check)) CheckIsAccept -- Parsec::ExpectCandidateRefuseResponse --> RefusedCandidate RefusedCandidate["ALREADY_RELOCATING
.remove(node)

(allow resend)"] RefusedCandidate --> LoopEnd CheckIsAccept -- Parsec::ExpectCandidateAcceptResponse --> VoteProvableRelocateInfo VoteProvableRelocateInfo["set_node_state(
node,
State::Relocated{accept_info}
(Vote for same relocation if merge/split
so only one valid proof exists)

vote_for(
Parsec::RelocatedInfo{accept_info})

(set relocated and prepare info)"] VoteProvableRelocateInfo --> LoopEnd CheckIsOurs -- Otherwise --> DiscardVote DiscardVote[Discard
Vote] DiscardVote --> LoopEnd ParsecConsensus --"Parsec::RelocatedInfo"--> SendProvableRelocateInfo SendProvableRelocateInfo["send_rpc(Rpc::RelocatedInfo)
to node

Node may be already gone"] SendProvableRelocateInfo-->PurgeNodeInfos PurgeNodeInfos["purge_node_info(
node)"] PurgeNodeInfos--> LoopEnd
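The `ALREADY_RELOCATING` bookkeeping in the diagram above can be sketched as follows, assuming it is a map from node name to resend count (the retry limit of 3 comes from the diagram; the function name is illustrative):

```rust
use std::collections::BTreeMap;

/// One tick of the diagram's "update wait and allow resend" step:
/// increment every pending node's resend count, and drop nodes that have
/// already been tried 3 times so a fresh Rpc::ExpectCandidate can be sent.
fn tick_resend_counts(already_relocating: &mut BTreeMap<u64, u32>) {
    let updated: BTreeMap<u64, u32> = already_relocating
        .iter()
        .map(|(&node, &count)| (node, count + 1))
        .filter(|&(_, count)| count < 3)
        .collect();
    *already_relocating = updated;
}
```

A node removed from the map (by refusal, acceptance, or exhausting its retries) becomes eligible again the next time `Parsec::CheckRelocate` reaches consensus.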

Elder-only

Process for Adult/Elder promotion and demotion including merge

This flow updates the elder status of our section's nodes when needed.
Because it is interlinked, it also handles merging and splitting sections: while merging or splitting, no elder change can happen.
However, other flows continue, so relocating to and from the section is uninterrupted:

As with incrementing work units, we want to update the eldership status of all nodes in a section on a synchronised, regular basis.
For this reason, it makes sense to have a timer going through PARSEC.
Note that this timer only needs to fire often enough that it remains highly unlikely that 1/3 of the elders in any section would go offline within a single timer duration.

A section sends a Rpc::Merge to their neighbour section when they are ready to merge at the given SectionInfo digest. The RPC contains the SectionInfo of the originating section.
In this flow, we handle both situations:

We vote for this Parsec event on receiving a Rpc::Merge from our neighbour section.
It contains the information about them that we need for merging. When this PARSEC event reaches consensus in PARSEC, we store that information by calling store_merge_infos.

This function is used to store the merge information from a neighbour section locally.
Once it has been stored, has_merge_infos will return true and we will be ready to enter the ProcessMerge flow.

This function indicates that we received sufficient information from our neighbour section needing a merge, and reached consensus on it.
We are ready to start the merging process with them.

This function indicates that we need merging (as opposed to a merge triggered by our neighbour's needs).
The details for the trigger are still in slight flux, but here are some possibilities:

If any of our elders is not State::Online, it must be demoted to a plain adult.
If this happens, the oldest adult must be promoted to elder as a replacement.
Alternatively, if any of our State::Online adult nodes is older than any of our elders, the youngest elder must be demoted and that adult promoted.
Note that elder changes are only processed when the section is not in the middle of handling a merge.
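One possible reading of these rules is that the desired elder set is simply the top NUM_ELDERS nodes ranked Online-first, then by age (older first), then by name, matching the ordering named in the diagram below. The sketch is hypothetical; types and the helper are illustrative:

```rust
use std::cmp::Reverse;

const NUM_ELDERS: usize = 3; // the document suggests ~10; kept small here

#[derive(Clone, Copy, PartialEq)]
enum State {
    Online,
    Offline,
}

struct Node {
    name: u64,
    age: u8,
    state: State,
    is_elder: bool,
}

/// Returns (names to promote, names to demote), or None if nothing changes.
fn check_elder_change(nodes: &[Node]) -> Option<(Vec<u64>, Vec<u64>)> {
    // Desired elders: top NUM_ELDERS ranked Online-first, then oldest, then name.
    let mut ranked: Vec<&Node> = nodes.iter().collect();
    ranked.sort_by_key(|n| (n.state != State::Online, Reverse(n.age), n.name));
    let desired: Vec<u64> = ranked.iter().take(NUM_ELDERS).map(|n| n.name).collect();

    let promote: Vec<u64> = nodes
        .iter()
        .filter(|n| !n.is_elder && desired.contains(&n.name))
        .map(|n| n.name)
        .collect();
    let demote: Vec<u64> = nodes
        .iter()
        .filter(|n| n.is_elder && !desired.contains(&n.name))
        .map(|n| n.name)
        .collect();

    if promote.is_empty() && demote.is_empty() {
        None
    } else {
        Some((promote, demote))
    }
}
```

Computing the full desired set rather than patching single swaps means one pass covers both the offline-elder case and the older-adult case.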

This function indicates that we need splitting.
The details for the trigger are still in slight flux, but here are some possibilities:

graph TB StartMergeSplitAndChangeElders["StartMergeSplitAndChangeElders:
No exit - Needs killed"] style StartMergeSplitAndChangeElders fill:#f9f,stroke:#333,stroke-width:4px StartMergeSplitAndChangeElders --> StartCheckElderTimeout StartCheckElderTimeout["schedule(
CheckElderTimeout)"] StartCheckElderTimeout --> LoopStart WaitFor(("Wait for 6:")) LoopStart --> WaitFor WaitFor -- Event --> Event Event((Event)) Event-- CheckElder
Timeout--> VoteCheckElderTimeout VoteCheckElderTimeout["vote_for(
Parsec::CheckElderTimeout)"] VoteCheckElderTimeout--> LoopEnd RPC((RPC)) WaitFor -- RPC --> RPC RPC --Rpc::Merge--> VoteParsecNeighbourMerge VoteParsecNeighbourMerge["vote_for(
Parsec::NeighbourMerge)"] VoteParsecNeighbourMerge --> LoopEnd Consensus((Consensus)) WaitFor-- Parsec
consensus --> Consensus Consensus -- "Parsec::NeighbourMerge" --> SetNeighbourMerge SetNeighbourMerge["store_merge_infos(
Parsec::NeighbourMerge info)"] SetNeighbourMerge-->LoopEnd Consensus--"Parsec::CheckElderTimeout"-->CheckMergeNeeded CheckMergeNeeded(("Check")) CheckMergeNeeded--"Otherwise"-->CheckElderChange CheckElderChange(("Check")) CheckElderChange -- "Otherwise" --> CheckNeedSplit CheckNeedSplit(("Check")) CheckNeedSplit --"Otherwise" --> RestartTimeout RestartTimeout["schedule(
CheckElderTimeout)"] RestartTimeout-->LoopEnd CheckNeedSplit --"split_needed()" --> Concurrent2 Concurrent2{"Concurrent
paths"} Concurrent2 --> ProcessSplit Concurrent2 --> LoopEnd ProcessSplit["ProcessSplit"] style ProcessSplit fill:#f9f,stroke:#333,stroke-width:4px ProcessSplit --> CancelResourceProof CheckElderChange --"check_elder_change()

Has elder changes: elder first ordered by:
NodeState==State::Online then age then name."--> Concurrent0 Concurrent0{"Concurrent
paths"} Concurrent0 --> ProcessElderChange Concurrent0 --> LoopEnd ProcessElderChange["ProcessElderChange(changes)"] style ProcessElderChange fill:#f9f,stroke:#333,stroke-width:4px ProcessElderChange -->CancelResourceProof CancelResourceProof["ResourceProof_Cancel"] style CancelResourceProof fill:#19f,stroke:#333,stroke-width:4px CancelResourceProof --> ResetRelocatedNodeConnection ResetRelocatedNodeConnection["RelocatedNodeConnection_Reset"] style ResetRelocatedNodeConnection fill:#19f,stroke:#333,stroke-width:4px ResetRelocatedNodeConnection --> RestartTimeout CheckMergeNeeded --"merge_needed()
or
has_merge_infos()"-->Concurrent1 Concurrent1{"Concurrent
paths"} Concurrent1 --> ProcessMerge Concurrent1 --> LoopEnd ProcessMerge["ProcessMerge"] style ProcessMerge fill:#f9f,stroke:#333,stroke-width:4px ProcessMerge --> CancelResourceProof LoopEnd --> LoopStart

Process Adult/Elder promotion and demotion needed from last check

Vote for Parsec::Add for new elders, Parsec::Remove for nodes that are no longer elders, and Parsec::NewSectionInfo.
This handles any change; it does not matter whether one or all elders change, as that is decided by the calling function.

At any time, there must be exactly NUM_ELDERS (say 10) elders per section.
To maintain this invariant, we must handle multiple eldership changes atomically.
We accomplish this by voting for all the needed membership changes at once and waiting for all of these votes to reach consensus before reflecting the status change in our chain.
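The WAITED_VOTES pattern described here might be sketched as follows, with votes represented as plain strings purely for illustration:

```rust
use std::collections::BTreeSet;

/// Tracks one atomic batch of membership votes.
struct PendingChange {
    waited_votes: BTreeSet<String>,
}

impl PendingChange {
    fn new(votes: impl IntoIterator<Item = String>) -> Self {
        PendingChange { waited_votes: votes.into_iter().collect() }
    }

    /// Record one consensused vote. Returns true once the whole batch has
    /// reached consensus, i.e. it is now safe to update the chain.
    fn on_consensus(&mut self, vote: &str) -> bool {
        self.waited_votes.remove(vote);
        self.waited_votes.is_empty()
    }
}
```

Because the chain is only touched when the set empties, an observer never sees a section with, say, eleven or nine elders mid-change.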

A list of PublicId.
The content of the NewSectionInfo parsec event that reached consensus.

This function updates the eldership status of each node in the chain based on the new section info: the nodes with their public id in new_section_info are the exact set of current elders.
Input:

Side-effect:
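A hypothetical sketch of update_elder_status under the rule above (the `Node` type here is an illustrative placeholder for a chain entry):

```rust
/// Illustrative placeholder for a chain entry.
struct Node {
    public_id: u64,
    is_elder: bool,
}

/// After the call, the elders are exactly the nodes whose public id
/// appears in `new_section_info`.
fn update_elder_status(chain: &mut [Node], new_section_info: &[u64]) {
    for node in chain.iter_mut() {
        node.is_elder = new_section_info.contains(&node.public_id);
    }
}
```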

graph TB ProcessElderChange["ProcessElderChange
(Take elder changes)
(shared state)"] style ProcessElderChange fill:#f9f,stroke:#333,stroke-width:4px EndRoutine["End of ProcessElderChange
(shared state)"] style EndRoutine fill:#f9f,stroke:#333,stroke-width:4px ProcessElderChange --> MarkAndVoteSwapNewElder MarkAndVoteSwapNewElder["vote_for(Parsec::Add) for new elders
vote_for(Parsec::Remove) for nodes becoming adults
vote_for(Parsec::NewSectionInfo)

WAITED_VOTES.insert(all votes)"] MarkAndVoteSwapNewElder --> LoopStart WaitFor(("Wait for 7:")) LoopStart --> WaitFor Consensus((Consensus)) WaitFor-- Parsec
consensus --> Consensus Consensus -- "WAITED_VOTES.contains(vote)" --> OneVoteConsensused OneVoteConsensused["WAITED_VOTES.remove(vote)"] OneVoteConsensused --> WaitComplete WaitComplete(("Check?")) WaitComplete--"WAITED_VOTES
.is_empty()
(Wait complete)"-->MarkNewElderAdults MarkNewElderAdults["update_elder_status(new_section_info)"] MarkNewElderAdults--> EndRoutine WaitComplete--"!WAITED_VOTES
.is_empty()
(Wait not complete)"--> LoopEnd LoopEnd --> LoopStart

Handling merges

Send Rpc::Merge, and take over handling any Parsec::NeighbourMerge.
This flow completes when one merge has finished and a NewSectionInfo has reached consensus.
If a multi-stage merge is required, this function will need to be called again.
While in this flow, our SectionInfo shall not be disturbed by elder changes. This stops us from changing our SectionInfo after indicating to our neighbour the last SectionInfo before the merge.

On entering ProcessMerge, we send it to our sibling section (or the sections with a longer prefix), containing our own SectionInfo.
We send Merge irrespective of whether we are the section that triggered the merge. This allows all sections involved in the merge to receive a Rpc::Merge, which is how Parsec::NeighbourMerge gets voted for.

This PARSEC event indicates that our neighbour section is ready to merge with us.
It is voted for in the StartMergeSplitAndChangeElders flow, on receipt of a Rpc::Merge.
It contains their SectionInfo (or digest for it).

Store the neighbour's merge info; the neighbour may not be our sibling in the case of a multi-stage merge.

Did we store the neighbour's merge info for our sibling?

Remove the stored sibling's merge info and return the NewSectionInfo.
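These three helpers might look like the following sketch. The storage layout is an assumption: merge infos keyed by the originating section's prefix, with the sibling found by flipping the last bit of our own prefix; `SectionInfo` is a placeholder type:

```rust
use std::collections::BTreeMap;

type Prefix = Vec<bool>;   // bit prefix of a section
type SectionInfo = String; // placeholder for the real SectionInfo type

struct MergeState {
    our_prefix: Prefix,
    merge_infos: BTreeMap<Prefix, SectionInfo>,
}

impl MergeState {
    /// Our sibling's prefix differs from ours only in the last bit.
    fn sibling_prefix(&self) -> Prefix {
        let mut p = self.our_prefix.clone();
        if let Some(last) = p.last_mut() {
            *last = !*last;
        }
        p
    }

    /// Store merge info from a neighbour (not necessarily our sibling).
    fn store_merge_infos(&mut self, from: Prefix, info: SectionInfo) {
        self.merge_infos.insert(from, info);
    }

    fn has_sibling_merge_info(&self) -> bool {
        self.merge_infos.contains_key(&self.sibling_prefix())
    }

    /// Remove the stored sibling info so the NewSectionInfo can be formed.
    fn merge_sibling_info_to_new_section(&mut self) -> Option<SectionInfo> {
        let sibling = self.sibling_prefix();
        self.merge_infos.remove(&sibling)
    }
}
```

Keying by prefix rather than a single slot is what lets the state hold infos from non-sibling neighbours during a multi-stage merge.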

Once we are ready to merge, have received our neighbour's SectionInfo through their Rpc::Merge, and subsequently reached consensus on the Parsec::NeighbourMerge we voted for, we have all the information needed to decide the membership of our post-merge section.
This is the Parsec::NewSectionInfo.

With the Parsec::NewSectionInfo in hand, completing the merge process consists of joining the newly formed section and leaving the old one behind.

graph TB EndRoutine["End of ProcessMerge
(shared state)"] style EndRoutine fill:#f9f,stroke:#333,stroke-width:4px LoopStart --> WaitFor WaitFor(("Wait for 8:")) Consensus((Consensus)) WaitFor-- Parsec
consensus --> Consensus Consensus -- "Parsec::NewSectionInfo" --> CompleteMerge CompleteMerge["complete_merge()
(Start parsec with new genesis...)"] CompleteMerge --> MarkNewElderAdults MarkNewElderAdults["update_elder_status(new_section_info)"] MarkNewElderAdults--> EndRoutine Consensus -- "Parsec::NeighbourMerge" --> SetNeighbourMerge SetNeighbourMerge["store_merge_infos(Parsec::NeighbourMerge info)"] SetNeighbourMerge --> CheckMerge ProcessMerge["ProcessMerge
(shared state)"] style ProcessMerge fill:#f9f,stroke:#333,stroke-width:4px ProcessMerge --> SendMergeRpc SendMergeRpc["send_rpc(Rpc::Merge)"] SendMergeRpc --> CheckMerge CheckMerge((Check)) CheckMerge -- "has_sibling_merge_info()" --> VoteForNewSectionInfo VoteForNewSectionInfo["merge_sibling_info_to_new_section()
vote_for(Parsec::NewSectionInfo)"] VoteForNewSectionInfo--> LoopEnd CheckMerge -- "Otherwise" --> LoopEnd LoopEnd --> LoopStart

Handling splits

Vote for the two Parsec::NewSectionInfo events to gather the required signatures.

Wait for all these votes to reach consensus before reflecting the status change in our chain.
Both sections need to be consensused before we move on, so that we do not leave one behind with too few voters.

With the NewSectionInfo in hand, completing the split process consists of joining the correct newly formed section and leaving the old one behind.

A list of PublicId.
The content of the NewSectionInfo parsec event that reached consensus that we are now a member of.
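Forming the two post-split member lists can be sketched as follows. This is a hypothetical helper: names are modelled as `u64` values and the bit immediately after the old prefix decides which child section a node joins:

```rust
/// Split a section's members into the two child sections. `prefix_len` is
/// the length of the old prefix; the next bit of each name decides which
/// child the node joins.
fn split_section(names: &[u64], prefix_len: u32) -> (Vec<u64>, Vec<u64>) {
    names
        .iter()
        .copied()
        .partition(|&name| (name >> (63 - prefix_len)) & 1 == 0)
}
```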

graph TB ProcessSplit["ProcessSplit
(Take elder changes)
(shared state)"] style ProcessSplit fill:#f9f,stroke:#333,stroke-width:4px EndRoutine["End of ProcessSplit
(shared state)"] style EndRoutine fill:#f9f,stroke:#333,stroke-width:4px ProcessSplit --> VoteNewSections VoteNewSections["vote_for(Parsec::NewSectionInfo) for the two new sections

WAITED_VOTES.insert(all votes)"] VoteNewSections --> LoopStart WaitFor(("Wait for 9:")) LoopStart --> WaitFor Consensus((Consensus)) WaitFor-- Parsec
consensus --> Consensus Consensus -- "WAITED_VOTES.contains(vote)" --> OneVoteConsensused OneVoteConsensused["WAITED_VOTES.remove(vote)"] OneVoteConsensused --> WaitComplete WaitComplete(("Check?")) WaitComplete--"WAITED_VOTES
.is_empty()
(Wait complete)"-->CompleteSplit CompleteSplit["complete_split()
(Start parsec with new genesis...)"] CompleteSplit --> MarkNewElderAdults MarkNewElderAdults["update_elder_status(new_section_info)"] MarkNewElderAdults--> EndRoutine WaitComplete--"!WAITED_VOTES
.is_empty()
(Wait not complete)"--> LoopEnd LoopEnd --> LoopStart

Handling members of our section going online or offline

A node state indicating that the node is to be relocated with a reduced age because it went offline.
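One possible encoding of this state is sketched below. The exact age-reduction rule is not specified here; halving is an assumption for illustration:

```rust
#[derive(Debug, PartialEq, Clone, Copy)]
enum State {
    Online,
    Offline,
    /// Waiting to be relocated after coming back online, with reduced age.
    RelocatingBackOnline { age: u8 },
}

/// Transition applied when a node that went offline is detected back
/// online. Halving the age is an assumption, not taken from this document.
fn back_online(previous_age: u8) -> State {
    State::RelocatingBackOnline { age: previous_age / 2 }
}
```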

graph TB CheckOnlineOffline["CheckOnlineOffline:
No exit - Needs killed"] style CheckOnlineOffline fill:#f9f,stroke:#333,stroke-width:4px CheckOnlineOffline --> LoopStart WaitFor(("Wait for 10:")) LoopStart --> WaitFor LocalEvent((Local
Event)) WaitFor --event--> LocalEvent LocalEvent -- Node detected offline --> VoteNodeOffline VoteNodeOffline["vote_for(
Parsec::Offline)"] VoteNodeOffline --> LoopEnd LocalEvent -- Node detected back online --> VoteNodeBackOnline VoteNodeBackOnline["vote_for(
Parsec::BackOnline)"] VoteNodeBackOnline --> LoopEnd Consensus((Consensus)) WaitFor-- Parsec
consensus --> Consensus Consensus--"Parsec::Offline"-->SetOfflineState SetOfflineState["set_node_state(
node,
State::Offline)"] SetOfflineState -->LoopEnd Consensus -- "Parsec::BackOnline" --> SetRelocating SetRelocating["set_node_state(
node,
State::RelocatingBackOnline)"] SetRelocating --> LoopEnd LoopEnd --> LoopStart

Node relocation overview

Successfully relocate a node from the source section to the destination section.

Sent by the source section when a candidate needs to relocate
Contains:

Sent by destination section when a candidate is accepted
Contains:

Sent by destination section when a candidate is refused
Contains: Empty

Sent by source section when a candidate is relocated to the relocated node
Contains:

Sent by the joining node to the target interval's NaeManager to initiate the joining process
Contains:

Sent to collect information needed to establish a direct connection: Unchanged

Sent by the destination section to the joining node when connected
Contains:

Sent to process resource proof: Unchanged
(A nonce, counter, or proof that we started resource proof could prevent some malicious behaviour.)

Sent by destination section when a candidate becomes an adult / resource proof is completed.
Contains: Empty (Any information needed by an adult/elder should be sent to all connected members, and updated as it changes)

sequenceDiagram participant Src as Source Section participant Node as Relocating Node participant Dst as Destination Section loop FindDestination Src->>+Dst: Routing RPC: Rpc::ExpectCandidate opt Refuse Dst-->>Src: Routing RPC: Rpc::ExpectCandidateRefuseResponse end end Dst-->>-Src: Routing RPC: Rpc::ExpectCandidateAcceptResponse Src->>Node: Direct node-to-node RPC: Rpc::RelocatedInfo loop NodeConnection Node->>Dst: Proxied Routing RPC to group: Rpc::CandidateInfo Dst->>+Node: Proxied Routing RPC: Rpc::ConnectionInfoRequest Node-->>-Dst: Proxied Routing RPC: Rpc::ConnectionInfoResponse end Dst->>Node: Unproxied Group RPC: Rpc::NodeConnected Dst->>Node: Direct node-to-node RPC: Rpc::ResourceProof loop ResProof Node->>+Dst: Direct node-to-node RPC: Rpc::ResourceProofResponse Dst-->>-Node: Direct node-to-node RPC: Rpc::ResourceProofReceipt end Dst->>Node: Unproxied Group RPC: Rpc::NodeApproval