Compared to the previous 1.* releases, the latest release brings many new updates and features, most notably:
- New bidding mechanism,
- New data layer,
- Updatable contracts,
- Better node configuration,
- Stability improvements.
A more detailed explanation is given below.
Since the previous version of the node and protocol, there have been several changes addressing four main areas:
- Lowering the cost of protocol usage in Ether and simplifying the interaction between ODN and blockchain (by decreasing the number of transactions, as well as simplifying and optimizing smart contracts)
- Addressing networking timeout issues with the underlying Kademlia implementation
- Increasing the resilience of the nodes with a command-oriented architecture (command sourcing pattern)
- Support for updatable contracts - an advanced contract update mechanism that will provide fast and reliable updates.
As a consequence, the new version of the OT Node is unfortunately not backwards compatible with v1.
Here’s a summary of the most important implications of the new version:
- The OT node now supports the ERC-725 identity standard instead of using the wallet address as the node identifier. This simplifies key management and allows multiple wallets to be represented by one identity on the blockchain. Finally, using ERC-725 improves interoperability with other Ethereum-based systems utilizing the same identity standard.
- To participate in the network, each node is required to post a small amount of tokens as stake at the beginning of its operation; this mitigates possible Sybil attacks on node identities.
- The replication process is now atomic - dataset fingerprinting and publishing the corresponding offer on the blockchain are done during the replication phase.
- DH nodes communicate with the DC node exclusively off-chain, so they do not need to send any transactions to the blockchain during the replication process.
- During the offer replication negotiation phase, n DH nodes are randomly selected (via a lightweight proof-of-work mechanism) to become “vault” nodes, committing to store the dataset for the determined period of time and make it available for retrieval.
- The n DH nodes are selected by the DC node by solving a task: finding a solution to a randomly generated problem, where the input is an n-tuple of DH node identities. The DC node needs to “mine” this solution, as there is no inverse function that could derive the n-tuple solution from the randomly generated problem.
- The number n is calculated from the selection algorithm and the task difficulty, which in turn is derived from the network size.
- This mechanism removes the complexity and scaling constraints of the previous bidding mechanism’s smart contract implementation, which was limited by the Ethereum block gas limit.
- To complete the replication process (create and finalize the offer), the DC node needs to have n * offer_price tokens in its wallet balance, where the DC node operator determines the offer_price in TRAC tokens at their own discretion. DH nodes, in turn, decide whether to engage in the bidding game (by accepting the replication process, thereby qualifying to become vault DH nodes) based on the offer_price offered by the DC node. This is how market conditions are achieved.
- Once an offer is finalized and the vault DH nodes are chosen, each of them puts up a stake in TRAC tokens that is a linear function of the offer_price. Nodes that were not chosen but have received replication data can choose either to opportunistically store and provide the datasets or to delete them if they find them of no value.
- Finally, a creditor smart contract is introduced to allow for easier token management, where one “Creditor” can supply tokens to a multitude of different nodes at the same time (e.g. one service provider managing nodes for several companies). This feature will be available from release v2.1.0.
- Future versions will introduce High Availability operational modes, easier and more tweakable market mechanisms, further features in the data layer (more standards, more configuration options, and extensions to the privacy layer), and several operational improvements.
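The vault-node selection “mining” described above can be sketched roughly as follows. This is an illustrative toy, not the production algorithm: the hashing scheme (SHA-256 over concatenated identities), the identity format, and the task encoding are all assumptions made for the example.

```python
import hashlib
from itertools import permutations

def solve_task(identities, task, n=3):
    """Search for an n-tuple of DH identities whose combined hash
    satisfies the task condition (here: hex digest ends with `task`).
    There is no inverse function, so the only strategy is to try
    candidate tuples one by one -- i.e. "mine" the solution."""
    for tup in permutations(identities, n):
        digest = hashlib.sha256("".join(tup).encode()).hexdigest()
        if digest.endswith(task):
            return tup, digest
    return None, None

# Hypothetical DH identities (in ODN these would be ERC-725
# identity addresses; these values are made up).
ids = [f"0x{i:040x}" for i in range(1, 40)]

# A single hex character as the target makes this an easy task;
# a longer target string would raise the difficulty exponentially.
solution, digest = solve_task(ids, task="a", n=3)
```

Note how difficulty tuning falls out naturally: each extra required character multiplies the expected number of attempts by 16, which is one plausible way a task difficulty could be scaled with network size.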
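The offer funding arithmetic described above (the DC node needing n * offer_price tokens, and each vault DH node staking a linear function of the offer_price) can be illustrated with a toy calculation. All numbers, and the stake coefficient in particular, are made-up example values, not protocol parameters.

```python
n = 3                    # number of vault DH nodes selected for the offer
offer_price = 10.0       # TRAC per DH node, set at the DC operator's discretion

# The DC node must hold n * offer_price TRAC to finalize the offer.
dc_required_balance = n * offer_price

# Each chosen vault DH node stakes a linear function of the offer price;
# the coefficient below is an assumed example value.
stake_factor = 1.0
dh_stake = stake_factor * offer_price
```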
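The Creditor relationship mentioned above can be sketched conceptually as one funding account holding per-node token allowances. The class and method names here are hypothetical; the actual feature is a smart contract arriving in release v2.1.0.

```python
from collections import defaultdict

class Creditor:
    """One creditor supplying TRAC allowances to many node identities,
    e.g. a service provider managing nodes for several companies."""

    def __init__(self):
        # node identity -> remaining TRAC allowance
        self.allowances = defaultdict(float)

    def fund(self, node_identity, amount):
        self.allowances[node_identity] += amount

    def spend(self, node_identity, amount):
        if self.allowances[node_identity] < amount:
            raise ValueError("insufficient allowance")
        self.allowances[node_identity] -= amount

# One creditor funds two nodes at once; "node-A"/"node-B" are placeholders.
creditor = Creditor()
creditor.fund("node-A", 100.0)
creditor.fund("node-B", 50.0)
creditor.spend("node-A", 30.0)
```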