Table of Contents

  1. Ansible and AWS
  2. Kuro Documentation

NB: Both documents can be found in <kadena-directory>/docs/.

Ansible and AWS

Kadena Kuro AWS Marketplace QuickStart Video

Watch the video above or follow the written instructions below for the AWS Quick Start.

AWS Quick Start

  1. Spin up an EC2 instance with Kadena’s Kuro AMI or with the desired configurations (See Instance Requirements). This will serve as the Ansible monitor instance.
  2. Ensure that the key pair(s) of the monitor and Kadena server instances are not publicly viewable: chmod 400 /path/to/keypair.pem. Otherwise, SSH and any service that relies on it (e.g. Ansible) will not work.
  3. Add the key pair(s) of the monitor and Kadena server instances to the ssh-agent: ssh-add /path/to/keypair.pem
  4. SSH into the monitor instance using ssh-agent forwarding: ssh -A <instance-user>@<instance-public-dns>. If using Kadena’s AWS listing, the <instance-user> is ubuntu. Agent forwarding gives the Ansible monitor access to the other instances’ key pairs, which it needs in order to manage them.
  5. Once logged into the monitor instance, locate the directories containing the Kadena executables, the Kadena server node configurations, and the Ansible playbooks.
  6. Edit ansible_vars.yml to indicate the path to the Kadena executables and the node configurations. Also indicate the number of EC2 instances to launch as Kadena servers and how to configure them. See Instance Requirements and Security Group Requirements for instance image and security group specifics.
  7. Grant Ansible the ability to make API calls to AWS on your behalf. To do this, launch the monitor instance with Power User IAM role or export AWS security credentials as environment variables:
    $ export AWS_ACCESS_KEY_ID='AK123'
    $ export AWS_SECRET_ACCESS_KEY='abc123'

    Make sure to persist these environment variables when logging in and out of the monitor instance.

You are now ready to start using the Ansible playbooks!

Ansible Playbooks

Playbooks are composed of plays, which are then composed of tasks. Plays and tasks are executed sequentially. Ansible playbooks are in YAML format and can be executed as follows:

ansible-playbook /path/to/playbook.yml

The aws/ directory contains the following playbooks:


start_instances.yml

This playbook launches EC2 instances that have the necessary files and directories to run the Kadena Server executable, tagging them as “kadena_server”. It also creates a file containing all of their private IP addresses, located in aws/ipAddr.yml, and the default (i.e. SQLite backend) node configuration for each.


stop_instances.yml

This playbook terminates all Kadena Server EC2 instances.


run_servers.yml

This playbook runs the Kadena Server executable. If the servers were already running, it terminates them and cleans up their SQLite and log files before launching the server again. It also updates each server’s configuration if it has changed in the specified configuration directory (conf/) on the monitor instance. The Kadena Servers will run for 24 hours after starting; to change this, edit the Start Kadena Servers async section of this playbook.


get_server_logs.yml

This playbook retrieves all of the Kadena Servers’ logs and SQLite files, deleting all previously retrieved logs. It stores the logs in aws/logs/.

NB: To change distributed nodes’ configuration, run

<kadena-directory>$ ./bin/<OS-name>/genconfs --distributed aws/ipAddr.yml

Provide the desired settings when prompted. For more information, refer to the “Automated configuration generation: genconfs” section in docs/.

Launching the Demo

Once you’ve completed the AWS Quick Start instructions, execute the following commands to boot up the Kuro servers and start the kadena-demo:

$ cd kadena-aws/
$ ansible-playbook aws/start_instances.yml
$ tmux
$ ./aws/

Press Enter when prompted by the bin/ubuntu-16.04/ script. This will start the Kadena Client and allow you to start interacting with the private blockchain (see the kadenaclient binary explanation for more details).

For a list of supported interactions, refer to the “Sample Usage: [payments|monitor|todomvc]” sections in the Kuro Documentation.

To exit the Kadena Client, type exit. To kill the tmux sessions, type tmux kill-session.

The demo script assumes the following directory structure:

$ tree <kadena-directory>
├── aws
│   ├── ansible_vars.yml
│   ├── get_server_logs.yml
│   ├── ipAddr.yml		(produced by start_instances.yml)
│   ├── run_servers.yml
│   ├── start_instances.yml
│   ├── stop_instances.yml
│   └── templates
│       └── ipAddr.j2
└── bin
    └── <OS-name>
        └── <all kadena executables>

Instance Requirements

The Ansible monitor instance and the Kadena server instances should be configured as follows:

  1. Install all Kadena software requirements. Refer to <kadena-directory>/docs/ for specifics.
  2. Have Ansible 2.6+ installed.
  3. Set up Ansible to use EC2’s external inventory script.

An AWS image (AMI) created from this configured instance can be used to launch the Ansible monitor and Kadena server instances.

See setup/ for an example of how to configure EC2’s free-tier Ubuntu machine to run the Kadena executables and Ansible.

Security Group Requirements

Ansible needs to be able to communicate with the AWS instances it manages, and the Kadena Servers need to communicate with each other. Therefore, the security group (firewall) assigned to the Kadena server instances should allow for the following:

  1. The Ansible monitor instance (the one running the playbooks) should be able to ssh into all of the Kadena Server instances it will manage.
  2. The Kadena Server instances should be able to communicate with each other via TCP on port 10000.
  3. The Kadena Server instances should be able to receive HTTP connections on port 8000 from any instance running the Kadena Client.

The simplest solution is to create a security group that allows all traffic among itself and assign this security group to the Ansible monitor and Kadena server instances.
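The three requirements above can be sketched as data; the following is a minimal, illustrative Python model (the rule names and helper are hypothetical, not an AWS API — only the port numbers come from the requirements above):

```python
# Illustrative model of the ingress rules a Kadena server security group
# needs; the "source" labels are hypothetical, the ports come from the docs.
INGRESS_RULES = [
    {"port": 22,    "proto": "tcp", "source": "ansible-monitor", "use": "SSH for Ansible"},
    {"port": 10000, "proto": "tcp", "source": "kadena-servers",  "use": "cluster consensus traffic"},
    {"port": 8000,  "proto": "tcp", "source": "kadena-clients",  "use": "HTTP REST API"},
]

def allowed(port, source):
    """Return True if the rule set permits SOURCE to reach PORT."""
    return any(r["port"] == port and r["source"] == source for r in INGRESS_RULES)
```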

Further Reading

  1. While a little outdated, this post provides detailed instructions and goes further into the justifications for the above suggestions.
  2. The official guide on how to use Ansible’s AWS EC2 External Inventory Script.

Kuro Documentation

Kadena Version:

Change Log

Getting Started




NB: The docker and script files for installing the Kadena dependencies can be found in <kadena-directory>/setup.

Kadena Demo Quick Start

Quickly launch a local instance; see “Sample Usage: [payments|monitor|todomvc]” for the interactions supported.

OSX

<kadena-directory>$ tmux
<kadena-directory>$ ./bin/osx/

Ubuntu 16.04

<kadena-directory>$ tmux
<kadena-directory>$ ./bin/ubuntu-16.04/

Kadena server and client binaries

kadenaserver

Launch a consensus server node. On startup, kadenaserver will open connections on three ports as specified in the configuration file: <apiPort>, <nodeId.port>, <nodeId.port> + 5000. Generally, these ports will default to 8000, 10000, and 15000 (see genconfs for details).
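As a quick sketch, the three ports can be derived from a node's configuration; this small, illustrative helper (not part of Kadena) encodes the defaults described above:

```python
def server_ports(api_port=8000, node_port=10000):
    """Ports kadenaserver opens on startup: the REST API port (<apiPort>),
    the consensus port (<nodeId.port>), and <nodeId.port> + 5000."""
    return (api_port, node_port, node_port + 5000)
```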

For information regarding the configuration yaml generally, see the “Configuration Files” section.

kadenaserver (-c|--config) [-d|--disablePersistence]

  -c,--config               [Required] path to server yaml configuration file
  -d,--disablePersistence   [Optional] disable usage of SQLite for on-disk persistence
                                       (higher performance)

NB: a zeromq bug may cause kadenaserver to fail to launch (segfault) ~1% of the time. Once running, this is not an issue. If you encounter this problem, please relaunch.

kadenaclient &

Launch a client to the consensus cluster. The client allows for command-line level interaction with the server’s REST API in a familiar (REPL-style) format. The associated script incorporates rlwrap to enable Up-Arrow style history, but is not required.

kadenaclient (-c|--config)

  -c,--config               [Required] path to client yaml configuration file

Sample Usage:
  rlwrap -A bin/kadenaclient -c "conf/$(ls conf | grep -m 1 client)"

General Considerations

Elections Triggered by a High Load

When running a local demo, resource contention under a high load can trigger an election when certain configurations are present. For example, running batch 40000 when the replication per heartbeat is set to 10k+ will likely trigger an election event. This is caused entirely by a lack of available CPU: one of the nodes will hog the CPU, causing the other nodes to trigger an election. This should not occur in a distributed setting, nor is it a problem overall, as the automated handling of availability events is one of the features central to any distributed system.

If you would like to do large scale batch tests in a local setting, use genconfs to create new configuration files where the replication limit is ~8k.

Load Testing with Many Clients

If you’ll be testing with many (100s to 10k+) simultaneous clients, please be sure to provision extra CPUs. In a production setting, we’d expect:

The ability to do either of these is a feature of Kadena – because commands must have a unique hash and are either (a) signed or (b) fully encrypted, they can be redirected without degrading the security model.

Replay From Disk

On startup, but before kadenaserver goes online, it will replay every persisted transaction from origin. If you would like to start fresh, delete the SQLite DBs prior to startup.

Core Count

By default kadenaserver is configured to use as many cores as are available. In a distributed setting this is generally a good default; in a local setting it is not. Because each node needs 8 cores to function at peak performance, running multiple nodes locally when clusterSize * 8 > available cores can cause the nodes to obstruct each other (and thereby trigger an election).

To avoid this, the demo’s script restricts each node to 4 cores via the +RTS -N4 -RTS flags. You may use these, or any other flags found in GHC RTS Options, to configure a given node should you wish to.
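The core-budget rule above can be expressed as a quick check. This is only a sketch (the function name is made up; the 8-cores-per-node figure is taken from the text):

```python
def local_cluster_oversubscribed(cluster_size, available_cores, cores_per_node=8):
    """True when clusterSize * cores_per_node exceeds the available cores,
    i.e. when locally-run nodes will contend for CPU and risk elections."""
    return cluster_size * cores_per_node > available_cores
```

Under this rule, a 4-node local demo on an 8-core laptop is oversubscribed, which is why the demo script throttles each node to 4 cores.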

Beta Limitations

Beta License instances of Kadena are limited as follows:

For a version without any/all of these restrictions, please contact us.

AWS Marketplace Limitations

The AWS Marketplace listing of Kadena is limited as follows:

For a version without any/all of these restrictions, please contact us.


Automated configuration generation: genconfs

kadenaserver and kadenaclient each require a configuration file. genconfs is designed to assist you in quickly (re)generating these files.

It operates in two modes:

  1. Standard mode (./genconfs): generates configuration files for a local cluster.
  2. Distributed mode (./genconfs --distributed <ip-file>): generates configuration files for the servers listed in the supplied IP file.

In either mode genconfs will interactively prompt for settings, offering recommendations.

$ ./genconfs
When a recommended setting is available, press Enter to use it
[FilePath] Which directory should hold the log files and SQLite DB's? (recommended: ./log)

Set to recommended value: "./log"
[FilePath] Where should `genconfs` write the configuration files? (recommended: ./conf)
... etc ...

In distributed mode:

$ cat server-ips
$ ./genconfs --distributed ./server-ips
When a recommended setting is available, press Enter to use it
[FilePath] Which directory should hold the log files and SQLite DB's? (recommended: ./log)
... etc ...

For details about what each of these configuration choices do, please refer to the “Configuration Files” section.

Interacting With a Running Cluster

Interaction with the cluster is performed via the Kadena REST API, exposed by each running node. The endpoints of interest here support the Pact REST API for executing transactional and local commands on the cluster.

The kadenaclient tool

Kadena ships with kadenaclient, which is a command-line tool for interacting with the cluster via the REST API. It is an interactive program or “REPL”, similar to the command-line itself. It supports command history, such that recently-issued commands are accessible via the up- and down-arrow keys, and the history can be searched with Control-R.

Getting help

The help command documents all available commands.

$ ./
node3> help
Command Help:
sleep [MILLIS]
    Pause for 5 sec or MILLIS
cmd [COMMAND]
    Show/set current batch command
data [JSON]
    Show/set current JSON data payload
load YAMLFILE [MODE]
    Load and submit yaml file with optional mode (transactional|local), defaults to transactional

server command

Issue server to list all nodes known to the client, and server NODE to point the client at the REST API for NODE.

node0> server
Current server: node0
node0: ["Alice", sending: True]
node1: ["Bob", sending: True]
node2: ["Carol", sending: True]
node3: ["Dinesh", sending: True]
node0> server node1

load command

load is designed to assist with initializing a new environment on a running blockchain, accepting a yaml file to instruct how the code and data is loaded.

The “demo” smart contract explores this:

$ tree demo
├── demo.pact
├── demo.repl
└── demo.yaml

$ cat demo/demo.yaml
data: |-
    "keys": ["demoadmin"]
    "pred": ">"
codeFile: demo.pact
keyPairs:
  - public: 06c9c56daa8a068e1f19f5578cdf1797b047252e1ef0eb4a1809aa3c2226f61e
    secret: 7ce4bae38fccfe33b6344b8c260bffa21df085cf033b3dc99b4781b550e1e922
batchCmd: |-
  (demo.transfer "Acct1" "Acct2" 1.00)

Sample Usage: running the payments demo (non-private) and testing batch performance

Launch the client and (optionally) target the leader node (in this case node0). The only reason to target the leader is to forgo the forwarding of new transactions to the leader. The cluster will handle the forwarding automatically.

node3> server node0

Initialize the chain with the payments smart contract, and create the global/non-private accounts. Note that the load process has an optional feature to set the batch command (see docs for cmd in help). The demo.yaml sets the batch command to transfer 1.00 between the demo accounts.

node0> load demo/demo.yaml
status: success
data: Write succeeded

Setting batch command to: (demo.transfer "Acct1" "Acct2" 1.00)

node0> exec (demo.create-global-accounts)
account      | amount       | balance      | data
"Acct1"      | "1000000.0"  | "1000000.0"  | "Admin account funding"
"Acct2"      | "0.0"        | "0.0"        | "Created account"

Execute a single dollar transfer and check the balances again with read-all. exec sends a command to execute transactionally on the blockchain; local queries the local node (here “node0”) to prevent a needless transaction for a query.

node0> exec (demo.transfer "Acct1" "Acct2" 1.00)
status: success
data: Write succeeded

node0> local (
account      | amount       | balance      | data
"Acct1"      | "-1.00"      | "999999.00"  | {"transfer-to":"Acct2"}
"Acct2"      | "1.00"       | "1.00"       | {"transfer-from":"Acct1"}

Verify that cmd is properly setup, and perform a batch test. batch N will create N identical transactions, using the command specified in cmd for each, and then send them to the cluster via the server specified by server (in this case to node0).

Once sent, the client will listen for the final transaction, collect and show its timing metrics, and print out the throughput seen in the test (i.e. N / “Finished Commit”). The “First Seen” time is the moment when the targeted server first saw the batch of transactions, and the “Finished Commit” time delta fully captures the time taken by the replication, consensus, cryptography, and execution of the final transaction (meaning that all previous transactions needed to first be fully executed).

Some of the metrics may be of interest to you:

node0> cmd
(demo.transfer "Acct1" "Acct2" 1.00)
node0> batch 4000
Preparing 4000 messages ...
Sent, retrieving responses
Polling for RequestKey: "b768a85c6e1a06d4cfd9760dd981b675dcd9dc97ee8d7abc756246107f2ea03edd80e10e5168b41ee96a17b098ea3285a0f5ca9c61c4d974a7832e01f354dcf9"
First Seen:          2017-03-19 05:43:14.571 UTC
Hit Turbine:        +24.03 milli(s)
Entered Con Serv:   +39.83 milli(s)
Finished Con Serv:  +52.41 milli(s)
Came to Consensus:  +113.00 milli(s)
Sent to Commit:     +113.94 milli(s)
Started PreProc:    +690.55 milli(s)
Finished PreProc:   +690.66 milli(s)
Crypto took:         115 micro(s)
Started Commit:     +1.51 second(s)
Finished Commit:    +1.51 second(s)
Pact exec took:      179 micro(s)
Completed in 1.517327sec (2637 per sec)
node0> exec (
account      | amount       | balance      | data
"Acct1"      | "-1.00"      | "995999.00"  | {"transfer-to":"Acct2"}
"Acct2"      | "1.00"       | "4001.00"    | {"transfer-from":"Acct1"}

If you would like to view the performance metrics from each node in the cluster, this can be done via pollMetrics <requestKey>

node0> pollMetrics b768a85c6e1a06d4cfd9760dd981b675dcd9dc97ee8d7abc756246107f2ea03edd80e10e5168b41ee96a17b098ea3285a0f5ca9c61c4d974a7832e01f354dcf9
##############  node3  ##############
First Seen:          2017-03-19 05:43:14.571 UTC
Hit Turbine:        +24.03 milli(s)
Entered Con Serv:   +39.83 milli(s)
Finished Con Serv:  +52.41 milli(s)
Came to Consensus:  +113.00 milli(s)
Sent to Commit:     +113.94 milli(s)
Started PreProc:    +690.55 milli(s)
Finished PreProc:   +690.66 milli(s)
Crypto took:         115 micro(s)
Started Commit:     +1.51 second(s)
Finished Commit:    +1.51 second(s)
Pact exec took:      179 micro(s)
##############  node2  ##############
First Seen:          2017-03-19 05:43:14.868 UTC
... etc ...

NB: the Crypto took metric is only accurate when the concurrency system is set to Threads. All other metrics are always accurate.
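The throughput figure printed at the end of a batch run is just the batch size divided by the end-to-end time. A small sketch reproduces the transcript’s number (rounding up, as the REPL output appears to; the helper name is illustrative):

```python
import math

def throughput_per_sec(batch_size, finished_commit_secs):
    """Commits per second: batch size over the 'Finished Commit' delta,
    rounded up to a whole number of transactions."""
    return math.ceil(batch_size / finished_commit_secs)
```

For the run above, throughput_per_sec(4000, 1.517327) gives the 2637 per sec shown in the transcript.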

Sample Usage: Stressing the Cluster

The aforementioned performance test revolves around the idea that, in production, there will be a resource pool that batches new commands and forwards them directly to the Leader for the cluster. This is, by far, the best architectural setup from a performance and efficiency perspective. However, if you’d like to test the worst-case setup – one where new commands are distributed evenly across the cluster and the cluster is forced to forward and batch them as best it can – you can use par-batch.

par-batch TOTAL_CMD_CNT CMD_RATE_PER_SEC DELAY works much like batch except that kadenaclient will evenly distribute the new commands across the entire cluster. It creates TOTAL_CMD_CNT commands first and then submits portions of the new command pool to each node in individual batches with a DELAY millisecond pause between each submission. Globally, it will achieve the specified CMD_RATE_PER_SEC.

For example, on a 4 node cluster par-batch 10000 1000 200 will submit 50 new commands to each node every 200ms for 10 seconds.

NB: In-REPL performance metrics for this test are inaccurate. Also, because this is the worst-case architecture, the cluster will make a best effort at performance, but it will not be as high as with batch.
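The arithmetic behind the par-batch example above can be sketched as follows (an illustrative helper, not part of kadenaclient):

```python
def par_batch_plan(total_cmds, rate_per_sec, delay_ms, cluster_size):
    """Given par-batch arguments and the cluster size, return the number of
    commands submitted to each node per DELAY interval and the total run time."""
    per_interval_global = rate_per_sec * delay_ms / 1000  # cmds per DELAY window
    per_node = per_interval_global / cluster_size
    duration_secs = total_cmds / rate_per_sec
    return per_node, duration_secs
```

On a 4-node cluster, par_batch_plan(10000, 1000, 200, 4) yields 50 commands per node every 200 ms over a 10 second run, matching the example above.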

Sample Usage: Running the payments demo with private functionality

Refer to “Entity Configuration” below for general notes on privacy configurations. This demo requires that there be 4 entities configured by the genconfs tool, which will name them “Alice”, “Bob”, “Carol” and “Dinesh”. These would correspond to business entities on the blockchain, communicating with private messages over the blockchain. Confirm this setup with the server command.

Launch the cluster, and load the demo.yaml file.

node3> load demo/demo.yaml
status: success
data: TableCreated

Setting batch command to: (demo.transfer "Acct1" "Acct2" 1.00)

Create the private accounts by sending a private message that executes a multi-step pact to create private accounts on each entity. Change to Alice’s server (node0) and send a private message to the other 3 participants with the demo code:

node3> server node0
node0> private Alice [Bob Carol Dinesh] (demo.create-private-accounts)

The private command creates an encrypted message, sent from Alice to Bob, Carol and Dinesh. The create-private-accounts pact executes a single command on the different servers. To see the results, perform local queries on each node.

node0> local (
account | amount   | balance  | data
"A"     | "1000.0" | "1000.0" | "Created account"
node0> server node1
node1> local (
account | amount   | balance  | data
"B"     | "1000.0" | "1000.0" | "Created account"
node1> server node2
node2> local (
account | amount   | balance  | data
"C"     | "1000.0" | "1000.0" | "Created account"
node2> server node3
node3> local (
account | amount   | balance  | data
"D"     | "1000.0" | "1000.0" | "Created account"

This illustrates how the different servers (which would be presumably behind firewalls, etc) contain different, private data.

Now, execute a confidential transfer between Alice and Bob, transferring money from Alice’s account “A” to Bob’s account “B”.

For this, the pact “payment” is used, which executes the debit step on the “source” entity, and the credit step on the “dest” entity. You can see the function docs by simply executing payment:

node3> local demo.payment
status: success
data: (TDef defpact demo.payment (src-entity:<i> src:<j> dest-entity:<k> dest:<l>
  amount:<m> -> <n>) "Two-phase confidential payment, sending money from SRC at SRC-ENTITY

Set the server to node0 for Alice, and execute the pact to send 1.00 to Bob:

node3> server node0
node0> private Alice [Bob] (demo.payment "Alice" "A" "Bob" "B" 1.00)
status: success
  amount: '1.00'
  result: Write succeeded

To see the results, issue local queries on the nodes. Note that node2 and node3 are unchanged:

node0> local (
account | amount  | balance  | data
"A"     | "-1.00" | "999.00" | {"tx":5,"transfer-to":"B","message":"Starting pact"}
node0> server node1
node1> local (
account | amount | balance   | data
"B"     | "1.00" | "1001.00" | {"tx":5,"debit-result":"Write succeeded","transfer-from":"A"}
node1> server node2
node2> local (
account | amount   | balance  | data
"C"     | "1000.0" | "1000.0" | "Created account"
node2> server node3
node3> local (
account | amount   | balance  | data
"D"     | "1000.0" | "1000.0" | "Created account"

You can also test out the rollback functionality on an error. Mistype the recipient account id (in this case we use “bad” instead of “B”). The pact will execute the debit on Alice/node0; attempt the credit on Bob/node1, failing because of the bad ID; finally the rollback will execute on Alice/node0. Bob’s account will be unchanged, while Alice’s account will note the rollback with the original tx id of the pact execution.

node0> private Alice [Bob] (demo.payment "Alice" "A" "Bob" "bad" 1.00)
status: success
  tx: 7
  amount: '1.00'
  result: Write succeeded

node0> local (
account | amount | balance  | data
"A"     | "1.00" | "999.00" | {"rollback":7}
node0> server node1
node1> local (
account | amount | balance   | data
"B"     | "1.00" | "1001.00" | {"tx":5,"debit-result":"Write succeeded","transfer-from":"A"}

NB: The result of the first send shows you the result of the first part of the multi-phase tx, thus the “success”/”Write succeeded” status. Querying the database reveals the rollback which occurred two transactions later.

Sample Usage: Inserting multiple records

You can test inserting multiple records into a sample client database with the following commands:

node0> load demo/orders.yaml

This creates an 'orders' table into which records can be inserted.

The command:

node0> loadMultiple 0 3000 demo/orders.txt

will insert 3000 records into the orders table. The file orders.txt serves as a template for the order records, and contains special strings of the form “${count}” that will be replaced with the numbers from 0 through 2999 as the records are inserted. All 3000 records are sent in a single HTTP ‘batch’ command.
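The “${count}” substitution described above works like ordinary string templating. A minimal sketch of the expansion (the record template here is made up, not the real contents of orders.txt):

```python
from string import Template

def expand_records(template_text, start, count):
    """Replace ${count} with start .. start+count-1, one record per value,
    mirroring how loadMultiple expands its template file."""
    t = Template(template_text)
    return [t.substitute(count=i) for i in range(start, start + count)]
```

Choosing a non-overlapping start value for each run is what keeps the generated record identifiers unique.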

You can run additional loadMultiple commands, but the initial ‘count’ (0 in the last example) must be chosen to not overlap with previously inserted rows. So subsequent commands could be:

node0> loadMultiple 3000 3000 demo/orders.txt

node0> loadMultiple 6000 3000 demo/orders.txt

node0> loadMultiple 9000 3000 demo/orders.txt


Sample Usage: Viewing the Performance Monitor

Each kadena node, while running, will host a performance monitor at the URL <nodeId.public-ip>:<nodeId.port>/monitor.

Sample Usage: Running Pact TodoMVC

This repo also bundles the Pact TodoMVC. Each Kadena node will host the frontend at <nodeId.public-ip>:8000/todomvc. To initialize the todomvc:

$ cd <kadena-directory>

# launch the cluster

$ ./bin/<OS-name>/
node3> load todomvc/demo.yaml

# go to <public-ip-addr>:8000/todomvc

NB: this demo can be run at the same time as the payments demo.

Sample Usage: Running a cluster on AWS

The Ansible playbooks and scripts we use for testing Kadena on AWS are now available as well, located in <kadena-directory>/aws. Refer to <kadena-directory>/docs/ for detailed instructions on how to use these Ansible playbooks and scripts.

Sample Usage: Querying the Cluster for Server Metrics

Each kadenaserver hosts an instance of ekg (a performance monitoring tool) on <node-host>:<node-port>+80. It returns a JSON blob of the latest tracked metrics. The following shell script extract uses this mechanism for status querying:

  for i in `cat kadenaservers.privateIp`;
    do echo $i ;
      curl -sH "Accept: application/json" "$i:10080" | jq '.kadena | {role: .node.role.val, commit_index: .consensus.commit_index.val, applied_index: .node.applied_index.val}' ;
    done
  exit 0

NB: The port that genconfs assigns when running in --distributed mode is 10000; therefore ekg runs on port 10080 on each node.

Configuration File Documentation

Generally, you won’t need to personally edit the configuration files for either the client or server(s), but this information is available should you wish to. The executable genconfs will create the configuration files for you and offer recommended settings based on your choices.

Server (node) config file

Node Specific Information


Each consensus node requires a unique Ed25519 keypair and nodeId.

myPublicKey: 53db73154fbb0c57129a0029439e5fc448e1199b6dcd5601bc08b48c5d9b0058
myPrivateKey: 0c2b9f177cee13c698bec6afe2e635ca244ce402ccbd826a483f25f618beec8f
nodeId:
  alias: node0
  fullAddr: tcp://
  host: ''
  port: 10000

Other Nodes

Each consensus node further requires a map of every other node, as well as their associated public key.

- alias: node1
  fullAddr: tcp://
  host: ''
  port: 10001
- alias: node2
  fullAddr: tcp://
  host: ''
  port: 10002
- alias: node3
  fullAddr: tcp://
  host: ''
  port: 10003

- - node0
  - 53db73154fbb0c57129a0029439e5fc448e1199b6dcd5601bc08b48c5d9b0058
- - node1
  - 65d59bda770dd6de2b25308b2e039714fec752e42d11af3712159f27e9e295f4
- - node2
  - bd1700e6f206315debabfa5bf42228ed4f9e78cacbffabcca74ff4f67e5ac7a4
- - node3
  - 8d6f928659ea57be2ac19d64af05ca0ccb0f42303f0d668d1263c9a4c8b36925

Runtime Configuration

Kadena uses SQLite by default for caching and persisting various data. Upon request, Oracle, MS SQL Server, Postgres, and generic ODBC backends are also available.

While this is pretty low level tuning, Kadena nodes can be configured to use different concurrency backends. We recommend the following defaults but please reach out to us if you have questions about tuning.

preProcUsePar: true
preProcThreadCount: 100

Consensus Configuration

These settings should be identical for each node.

Entity Configuration and Confidentiality

The Kadena platform uses the Noise protocol, as used by Signal, WhatsApp and Facebook, to provide strong on-chain encryption. Messages are encrypted with perfect forward secrecy, resulting in opaque blobs on-chain that leak no information about contents, senders or recipients.

Configuring for confidential execution employs the notion of “entities”, which identify sub-clusters of nodes as belonging to a business entity with private data. Entities maintain keys for encrypting as well as signing.

Within an entity sub-cluster, a single node is configured as a “sending node” which must be used to initiate private messages; this allows other sub-cluster nodes to avoid race conditions surrounding the key “ratcheting” used by the Noise protocol for forward secrecy; this way sub-cluster nodes will stay perfectly in sync and replicate properly.

For a given entity, the signer and local entries must match for all nodes in the sub-cluster; only one node may be designated as the sender, by setting sender to true for that node only. The remotes entry lists the static public key and the entity name for each remote entity in the cluster.

The signer private and public keys are ED25519 signing keys; the secret and public keys for local ephemeral and static keys, as well as for remote public keys, are Curve25519 Diffie-Hellman keys.

Performance Considerations

While genconfs will make a best guess at the best configuration for your cluster based on your inputs, it may be off. To that end, here are some notes in case you find yourself seeing unexpected performance numbers.

The relationship of aeBatchSize to heartbeatTimeout determines the upper bound on performance, specifically aeBatchSize/heartbeatTimeoutInSeconds = maxTransactionsPerSecond. This is because when the cluster has a large number of pending transactions to replicate, it will replicate up to aeBatchSize transactions every heartbeat until the cluster has caught up. Generally, it’s best to have maxTransactionsPerSecond be 1.5x of the expected performance, which itself is ~8k/second.
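The formula above can be checked numerically. This sketch uses illustrative helper names and example numbers (the 1.5x headroom and ~8k/second expectation come from the text; the concrete heartbeat values are assumptions):

```python
def max_tps(ae_batch_size, heartbeat_secs):
    """Upper bound on throughput: aeBatchSize / heartbeatTimeoutInSeconds."""
    return ae_batch_size / heartbeat_secs

def recommended_ae_batch_size(expected_tps, heartbeat_secs, headroom=1.5):
    """Size aeBatchSize so that maxTransactionsPerSecond is ~1.5x the
    expected throughput, per the guidance above."""
    return expected_tps * headroom * heartbeat_secs
```

For example, with an expected ~8k/second and a hypothetical 1-second heartbeat, the recommended aeBatchSize is 12000, giving a 12k/second ceiling.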

Because of the way that we measure performance, which starts from the moment that the cluster’s Leader node first sees a transaction to when it fully executes the Pact smart contract (inclusive of the time required for replication, consensus, and cryptography), the logic of the Pact smart contract itself will impact performance. Thus, executing simple logic like (+ 1 1) will achieve 12k commits/second whereas a smart contract with numerous database writes will vary based on the backend used and the complexity of the data model.

Client (repl) config file

Example of the client (repl) configuration file. genconfs will also auto-generate this for you.

PublicKey: 53db73154fbb0c57129a0029439e5fc448e1199b6dcd5601bc08b48c5d9b0058
SecretKey: 0c2b9f177cee13c698bec6afe2e635ca244ce402ccbd826a483f25f618beec8f

(c) Kadena 2017