
Mirror Node / Light nodes

· 3 min read
Philip
Tagion Core Contributor

This is a proposal for introducing a new type of node that provides the same interface as a normal node, but does not perform consensus. It acts as a "relay station" for clients to communicate with.

Motivation

The motivation for creating mirror nodes is that they quickly allow users to run their own nodes, since doing so would not require swapping. It is also a step towards more decentralisation of the system, since these nodes provide resiliency for the data stored in the DART. Mirror nodes are also the first step towards further distribution and decentralization of the system, since DART synchronization catch-up would need to work and be exercised at a greater scale.

Requirements

The requirement for a mirror node is to provide the same external protocols as a full node; see Protocols: public hirpc methods for more information. The node is therefore also required to have enough space to store the DART locally.

Starting a mirror node has to be very well documented, so that it is easy to boot.

Proposed solution

The tagionshell can stay exactly the same, as it acts as a caching / interface layer. Neuewelle would provide a new switch that starts a mirror node. It might be better to create a new program for mirror nodes, but this will have to be discussed. The mirror node works by subscribing to other nodes' recorders and constantly updating its own DART and TRT. The nodes it communicates with would, for a start, be the ones located in the DART. If the mirror node has fallen behind, it will start by syncing up before accepting outside requests. This is done via DART synchronization, and the node verifies that the information is correct while doing so. A new service called DARTSynchronization will be responsible for this.
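As a rough illustration of this catch-up flow, the sketch below models a DART as a dict from index to document and a recorder as a list of (op, index, doc) changes. All names here are illustrative stand-ins, not actual Tagion APIs.

```python
# Minimal sketch of the mirror node catch-up flow: sync the local DART
# from a peer first, then follow the subscribed recorder stream.
# The dict-based DART and tuple-based recorder are simplifications.

def sync_missing(local_dart, peer_dart):
    """Phase 1 (DART synchronization): copy archives the peer has that
    the mirror node lacks, before accepting outside requests."""
    for index, doc in peer_dart.items():
        if index not in local_dart:
            local_dart[index] = doc

def apply_recorder(local_dart, recorder):
    """Phase 2: apply a subscribed recorder (add/delete) to the DART."""
    for op, index, doc in recorder:
        if op == "add":
            local_dart[index] = doc
        elif op == "delete":
            local_dart.pop(index, None)

local = {"a": 1}
sync_missing(local, {"a": 1, "b": 2, "c": 3})
apply_recorder(local, [("delete", "b", None), ("add", "d", 4)])
print(sorted(local))  # -> ['a', 'c', 'd']
```

The same two phases would apply to the TRT, which is updated from the same recorder stream.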

The switch would also prevent the Transcript from being spawned, since the node does not perform consensus. All other services are still started, as they remain important for verifying incoming transactions.
Once a transaction has gone through the TVM and reaches the EpochCreator, it is gossiped to other nodes via their public methods.

Future updates

In the future, once the database grows bigger, nodes could also run while keeping only sections of the database backed up. If they receive a transaction that requires information from other sectors, they will ask nodes they know hold this information.

Testing Environment Options for Tagion tools

· 6 min read
Ivan
Tagion Core Contributor

This document explores various options for setting up a testing environment for Tagion tools, considering the programming languages already used in the project (D-lang, Bash, Python, and Jest with NodeJS+TypeScript). Each option is evaluated based on convenience to write tests, simplicity of setup, organization for many tests, and reporting capabilities.

Testing options

Bash Scripting

Pros:

  • Direct execution on Linux, no extra setup.
  • Simple to invoke binaries and compare outputs.

Cons:

  • Limited flexibility for complex parsing/comparison.
  • Basic error handling and reporting.

Evaluation:

  • Convenience to write test: 7
  • Simplicity of setup: 10
  • Organization for many tests: 5
  • Reporting: 3
  • Average: 6.25

Minimal Example:

#!/bin/bash
expected_output="expected.txt"
actual_output="actual.txt"
./your_binary inputfile > "$actual_output"
if diff "$expected_output" "$actual_output"; then
    echo "Test passed"
else
    echo "Test failed"
fi

D-lang with std.process

Pros:

  • Seamless integration with D-lang projects.
  • Powerful language features for complex tests.

Cons:

  • Manual organization for tests required.

Evaluation:

  • Convenience to write test: 7
  • Simplicity of setup: 8
  • Organization for many tests: 6
  • Reporting: 5
  • Average: 6.5

Minimal Example:

import std.process;
import std.stdio;
import std.file;

void main() {
    auto expectedOutput = readText("expected.txt");
    auto actualOutput = executeShell("./your_binary inputfile");
    assert(actualOutput.output == expectedOutput, "Test failed");
}

pytest with Python

Pytest is a mature full-featured Python testing tool that helps you write better programs. It simplifies the creation, organization, and execution of tests, including complex functional testing.

Pros:

  • Easy to start with due to its simple syntax for writing tests.
  • Powerful fixture system for setup and teardown, which is particularly useful for pre-running processes or configurations.
  • Supports parameterized tests and can run tests in parallel.
  • Rich plugin architecture for extending functionality.
  • Excellent support for different types of tests, from unit to integration and end-to-end tests.
  • Automatic test discovery.
  • Detailed and customizable reports, outputting to both console and files.

Cons:

  • Requires familiarity with Python.
  • Environment setup involves creating a Python virtual environment and installing dependencies.

Evaluation:

  • Convenience to write test: 9
  • Simplicity of setup: 7
  • Organization for many tests: 9
  • Reporting: 9
  • Average: 8.5

Minimal Example (Test File Skeleton):

import pytest

def test_feature_1():
    assert True

def test_feature_2():
    assert True

Jest with NodeJS and TypeScript

Jest is a delightful JavaScript Testing Framework with a focus on simplicity. It works with projects using: Babel, TypeScript, Node, React, Angular, Vue, and more. It's well-suited for JavaScript and TypeScript projects, making it a popular choice for frontend and backend testing.

Pros:

  • Zero configuration for many projects, with automatic discovery of test files.
  • Built-in code coverage reports, with support for console and file outputs.
  • Rich mocking, spying, and test isolation features.
  • Supports testing asynchronous code out of the box.
  • Integrated with modern JavaScript ecosystems.

Cons:

  • Primarily focused on the JavaScript/TypeScript ecosystem, might not be ideal for non-JS projects.
  • Can become slow in large projects without proper configuration.

Evaluation:

  • Convenience to write test: 9
  • Simplicity of setup: 8
  • Organization for many tests: 9
  • Reporting: 9
  • Average: 8.75

Minimal Example (Test File Skeleton):

Creating two test files for different tools, toolA and toolB, with two example tests in each:

tests/toolATests.test.ts

describe('Tool A Tests', () => {
  test('Feature 1 should work', () => {
    // Test implementation
  });

  test('Feature 2 should work', () => {
    // Test implementation
  });
});

Summary

When scaling up to about 10 tests for each command-line tool, organization and maintenance become crucial. Scripting solutions like Bash and Makefiles might start simple but can quickly become unwieldy as complexity grows. Python and Jest, with their structured testing frameworks, offer more scalability and maintainability, making them suitable for larger test suites. D-lang provides a middle ground, with strong language features but potentially requiring more manual organization.

Each option's ability to handle multiple tests effectively varies, with Jest and Python offering more structured approaches that scale better as the number of tests increases. Bash and D-lang, while capable, may require more manual effort to maintain clarity and organization as the suite expands.

Pytest vs. Jest Comparison

When comparing Pytest and Jest for a project with about 5-10 tools and up to 10 tests for each tool, several factors are crucial, including the complexity of organization, reporting capabilities, environment setup, and the ability to pre-run processes or configurations for tests.

Organization

Pytest:

  • Test files and functions are automatically discovered based on naming conventions.
  • Supports structuring tests in a modular way using directories and files.
  • The fixture system provides a powerful way to set up and tear down configurations or dependencies.

Jest:

  • Similar to Pytest, Jest discovers tests based on naming conventions and supports organization using directories and files.
  • Jest's setup and teardown mechanisms are managed through global or individual test lifecycle hooks.

Both frameworks support a clean and scalable organization of tests, but Pytest's fixture system is exceptionally versatile for managing dependencies and state.

Reporting

Pytest:

  • Offers detailed reports in the console, highlighting failed tests with specific error messages.
  • Supports generating reports in various formats, including HTML, through plugins.

Jest:

  • Provides an interactive watch mode with clear output in the console, including a summary of test suites and individual tests.
  • Capable of outputting coverage reports in various formats directly.

Both Pytest and Jest offer excellent reporting capabilities, with both console and file outputs. Pytest's plugin system and Jest's built-in coverage tool are highlights.

Environment Setup

Pytest Setup on a fresh Ubuntu:

# Update packages and install Python and pip
sudo apt update
sudo apt install python3 python3-pip -y

# (Optional but recommended) Create and activate a virtual environment
python3 -m venv venv
source venv/bin/activate

# Install dependencies from requirements.txt
pip install -r requirements.txt

# Run tests with Pytest
pytest

Jest Setup on a fresh Ubuntu:

# Update packages and install Node.js and npm
sudo apt update
sudo apt install nodejs npm -y

# Install project dependencies including Jest
npm install

# Run tests with Jest
npm test

Pytest requires Python-specific setup, while Jest requires Node.js ecosystem setup. The complexity is similar, but the familiarity with the respective language's environment might sway the preference.

Handling Pre-run Processes

Pytest:

  • Can use fixtures to start and stop background processes or perform setup tasks before running tests.

Jest:

  • Utilizes global setup/teardown files or beforeEach/afterEach hooks for similar purposes.

Both frameworks provide mechanisms to manage pre-run processes, but Pytest's fixtures offer more granularity and control.

Summary

Choosing between Pytest and Jest largely depends on the primary technology stack of the project and the team's familiarity with Python or JavaScript/TypeScript. For Python-centric projects or when testing requires intricate setup and teardown, Pytest is exceptionally powerful. Jest, being part of the JavaScript ecosystem, is ideal for projects already using Node.js, particularly when uniformity across frontend and backend testing is desired.

Subscription API

· 2 min read
Lucas
Tagion Core Contributor

This proposal aims to aid the contract tracing proposal by providing an external API to query the data, and by making real-time data easier to access.

Motivation

Current wallet implementations rely on polling the shell in order to know when their balance has changed and when a transaction has gone through. The obviously superior alternative is to let the server notify the client when it has data, be that via long-polling, sockets, SSE, etc. Note: the kernel node exposes an nng subscription socket which publishes all data to the connected clients, where data is filtered client-side. This socket is not intended to be exposed externally.

Requirements

The API should be content driven: cheap for the server to decide which events to send, while still being flexible enough that clients have to do minimal filtering.

Proposed Solution

The client sends a HiRPC with the subscribe method. The single parameter is an object with the following structure.

struct SubFilter {
    @optional string typename;
    @optional DARTIndex[] dartindex;
    @optional Archive.Type type;
    bool verify() {
        return !typename.empty || !dartindex.empty;
    }
}

A response event should have the same format as the response to a dartRead command: it should contain a recorder with documents matching the filter.
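For illustration, the implied server-side matching could look like the sketch below. The field names mirror the SubFilter struct above, but the OR-style matching rule is an assumption, not the final specification.

```python
# Sketch of server-side event filtering for the subscribe method.
# Subscriptions and documents are plain dicts; the rule that a document
# matches if EITHER the typename or a dartindex matches is an assumption.

def verify(sub):
    """Mirror of SubFilter.verify: at least one filter must be set."""
    return bool(sub.get("typename") or sub.get("dartindex"))

def matches(sub, doc):
    """Should this document be pushed to the subscriber?"""
    if sub.get("typename") and doc["typename"] == sub["typename"]:
        return True
    if sub.get("dartindex") and doc["dartindex"] in sub["dartindex"]:
        return True
    return False

sub = {"typename": "$@trt"}
assert verify(sub)
assert matches(sub, {"typename": "$@trt", "dartindex": "idx1"})
assert not matches(sub, {"typename": "$@E", "dartindex": "idx1"})
```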

Examples

Subscribe to any new or updated document which you own:

trt.subscribe

SubFilter sub;
sub.typename = "$@trt";
sub.dartindex = [dartindex(#$Y, <mypubkey>)];

Subscribe to new epochs

subscribe

SubFilter sub;
sub.typename = "$@E";

Contract tracing proposal

· 3 min read
Philip
Tagion Core Contributor

Current problems with tracing

Currently, if you want to know whether a contract has gone through, the only way to figure it out as a client or debugger is to check whether the inputs of the contract were deleted and the outputs added. Even this does not guarantee that a specific contract went through; it might have been another one. Developers also have a difficult time debugging, since the only current option is to go through the log, which is cumbersome when many contracts are sent at the same time.

Proposed solution

Important aspects that need to be fulfilled by the tracing are as follows.

  • The logging should use the logger-service, in order to not send the log if there are no listeners.
  • It should not slow down the core, which means that the information must be pushed out from the respective services.
  • It should contain different options for tracing, allowing for pushing a simple (true, false) in prod for whether a contract has gone through, or more verbose information in debug mode in order to see where the contract got stuck.
  • You should be able to make a request to the system which can return if a contract has gone through.

It should be easily extendable in order to support functionality for a future explorer.

Tracing

The unique identifier for each contract should be the contract hash, which is unique for all contracts coming into the system. This contract hash should be logged with a specific identifier ("CONTRACT_contract_hash"?), which allows users to subscribe to a specific contract or to all contracts. The logging should happen in all actors throughout the stack, with the inputvalidator being the most important, indicating that the contract was received, and the dart/trt telling whether the contract has gone through in the end.

New TRT Archive

We could add a new archive to the trt which contains:

@recordType(TYPENAME~"contract_trt")
struct TRTContract {
    @label("#CONTRACT") Buffer contract_hash; // the contract hash as a name record
    long epoch_number; // the epoch number
}

This will allow users to look up whether a contract has gone through with a trt.checkRead method, or perform a trt.dartRead in order to see in what epoch the contract went through. This also means that contract hashes have to be stored in the recorder as well, so that the trt can be rebuilt at a later stage and does not hold state by itself.
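A toy model of that lookup is sketched below. The dict stands in for the TRT; a real client would use trt.checkRead for existence and trt.dartRead for the full archive with the epoch number.

```python
# Toy model of the lookup enabled by the TRTContract archive above.
# The dict is a stand-in for the TRT keyed by contract hash.

def contract_epoch(trt, contract_hash):
    """Return the epoch the contract went through, or None if absent."""
    archive = trt.get(contract_hash)  # stand-in for trt.dartRead
    return archive["epoch_number"] if archive else None

trt = {"ab12": {"epoch_number": 42}}
print(contract_epoch(trt, "ab12"))  # -> 42
print(contract_epoch(trt, "ffff"))  # -> None
```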

First steps

  1. Implement logging on contracts through the stack.
  2. Create a simple CLI program which, by subscribing to all contracts, prints a new line with "CONTRACT_HASH, STATE" each time a new event is created. This will be the base for a new debug tool in the future.
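Step 2 could be sketched like this; the event list is a stand-in for a real subscription feed, and the state names are hypothetical.

```python
# Sketch of the debug CLI from step 2: turn contract events into
# "CONTRACT_HASH, STATE" lines. The event list stands in for a real
# subscription to the contract log topics; state names are made up.

def trace_lines(events):
    """Format one output line per contract event."""
    return [f"{e['hash']}, {e['state']}" for e in events]

events = [
    {"hash": "ab12", "state": "inputvalidator"},  # contract received
    {"hash": "ab12", "state": "dart"},            # contract stored
]
for line in trace_lines(events):
    print(line)
```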

Cache

· 4 min read
Philip
Tagion Core Contributor

Current problems with cache and system interfaces

The cache does not have "bucket" storage on pubkeys, meaning that it will not work in its current format if a user has multiple bills on the same pubkey. This could be fixed by updating the cache to have bucket storage; each time we get an update from the DART in the recorder regarding a bill, we construct a new search request for all the bills' pubkeys and send it against the DART.

The problem with the above idea is that, in case a user has many bills on a single pubkey, we might need to return 1000 bills on a request, which is not a scalable solution. Maybe it is better to update the requests?

Proposed solution

We introduce a new method called hirpc.trt.dartRead (and also deprecate hirpc.search), whose goal is to return all DARTIndices for a specific public key. This will greatly reduce the overall response size, since a DARTIndex is only 32 bytes.

  1. The user sends a hirpc.trt request for all their public keys and gets back all DARTIndices from the TRT (or cache) where archives were found.
  2. The user checks the bills in their bill[] against the returned DARTIndices. If a bill is in their bill[] but not in the response, it is no longer in the system and has been deleted. Likewise, if an index is in the response but not in their bill[], the user might have received further payment.
  3. The user takes the indices which were in the response but not in their bill[] and performs a hirpc.dartRead only on the indices that are necessary to read (new archives).
  4. The user is returned the newly found archives which they own.
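The comparisons in steps 2 and 3 boil down to set arithmetic. A minimal sketch, assuming indices are comparable strings:

```python
# Sketch of steps 2-3: compare locally known bill indices with the
# DARTIndices returned by hirpc.trt.dartRead. Local-only indices mean
# deleted bills; response-only indices are new archives to fetch with
# hirpc.dartRead.

def diff_bills(local_indices, trt_response):
    local, remote = set(local_indices), set(trt_response)
    deleted = local - remote  # bills no longer in the system
    to_read = remote - local  # indices for the follow-up dartRead
    return deleted, to_read

deleted, to_read = diff_bills(["i1", "i2"], ["i2", "i3"])
print(sorted(deleted), sorted(to_read))  # -> ['i1'] ['i3']
```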

A cache is also created which contains Document[DARTIndex], making the lookup on an index very fast, and acts as a cache layer on the DART. The current cache is changed so that instead of holding TagionBill[Pubkey] it contains DARTIndex[][DARTIndex] and acts as a cache layer on the TRT. Like the other cache, it needs to update itself based on the recorder changes and create new trt requests.

Performing a hirpc.trt.dartRead

Performing a hirpc.dartRead

Updating DARTCache internally

Updating TrtCache internally

The TRT can push its updates to a cache just like the DART does. This is done by pushing the recorder that it modifies itself with. This recorder contains a "full list" of all documents located on a specific public key:

struct TRTArchive {
    @label(TRTLabel) Pubkey owner;
    DARTIndex[] indices;

    mixin HiBONRecord!(q{
        this(Pubkey owner, DARTIndex[] indices) {
            this.owner = owner;
            this.indices = indices;
        }
    });
}

Therefore from the recorder the cache is able to update itself.
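A minimal model of that self-update, treating the TRT cache as an owner-to-index-list mapping as carried by TRTArchive above. The rule that an empty index list removes the cache entry is an assumption.

```python
# Minimal model of the TRT cache updating itself from a recorder. Each
# entry mirrors TRTArchive and carries the FULL index list for an owner,
# so the cache entry is replaced wholesale rather than patched.

def update_trt_cache(cache, recorder):
    """Apply a recorder of TRTArchive-like entries to the cache."""
    for entry in recorder:
        if entry["indices"]:
            cache[entry["owner"]] = list(entry["indices"])
        else:
            cache.pop(entry["owner"], None)  # owner holds nothing anymore

cache = {"alice": ["i1"]}
update_trt_cache(cache, [
    {"owner": "alice", "indices": ["i1", "i2"]},
    {"owner": "bob", "indices": []},
])
print(cache)  # -> {'alice': ['i1', 'i2']}
```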

Other than the above specified cache update, the cache also updates itself on new requests from users.