Introduction
This document describes how MarketGrid can be configured and deployed in a containerised environment.
Overview
Glossary
| Term | Definition |
|---|---|
| Node | A compute resource, e.g. a VM, a bare-metal machine, or a Kubernetes node |
| Container | A running instance of the MarketGrid image |
| Service | A process running within a MarketGrid container |
Background
MarketGrid is deployed as a single image that is designed to run as any component of the MarketGrid platform.
MarketGrid uses Shared Memory as the high-performance mechanism by which its various processes access the data in the platform. In a containerised deployment, the most straightforward approach is therefore to run all the services on a given node (whether physical or virtual) within a single container, so that they all have access to that node's copy of the Shared Memory database.
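As a concrete illustration, the Docker Compose example later in this document gives each container a large shared-memory segment via the shm_size setting, presumably sized to hold that node's copy of the Shared Memory database, so that every MarketGrid service started inside the container can map it:

# Excerpt from the mg-base service in the Docker Compose example later in this document
mg-base:
  image: registry.cartax.io/platform/meta/marketgrid:${RELEASE_TAG}
  shm_size: 10gb          # shared memory available to every service in this container
  network_mode: "host"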
With this in mind, there are several distinct modes in which the Matching Engine can be deployed in order to support the different parts of the system:
Trading Engine Modes
Each mode in which the Trading Engine can run is configured via a scenario, which is used operationally to start a container with the correct services running to provide the functions that the given node is supposed to perform.
All the nodes in the system effectively receive the same sequence of incoming transactions and process them in order to produce their own copy of the Shared Memory database, from which the reader services (such as the UI Servers or the Alerting Engine) read their data.
Main
This is the primary matching engine within the platform. It is the target for all inbound transactions, and it is the component that validates and processes those transactions to create the "golden source": the sequenced transaction log from which all the data in the system is produced. It is this transaction log that provides the authoritative audit trail for the system; it is also this log that is replicated to remote locations for resilient storage in the event of a catastrophic failure at the primary data centre. As such, the Main Matching Engine performs two critical parts of the system workflow that are unique to its role in the environment:
- Initial sequencing and logging of the incoming transactions; and
- Sending replies to those transactions back to the originating user.
None of the other Matching Engine modes perform these functions; they simply accept the sequenced stream of messages and process them in the order in which they are given. The critical sequencing step is where the Main engine takes transactions from the various inbound message queues and interleaves them into a single sequence of transactions for the Matching Engine functionality itself to process one at a time.
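The engine options that distinguish this mode appear in the main scenario under Example scenarios later in this document; in outline:

# Key options from the "main" scenario shown later in this document
matching_engine:
  process: te_engine
  options:
    machinemode: Main
    mainenginelistenaddr: 0.0.0.0      # the Main engine listens on all interfaces
    logstreamerlistenaddr: 0.0.0.0     # the log streamer listens on all interfaces
    logstreamer: 1
    governorconnectaddr: localhost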
Replica
A Replica Matching Engine reads the stream of transactions sent by the Main engine and processes them to produce a copy of the Shared Memory database. The mechanism by which it performs this is essentially the same as for an engine in Backup mode, but a Replica will never offer itself as a candidate to take over in the event of a failure of the Main engine.
A Replica Matching Engine node can be started at any time: it will "catch up" with the Main Matching Engine by downloading past transactions and then proceed to process new transactions in real time.
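In the Example scenarios later in this document, Replica engines (the ui and mdb scenarios) use options along these lines:

# Key options from the Replica ("ui" and "mdb") scenarios shown later in this document
matching_engine:
  process: te_engine
  options:
    machinemode: Replica
    engineconnectaddr1: mghost_main:12001    # connect to the Main engine
    upstreamconnectaddr1: mghost_main:12021
    logstreamer: 1001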
Backup
When running in Backup mode, the Matching Engine replicates the transactions using the same underlying technology as a simple Replica, with the additional functionality of logging the transactions it receives and sending a reply back to the Main engine to confirm that each transaction has been received.
In addition, if the stream of transactions from the Main engine stops, the Backup will attempt to take over as the Main engine, engaging with the Governor process to determine whether the Main has actually failed.
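The backup scenario under Example scenarios later in this document shows the options used for this mode; in outline:

# Key options from the "backup" scenario shown later in this document
matching_engine:
  process: te_engine
  options:
    machinemode: Backup
    mainengineconnectaddr: tcp://mghost_main:12010   # connect to the Main engine
    governorconnectaddr: mghost_main                 # connect to the Governor for failover arbitration
    backuptimeout: 10
    writecacheonsigterm: true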
Replica Nodes
A given MarketGrid environment will only ever deploy a single node whose Matching Engine acts as the primary (i.e. one running in Main or Standalone mode). Where the environment is using the MarketGrid resiliency features, a single node in Backup mode is ordinarily also started (as well as deploying the Governor process to arbitrate in the event of a failover). It is the other MarketGrid services required for a given environment that determine the types and numbers of Replica nodes that are configured.
A given Replica node runs a single container instance that is configured via a scenario to start various other services, giving that node a particular "personality". It is possible to construct scenarios that combine several of these personalities into a single node where desired, but for the purposes of this discussion we will consider them separately. Whilst there is no technical limit on the different types of Replica node, there are a few general categories that are normally deployed:
UI Nodes
graph TD
MTE[Main TE]:::darkblue --> RTE1[Replica TE]:::darkblue
MTE[Main TE]:::darkblue --> RTE2[Replica TE]:::darkblue
subgraph Replica 1
RTE1 --> UI101[UIServer 1]:::pink
RTE1 --> UI102[UIServer 2]:::pink
RTE1 --> UI1kk[...]:::pink
RTE1 --> UI1NN[UIServer N]:::pink
end
subgraph Replica 2
RTE2 --> UI201[UIServer 1]:::pink
RTE2 --> UI202[UIServer 2]:::pink
RTE2 --> UI2kk[...]:::pink
RTE2 --> UI2NN[UIServer N]:::pink
end
classDef green fill:#dfd
classDef darkgreen fill:#bdb
classDef blue fill:#ddf
classDef darkblue fill:#bbd
classDef pink fill:#fdf
classDef darkpink fill:#dbd
External Interface Nodes
graph TD
MTE[Main TE]:::darkblue --> RTE[Replica TE]:::darkblue
subgraph Replica Node
RTE --> IN01[Interface 1]:::pink
RTE --> IN02[Interface 2]:::pink
RTE --> INkk[...]:::pink
RTE --> INNN[Interface N]:::pink
end
classDef green fill:#dfd
classDef darkgreen fill:#bdb
classDef blue fill:#ddf
classDef darkblue fill:#bbd
classDef pink fill:#fdf
classDef darkpink fill:#dbd
Market Database Node
graph TD
MTE[Main TE]:::darkblue --> RTE[Replica TE]:::darkblue
MTE --> RTE1[Replica TE]:::darkblue
subgraph Replica MDB
RTE --> RDB[RDB]:::pink
RDB --> HDB[HDB]:::green
MDG[MDB Gateway]:::blue <--> RDB
MDG <--> HDB
end
subgraph Replica UI N
RTE1 --> UI101[UIServer 1]:::pink
MDG <--> UI101
RTE1 --> UI102[UIServer 2]:::pink
MDG <--> UI102
RTE1 --> UI1kk[...]:::pink
MDG <--> UI1kk
RTE1 --> UI1NN[UIServer N]:::pink
MDG <--> UI1NN
end
classDef green fill:#dfd
classDef darkgreen fill:#bdb
classDef blue fill:#ddf
classDef darkblue fill:#bbd
classDef pink fill:#fdf
classDef darkpink fill:#dbd
Stand Alone Deployments
A simple stand alone node may run no Replica engines at all, simply attaching additional services such as UI Servers or Interface Servers directly to the Main Matching Engine's Shared Memory database. These kinds of deployments are convenient for purposes such as:
- functional testing;
- demonstration systems; or
- supporting developers working on software that interacts with MarketGrid.
The Stand Alone mode performs all the functions of the Main Matching Engine.
graph TD
MTE[Main TE]:::darkblue
MTE --> UI[UIServer]:::pink
MTE --> IN1[Interface 1]:::green
MTE --> IN2[Interface 2]:::green
MTE --> IN3[Interface 3]:::green
classDef green fill:#dfd
classDef darkgreen fill:#bdb
classDef blue fill:#ddf
classDef darkblue fill:#bbd
classDef pink fill:#fdf
classDef darkpink fill:#dbd
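The engine options used for a stand alone deployment appear in the standalone scenario under Example scenarios below; in outline:

# Key options from the "standalone" scenario shown later in this document
matching_engine:
  process: te_engine
  options:
    machinemode: Standalone
    demomode: NoPasswords        # optional demo mode
    autoload: true               # with writecacheonsigterm, restart from previous transaction logs
    writecacheonsigterm: true
    load: datasets/${CLIENT}/${CLIENT_DATASET}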
Scenario Design
Scenarios are an important part of the MarketGrid containerised environment. They serve several purposes, not least of which is to provide a controlled and reproducible configuration and operational framework, allowing a structured deployment of the system from development, through testing and acceptance, and on to staging and production.
Designing the scenarios for deployment is not an ad hoc process: for any given environment, each class of node should be carefully configured into a scenario that can remain consistent across all environments.
For example, a UI Server Replica scenario should be configured with as many UI Server processes as can be served by the class of compute node on which that container is deployed. In this way there is no need to start a new process in a running instance, since the processes are already running. The marginal cost of an "idle" UI Server process is a small amount of memory, so there is no downside to starting it; the customer-facing load balancing and proxy infrastructure can then be configured to use the right number of processes to serve the incoming load, or to take advantage of the full compute power of the node in question.
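For illustration, the ui scenario later in this document starts a fixed number of UI Server processes (three in that example), each pointing at the same Market Database gateway:

# Excerpt from the "ui" scenario shown later in this document
ui_server_0:
  process: ui_server
  options:
    mdb_connections: gateway:mghost_mdb:13005
ui_server_1:
  process: ui_server
  options:
    mdb_connections: gateway:mghost_mdb:13005
ui_server_2:
  process: ui_server
  options:
    mdb_connections: gateway:mghost_mdb:13005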
Example Deployment
In order to illustrate the containerised deployment of MarketGrid across multiple nodes, this section of the document provides a concrete example of a deployment using Docker and, specifically, Docker Compose files.
Orchestration Technology
Fundamentally, MarketGrid is agnostic about which orchestration technology is used to deploy it in a containerised environment. Carta uses Kubernetes in its own production environment, and uses Docker during development to validate images and to perform Unit, Functional and Integration testing.
Prerequisites
In order to touch on all the facets of a deployment, this example uses separate virtual machines to simulate a deployment across different Availability Zones (providing additional resilience for the nodes being deployed), with real networking between the nodes to reflect the way networking is used in a real-world environment.
The presumption in this example is that there is a separate virtual machine for each node to be deployed, and that these nodes are on a shared local network.
Using Docker Compose
Docker Compose can be helpful if you want to use more complex configurations, especially those involving networking and volume setups.
In the example provided below, the command of each Docker Compose service starts a specific scenario that is relevant for that container.
N.B.
`services` in the Docker Compose file refer to containers, whereas `services` in the MarketGrid scenario files refer to the processes within MarketGrid that will start up when that scenario is selected.
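The compose file below also relies on a few environment variables for substitution (${RELEASE_TAG}, ${CONTAINER_HOSTNAME} and ${CLIENT}). A minimal sketch of providing these before running docker-compose, with placeholder values only, might be:

# Placeholder values only; the variable names are those referenced in the compose file below
export RELEASE_TAG=latest
export CONTAINER_HOSTNAME=mghost_main
export CLIENT=exampleclient

Alternatively, the same values can be placed in a .env file alongside the compose file.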
version: "3.9"
services:
mg-base:
image: registry.cartax.io/platform/meta/marketgrid:${RELEASE_TAG}
hostname: ${CONTAINER_HOSTNAME}
shm_size: 10gb
network_mode: "host"
volumes:
- type: bind
source: ./scenarios/${CLIENT}/
target: /opt/MarketGrid/scenarios/${CLIENT}/
- type: bind
source: ./logs/
target: /opt/MarketGrid/logs/
- type: bind
source: ./etc/${CLIENT}
target: /opt/MarketGrid/etc/
mg-standalone:
extends:
service: mg-base
command: mg start standalone
container_name: mg-standalone
volumes:
- type: bind
source: ./datasets/
target: /opt/MarketGrid/datasets/
- type: bind
source: ./kdb_licence/
target: /opt/q/licences/x.marketgrid.systems
- type: bind
source: ./data/
target: /opt/MarketGrid/data/
mg-main:
extends:
service: mg-standalone
command: mg start main
container_name: mg-main
mg-backup:
extends:
service: mg-standalone
command: mg start backup
container_name: mg-backup
mg-mdb:
extends:
service: mg-standalone
command: mg start mdb
container_name: mg-mdb
mg-interfaces:
extends:
service: mg-base
command: mg start interfaces
container_name: mg-interfaces
mg-ui-1:
extends:
service: mg-base
command: mg start ui
container_name: mg-ui-1
mg-ui-2:
extends:
service: mg-base
command: mg start ui
container_name: mg-ui-2
Example scenarios
Here are some examples of the scenarios referenced in the commands in the compose file above.
main
name: main
description: Main scenario
services:
# Core matching engine
matching_engine:
process: te_engine
options:
machinemode: Main
mainenginelistenaddr: 0.0.0.0
governorconnectaddr: localhost
logstreamerlistenaddr: 0.0.0.0
logstreamer: 1
backuptimeout: 10
autoload: true
writecacheonsigterm: true
load: datasets/${CLIENT}/${CLIENT_DATASET}
logstreamermaxsend: 10000
logstreamerdefaultsend: 1000
max_recs:
Account: 1000
Auction: 1
AuctionAccount: 1
AuctionSession: 1
Broadcast: 1000000
Enterprise: 100
Firm: 100
Holding: 50000
Holding_change: 100000
Level1_change: 100000
HoldingTransaction: 1000
Industry: 10
Instrument: 200
InstrumentGroup: 5
InstrumentMarket: 500
Market: 5
Order: 10000
RFQ: 1
ScheduledTransactions: 1
Sector: 5
Trade: 20000
User: 1000
Blotter: 1
GroupUser: 10000
Position: 1
Position_change: 1
PositionTransaction: 1
PositionTransaction_change: 1
TableCache: 200000
TransactionLog: 10000
BlobObject: 1000
BlobObject_change: 10000
# Governor process
governor:
process: te_governor
# Transaction server (optional, and can have multiple instances)
transaction_server:
process: te_tserver
backup
name: backup
description: Backup scenario
services:
# Core matching engine
matching_engine:
process: te_engine
options:
machinemode: Backup
mainengineconnectaddr: tcp://mghost_main:12010
governorconnectaddr: mghost_main
writecacheonsigterm: true
backuptimeout: 10
logstreamermaxsend: 10000
logstreamerdefaultsend: 1000
max_recs:
Account: 1000
Auction: 1
AuctionAccount: 1
AuctionSession: 1
Broadcast: 1000000
Enterprise: 100
Firm: 100
Holding: 50000
Holding_change: 100000
Level1_change: 100000
HoldingTransaction: 1000
Industry: 10
Instrument: 200
InstrumentGroup: 5
InstrumentMarket: 500
Market: 5
Order: 10000
RFQ: 1
ScheduledTransactions: 1
Sector: 5
Trade: 20000
User: 1000
Blotter: 1
GroupUser: 10000
Position: 1
Position_change: 1
PositionTransaction: 1
PositionTransaction_change: 1
TableCache: 200000
TransactionLog: 10000
BlobObject: 1000
BlobObject_change: 10000
# Transaction server (optional, and can have multiple instances)
transaction_server:
process: te_tserver
standalone (for dev/testing)
name: standalone
description: Standalone scenario
services:
# Core matching engine
matching_engine:
process: te_engine
options:
machinemode: Standalone
# To run in demo mode (optional, comment out if not wanted)
demomode: NoPasswords
# To restart the engine from previous transaction logs set
# the autoload + writecacheonsigterm options to true
# Or comment out if starting from dataset each time
autoload: true
writecacheonsigterm: true
load: datasets/${CLIENT}/${CLIENT_DATASET}
max_recs:
Account: 1000
Auction: 1
AuctionAccount: 1
AuctionSession: 1
Broadcast: 1000000
Enterprise: 100
Firm: 100
Holding: 50000
Holding_change: 100000
Level1_change: 100000
HoldingTransaction: 1000
Industry: 10
Instrument: 200
InstrumentGroup: 5
InstrumentMarket: 500
Market: 5
Order: 10000
RFQ: 1
ScheduledTransactions: 1
Sector: 5
Trade: 20000
User: 1000
Blotter: 1
GroupUser: 10000
Position: 1
Position_change: 1
PositionTransaction: 1
PositionTransaction_change: 1
TableCache: 200000
TransactionLog: 10000
BlobObject: 1000
BlobObject_change: 10000
# Transaction server
transaction_server:
process: te_tserver
# UI services
ui_server:
process: ui_server
options:
mdb_connections: gateway:localhost:13005
nginx:
process: nginx
options:
branding: ${CLIENT_BRANDING}
# kdb+ services
discovery:
process: mdb_discovery
rdb:
process: mdb_rdb
policies:
on_scenario_stop: DoNotSignal
alert_engine:
process: mdb_alertengine
gateway:
process: mdb_gateway
hdb:
process: mdb_hdb
reporter:
process: mdb_reporter
ui
name: ui
description: UI node scenario
services:
# Core matching engine
matching_engine:
process: te_engine
options:
machinemode: Replica
engineconnectaddr1: mghost_main:12001
upstreamconnectaddr1: mghost_main:12021
logstreamer: 1001
logstreamermaxsend: 10000
logstreamerdefaultsend: 1000
max_recs:
Account: 1000
Auction: 1
AuctionAccount: 1
AuctionSession: 1
Broadcast: 1000000
Enterprise: 100
Firm: 100
Holding: 50000
Holding_change: 100000
Level1_change: 100000
HoldingTransaction: 1000
Industry: 10
Instrument: 200
InstrumentGroup: 5
InstrumentMarket: 500
Market: 5
Order: 10000
RFQ: 1
ScheduledTransactions: 1
Sector: 5
Trade: 20000
User: 1000
Blotter: 1
GroupUser: 10000
Position: 1
Position_change: 1
PositionTransaction: 1
PositionTransaction_change: 1
TableCache: 200000
TransactionLog: 10000
BlobObject: 1000
BlobObject_change: 10000
# Transaction server
transaction_server:
process: te_tserver
options:
engineaddr: mghost_main:12001
engineaddr: mghost_bu:12001
# Snapshot server (enhances performance of OrderBook view in UI)
snapshot_server:
process: te_snapshots
options:
type: OrderBook
# UI services
ui_server_0:
process: ui_server
options:
mdb_connections: gateway:mghost_mdb:13005
ui_server_1:
process: ui_server
options:
mdb_connections: gateway:mghost_mdb:13005
ui_server_2:
process: ui_server
options:
mdb_connections: gateway:mghost_mdb:13005
nginx:
process: nginx
options:
branding: ${CLIENT_BRANDING}
interfaces
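The interfaces scenario is not reproduced here, since the external interface processes depend on the interfaces deployed for a given client. A minimal sketch, following the same Replica pattern as the ui and mdb scenarios (with max_recs and the interface services themselves omitted), might look like:

name: interfaces
description: External interface node scenario
services:
  # Core matching engine (sketch only; max_recs omitted for brevity, see the other scenarios)
  matching_engine:
    process: te_engine
    options:
      machinemode: Replica
      engineconnectaddr1: mghost_main:12001
      upstreamconnectaddr1: mghost_main:12021
      logstreamer: 1001
  # Transaction server
  transaction_server:
    process: te_tserver
  # ...the external interface services for this deployment would be configured here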
mdb
name: mdb
description: Market Database node scenario
services:
# Core matching engine
matching_engine:
process: te_engine
options:
machinemode: Replica
engineconnectaddr1: mghost_main:12001
upstreamconnectaddr1: mghost_main:12021
logstreamer: 1001
max_recs:
Account: 1000
Auction: 1
AuctionAccount: 1
AuctionSession: 1
Broadcast: 1000000
Enterprise: 100
Firm: 100
Holding: 50000
Holding_change: 100000
Level1_change: 100000
HoldingTransaction: 1000
Industry: 10
Instrument: 200
InstrumentGroup: 5
InstrumentMarket: 500
Market: 5
Order: 10000
RFQ: 1
ScheduledTransactions: 1
Sector: 5
Trade: 20000
User: 1000
Blotter: 1
GroupUser: 10000
Position: 1
Position_change: 1
PositionTransaction: 1
PositionTransaction_change: 1
TableCache: 200000
TransactionLog: 10000
BlobObject: 1000
BlobObject_change: 10000
# Transaction server
transaction_server:
process: te_tserver
options:
engineaddr: mghost_main:12001
engineaddr: mghost_bu:12001
# kdb+ services
discovery:
process: mdb_discovery
rdb:
process: mdb_rdb
policies:
on_scenario_stop: DoNotSignal
alert_engine:
process: mdb_alertengine
transaction_server: localhost:12001
gateway:
process: mdb_gateway
hdb:
process: mdb_hdb
reporter:
process: mdb_reporter
Starting MarketGrid with docker-compose
To test a deployment of MarketGrid that runs across multiple nodes, each node will need a copy of the docker-compose file and the scenario file appropriate to that node. Note that there will only be one container running per node, although that container may have multiple MarketGrid services running inside it.
To start up a container on a node, run the following command:
docker-compose -f docker-compose.yaml up ${SERVICE_NAME}
where ${SERVICE_NAME} is the name of the service (container) in the docker-compose file that should be started on this node.
For example, one could start up the mg-standalone container on a single node, or start up mg-main, mg-backup, and mg-ui-1 on 3 separate nodes.
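For the multi-node case, the commands run on each node would be, for example:

# On the node hosting the Main engine
docker-compose -f docker-compose.yaml up mg-main

# On the node hosting the Backup engine
docker-compose -f docker-compose.yaml up mg-backup

# On the node hosting the first UI Replica
docker-compose -f docker-compose.yaml up mg-ui-1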
The scenarios for these types of node include the host names of the other nodes so that the services can communicate with each other.