MTG KMS Server Package Dependencies
To verify the integrity of the packages, a corresponding SHA-256 checksum as well as a PGP signature is delivered for each package.
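The verification can be done with standard tools. A sketch using a stand-in file (in practice, the delivered package and its accompanying checksum and signature files are used; their exact names depend on the delivery):

```shell
# Stand-in package file; replace with the delivered package
echo "package contents" > package.tar.gz
sha256sum package.tar.gz > package.tar.gz.sha256

# Verify the SHA-256 checksum of the package
sha256sum -c package.tar.gz.sha256

# Verify the PGP signature (requires MTG's public key in the local keyring):
# gpg --verify package.tar.gz.asc package.tar.gz
```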
MTG KMS Server
Package name: mtg-kms-server
For detailed instructions on installation preparation, installation, and Apache configuration, please refer to the Related Links section at the end of this page.
Hardware Security Module (HSM)
For supported HSMs and extended instructions please refer to this page.
Database
MTG-KMS uses a database for storing its configuration and user data.
The underlying database system has to be provided and managed by the customer, i.e. it is not part of the MTG-KMS software. The database system must be configured to accept JDBC connections from MTG-KMS to its database schema.
MTG provides the RDBMS-specific database schema installation scripts inside the mtg-kms-server package. Depending on the customer agreement, MTG provides schema installation scripts for the following database management systems:
- Oracle SQL
- PostgreSQL
- MariaDB
MariaDB
Set the default character set of the database to UTF8 (see below):
ALTER DATABASE <db_name> COLLATE = 'utf8_unicode_ci' CHARACTER SET = 'utf8';
The encoding of the database must always be utf8 with the collation utf8_unicode_ci. Currently, only MariaDB's utf8mb3 format is supported; some languages are not supported by this format.
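If the database is created from scratch, the required encoding can also be set directly. A minimal sketch, assuming the placeholder names kms_db and kms_user (the actual schema name, user, and password are defined by the customer):

```sql
-- Create the schema with the required encoding and collation up front
CREATE DATABASE kms_db CHARACTER SET = 'utf8' COLLATE = 'utf8_unicode_ci';

-- Create a dedicated user and allow JDBC connections from MTG-KMS
CREATE USER 'kms_user'@'%' IDENTIFIED BY 'change-me';
GRANT ALL PRIVILEGES ON kms_db.* TO 'kms_user'@'%';
```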
Galera Cluster with MariaDB
If a Galera Cluster is used, the option log_bin_trust_function_creators="ON" has to be set in the configuration file of the MySQL installation, because database triggers are used in connection with Flyway.
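A minimal sketch of the corresponding server configuration entry (the file location, e.g. /etc/mysql/my.cnf or a file under conf.d/, depends on the installation):

```ini
[mysqld]
# Required on Galera clusters because Flyway creates database triggers
log_bin_trust_function_creators = ON
```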
To ensure that a transaction is retried in the event of a deadlock exception in Quartz, the lock handler must be configured accordingly. This is done in the application.properties file with the following entry:
spring.quartz.properties.org.quartz.jobStore.lockHandler.class=org.quartz.impl.jdbcjobstore.UpdateLockRowSemaphore
Failover and Load-Balancing Modes
The following three failover and load balancing modes are supported for the Galera Cluster:
- sequential
  - This mode supports connection failover in a multi-master environment, such as MariaDB Galera Cluster.
  - This mode does not support load-balancing reads on replicas.
  - The connector will try to connect to hosts in the order in which they were declared in the connection URL, so the first available host is used for all queries. For example, with the connection URL jdbc:mariadb:sequential:host1,host2,host3/testdb, the connector will always try host1 first. If that host is not available, it will try host2, and so on. When a host fails, the connector will try to reconnect to the hosts in the same order.
- loadbalance
  - This mode supports connection load-balancing in a multi-master environment, such as MariaDB Galera Cluster.
  - This mode does not support load-balancing reads on replicas.
  - The connector performs load-balancing for all queries by randomly picking a host from the connection URL for each connection, so queries are load-balanced as a result of the connections being randomly distributed across all hosts.
- load-balance-read
  - When running a multi-master cluster (i.e. Galera), writing to more than one node can lead to optimistic locking errors ("deadlocks").
  - Writing concurrently to multiple nodes also does not bring a meaningful gain in performance, because all writes have to be (synchronously) replicated to all nodes anyway.
  - This mode supports connection failover in a multi-master environment, such as MariaDB Galera Cluster.
  - This mode does support load-balancing reads on replicas.
  - The connector will try to connect to primary hosts in the order in which they were declared in the connection URL, so the first available host is used for all queries. For example, with the connection URL jdbc:mariadb:load-balance-read:primary1,primary2,address=(host=replica1)(type=replica),address=(host=replica2)(type=replica)/DB, the connector will always try primary1 first. If that host is not available, it will try primary2, and so on. When a primary host fails, the connector will try to reconnect to the hosts in the same order.
  - For replica hosts, the connector performs load-balancing for all queries by randomly picking a replica host from the connection URL for each connection, so queries are load-balanced as a result of the connections being randomly distributed across all replica hosts.
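The mode is selected via the JDBC URL of the datasource. A configuration sketch, assuming a standard Spring Boot datasource entry in application.properties and placeholder host and database names:

```properties
# Sequential failover across three Galera nodes (placeholder names);
# replace 'sequential' with 'loadbalance' or 'load-balance-read' as required
spring.datasource.url=jdbc:mariadb:sequential:host1,host2,host3/kms_db
```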
Asynchronous communication between Cluster Nodes
Since a Galera cluster communicates asynchronously between its nodes by default, depending on the use case, the cluster variable wsrep_sync_wait may need to be set to 1 to avoid the error "No object with the specified Unique Identifier exists". Such a use case would be a write operation, e.g. Create, performed multiple times in parallel on different nodes, immediately followed by a read operation, e.g. Get or Locate. The Get operation could then be called on a node before the object has been synchronized to that node by Create.
Setting wsrep_sync_wait to 1 should be carefully considered, as it leads to performance losses.
In the case of loadbalance or load-balance-read, setting wsrep_sync_wait to the value 1 must always be considered.
In the case of the sequential load-balancing mode, this value is only necessary if a node fails and is later put back into operation: due to the failure, another node takes over the connections, and these remain in place even after the failed node is brought back. Two nodes are then in use, and the effect described above may occur. To avoid this, wsrep_sync_wait can be set to the value 1, or it can be ensured that the failed node is restarted together with all other nodes while there is no traffic at all.
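A minimal sketch of setting the variable persistently in the server configuration (the file path depends on the installation); alternatively, it can be set at runtime with SET GLOBAL wsrep_sync_wait = 1;

```ini
[mysqld]
# Make reads wait for cluster-wide synchronization
# (trade-off: additional read latency)
wsrep_sync_wait = 1
```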
DMZ
If the KMS-Tenant application is reachable from the internet, it is possible to use a separate Apache web server inside a DMZ. This Apache web server acts as a reverse proxy and forwards requests to the KMS-Tenant application in the back end. For a more specific description, please contact the MTG Support Team.
MTG Secrets Protection Manager
The MTG-KMS uses the MTG Secrets Protection Manager workflow for encrypting application-specific data.
It is therefore a mandatory prerequisite to run the init command of the MTG Secrets Protection Manager CLI tool before configuring the following applications:
- KMS-Server
The content of the secrets-protection.properties file, which is automatically generated by running the init command, should be copied into each MTG application's application.properties file.
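As an illustration of the copy step (file paths are placeholders, and the generated content shown is a stand-in, not the real output of the init command):

```shell
# Stand-in for the file generated by the Secrets Protection Manager init command
printf 'example.generated.key=example-value\n' > secrets-protection.properties

# Append the generated properties to the application's configuration
cat secrets-protection.properties >> application.properties
```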