
Monitor Tessera

You can use Tessera with InfluxDB or Prometheus time-series databases to record API usage metrics. You can visualize the recorded data using an existing dashboard tool such as Grafana.

In addition, you can search, analyze, and monitor Tessera logs using Splunk or Elastic Stack (ELK). You can set up Splunk such that the logs for multiple Tessera nodes in a network are accessible from a single centralized Splunk instance.

Record API metrics

Tessera can record the following usage metrics for each endpoint of its API:

  • Average response time
  • Maximum response time
  • Minimum response time
  • Request count
  • Requests per second

You can store these metrics in an InfluxDB or Prometheus time-series database for further analysis.

  • Use InfluxDB when you prefer metrics to be "pushed" from Tessera to the database: Tessera starts a service that periodically writes the latest metrics to the database by calling the database's API.
  • Use Prometheus when you prefer metrics to be "pulled" from Tessera by the database: Tessera exposes a /metrics API endpoint, which the database periodically calls to fetch the latest metrics.

Both databases integrate well with the open source dashboard editor Grafana, allowing easy creation of dashboards to visualize the data captured from Tessera.
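For instance, with the pull model you can inspect what Tessera exposes by requesting the /metrics endpoint yourself. The address below is an assumption; use the serverAddress from your own configuration:

```shell
# Fetch the current metrics in Prometheus exposition format
# (assumes a Tessera server listening on http://localhost:9001).
curl http://localhost:9001/metrics
```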

InfluxDB

The InfluxDB documentation provides details on how to set up an InfluxDB database ready for use with Tessera. A summary of the steps is as follows:

  1. Install InfluxDB.

  2. Start the InfluxDB server:

    influxd -config /path/to/influx.conf

    For local development and testing, the default configuration file (Linux: /etc/influxdb/influxdb.conf, macOS: /usr/local/etc/influxdb.conf) is sufficient. For further configuration options see Configuring InfluxDB.

  3. Connect to the InfluxDB server using the influx CLI and create a new database. If using the default configuration, this is as follows:

    influx
    > CREATE DATABASE myDb
  4. To view data stored in the database, use the Influx Query Language.

    influx
    > USE myDb
    > SHOW MEASUREMENTS
    > SELECT * FROM <measurement>

You can call the InfluxDB HTTP API directly as an alternative to using the influx CLI.
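As a sketch, assuming the default configuration and the myDb database created above, a query through the InfluxDB 1.x HTTP API looks like:

```shell
# Query the InfluxDB HTTP API (assumes the server from the steps above
# at http://localhost:8086 and the myDb database).
curl -G 'http://localhost:8086/query' \
  --data-urlencode "db=myDb" \
  --data-urlencode "q=SHOW MEASUREMENTS"
```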

You can optionally configure each Tessera server type (for example, P2P, Q2T, ADMIN, THIRDPARTY, ENCLAVE) to store API metrics in an InfluxDB database. Servers can share a single database or write to separate ones.

To configure a server to use an InfluxDB, add influxConfig to the server configuration. For example:

"serverConfigs": [
{
"app":"Q2T",
"serverAddress":"unix:/path/to/tm.ipc",
"influxConfig": {
"serverAddress": "https://localhost:8086", // InfluxDB server address
"dbName": "myDb", // InfluxDB DB name (DB must already exist)
"pushIntervalInSecs": 15, // How frequently Tessera will push new metrics to the DB
"sslConfig": { // Config required if InfluxDB server is using TLS
"tls": "STRICT",
"sslConfigType": "CLIENT_ONLY",
"clientTrustMode": "CA",
"clientTrustStore": "/path/to/truststore.jks",
"clientTrustStorePassword": "password",
"clientKeyStore": "path/to/truststore.jks",
"clientKeyStorePassword": "password"
}
}
},
{
"app":"P2P",
"serverAddress":"http://localhost:9001",
"influxConfig": {
"serverAddress": "http://localhost:8087",
"dbName": "anotherDb",
"pushIntervalInSecs": 15
}
}
]

InfluxDB TLS configuration

InfluxDB supports one-way TLS, which allows clients to verify the identity of the InfluxDB server and encrypts data in transit.

See Enabling HTTPS with InfluxDB for details on how to secure an InfluxDB server with TLS. A summary of the steps is as follows:

  1. Obtain a CA/self-signed certificate and key (either as separate .crt and .key files or as a combined .pem file).

  2. Enable HTTPS in influx.conf:

    # Determines whether HTTPS is enabled.
    https-enabled = true

    # The SSL certificate to use when HTTPS is enabled.
    https-certificate = "/path/to/certAndKey.pem"

    # The private key to use; if using a combined .pem file, point to the same file.
    https-private-key = "/path/to/certAndKey.pem"
  3. Restart the InfluxDB server to apply the configuration changes.

    To allow Tessera to communicate with a TLS-secured InfluxDB, you must provide sslConfig in the configuration file. Configure Tessera as the client in one-way TLS:

    "sslConfig": {
    "tls": "STRICT",
    "sslConfigType": "CLIENT_ONLY",
    "clientTrustMode": "CA",
    "clientTrustStore": "/path/to/truststore.jks",
    "clientTrustStorePassword": "password",
    "clientKeyStore": "path/to/truststore.jks",
    "clientKeyStorePassword": "password",
    "environmentVariablePrefix": "INFLUX"
    }

    where truststore.jks is a Java keystore format file containing the trusted certificates for the Tessera client (for example, the certificate of the CA used to create the InfluxDB certificate).

    If the trust store is secured with a password, you must provide that password. Passwords can be provided either in the configuration (for example, clientTrustStorePassword) or as environment variables (by setting environmentVariablePrefix and defining <PREFIX>_TESSERA_CLIENT_TRUSTSTORE_PWD). The TLS configuration documentation explains this in more detail.

    Because Tessera's TLS support expects two-way TLS, you must also provide a .jks file for clientKeyStore. Since InfluxDB uses one-way TLS, this key store is never used, so it can simply be set to the same file as the trust store.
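As an illustrative sketch, such a trust store can be created with the JDK's keytool; the file paths and the alias here are assumptions:

```shell
# Import the CA certificate used to sign the InfluxDB certificate
# into a Java trust store (creates truststore.jks if it does not exist).
keytool -importcert -alias influxdb-ca -file /path/to/ca.crt \
  -keystore /path/to/truststore.jks -storepass password -noprompt
```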

Prometheus

The Prometheus documentation describes how to set up Prometheus to integrate with Tessera; the Prometheus First steps guide is a good starting point. A summary of the steps to store Tessera metrics in a Prometheus database is as follows:

  1. Install Prometheus.

  2. Create a prometheus.yml configuration file that tells Prometheus how to pull metrics from Tessera. An example Prometheus configuration is available in the Quorum Developer Quickstart.

  3. Start Tessera. Tessera always exposes the metrics endpoint, so no additional configuration of Tessera is required.

  4. Start Prometheus:

    prometheus --config.file=prometheus.yml
  5. To view data stored in the database, access the Prometheus UI (by default http://localhost:9090; the listen address can be changed with Prometheus's --web.listen-address flag) and use the Prometheus Query Language (PromQL).
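As an illustration, a minimal prometheus.yml for scraping a single Tessera node might look like the following; the job name and target address are assumptions, so substitute the host and port of your own Tessera server (Prometheus requests the /metrics path by default):

```yaml
global:
  scrape_interval: 15s   # How frequently Prometheus pulls metrics

scrape_configs:
  - job_name: tessera    # Arbitrary label for this group of targets
    static_configs:
      - targets: ['localhost:9001']   # Tessera server exposing /metrics
```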

Grafana

You can import a pre-built GoQuorum Grafana dashboard to visualize your recorded GoQuorum network data.

Monitor logs

You can search, analyze, and monitor the logs of Tessera nodes using Splunk or Elastic Stack (ELK).

Splunk

Set up Splunk and Splunk Universal Forwarders to consolidate the logs from multiple Tessera nodes. The Splunk documentation on Universal Forwarders and on receiving forwarded data is a good starting point for understanding how to achieve this.

The general steps to consolidate the logs for a Tessera network in Splunk are:

  1. Set up a central Splunk instance if one does not already exist. Typically this runs on a host separate from the hosts running the Tessera nodes. This instance is known as the Receiver.

  2. Configure the Tessera hosts to forward their nodes' logs to the Receiver by:

    1. Configuring the format and output location of the node's logs. This is achieved by configuring Logback (the logging framework used by Tessera) at node start-up.

      The following example XML configures Logback to save Tessera's logs to a file. See the Logback documentation for more information on configuring Logback:

      <?xml version="1.0" encoding="UTF-8"?>
      <configuration>
        <appender name="FILE" class="ch.qos.logback.core.FileAppender">
          <file>/path/to/file.log</file>
          <encoder>
            <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
          </encoder>
        </appender>

        <logger name="org.glassfish.jersey.internal.inject.Providers" level="ERROR" />
        <logger name="org.hibernate.validator.internal.util.Version" level="ERROR" />
        <logger name="org.hibernate.validator.internal.engine.ConfigurationImpl" level="ERROR" />

        <root level="INFO">
          <appender-ref ref="FILE"/>
        </root>
      </configuration>

      To start Tessera with an XML configuration file:

      tessera -Dlogback.configurationFile=/path/to/logback-config.xml -configfile /path/to/config.json
    2. Set up Splunk Universal Forwarders (lightweight Splunk clients) on each Tessera host to forward log data for its node to the Receiver.

    3. Set up the Splunk Receiver to listen and receive logging data from the Universal Forwarders.
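As a sketch of steps 2 and 3 using the Splunk CLI, assuming the Receiver listens on the conventional forwarding port 9997 and splunk.example.com is a placeholder hostname:

```shell
# On the Receiver: listen for data from Universal Forwarders on port 9997.
splunk enable listen 9997

# On each Tessera host: point the Universal Forwarder at the Receiver
# and monitor the node's log file (the path from the Logback config above).
splunk add forward-server splunk.example.com:9997
splunk add monitor /path/to/file.log
```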

Elastic Stack

Follow the Quorum Developer Quickstart to use Elastic Stack (ELK) to manage Tessera logs.