Prepare nodes for on-premises deployment

For on-premises deployments of YugabyteDB universes, you need to import nodes that can be managed by YugabyteDB Anywhere.

Prepare ports

The following ports must be opened for intra-cluster communication (they do not need to be exposed to your application, only to other nodes in the cluster and the YugabyteDB Anywhere node):

  • 7100 - YB-Master RPC
  • 9100 - YB-TServer RPC
  • 18018 - YB Controller

The following ports must also be open for intra-cluster communication, and should be exposed to administrators or users monitoring the system, as they provide diagnostic, troubleshooting, and metrics endpoints:

  • 9300 - Prometheus metrics
  • 7000 - YB-Master HTTP endpoint
  • 9000 - YB-TServer HTTP endpoint
  • 11000 - YEDIS API
  • 12000 - YCQL API
  • 13000 - YSQL API
  • 54422 - Custom SSH

The following ports must be exposed for intra-node communication and be available to your application or any user attempting to connect to YugabyteDB:

  • 5433 - YSQL server
  • 9042 - YCQL server
  • 6379 - YEDIS server

For more information on ports used by YugabyteDB, refer to Default ports.
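
You can spot-check that the required ports are reachable from another node before proceeding. The following is a minimal sketch that uses bash's built-in /dev/tcp; the node address and the port list are placeholders you should replace with your own values:

    # Check TCP reachability of selected ports from another node in the cluster.
    # Replace 10.0.0.5 with the address of the node you are verifying.
    NODE_IP=10.0.0.5
    for port in 7100 9100 18018 9300 7000 9000 5433 9042; do
      if timeout 3 bash -c "</dev/tcp/${NODE_IP}/${port}" 2>/dev/null; then
        echo "port ${port}: open"
      else
        echo "port ${port}: closed or filtered"
      fi
    done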

Prepare nodes

You can prepare nodes for on-premises deployment, as follows:

  1. Ensure that the YugabyteDB nodes conform to the requirements outlined in the deployment checklist, which also provides recommended instance types across public clouds.

  2. Install the prerequisites and verify the system resource limits, as described in system configuration.

  3. Ensure that you have SSH access to the server and root access (or the ability to run sudo; the sudo user can require a password, but passwordless access is preferable for simplicity and ease of use).

  4. Execute the following command to verify that you can ssh into this node (from your local machine if the node has a public address):

    ssh -i your_private_key.pem ssh_user@node_ip
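
    You can also confirm sudo access over the same connection. The following is a quick check; the -n option makes sudo fail instead of prompting, which distinguishes passwordless sudo from password-based sudo:

    # Verify sudo access on the node; -n makes sudo fail rather than prompt for a password.
    ssh -i your_private_key.pem ssh_user@node_ip 'sudo -n true && echo "passwordless sudo OK" || echo "sudo requires a password or is not permitted"'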
    

The following actions are performed with sudo access:

  • Create the yugabyte:yugabyte user and group.

  • Set the home directory to /home/yugabyte.

  • Create the prometheus:prometheus user and group.

    Tip

    If you are using an LDAP directory to manage system users, you can preprovision the Yugabyte and Prometheus users, as follows:

    • Ensure that the yugabyte user belongs to the yugabyte group.

    • Set the home directory for the yugabyte user (default /home/yugabyte) and ensure that the directory is owned by yugabyte:yugabyte. The home directory is used during cloud provider configuration.

    • The Prometheus user and group names can be user-defined. You enter the custom user during cloud provider configuration.
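
    If you are creating these accounts manually on the node (rather than through LDAP), the following is a minimal sketch using standard shadow-utils commands; it reflects the default names and home directory described above, so adjust it if you use custom values:

    # Preprovision the yugabyte and prometheus users and groups (defaults shown).
    sudo groupadd yugabyte
    sudo useradd --gid yugabyte --home-dir /home/yugabyte --create-home yugabyte
    sudo groupadd prometheus
    sudo useradd --gid prometheus prometheus
    # Ensure the yugabyte home directory is owned by yugabyte:yugabyte.
    sudo chown -R yugabyte:yugabyte /home/yugabyte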

  • Ensure that you can schedule Cron jobs with Crontab. Cron jobs are used for health monitoring, log file rotation, and cleanup of system core files.

    Tip

    If you use a third-party Cron scheduling tool, you can disable Crontab and add the following Cron entries:

    # Ansible: cleanup core files hourly
    0 * * * * /home/yugabyte/bin/clean_cores.sh
    # Ansible: cleanup yb log files hourly
    5 * * * * /home/yugabyte/bin/zip_purge_yb_logs.sh
    # Ansible: Check liveness of master
    */1 * * * * /home/yugabyte/bin/yb-server-ctl.sh master cron-check || /home/yugabyte/bin/yb-server-ctl.sh master start
    # Ansible: Check liveness of tserver
    */1 * * * * /home/yugabyte/bin/yb-server-ctl.sh tserver cron-check || /home/yugabyte/bin/yb-server-ctl.sh tserver start
    

    Disabling Crontab results in alerts after the universe is created, but these can be ignored. You do need to ensure that Cron jobs are set up appropriately for YugabyteDB Anywhere to function as expected.
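
    A quick way to confirm that the yugabyte user can schedule Cron jobs is to install, list, and then remove a trivial Crontab entry; this is only a sketch of such a check:

    # Confirm that Crontab is usable by the yugabyte user.
    echo "* * * * * /bin/true" | sudo -u yugabyte crontab -
    sudo -u yugabyte crontab -l
    # Remove the test entry (this clears the yugabyte user's crontab).
    sudo -u yugabyte crontab -r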

  • Verify that Python 2.7 is installed.
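
    For example (Python 2 prints its version to stderr):

    python --version 2>&1
    # Expected output is similar to: Python 2.7.x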

  • Enable core dumps and set ulimits, as follows:

    *       hard        core        unlimited
    *       soft        core        unlimited
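
    One way to apply these limits, assuming a pam_limits-based setup, is to append them to a file under /etc/security/limits.d/ (the file name below is only an example) and then verify them in a new login session:

    # Persist the core file limits (example file name).
    echo '*       hard        core        unlimited' | sudo tee -a /etc/security/limits.d/99-yb-core.conf
    echo '*       soft        core        unlimited' | sudo tee -a /etc/security/limits.d/99-yb-core.conf
    # In a new login shell, verify the soft core limit; expect "unlimited".
    ulimit -c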
    
  • Configure SSH, as follows:

    • Disable sshguard.
    • Set UseDNS no in /etc/ssh/sshd_config (this disables reverse DNS lookup during authentication; DNS itself remains usable).
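
    For example, on a systemd-based host (service names can vary by distribution):

    # Disable sshguard if it is installed; skip this if the service does not exist.
    sudo systemctl disable --now sshguard
    # Disable reverse DNS lookup during SSH authentication.
    echo 'UseDNS no' | sudo tee -a /etc/ssh/sshd_config
    # Restart the SSH daemon (the service may be named ssh on some distributions).
    sudo systemctl restart sshd
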
  • Set vm.swappiness to 0.

  • Set mount path permissions to 0755.
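
    For example, assuming /data is one of your mount paths (replace it with your actual mount points):

    # Disable swapping now and persist the setting across reboots.
    sudo sysctl -w vm.swappiness=0
    echo 'vm.swappiness=0' | sudo tee -a /etc/sysctl.conf
    # Set permissions on the data mount path.
    sudo chmod 0755 /data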

Note

By default, YugabyteDB Anywhere uses OpenSSH for SSH to remote nodes. YugabyteDB Anywhere also supports Tectia SSH, which is based on the latest SSH G3 protocol. For more information, see Enable Tectia SSH.

Enable Tectia SSH

Tectia SSH is used for secure file transfer, secure remote access, and tunneling. YugabyteDB Anywhere is shipped with a trial version of the Tectia SSH client that requires a license. Uploading the license notifies YugabyteDB Anywhere to permanently use Tectia instead of OpenSSH.

To upload the Tectia license, manually copy it to ${storage_path}/yugaware/data/licenses/<license.txt>, where storage_path is the path provided during the Replicated installation.

After the license is uploaded, YugabyteDB Anywhere exposes the runtime flag yb.security.ssh2_enabled, which you need to enable, as per the following example:

curl --location --request PUT 'http://<ip>/api/v1/customers/<customer_uuid>/runtime_config/00000000-0000-0000-0000-000000000000/key/yb.security.ssh2_enabled' \
--header 'Cookie: <Cookie>' \
--header 'X-AUTH-TOKEN: <token>' \
--header 'Csrf-Token: <csrf-token>' \
--header 'Content-Type: text/plain' \
--data-raw '"true"'