In this article we will configure Apache Tomcat (on system1) and MySQL (on system2) in a two-node Red Hat Cluster Suite (RHCS) high availability (HA) cluster, so that the application and database can switch over automatically with minimal downtime.
RHCS high availability clusters eliminate single points of failure. In our case Apache Tomcat runs on node 1 and MySQL on node 2; if either node becomes inoperative, its service starts up again (fails over) on the other cluster node with minimal interruption and without data loss.
System Requirements
There are many different ways of setting up a high availability cluster. For our setup, we use the following components:
1. Two cluster nodes - Two machines installed with Red Hat Enterprise Linux Server 6.3 to act as cluster nodes.
2. Network/local storage - Shared network storage or local storage is required. In our case we use additional 5 GB disks on both nodes, one for Apache Tomcat and the other for the MySQL server.
If one node goes down, the same LUN becomes available on the other node with the data intact.
3. Red Hat products - The setup combines components from Red Hat Enterprise Linux and Red Hat Cluster Suite.
4. Apache Tomcat - We will install Apache Tomcat from an RPM.
We have two RHEL 6.3 installations with the hostnames system1.eassylinux.com and system2.eassylinux.com to build the HA cluster. We have to disable the SELinux policy and flush iptables for this setup.
Configure the IP address on both cluster nodes:
![](https://static.wixstatic.com/media/986ac3_abc37db566eb48349f857e98c2ca23b4~mv2.png/v1/fill/w_980,h_243,al_c,q_85,usm_0.66_1.00_0.01,enc_avif,quality_auto/986ac3_abc37db566eb48349f857e98c2ca23b4~mv2.png)
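For reference, a static address on RHEL 6 is normally configured in /etc/sysconfig/network-scripts/ifcfg-eth0. The addresses below are only placeholders (this guide only gives the virtual IPs 192.168.43.121/122 used later), so substitute the real node addresses shown in the screenshot above:

DEVICE=eth0
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.43.101
NETMASK=255.255.255.0
GATEWAY=192.168.43.1

[root@system1 ~]# service network restart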
Flush and disable iptables, and disable the SELinux policy, on both nodes:
![](https://static.wixstatic.com/media/986ac3_6cf33d5bee334d15a6763c4dea83dd63~mv2.png/v1/fill/w_980,h_412,al_c,q_90,usm_0.66_1.00_0.01,enc_avif,quality_auto/986ac3_6cf33d5bee334d15a6763c4dea83dd63~mv2.png)
![](https://static.wixstatic.com/media/986ac3_0dcddf5df7334123a883178ab5d15101~mv2.png/v1/fill/w_980,h_216,al_c,q_85,usm_0.66_1.00_0.01,enc_avif,quality_auto/986ac3_0dcddf5df7334123a883178ab5d15101~mv2.png)
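In command form, flushing and disabling iptables on RHEL 6 typically looks like this (a sketch of the usual procedure, run on both nodes, not copied from the screenshots):

[root@system1 ~]# iptables -F                # flush all current firewall rules
[root@system1 ~]# service iptables stop      # stop the firewall service
[root@system1 ~]# chkconfig iptables off     # keep it disabled after reboot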
Change the SELinux mode to disabled in the configuration file /etc/sysconfig/selinux. For the system to pick up the disabled policy, both nodes must be rebooted.
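One quick way to make that change and verify it after the reboot (assuming the file still contains the stock SELINUX= line):

[root@system1 ~]# sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/sysconfig/selinux
[root@system1 ~]# reboot
[root@system1 ~]# getenforce     # after the reboot this should report Disabled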
After the reboot, you can see that the SELinux policy is in the disabled state:
![](https://static.wixstatic.com/media/986ac3_f35263bac38b4af48beddcc8927c3598~mv2.png/v1/fill/w_908,h_79,al_c,q_85,enc_avif,quality_auto/986ac3_f35263bac38b4af48beddcc8927c3598~mv2.png)
Add host entries to the /etc/hosts file on both nodes so that each node can reach the other by hostname as well as by IP address:
![](https://static.wixstatic.com/media/986ac3_7dcbcee9cebe45ef91d0c972ab877b5e~mv2.png/v1/fill/w_980,h_120,al_c,q_85,usm_0.66_1.00_0.01,enc_avif,quality_auto/986ac3_7dcbcee9cebe45ef91d0c972ab877b5e~mv2.png)
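The entries look roughly like this (the node IP addresses are placeholders; use the addresses actually configured on your nodes), and the same two lines go into /etc/hosts on both nodes:

192.168.43.101   system1.eassylinux.com   system1
192.168.43.102   system2.eassylinux.com   system2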
Now check the communication between the two nodes using their hostnames:
![](https://static.wixstatic.com/media/986ac3_a53d38a0ba2a4f5ca56f70abd1e43051~mv2.png/v1/fill/w_980,h_421,al_c,q_90,usm_0.66_1.00_0.01,enc_avif,quality_auto/986ac3_a53d38a0ba2a4f5ca56f70abd1e43051~mv2.png)
Both nodes can successfully ping each other by hostname and by IP address.
Configure block storage:
We need two additional local LUNs, one for Apache Tomcat and one for MySQL.
We therefore add two additional 5 GB disks on each node, one for Apache Tomcat and one for MySQL, and will create logical volumes on them for data storage.
As the image below shows, the 5 GB local LUNs have been added successfully on both nodes.
![](https://static.wixstatic.com/media/986ac3_debedd3cca554810878d961366d559e5~mv2.png/v1/fill/w_980,h_151,al_c,q_85,usm_0.66_1.00_0.01,enc_avif,quality_auto/986ac3_debedd3cca554810878d961366d559e5~mv2.png)
The new 5 GB disks appear as sdb and sdc. Now we are going to create LVM partitions on them.
Create LVM Partition:
Create the logical volumes and file systems using standard LVM2 and file system commands. This example assumes the whole additional disks (/dev/sdb and /dev/sdc) are used as new LVM physical volumes. Here we use fdisk to create partitions of the LVM (8e) type and run partprobe to make sure the change is synced with the kernel:
LUN partition configuration:
Now we are going to create partition for /dev/sdb on both nodes.
![](https://static.wixstatic.com/media/986ac3_f31749a128be4b15a22dc453d87d9221~mv2.png/v1/fill/w_980,h_441,al_c,q_90,usm_0.66_1.00_0.01,enc_avif,quality_auto/986ac3_f31749a128be4b15a22dc453d87d9221~mv2.png)
Similarly, we are going to create partition for /dev/sdc on both nodes.
![](https://static.wixstatic.com/media/986ac3_801a735bc2774f21a93a7639e89faadc~mv2.png/v1/fill/w_980,h_458,al_c,q_90,usm_0.66_1.00_0.01,enc_avif,quality_auto/986ac3_801a735bc2774f21a93a7639e89faadc~mv2.png)
We have created the partitions /dev/sdb1 and /dev/sdc1 from the disks /dev/sdb and /dev/sdc.
![](https://static.wixstatic.com/media/986ac3_6a66eeb34cca44cf8a2ea974be89a6d6~mv2.png/v1/fill/w_980,h_185,al_c,q_85,usm_0.66_1.00_0.01,enc_avif,quality_auto/986ac3_6a66eeb34cca44cf8a2ea974be89a6d6~mv2.png)
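For completeness, the interactive fdisk session shown in the screenshots boils down to roughly the following keystrokes (repeat for /dev/sdc, and on both nodes):

[root@system1 ~]# fdisk /dev/sdb
   n        # create a new partition
   p        # make it a primary partition, number 1, accepting the default start/end
   t        # change the partition type
   8e       # set the type to Linux LVM
   w        # write the partition table and exit
[root@system1 ~]# partprobe /dev/sdb     # sync the new partition table with the kernel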
Create the LVM physical volumes:
![](https://static.wixstatic.com/media/986ac3_fb3ec9718dd34b418e1515d8e381f664~mv2.png/v1/fill/w_980,h_121,al_c,q_85,usm_0.66_1.00_0.01,enc_avif,quality_auto/986ac3_fb3ec9718dd34b418e1515d8e381f664~mv2.png)
Create the LVM volume groups:
![](https://static.wixstatic.com/media/986ac3_69215ff18d9744be94e46f4c0f7a415a~mv2.png/v1/fill/w_980,h_114,al_c,q_85,usm_0.66_1.00_0.01,enc_avif,quality_auto/986ac3_69215ff18d9744be94e46f4c0f7a415a~mv2.png)
Create the logical volumes from the new LVM volume groups for Tomcat and MySQL on both nodes:
![](https://static.wixstatic.com/media/986ac3_c75889fb9d8448589452cbd0211616a9~mv2.png/v1/fill/w_980,h_90,al_c,q_85,usm_0.66_1.00_0.01,enc_avif,quality_auto/986ac3_c75889fb9d8448589452cbd0211616a9~mv2.png)
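The three screenshots above correspond to commands along these lines, run on both nodes; the volume group and logical volume names match the device paths used later, but allocating all free space with -l 100%FREE is an assumption:

[root@system1 ~]# pvcreate /dev/sdb1 /dev/sdc1
[root@system1 ~]# vgcreate vg_tomcat /dev/sdb1
[root@system1 ~]# vgcreate vg_mysql /dev/sdc1
[root@system1 ~]# lvcreate -n lv_tomcat -l 100%FREE vg_tomcat
[root@system1 ~]# lvcreate -n lv_mysql -l 100%FREE vg_mysql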
Now format the block devices with an ext4 file system on both nodes:
[root@system1 ~]# mkfs.ext4 /dev/vg_tomcat/lv_tomcat
[root@system2 ~]# mkfs.ext4 /dev/vg_tomcat/lv_tomcat
![](https://static.wixstatic.com/media/986ac3_23cac5ef610d456b9b6147188ecf93df~mv2.png/v1/fill/w_980,h_349,al_c,q_85,usm_0.66_1.00_0.01,enc_avif,quality_auto/986ac3_23cac5ef610d456b9b6147188ecf93df~mv2.png)
[root@system1 ~]# mkfs.ext4 /dev/vg_mysql/lv_mysql
[root@system2 ~]# mkfs.ext4 /dev/vg_mysql/lv_mysql
![](https://static.wixstatic.com/media/986ac3_67cbeae77a2c483d87332fe9723c0eb5~mv2.png/v1/fill/w_980,h_324,al_c,q_85,usm_0.66_1.00_0.01,enc_avif,quality_auto/986ac3_67cbeae77a2c483d87332fe9723c0eb5~mv2.png)
We now have two logical volumes, with the device names /dev/vg_tomcat/lv_tomcat and /dev/vg_mysql/lv_mysql.
Create two directories where we can mount the new volumes:
[root@system1 ~]# mkdir -p /tomcat
[root@system1 ~]# mkdir -p /mysql
[root@system1 ~]# mount /dev/vg_tomcat/lv_tomcat /tomcat/
The Tomcat logical volume is now mounted on the /tomcat directory.
[root@system2 ~]# mkdir -p /tomcat
[root@system2 ~]# mkdir -p /mysql
[root@system2 ~]# mount /dev/vg_mysql/lv_mysql /mysql/
The MySQL logical volume is now mounted on the /mysql directory.
![](https://static.wixstatic.com/media/986ac3_759e7e675d7a45fa9846284e0cf4ad06~mv2.png/v1/fill/w_980,h_82,al_c,q_85,usm_0.66_1.00_0.01,enc_avif,quality_auto/986ac3_759e7e675d7a45fa9846284e0cf4ad06~mv2.png)
On system1, we mounted the /tomcat directory and on system2, we mounted the /mysql directory.
![](https://static.wixstatic.com/media/986ac3_8f94608373964168b98668c0bac3c9d0~mv2.png/v1/fill/w_980,h_154,al_c,q_85,usm_0.66_1.00_0.01,enc_avif,quality_auto/986ac3_8f94608373964168b98668c0bac3c9d0~mv2.png)
Mount the RHEL ISO image and create a yum repository so that the cluster components can be installed.
![](https://static.wixstatic.com/media/986ac3_ea4757828d8c47969da109c7b7ea11d4~mv2.png/v1/fill/w_980,h_170,al_c,q_85,usm_0.66_1.00_0.01,enc_avif,quality_auto/986ac3_ea4757828d8c47969da109c7b7ea11d4~mv2.png)
We have now mounted the ISO image on /mnt.
Next we create the yum repository in /etc/yum.repos.d/server.repo as below:
![](https://static.wixstatic.com/media/986ac3_61df08a3427f4490bda69faa622c3692~mv2.png/v1/fill/w_980,h_139,al_c,q_85,usm_0.66_1.00_0.01,enc_avif,quality_auto/986ac3_61df08a3427f4490bda69faa622c3692~mv2.png)
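In text form the repository file usually looks something like the following; the section names and the sub-directory holding the High Availability packages depend on the layout of the RHEL 6.3 ISO, so adjust them to match what you see under /mnt:

[Server]
name=RHEL 6.3 Server
baseurl=file:///mnt
enabled=1
gpgcheck=0

[HighAvailability]
name=RHEL 6.3 High Availability
baseurl=file:///mnt/HighAvailability
enabled=1
gpgcheck=0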
To check that the repository is enabled, run the command yum repolist all as below:
![](https://static.wixstatic.com/media/986ac3_c1df1296c9d94d6e8275357536e71460~mv2.png/v1/fill/w_980,h_135,al_c,q_85,usm_0.66_1.00_0.01,enc_avif,quality_auto/986ac3_c1df1296c9d94d6e8275357536e71460~mv2.png)
Install Apache Tomcat :
On both cluster nodes, install the Apache Tomcat RPM, and start the tomcat service on only one node.
# rpm -ivh apache-tomcat*
# /etc/init.d/apache-tomcat start
Install MySQL :
On both cluster nodes, install the MySQL RPM, and start the MySQL service on only one node.
# rpm -ivh MySQL-*
# /etc/init.d/mysql start
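Before moving on, it is worth a quick sanity check that both daemons respond on the node where each was started; port 8080 is the stock Tomcat default and the password-less MySQL root login only works if no root password has been set yet, so treat both as assumptions:

[root@system1 ~]# curl -I http://localhost:8080/            # Tomcat should answer with an HTTP header
[root@system2 ~]# mysql -u root -e "SELECT VERSION();"      # MySQL should print its version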
Install RICCI :
On both cluster nodes, install the ricci RPM:
# yum install ricci -y
Start RICCI :
On both cluster nodes, start the ricci daemon and configure it to start on boot:
# service ricci start
# chkconfig ricci on
![](https://static.wixstatic.com/media/986ac3_01c9ef9f683f457992b0f5931131c5af~mv2.png/v1/fill/w_980,h_160,al_c,q_85,usm_0.66_1.00_0.01,enc_avif,quality_auto/986ac3_01c9ef9f683f457992b0f5931131c5af~mv2.png)
Set password
On both cluster nodes, set the ricci user password:
# passwd ricci
At this point both cluster nodes should be running the ricci server and be ready to be managed by the cluster web user interface (luci).
![](https://static.wixstatic.com/media/986ac3_cd9be83497374d998b52679b6b8ae099~mv2.png/v1/fill/w_980,h_134,al_c,q_85,usm_0.66_1.00_0.01,enc_avif,quality_auto/986ac3_cd9be83497374d998b52679b6b8ae099~mv2.png)
Install The Cluster Web User Interface (LUCI)
We have chosen to run the cluster web user interface (luci) on system1. Run the following steps there to install and configure luci:
Install Luci :
Install the luci RPMs:
# yum install luci -y
Start Luci :
Start the luci daemon, configure it to start on boot, and check its status:
# service luci start
# chkconfig luci on
# service luci status
![](https://static.wixstatic.com/media/986ac3_14798b2cc7db4ebcb98636d3d67bf79d~mv2.png/v1/fill/w_574,h_366,al_c,q_85,enc_avif,quality_auto/986ac3_14798b2cc7db4ebcb98636d3d67bf79d~mv2.png)
Login to Luci :
As instructed by the start-up script, point your web browser to the address displayed when the luci service starts, and log in as the root user when prompted.
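By default, luci on RHEL 6 listens on port 8084, so the address will typically be something like https://system1.eassylinux.com:8084 (the exact URL is the one printed in the service start-up output shown below).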
![](https://static.wixstatic.com/media/986ac3_46ee727ea61a451c922044fb9f00c20c~mv2.png/v1/fill/w_551,h_50,al_c,q_85,enc_avif,quality_auto/986ac3_46ee727ea61a451c922044fb9f00c20c~mv2.png)
This is the High Availability management home page.
![](https://static.wixstatic.com/media/986ac3_54c6e3e559174947855f69e3fff00351~mv2.png/v1/fill/w_980,h_429,al_c,q_90,usm_0.66_1.00_0.01,enc_avif,quality_auto/986ac3_54c6e3e559174947855f69e3fff00351~mv2.png)
The login credentials are the root user and the root password. After entering them, you will see this page.
![](https://static.wixstatic.com/media/986ac3_21024ba99795420590a3223c0474b594~mv2.png/v1/fill/w_980,h_409,al_c,q_90,usm_0.66_1.00_0.01,enc_avif,quality_auto/986ac3_21024ba99795420590a3223c0474b594~mv2.png)
Name the cluster:
Select Manage Cluster then Create, then fill in the Cluster Name (for example,
eassy_linux).
Identify cluster nodes:
Fill in the Node Name (short name as mentioned in the /etc/hosts file) and Password (the password for the user ricci) for the first cluster node. Click the Add Another Node button and add the same information for the second cluster node.
![](https://static.wixstatic.com/media/986ac3_551c0f46fefa4566ac576e56e442845b~mv2.png/v1/fill/w_980,h_449,al_c,q_90,usm_0.66_1.00_0.01,enc_avif,quality_auto/986ac3_551c0f46fefa4566ac576e56e442845b~mv2.png)
Add cluster options:
Select the following options, then click the Create Cluster button:
Use the Same Password for All Nodes: Select this check box.
Use Locally Installed Packages: Select this check box.
Reboot Nodes Before Joining Cluster: Leave this unchecked.
Enable Shared Storage Support: Leave this unchecked.
After you click the Create Cluster button, if the nodes can be contacted, luci will set up each cluster node and add each node to the cluster. When each node is set up, the High Availability Management screen will appear.
![](https://static.wixstatic.com/media/986ac3_36b037077b7044f8a3638148e4a91e24~mv2.png/v1/fill/w_980,h_307,al_c,q_85,usm_0.66_1.00_0.01,enc_avif,quality_auto/986ac3_36b037077b7044f8a3638148e4a91e24~mv2.png)
Create failover domain:
Click the Failover Domains tab. Click the Add button and fill in the following
information as prompted:
Name : Fill in any name you like (e.g. failover_eassylinux).
Prioritized : Check this box.
Restricted : Leave this unchecked.
Member : Click the Member Box for each node.
Priority : Add a “1” for system1 and a “2” for system2 under the Priority column.
Click Create to apply the changes to the failover domain.
The configuration screen will look like this:
![](https://static.wixstatic.com/media/986ac3_2326b42a094641beb9df8dabfb308739~mv2.png/v1/fill/w_592,h_413,al_c,q_85,enc_avif,quality_auto/986ac3_2326b42a094641beb9df8dabfb308739~mv2.png)
Add fence devices :
Configure appropriate fence devices for the hardware you have. Add a fence
device and instance for each node. These settings will be particular to your
hardware and software configuration.
Creating a Highly Available LVM (HA LVM) :
We have already created two logical volumes, lv_tomcat and lv_mysql, on both nodes.
The HA LVM provides LVM failover by:
Providing a mirroring mechanism between two SAN connected systems
Allowing a system to take over serving content from a system that fails
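One detail this guide does not show: a typical single-active HA-LVM setup also restricts which node may activate the volume groups, by tagging activation in /etc/lvm/lvm.conf and rebuilding the initramfs. The lines below are only an assumption about such a setup, not a step captured in the screenshots; the root volume group name will differ on your systems:

volume_list = [ "VolGroup00", "@system1.eassylinux.com" ]     # in /etc/lvm/lvm.conf: root VG plus this node's own tag
[root@system1 ~]# dracut -H -f /boot/initramfs-$(uname -r).img $(uname -r)
[root@system1 ~]# reboot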
Setting up Shared Resources (From LUCI) :
Select the Resources tab. We are adding three resource types:
1. IP Address for apache-tomcat and mysql. Always use a virtual IP rather than a node's physical IP: if we used a physical IP and that node went down because of a network issue, the HA service would stay down until the physical server came back up. A virtual IP can move between nodes along with the service, so we always use a virtual IP.
Identify the Cluster Service's IP Address :
From luci, do the following to identify the cluster service's IP Address:
Select the Cluster.
Click on the cluster name (for example, eassy_linux).
Add an IP address resource.
Select the Resources tab, then click Add and choose IP Address.
Fill in the IP address information.
Enter the following:
IP Address. Fill in a valid IP address. Ultimately, this IP address (192.168.43.121 for Apache Tomcat and 192.168.43.122 for MySQL) is what clients use to reach Tomcat (from a web browser) and MySQL.
Monitor Link. Check this box.
Submit information.
Click the Submit button.
After configuring this, the screen will look like:
![](https://static.wixstatic.com/media/986ac3_75c0e602a4474812b33d5c18711b9eb8~mv2.png/v1/fill/w_531,h_373,al_c,q_85,enc_avif,quality_auto/986ac3_75c0e602a4474812b33d5c18711b9eb8~mv2.png)
![](https://static.wixstatic.com/media/986ac3_970169b9ffcb48feae326d0902706cc8~mv2.png/v1/fill/w_980,h_453,al_c,q_90,usm_0.66_1.00_0.01,enc_avif,quality_auto/986ac3_970169b9ffcb48feae326d0902706cc8~mv2.png)
2. Filesystem
![](https://static.wixstatic.com/media/986ac3_a56f971fd5f04e3da04ec2b881e394e0~mv2.png/v1/fill/w_532,h_494,al_c,q_85,enc_avif,quality_auto/986ac3_a56f971fd5f04e3da04ec2b881e394e0~mv2.png)
![](https://static.wixstatic.com/media/986ac3_5241e73dbaf54952b72ef2606deff24d~mv2.png/v1/fill/w_980,h_446,al_c,q_90,usm_0.66_1.00_0.01,enc_avif,quality_auto/986ac3_5241e73dbaf54952b72ef2606deff24d~mv2.png)
3. Scripts
![](https://static.wixstatic.com/media/986ac3_b04f8d4d7cbb41b88c50579c78014adc~mv2.png/v1/fill/w_549,h_295,al_c,q_85,enc_avif,quality_auto/986ac3_b04f8d4d7cbb41b88c50579c78014adc~mv2.png)
![](https://static.wixstatic.com/media/986ac3_379df4ab7134421c96191797f5ddbe7f~mv2.png/v1/fill/w_980,h_469,al_c,q_90,usm_0.66_1.00_0.01,enc_avif,quality_auto/986ac3_379df4ab7134421c96191797f5ddbe7f~mv2.png)
Similarly for MySQL :
IP Address :
![](https://static.wixstatic.com/media/986ac3_525ba323799f4b56a3da27f01b85d46b~mv2.png/v1/fill/w_525,h_373,al_c,q_85,enc_avif,quality_auto/986ac3_525ba323799f4b56a3da27f01b85d46b~mv2.png)
Filesystem :
![](https://static.wixstatic.com/media/986ac3_7ad763e6169e408b9c68c3b47db8d0c0~mv2.png/v1/fill/w_541,h_503,al_c,q_85,enc_avif,quality_auto/986ac3_7ad763e6169e408b9c68c3b47db8d0c0~mv2.png)
Script:
![](https://static.wixstatic.com/media/986ac3_c7c82dfb57364b8093a028c7aea7e246~mv2.png/v1/fill/w_642,h_128,al_c,q_85,enc_avif,quality_auto/986ac3_c7c82dfb57364b8093a028c7aea7e246~mv2.png)
Service groups :
Creating MySQL and Tomcat Service :
From luci, with the cluster selected, add a new service and associate the IP Address to it as follows:
Add a Service Group :
Click on the Service Groups tab and select Add.
Fill in the service group information.
Service name. Assign a name to the service (e.g. mysql and
apache-tomcat)
Automatically start this service. Check this box.
Failover Domain. Select the failover_eassylinux domain you created earlier.
Recovery Policy. Select Relocate.
![](https://static.wixstatic.com/media/986ac3_8a7418a0322146caa997127a490e5e30~mv2.png/v1/fill/w_685,h_493,al_c,q_85,enc_avif,quality_auto/986ac3_8a7418a0322146caa997127a490e5e30~mv2.png)
Add the IP address resource :
Select the Add Resource button at the bottom.
Then select the IP Address you added earlier.
Add the File System resource for MySQL or Tomcat, which you created before, as a global resource.
Add the Script resource for MySQL or Tomcat, which you created before, as a global resource.
Now submit the information.
The screen will look like this:
![](https://static.wixstatic.com/media/986ac3_b12b1735b38e484f91c3ff0d19f22559~mv2.png/v1/fill/w_673,h_540,al_c,q_90,enc_avif,quality_auto/986ac3_b12b1735b38e484f91c3ff0d19f22559~mv2.png)
Filesystem for Apache_Tomcat
![](https://static.wixstatic.com/media/986ac3_496a41c34cb141129b40fca6261a1ff9~mv2.png/v1/fill/w_702,h_584,al_c,q_90,enc_avif,quality_auto/986ac3_496a41c34cb141129b40fca6261a1ff9~mv2.png)
Script for Apache_Tomcat
![](https://static.wixstatic.com/media/986ac3_b6cf294b68a44c73b7373a50c4942eb2~mv2.png/v1/fill/w_709,h_216,al_c,q_85,enc_avif,quality_auto/986ac3_b6cf294b68a44c73b7373a50c4942eb2~mv2.png)
Then click the Submit button. You have successfully created the Apache_Tomcat cluster service.
![](https://static.wixstatic.com/media/986ac3_3c2a4f1a42bc40df85e66e15e9b4d18f~mv2.png/v1/fill/w_980,h_417,al_c,q_90,usm_0.66_1.00_0.01,enc_avif,quality_auto/986ac3_3c2a4f1a42bc40df85e66e15e9b4d18f~mv2.png)
Just as we created the Apache_Tomcat cluster service, we now create one for MySQL.
![](https://static.wixstatic.com/media/986ac3_1aa6a74be7344da6afb6aa9b0f4647b1~mv2.png/v1/fill/w_728,h_496,al_c,q_90,enc_avif,quality_auto/986ac3_1aa6a74be7344da6afb6aa9b0f4647b1~mv2.png)
IP address for MySQL cluster service :
![](https://static.wixstatic.com/media/986ac3_3da0be83803e437294cbb3e829e5d182~mv2.png/v1/fill/w_704,h_285,al_c,q_85,enc_avif,quality_auto/986ac3_3da0be83803e437294cbb3e829e5d182~mv2.png)
Filesystem for MySQL cluster services :
![](https://static.wixstatic.com/media/986ac3_de9bf78e46c44f61becc947bea36a401~mv2.png/v1/fill/w_730,h_584,al_c,q_90,enc_avif,quality_auto/986ac3_de9bf78e46c44f61becc947bea36a401~mv2.png)
Script for MySQL cluster services :
![](https://static.wixstatic.com/media/986ac3_c3035533745b43689d488765172120c4~mv2.png/v1/fill/w_716,h_397,al_c,q_85,enc_avif,quality_auto/986ac3_c3035533745b43689d488765172120c4~mv2.png)
Then click the Submit button. You have successfully created the MySQL cluster service.
![](https://static.wixstatic.com/media/986ac3_4557437ca8f24a77b3cf9cadfec8b8ab~mv2.png/v1/fill/w_980,h_445,al_c,q_90,usm_0.66_1.00_0.01,enc_avif,quality_auto/986ac3_4557437ca8f24a77b3cf9cadfec8b8ab~mv2.png)
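Behind the luci screens, everything configured above ends up in /etc/cluster/cluster.conf on both nodes. A rough sketch of the rgmanager portion for the Tomcat service is shown below; the resource names are assumptions and the attributes luci actually generated may differ slightly:

    <rm>
      <failoverdomains>
        <failoverdomain name="failover_eassylinux" ordered="1" restricted="0">
          <failoverdomainnode name="system1.eassylinux.com" priority="1"/>
          <failoverdomainnode name="system2.eassylinux.com" priority="2"/>
        </failoverdomain>
      </failoverdomains>
      <resources>
        <ip address="192.168.43.121" monitor_link="on"/>
        <fs name="tomcat_fs" device="/dev/vg_tomcat/lv_tomcat" mountpoint="/tomcat" fstype="ext4"/>
        <script name="tomcat_script" file="/etc/init.d/apache-tomcat"/>
      </resources>
      <service name="apache-tomcat" autostart="1" domain="failover_eassylinux" recovery="relocate">
        <ip ref="192.168.43.121"/>
        <fs ref="tomcat_fs"/>
        <script ref="tomcat_script"/>
      </service>
    </rm>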
Finally, we have successfully established an HA cluster running Apache Tomcat and MySQL.
![](https://static.wixstatic.com/media/986ac3_564eafafd90f4141846580f766d8a14e~mv2.png/v1/fill/w_980,h_365,al_c,q_85,usm_0.66_1.00_0.01,enc_avif,quality_auto/986ac3_564eafafd90f4141846580f766d8a14e~mv2.png)
We can also check the status from the command line:
![](https://static.wixstatic.com/media/986ac3_c30a20cff04f44649db7c4e217552609~mv2.png/v1/fill/w_980,h_205,al_c,q_85,usm_0.66_1.00_0.01,enc_avif,quality_auto/986ac3_c30a20cff04f44649db7c4e217552609~mv2.png)
The image below shows the Tomcat application running on the virtual IP:
![](https://static.wixstatic.com/media/986ac3_d661a6878ac14fcd8022aa168add669c~mv2.png/v1/fill/w_980,h_523,al_c,q_90,usm_0.66_1.00_0.01,enc_avif,quality_auto/986ac3_d661a6878ac14fcd8022aa168add669c~mv2.png)
With the cluster running properly, you can see that both LUNs are mounted on node1.
![](https://static.wixstatic.com/media/986ac3_e2f0bf18b7b74453bf171ad4edd25f71~mv2.png/v1/fill/w_440,h_183,al_c,q_85,enc_avif,quality_auto/986ac3_e2f0bf18b7b74453bf171ad4edd25f71~mv2.png)
You are also able to log in to MySQL on node1.
![](https://static.wixstatic.com/media/986ac3_23e9dacc14814e81a2ab94ef71c73acd~mv2.png/v1/fill/w_961,h_399,al_c,q_90,enc_avif,quality_auto/986ac3_23e9dacc14814e81a2ab94ef71c73acd~mv2.png)
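To keep an eye on the cluster from either node, or to exercise a failover by hand, the standard rgmanager tools can be used; the commands below assume the service group was named apache-tomcat as suggested earlier:

[root@system1 ~]# clustat                                                  # show node and service status
[root@system1 ~]# clusvcadm -r apache-tomcat -m system2.eassylinux.com     # relocate the Tomcat service to node 2
[root@system1 ~]# clusvcadm -r apache-tomcat -m system1.eassylinux.com     # and move it back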