
June 25, 2023

Installation & Usage of Percona Toolkit 3

Percona Toolkit Usage and Installation (PXC Part 3)


Percona Toolkit is a collection of advanced command-line (CLI) tools, created and used by Percona, that help the Percona support team and users perform a variety of MySQL and MongoDB administration tasks that would be too difficult or tedious to do manually.


Installing Percona Toolkit (3.3)


Install Percona Toolkit using the distribution's package manager. On RHEL or CentOS:
yum list all | grep toolkit
dnf install percona-toolkit -y

If you want to download a specific tool, use http://www.percona.com/get
For example, to download the pt-slave-restart tool, run:
wget percona.com/get/pt-slave-restart
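
After downloading a single tool this way, it only needs to be made executable before use; a minimal example (paths are illustrative):
chmod +x pt-slave-restart
./pt-slave-restart --help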

Using tools in Percona Toolkit

pt-summary # Summarize system information nicely
pt-summary
pt-summary > pt-summary.out

pt-mysql-summary # Summarize MySQL information nicely
pt-mysql-summary --user=root --password=secret
pt-mysql-summary --user=root --password=secret  > "${PTDEST}/pt-mysql-summary.out"

pt-table-checksum    # Checks whether master/replicas have consistent copy or not
pt-table-checksum -uroot -pSecret9
pt-table-checksum -uroot -pSecret9 h=localhost -d thirumani
pt-table-checksum -uroot -pSecret9 h=localhost -d thirumani --slave-user=replication --slave-password=Slave@123
pt-table-checksum -uroot -pSecret9 h=localhost -d thirumani --slave-user=replication --slave-password=Slave@123 --no-check-binlog-format
pt-table-checksum -uroot -pSecret9 --replicate testdb.sample h=master-host
pt-table-checksum -uroot -pSecret9 --replicate testdb.sample h=master-host -d sakila
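
By default pt-table-checksum writes its results to the percona.checksums table, which replicates to the replicas. A query along the following lines (adapted from the tool's documentation; adjust the schema/table name if you passed a different --replicate value) can be run on each replica to list tables that differ from the master:
SELECT db, tbl, SUM(this_cnt) AS total_rows, COUNT(*) AS chunks
FROM percona.checksums
WHERE (master_cnt <> this_cnt
    OR master_crc <> this_crc
    OR ISNULL(master_crc) <> ISNULL(this_crc))
GROUP BY db, tbl;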

pt-table-sync # Synchronize MySQL table data efficiently
pt-table-sync --execute h=localhost,D=thirumani,t=order_items_refund h=host2
pt-table-sync --execute h=localhost -uroot -pSecret9 --replicate thirumani.order_items_refund --slave-user=replication --slave-password=Slave@123
pt-table-sync --execute localhost host2 host3
pt-table-sync --execute --sync-to-master slave1
pt-table-sync --execute --replicate test.checksum master1
pt-table-sync --execute --replicate test.checksum --sync-to-master slave1
pt-table-sync --execute --sync-to-master h=master2,D=db,t=tbl
pt-table-sync -uroot -pSecret9 --replicate testdb.sample --print h=master-host

pt-slave-find # Find and print replication hierarchy tree of MySQL slaves
pt-slave-find --host localhost -uroot -pSecret9

pt-slave-delay # Make a MySQL slave server lag behind its master
pt-slave-delay --delay 1m --interval 15s --run-time 10m slavehost
pt-slave-delay --delay 2h h=slave

pt-slave-restart # Watch and restart MySQL replication after errors
pt-slave-restart [OPTIONS] [DSN]
pt-slave-restart
pt-slave-restart h=slave
pt-slave-restart --master-uuid

pt-heartbeat # Monitor MySQL replication delay
pt-heartbeat -h192.168.1.222 -uroot -pSecret9
pt-heartbeat -D percona --create-table --update h=master
pt-heartbeat -D test --update -h master-server --daemonize
pt-heartbeat -D test --monitor -h slave-server
pt-heartbeat -D test --check h=slave-server

pt-query-digest # Analyze MySQL queries from logs, processlist, and tcpdump
pt-query-digest master/data/slow.log
pt-query-digest --review h=host2 --no-report slow.log
pt-query-digest slow.log --no-report --output slowlog --filter '$event->{fingerprint} && make_checksum($event->{fingerprint}) eq "FDEA8D2993C9CAF3"'
pt-query-digest --create-review-table --review D=dbname,t=tablename slow.log
pt-query-digest --processlist h=localhost -uroot -pSecret9
pt-query-digest --processlist h=localhost -uroot -pSecret9 --print --no-report --interval=0.01 > slow.log
pt-query-digest --processlist h=localhost -uroot -pSecret9 --interval=0.01 --output=slowlog > slow.log
tcpdump -s 65535 -x -nn -q -tttt -i any -c 1000 port 3306 > mysql.tcp.txt
pt-query-digest --type tcpdump mysql.tcp.txt

pt-upgrade # Verify that query results are identical on different servers (before upgrade)
pt-upgrade h=host1 h=host2 slow.log
pt-upgrade h=host1 --save-results host1_results/ slow.log
pt-upgrade host1_results1/ h=host2
pt-upgrade h=host1 h=host2 queries.txt

pt-visual-explain # Format EXPLAIN output as a tree
pt-visual-explain <file_containing_explain_output>
pt-visual-explain -c <file_containing_query>
mysql -e "explain select * from mysql.user" | pt-visual-explain

pt-stalk # Collect forensic data about MySQL when problems occur
pt-stalk --function processlist --variable Command --match Sleep --threshold 155 --cycles 0
pt-stalk --no-stalk --iterations=2 --sleep=30 -- --user=root --password=<mysql-root-pass>
pt-stalk --no-stalk --iterations=2 --sleep=30 --dest="${PTDEST}" -- --user=root --password=<mysql-root-pass>
pt-stalk --variable Threads_connected --cycles 2 -- --user satya

pt-find #  Find MySQL tables and execute actions, like GNU find
pt-find --ctime +1 --engine MyISAM
pt-find --engine InnoDB --exec "ALTER TABLE %D.%N ENGINE=MyISAM"
pt-find --connection-id '\D_\d+_(\d+)$' --server-id '\D_(\d+)_\d+$' --exec-plus "DROP TABLE %s"
pt-find --empty junk test --exec-plus "DROP TABLE %s"
pt-find --tablesize +5G
pt-find --printf "%T\t%D.%N\n" | sort -rn
pt-find --noquote --exec "INSERT INTO sysdata.tblsize(db, tbl, size) VALUES('%D', '%N', %T)"

====================================================

pt-align # Aligns output from other tools to columns

pt-archiver # Archive rows from a MySQL table into another table or a file
pt-archiver --source h=host,D=db,t=child --purge --where 'NOT EXISTS(SELECT * FROM parent WHERE col=child.col)'
pt-archiver --source h=oltp_server,D=test,t=tbl --dest h=olap_server --file '/var/log/archive/%Y-%m-%d-%D.%t' --where "1=1" --limit 1000 --commit-each
pt-archiver --source h=127.0.0.1,D=sakila,t=payment --where 'rental_id > 1000' --limit 100 --commit-each --purge --dry-run
pt-archiver --source h=127.0.0.1,D=sakila,t=payment --where 'rental_id > 1000' --limit 100 --commit-each --purge --progress=100
pt-archiver --source h=127.0.0.1,D=sakila,t=payment --dest h=12.0.0.1,D=sakila_archive,t=payment --where 'rental_id > 1000' --limit 100 --commit-each

pt-config-diff # Diff MySQL configuration files and server variables
pt-config-diff h=host1 h=host2
pt-config-diff /etc/my.cnf h=host1
pt-config-diff /etc/my-small.cnf /etc/my-large.cnf

pt-deadlock-logger # Log MySQL deadlocks
pt-deadlock-logger h=host1
pt-deadlock-logger h=host1 --iterations 1
pt-deadlock-logger h=host1 --dest h=host2,D=percona_schema,t=deadlocks

pt-diskstats # An interactive I/O monitoring tool for GNU/Linux
pt-diskstats [OPTIONS] [FILES]
pt-diskstats

pt-duplicate-key-checker # Find duplicate indexes and foreign keys on MySQL tables
pt-duplicate-key-checker --host host1

pt-fifo-split # Split files and pipe lines to a fifo without really splitting
pt-fifo-split --lines 1000000 hugefile.txt
while [ -e /tmp/pt-fifo-split ]; do cat /tmp/pt-fifo-split; done

pt-fingerprint # Convert queries into fingerprints
pt-fingerprint --query "select a, b, c from users where id = 500"
pt-fingerprint /path/to/file.txt

pt-fk-error-logger # Log MySQL foreign key errors
pt-fk-error-logger h=host1
pt-fk-error-logger h=host1 --iterations 1
pt-fk-error-logger h=host1 --dest h=host2,D=percona_schema,t=fke

pt-index-usage # Read queries from a log and analyze how they use indexes
pt-index-usage /path/to/slow.log --host localhost
pt-index-usage slow.log --no-report --save-results-database percona

pt-ioprofile # Watch process IO and print a table of file and I/O activity
pt-ioprofile [OPTIONS] [FILE]
pt-ioprofile

pt-kill # Kill MySQL queries that match certain criteria
pt-kill --busy-time 60 --kill
pt-kill --busy-time 60 --print
pt-kill --match-command Sleep --kill --victims all --interval 10
pt-kill --match-state login --print --victims all
mysql -e "SHOW PROCESSLIST" > proclist.txt
pt-kill --test-matching proclist.txt --busy-time 60 --print
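
For continuous protection, pt-kill can also be left running in the background; a sketch (the log path is just an example) that checks every 10 seconds and kills queries busy for more than 60 seconds:
pt-kill --daemonize --interval 10 --busy-time 60 --kill --log /var/log/pt-kill.log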

pt-mext # Look at many samples of MySQL SHOW GLOBAL STATUS side-by-side
pt-mext -r -- mysqladmin ext -i10 -c3
pt-mext -r -- cat mysqladmin-output.txt
pt-mext -r -- mysqladmin ext -i1 -c4
pt-mext -r -- cat 2021_05_01_12_55_30-mysqladmin > ptmext.out

pt-online-schema-change # ALTER tables without locking them
pt-online-schema-change --alter "ADD COLUMN c1 INT" D=sakila,t=actor
pt-online-schema-change --alter "ENGINE=InnoDB" D=sakila,t=actor
pt-online-schema-change --alter-foreign-keys-method auto --alter "add key actor_last_update(last_update)" --execute h=localhost,D=sakila,t=actor
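
A typical pt-online-schema-change workflow (connection options omitted for brevity) is to test the change with --dry-run first and only then run it with --execute:
pt-online-schema-change --alter "ADD COLUMN c1 INT" D=sakila,t=actor --dry-run
pt-online-schema-change --alter "ADD COLUMN c1 INT" D=sakila,t=actor --execute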

pt-pmp # Aggregate GDB stack traces for a selected program, a poor man’s profiler
pt-pmp [OPTIONS] [FILES]

pt-secure-collect # collect, sanitize, pack and encrypt data
pt-secure-collect [<flags>] <command> [<args> ...]
pt-secure-collect collect <flags>

pt-show-grants # Canonicalize and print MySQL grants so you can effectively replicate, compare and version-control them
pt-show-grants
pt-show-grants --separate --revoke | diff othergrants.sql -

pt-sift # Browses files created by pt-stalk
pt-sift FILE|PREFIX|DIRECTORY

pt-table-usage # Analyze how queries use tables
pt-table-usage [OPTIONS] [FILES]

pt-variable-advisor # Analyze MySQL variables and advise on possible problems
pt-variable-advisor localhost
pt-variable-advisor --source-of-variables vars.txt

pt-mongodb-query-digest # reports query usage statistics by aggregating queries from MongoDB query profiler
pt-mongodb-query-digest [OPTIONS]
pt-mongodb-summary # collects information about a MongoDB cluster. It collects information from several sources to provide an overview of the cluster
pt-mongodb-summary [OPTIONS] [HOST[:PORT]]

pt-pg-summary # collects information about a PostgreSQL cluster
pt-pg-summary [options] [host:[port]]

pt-k8s-debug-collector # Collects debug data (logs, resource statuses etc.) from a k8s/OpenShift cluster. Data is packed into the cluster-dump.tar.gz archive in the current working directory
pt-k8s-debug-collector

Related Percona/MySQL Articles: Percona XtraDB Cluster (PXC) Configuration   Percona Xtrabackup Full/Incremental Backup & Recovery

February 28, 2023

ProxySQL 2 configuration/installation

ProxySQL Installation/Setup - Percona XtraDB Cluster Part 2


ProxySQL is an open-source, high-performance MySQL/MariaDB proxy server that serves as an intermediary between the MySQL database nodes and the applications.
As a SQL-aware proxy, ProxySQL can improve performance by routing traffic across a pool of MySQL/MariaDB/Percona Server database (cluster) nodes, and improve availability by automatically failing over to a standby if one or more of the database nodes fail.

ProxySQL can be used for:
  ✓ Load balancing
  ✓ Query caching
  ✓ Query routing/redirection
  ✓ Query rewriting
  ✓ Database firewall
  ✓ Sharding
  ✓ Data masking
  ✓ Zero-downtime changes

In an earlier article, I covered the Percona XtraDB Cluster (PXC) 8.0 installation/configuration steps. This article describes how to configure the basic components of ProxySQL.


Step I. Installing ProxySQL

Most of the time, ProxySQL is configured on a separate server, and all the connections are routed from the application to the ProxySQL server. ProxySQL in turn directs all the connections to the database server depending on the query rules configured.

ProxySQL 2 is the latest major release of ProxySQL and it is recommended for new installations.

To install ProxySQL on RPM-based distributions, run:
# yum list all | grep "proxysql"
# yum install proxysql2

# yum list all | grep percona
# yum install percona-xtradb-cluster-client

# cat /etc/proxysql.cnf
# proxysql --version

The daemon accepts incoming traffic from MySQL clients and forwards it to backend MySQL servers.
# service proxysql start
# service proxysql stop
# service proxysql status

# tail -100f /var/lib/proxysql/proxysql.log

Step II. Setting the ProxySQL Administrator Password

ProxySQL’s internals can be reconfigured at runtime through its SQL-compatible Admin interface, accessible with the standard mysql command-line client on port 6032 by default.
The configuration is stored in an SQLite database.

Connect to the ProxySQL administration interface with the default password 'admin'
# mysql -u admin -padmin -h 127.0.0.1 -P6032 --prompt='ProxySQLAdmin> '

If you want to change the ProxySQL administration password:
ProxySQLAdmin> UPDATE global_variables SET variable_value='admin:password' WHERE variable_name='admin-admin_credentials';
ProxySQLAdmin> LOAD ADMIN VARIABLES TO RUNTIME;     # to copy the memory settings to the runtime
ProxySQLAdmin> SAVE ADMIN VARIABLES TO DISK;        #  save variables to disk to make them persist

Verify that the configuration is empty by checking that there are no entries in the mysql_servers, mysql_users, mysql_replication_hostgroups and mysql_query_rules tables.
ProxySQLAdmin> SELECT * FROM mysql_servers;
ProxySQLAdmin> SELECT * FROM mysql_users;

ProxySQL has native support for Galera Cluster and Group Replication.
ProxySQLAdmin> SELECT * from mysql_replication_hostgroups;

Query rules are very useful to control traffic passing through ProxySQL and are configured in the mysql_query_rules table.
ProxySQLAdmin> SELECT * from mysql_query_rules;
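
As an illustration (not part of the original setup), a simple read/write split can be expressed with two rules that send SELECT ... FOR UPDATE statements to the writer hostgroup (10) and all other SELECTs to the reader hostgroup (11), matching the hostgroups defined in the next step; like the other tables, the rules must then be loaded to runtime and saved to disk:
ProxySQLAdmin> INSERT INTO mysql_query_rules (rule_id, active, match_digest, destination_hostgroup, apply) VALUES (1, 1, '^SELECT.*FOR UPDATE$', 10, 1);
ProxySQLAdmin> INSERT INTO mysql_query_rules (rule_id, active, match_digest, destination_hostgroup, apply) VALUES (2, 1, '^SELECT', 11, 1);
ProxySQLAdmin> LOAD MYSQL QUERY RULES TO RUNTIME;
ProxySQLAdmin> SAVE MYSQL QUERY RULES TO DISK;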

Step III. Add backend database nodes to the ProxySQL Server Pool

The three Percona XtraDB Cluster (PXC)/MySQL servers are configured by adding them to the mysql_servers table:
ProxySQLAdmin> INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight) VALUES ('192.168.1.4',10,3306,1000);
ProxySQLAdmin> INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight) VALUES ('192.168.1.5',10,3306,1000);
ProxySQLAdmin> INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight) VALUES ('192.168.1.6',10,3306,1000);

ProxySQLAdmin> INSERT INTO mysql_galera_hostgroups (writer_hostgroup, backup_writer_hostgroup, reader_hostgroup, offline_hostgroup, active, max_writers, writer_is_also_reader,max_transactions_behind) VALUES (10, 12, 11, 13, 1, 1, 2, 100);
ProxySQLAdmin> LOAD MYSQL SERVERS TO RUNTIME;    # Activate current in-memory MySQL server and replication hostgroup configuration
ProxySQLAdmin> SAVE MYSQL SERVERS TO DISK;       # Save the current in-memory MySQL server and replication hostgroup configuration to disk

Modifying the config at runtime is done through the ProxySQL Admin port of ProxySQL.
Changes will NOT be activated until they are loaded to RUNTIME and any changes which are NOT saved to DISK will NOT be available after a ProxySQL restart.

ProxySQL treats backend instances with read_only = 0 as WRITER instances, so read_only = 0 should only be set on the primary MySQL server, or on all primaries in the case of Percona XtraDB Cluster (PXC)/Group Replication multi-primary setups. All replicas should have read_only = 1 configured.

ProxySQLAdmin> SELECT hostgroup_id, hostname, status FROM runtime_mysql_servers;

Step IV. MySQL Users

After configuring the MySQL server backends in mysql_servers, the next step is to configure the MySQL users: add each user with its username, password and default_hostgroup for a basic configuration.
ProxySQLAdmin> INSERT INTO mysql_users (username,password,default_hostgroup) VALUES ('sbuser','sbpass',10);
ProxySQLAdmin> LOAD MYSQL USERS TO RUNTIME;      # Activate current in-memory MySQL user configuration
ProxySQLAdmin> SAVE MYSQL USERS TO DISK;         # Save the current in-memory MySQL user configuration to disk
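
As a quick sanity check (assuming the user created above), the active user configuration can be read back from the runtime table:
ProxySQLAdmin> SELECT username, default_hostgroup, active FROM runtime_mysql_users;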

ProxySQL Admin (proxysql-admin) is a powerful tool for configuring Percona XtraDB Cluster nodes into ProxySQL. ProxySQL v2 natively supports Percona XtraDB Cluster.
The proxysql-admin tool comes with the ProxySQL package from Percona apt/yum repositories.

Step V. Configure monitoring in MySQL

ProxySQL constantly monitors the configured MySQL backend nodes to determine their health status.
The credentials used for monitoring the backend database servers need to be created in MySQL and also configured in ProxySQL, along with environment-specific check intervals.
ProxySQL runs as a daemon watched by a monitoring process that restarts it in case of a crash to minimize downtime.

First, set the monitoring credentials in the ProxySQL Admin interface:
ProxySQLAdmin> UPDATE global_variables SET variable_value='monitor' WHERE variable_name in ('mysql-monitor_username','mysql-monitor_password');

To allow ProxySQL to access the MySQL backends, a user with the same credentials must also be created on the MySQL database servers.

On the Percona XtraDB Cluster (PXC) nodes, create the monitoring user and grant privileges:
mysql> CREATE USER 'monitor'@'%' IDENTIFIED BY 'monitor';
mysql> GRANT USAGE ON *.* TO 'monitor'@'%';
mysql> select user, host, super_priv, password_expired, plugin from mysql.user;

On the Percona XtraDB Cluster (PXC) nodes or MySQL cluster servers, create the ProxySQL client user:
mysql> CREATE DATABASE sbtest;
mysql> CREATE USER 'sbuser'@'192.168.1.7' IDENTIFIED WITH mysql_native_password BY 'sbpass';
mysql> GRANT ALL ON *.* TO 'sbuser'@'192.168.1.7';


Step VI. Configuring Monitoring in ProxySQL

ProxySQLAdmin> UPDATE global_variables SET variable_value='8.0' WHERE variable_name='mysql-server_version';
ProxySQLAdmin> UPDATE global_variables SET variable_value='2000' WHERE variable_name IN ('mysql-monitor_connect_interval','mysql-monitor_ping_interval','mysql-monitor_read_only_interval');
ProxySQLAdmin> UPDATE global_variables SET variable_value='true' WHERE variable_name='admin-web_enabled';
ProxySQLAdmin> SELECT * FROM global_variables WHERE variable_name LIKE 'mysql-monitor_%';

Changes made to the MySQL Monitor settings in the global_variables table are applied after executing the LOAD MYSQL VARIABLES TO RUNTIME statement.
To persist the configuration changes across restarts, SAVE MYSQL VARIABLES TO DISK must also be executed.
ProxySQLAdmin> LOAD MYSQL VARIABLES TO RUNTIME;    # Activate current in-memory MySQL variable configuration
ProxySQLAdmin> SAVE MYSQL VARIABLES TO DISK;       # Save the current in-memory MySQL variable configuration to disk

Step VII. Backend’s health check

Once the configuration is active, verify the status of the MySQL backends in the monitor database tables in ProxySQL Admin:

ProxySQLAdmin> SELECT * FROM monitor.mysql_server_connect_log ORDER BY time_start_us DESC limit 10;
ProxySQLAdmin> SELECT * FROM monitor.mysql_server_ping_log ORDER BY time_start_us DESC limit 10;

ProxySQL is now ready to serve traffic on port 6033 by default (the MySQL default port 3306 reversed).
# mysql -usbuser -psbpass -h 127.0.0.1 -P6033 --prompt='ProxySQLClient> '
# mysql -usbuser -psbpass -h 127.0.0.1 -P6033 --prompt='ProxySQLClient> ' -e 'select @@hostname,@@port';

If needed, allow ProxySQL service port 6033/tcp in Linux Firewall.
# firewall-cmd --permanent --add-port=6033/tcp
# firewall-cmd --reload

Step VIII. Functional tests / Testing Cluster with sysbench

Sysbench is a useful tool to verify that ProxySQL is functional and benchmark system performance.

On ProxySQL node:
# yum list all | grep sysbench
# yum install sysbench -y

On ProxySQL node:
# mysql -umonitor -pmonitor -h192.168.1.4 -P3306 -e"SELECT @@hostname,@@port"
# mysql -usbuser -psbpass -h192.168.1.5 -P3306 -e"SELECT @@hostname,@@port"

# sysbench /usr/share/sysbench/oltp_insert.lua --mysql-db=sbtest --mysql-host=127.0.0.1 --mysql-port=6033 --mysql-user='sbuser' --mysql-password='sbpass' --db-driver=mysql --threads=4 --tables=20 --table-size=15 prepare
# sysbench /usr/share/sysbench/oltp_read_write.lua --mysql-db=sbtest --mysql-host=127.0.0.1 --mysql-port=6033 --mysql-user='sbuser' --mysql-password='sbpass' --db-driver=mysql --threads=4 --tables=20 --table-size=15 --time=200 --report-interval=20 run
# sysbench /usr/share/sysbench/oltp_point_select.lua --mysql-db=sbtest --mysql-host=127.0.0.1 --mysql-port=6033 --mysql-user='sbuser' --mysql-password='sbpass' --db-driver=mysql --threads=4 --tables=20 --table-size=15 --time=200 --report-interval=20 run
# sysbench /usr/share/sysbench/oltp_update_index.lua --mysql-db=sbtest --mysql-host=127.0.0.1 --mysql-port=6033 --mysql-user='sbuser' --mysql-password='sbpass' --db-driver=mysql --threads=4 --tables=20 --table-size=15 --time=200 --report-interval=20 run
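
When testing is finished, the sysbench test tables can be dropped with the cleanup command (same connection parameters as above):
# sysbench /usr/share/sysbench/oltp_read_write.lua --mysql-db=sbtest --mysql-host=127.0.0.1 --mysql-port=6033 --mysql-user='sbuser' --mysql-password='sbpass' --db-driver=mysql --tables=20 cleanup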

Step IX. Verifying the ProxySQL Configuration/Automatic failover

From the command line of one of the MySQL servers, stop the MySQL process to simulate a failure.
# systemctl stop mysql

We can check that by querying the runtime_mysql_servers table from the ProxySQL administration prompt.
ProxySQLAdmin> SELECT hostgroup_id, hostname, status FROM runtime_mysql_servers;

The node we stopped is now shunned, which means it’s temporarily deemed inaccessible, so all traffic will be distributed across the two remaining online nodes.

Switch back to the MySQL server and bring the node back up.
# systemctl start mysql

Wait a moment, then query the runtime_mysql_servers table from the ProxySQL administration prompt again.
ProxySQLAdmin> SELECT hostgroup_id, hostname, status FROM runtime_mysql_servers;

Step X. Query statistics
ProxySQL collects a lot of real-time statistics in the stats schema; each table provides specific information about the behavior of ProxySQL and the workload being processed:

ProxySQLAdmin> SELECT * FROM stats_mysql_global;
ProxySQLAdmin> SELECT * FROM stats_mysql_connection_pool order by hostgroup,srv_host ;
ProxySQLAdmin> SELECT * FROM stats_mysql_processlist;
ProxySQLAdmin> SELECT * FROM stats_mysql_query_digest;
ProxySQLAdmin> SELECT * FROM stats_mysql_commands_counters ORDER BY Total_cnt DESC;
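
If you want to clear the digest statistics between test runs, ProxySQL also exposes a companion table that returns the current contents and resets them in one step (use with care):
ProxySQLAdmin> SELECT * FROM stats_mysql_query_digest_reset;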

ProxySQL is a high-performance, high-availability, protocol-aware proxy for MySQL and its forks, such as Percona Server and MariaDB.
ProxySQL supports thousands of concurrent connections.


Related MySQL Articles: PXC 8 (Percona XtraDB Cluster) Configuration/Installation  How to use tools in Percona Toolkit

May 15, 2022

Percona Monitoring and Management (PMM)

Percona Monitoring and Management (PXC Part 5)


Percona Monitoring and Management (PMM) is Percona’s open-source tool for monitoring and alerting on database performance and the components that contribute to it.
PMM is a free tool that monitors MySQL (Percona Server and MySQL Community Edition), PostgreSQL, Amazon RDS/Aurora, MongoDB (Percona Server and MongoDB Community Edition), Percona XtraDB/Galera Cluster, ProxySQL, and Linux.

Part 4 of the article - Percona Xtrabackup Backup & Recovery

Key Benefits of Percona Monitoring & Management
  • Reduced complexity and simplified management.
  • Metrics Monitoring.
  • Optimized database performance.
  • Improved data security.

PMM (Percona Monitoring and Management) is made up of a server and clients.
The PMM Client is installed on the database servers and is used to collect metrics. It contains technology-specific exporters (which collect and export data) and an admin interface that makes management of the PMM platform very simple.
The PMM Server is a pre-integrated unit (Docker, VM or AWS AMI) that contains four components that gather the metrics from the exporters on the PMM Client(s).
PMM server runs as a Docker container or Virtual Machine.

The PMM server contains Consul, Grafana, Prometheus and a Query Analytics Engine.
<ip_address_PMM_server> - PMM login (web management console), e.g. https://192.168.1.111/ (default user name and password: admin / admin)
<ip_address_PMM_server>/prometheus/ - Prometheus
<ip_address_PMM_server>/orchestrator/ - Orchestrator
<ip_address_PMM_server>/graph/ - Grafana
<ip_address_PMM_server>:8500/ui/#/dc1/  - Consul

Installing PMM Server (2.20)

From the Percona website, download the pmm-server-2.20.0.ova file for VirtualBox and import it into your local VirtualBox as a VM instance.

Configuration of the PMM Server OVA file for VirtualBox (VM specifications):

  Component      Value
  OS             CentOS 7.9 (64-bit)
  CPU            1
  Base memory    4096 MB
  Disks          LVM, 2 physical volumes
  Disk 1 (sda)   VMDK (SCSI, 40 GB)
  Disk 2 (sdb)   VMDK (SCSI, 400 GB)

We need to configure our database for monitoring. For Percona XtraDB Cluster, log in to one of the nodes and issue the commands below using the mysql CLI.
MySQL> CREATE USER 'pmm'@'localhost' IDENTIFIED BY 'pass' WITH MAX_USER_CONNECTIONS 10;
MySQL> GRANT SELECT, PROCESS, SUPER, REPLICATION CLIENT, RELOAD ON *.* TO 'pmm'@'localhost';

Installing the PMM Client (version 2)

If you are running an RPM-based Linux distribution, use the yum package manager to install PMM Client from the official Percona software repository.
# yum list all | grep pmm
# yum install pmm2-client -y

If you are running a DEB-based Linux distribution, use the apt package manager to install PMM Client from the official Percona software repository.
$ apt-get install pmm2-client

PMM (Percona Monitoring and Management) Admin utility

# pmm-admin
# pmm-admin --version
# pmm-admin list
# pmm-admin status

# pmm-admin inventory list services
# pmm-admin inventory list services --service-type=mysql

# pmm-admin config --server 192.168.100.1
# pmm-admin config --server 192.168.2.35 --server-insecure-ssl --server-user admin --server-password admin --force
# pmm-admin config --server-insecure-tls --server-url=https://admin:admin@192.168.1.111:443

# pmm-admin add mysql
# pmm-admin add mysql --username=pmm --password=pass --query-source=perfschema
# pmm-admin add mysql --query-source=perfschema --username=root --password=root --host=192.168.66.35 --port=3317 --service-name=mysql-192.168.66.35-3317

# pmm-admin add proxysql --username=admin --password=admin
# pmm-admin add proxysql --username=pmm --password=pmm --service-name=my-new-proxysql --host=127.0.0.1 --port=6032
# pmm-admin add proxysql --username=root --password=root --host=192.168.66.35 --port=16032 --service-name=proxysql-192.168.66.35-16032
# pmm-admin add proxysql:metrics --dsn "admin:admin@tcp(localhost:6032)/"

# pmm-admin add mongodb
# pmm-admin add mongodb --host=192.168.66.35 --port=27017 --service-name=mongodb-192.168.66.35-27017
# pmm-admin add mongodb --username=root --password=root --host=192.168.66.35 --port=37017 --service-name=mongodb-192.168.66.35-37017
# pmm-admin add mongodb --cluster satya-mongo-cluster --replication-set=satya_rep --username=root --password=root --host=192.168.66.35 --port=37023 --service-name=mongodb-192.168.66.35-37023-satya_rep

# pmm-admin add postgresql --username=pmm-managed --password=pmm-managed --host=127.0.0.1 --port=5432 --service-name=PG-127.0.0.1-5432

# pmm-admin check-network
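
Services that are no longer needed can be removed with pmm-admin remove, followed by the service type and service name (the name below is just an example):
# pmm-admin remove mysql mysql-192.168.66.35-3317
# pmm-admin list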

PMM helps to improve the performance of databases, simplify their management, and strengthen their security. It is efficient, quick to set up, and easy to use.
PMM is used to view, monitor, and manage all open-source databases in one place, and its query and metric information enables you to rapidly find, solve, and prevent issues with scaling, bottlenecks, and potential outages.

Visualizing Percona MySQL/XtraDB Cluster Metrics

We can now visualize the metrics on our PMM server: head to the web console and navigate to Dashboards.

PMM server dashboard Percona Monitoring and Management (PMM)



PXC Cluster Dashboard:
PXC Cluster Dashboard in Percona Monitoring and Management


ProxySQL Dashboard:

ProxySQL Dashboard in PMM


Related Percona Articles:  ProxySQL installation and configuration    Percona Xtrabackup Backup & Recovery

April 17, 2022

Percona Xtrabackup Backup & Recovery

Percona Xtrabackup (PXC Part 4)


Percona XtraBackup is an open-source hot backup utility for MySQL-based servers that doesn’t lock the database during the backup.
Features include hot, non-locking backups for the InnoDB storage engine, incremental backups, streaming and parallel-compressed backups, throttling based on the number of I/O operations per second, and more.


Percona Xtrabackup (8.0) installation from Repository

Percona provides repositories for yum (RPM packages for Red Hat, CentOS and Amazon Linux AMI) and apt (.deb packages for Ubuntu and Debian) for software such as Percona Server for MySQL, Percona XtraBackup, and Percona Toolkit.

yum list all | grep xtrabackup
dnf install percona-xtrabackup-80 -y

Percona XtraBackup can backup data from InnoDB, XtraDB, MyISAM tables on MySQL 8.0 servers as well as Percona Server for MySQL with XtraDB, Percona Server for MySQL 8.0, and Percona XtraDB Cluster 8.0.

Creating Backup

To create a backup, run xtrabackup with the --backup option and use the --target-dir option to specify where the backup will be stored.
xtrabackup --backup --target-dir=/data/backups/
xtrabackup --backup --user=satya --password=Secret9 --target-dir=/path/to/backupdir
xtrabackup --backup --user=satya --password=Secret9 --slave-info --target-dir=/path/to/backupdir

Preparing Backup

After making a backup with the --backup option, you need to prepare it in order to restore it.
xtrabackup --prepare --target-dir=/data/backups/
xtrabackup --prepare --user=satya --password=Secret9 --target-dir=/path/to/backupdir

Restoring Backup

For convenience, the xtrabackup binary has the --copy-back option, which copies the backup into the server's datadir. The backup must be prepared before it can be restored.
xtrabackup --copy-back --target-dir=/data/backups/
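
The restore must be done on a stopped server with an empty datadir; a typical sequence (paths and ownership are illustrative, adjust them to your installation) looks like this:
systemctl stop mysql
mv /var/lib/mysql /var/lib/mysql.old    # or otherwise empty the datadir
xtrabackup --copy-back --target-dir=/data/backups/
chown -R mysql:mysql /var/lib/mysql
systemctl start mysql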

Incremental Backup

Percona XtraBackup supports incremental backups, which copy only the data that has changed since the last backup.
DBAs can take many incremental backups between full backups, for example a full backup once a week with an incremental backup every day, or a full backup every day with incremental backups every hour.

To make an incremental backup, begin with a full backup as usual. The xtrabackup binary writes a file called xtrabackup_checkpoints into the backup’s target directory.
xtrabackup --backup --target-dir=/data/backups/full
xtrabackup --backup --target-dir=/root/backups/full -uroot -pSecret9

Now that you have a full backup, you can make an incremental backup based on it.
xtrabackup --backup --target-dir=/data/backups/incl --incremental-basedir=/data/backups/full
xtrabackup --backup --target-dir=/root/backups/incl --incremental-basedir=/root/backups/full -uroot -pSecret9

Preparing Incremental Backups

The --prepare step for incremental backups is not the same as for full backups. Beginning with the full backup, you can prepare it, and then apply the incremental differences to it.
xtrabackup --prepare --apply-log-only --target-dir=/data/backups/full
xtrabackup --prepare --apply-log-only --target-dir=/root/backups/full -uroot -pSecret9

Apply the first incremental backup to the full backup; preparing the second/third incremental backups follows the same process.
xtrabackup --prepare --apply-log-only --target-dir=/data/backups/full --incremental-dir=/data/backups/incl
xtrabackup --prepare --apply-log-only --target-dir=/root/backups/full --incremental-dir=/root/backups/incl -uroot -pSecret9

Restoring Incremental Backups

Once prepared, incremental backups are the same as full backups and they can be restored in the same way.

.. shutdown database ..
xtrabackup --copy-back --target-dir=/root/backups/
xtrabackup --copy-back --target-dir=/data/backups/

Related MySQL Articles: Percona Monitoring and Management (PMM)   How to use tools in Percona Toolkit

August 20, 2021

Installing Percona XtraDB Cluster (PXC) 8

Percona XtraDB Cluster (PXC) 8.0 Installation/Configuration - Part 1


Here are the installation, deployment and configuration steps for PXC 8 (Percona XtraDB Cluster) for MySQL with Galera replication, on three nodes in multi-master replication, on Oracle Linux (OEL)/Red Hat (RHEL)/CentOS/Ubuntu/Debian.

Multi-master (active-active) replication allows write operations on any node; even if one node fails, we can continue working on the other nodes.

  Role         Hostname     IP address
  master 1     percona1     192.168.1.4
  master 2     percona2     192.168.1.5
  master 3     percona3     192.168.1.6

Part 2 of the article - ProxySQL 2 configuration/installation
Part 4 of the article - Percona Xtrabackup Backup & Recovery
Part 5 of the article - Percona Monitoring and Management


Step I. Common Percona XtraDB Cluster installation steps (on all nodes)

Perform the below 6 steps on all nodes in the PXC cluster.

i) Update /etc/hosts with all 3 node details
# vi /etc/hosts
192.168.1.4 percona1 percona1.localdomain
192.168.1.5 percona2 percona2.localdomain
192.168.1.6 percona3 percona3.localdomain

ii) Open firewall ports
Percona requires the following ports for the cluster nodes. Create the appropriate firewall rules on Percona nodes.
3306 (MySQL client connections)
4444 (State Snapshot Transfer, SST)
4567 (Galera cluster replication traffic)
4568 (Incremental State Transfer, IST)

# firewall-cmd --permanent --add-port={3306/tcp,4444/tcp,4567/tcp,4568/tcp}
# firewall-cmd --reload
# firewall-cmd --list-all

iii) Enable SELinux (Security-Enhanced Linux)
Make sure SELinux is enabled on all Percona nodes.

# cat /etc/sysconfig/selinux
# sestatus

/etc/selinux/config file controls if SELinux is disabled or enabled, and if enabled, whether SELinux operates in enforcing mode or permissive mode.

iv) Add Percona repository
Add the Percona repository to your Linux distribution so we can download the latest version of PXC.
For Ubuntu/Debian operating systems, please use apt-get instead of yum/dnf.

# yum install -y https://repo.percona.com/yum/percona-release-latest.noarch.rpm
# percona-release enable-only pxc-80 release
# percona-release enable tools release

v) Installing PXC (Percona XtraDB Cluster) from yum/dnf repo
Install the Percona XtraDB Cluster software, including all dependencies.

# yum module disable mysql -y
# yum list all | grep percona
# yum install percona-xtradb-cluster

vi) Securing the MySQL deployment
We need to perform a post-installation task to secure the MySQL deployment. Run the following command and answer the questions accordingly:

# mysql_secure_installation



Step II. 
Configuring Percona XtraDB Cluster on the first node

i) Initial set up
Start the MySQL service for PXC node1.
# systemctl start mysql

To get the temporary MySQL root password, run the below command.
# grep password /var/log/mysqld.log

ii) Change root password
# mysql -u root -p
Enter the password you got in the above step.

Change the root user's password in the MySQL database.
MySQL> ALTER USER 'root'@'localhost' IDENTIFIED BY 'Percona@pxc8';

Verify wsrep parameters in the database.
MySQL> show variables like '%wsrep%';

Stop MySQL server on the first node.
# systemctl stop mysql

iii) Configure PXC write-set replication (wsrep) settings
Update my.cnf with custom values according to your requirement.

# cp /etc/my.cnf /etc/my.cnf.orig
# vi /etc/my.cnf

# Cluster name
wsrep_cluster_name=democluster
# Cluster connection URL contains the IPs of node#1, node#2 and node#3
wsrep_cluster_address=gcomm://192.168.1.4,192.168.1.5,192.168.1.6
# First node hostname
wsrep_node_name=percona1
# First node IP address
wsrep_node_address=192.168.1.4
# pxc_encrypt_cluster_traffic is ON by default, in PXC 8. I don't have encryption keys/certificates, so turning OFF encryption
pxc_encrypt_cluster_traffic=OFF

The initialization parameter wsrep_sst_auth is NOT required in Percona XtraDB Cluster version 8, and there is no need to create a replication user, because Percona XtraDB Cluster 8.0 automatically creates the system user mysql.pxc.internal.session. During SST, the user mysql.pxc.sst.user and the role mysql.pxc.sst.role are created on the donor node.

iv) Bootstrap the cluster
The below command will start the first node and bootstrap the cluster.
# systemctl start mysql@bootstrap.service
# systemctl status mysql@bootstrap.service
# tail -f /var/log/mysqld.log



Step III. Configuring Percona XtraDB Cluster on Second/Third node

i) Configure PXC write-set replication (wsrep) settings
Update my.cnf with custom values as per your requirement, on all other nodes.

# cp /etc/my.cnf /etc/my.cnf.orig
# vi /etc/my.cnf

# Cluster name
wsrep_cluster_name=democluster
# Cluster connection URL contains the IPs of node#1, node#2 and node#3
wsrep_cluster_address=gcomm://192.168.1.4,192.168.1.5,192.168.1.6
# Second/Third node hostname
wsrep_node_name=percona2            # percona3, on the third node
# Second/Third node IP address
wsrep_node_address=192.168.1.5      # 192.168.1.6, on the third node
# pxc_encrypt_cluster_traffic is ON by default, in PXC 8. I don't have encryption keys/certificates, so turning OFF encryption
pxc_encrypt_cluster_traffic=OFF

ii) Start MySQL to join the cluster
# systemctl start mysql
# systemctl status mysql

# tail -f /var/log/mysqld.log

After MySQL server has been started, the new nodes should receive SST automatically. State Snapshot Transfer (SST) is a full data copy from one node (donor) to the joining node (joiner).

iii) Bouncing MySQL database server on first node
The first node was started in bootstrap mode; you can stop it and start it again in normal mode.

# systemctl stop mysql@bootstrap.service
# systemctl start mysql
# systemctl status mysql

iv) Verify cluster setup
Cluster status can now be checked on all nodes.

# mysql -uroot -pPercona@pxc8 -e "select * from mysql.wsrep_cluster_members;"
+--------------------------------------+--------------------------------------+-----------+-----------------------+
| node_uuid                            | cluster_uuid                         | node_name | node_incoming_address |
+--------------------------------------+--------------------------------------+-----------+-----------------------+
| 275888d3-fde3-11eb-ba65-6b7198e764f9 | d6a6b71a-fda4-11eb-89f2-f7703766b53c | percona1  | 192.168.1.4:3306      |
| 41261eff-fde9-11eb-b83c-963c55b86b90 | d6a6b71a-fda4-11eb-89f2-f7703766b53c | percona3  | 192.168.1.6:3306      |
| cb8695c7-fde3-11eb-b8a1-d6cf0e345170 | d6a6b71a-fda4-11eb-89f2-f7703766b53c | percona2  | 192.168.1.5:3306      |
+--------------------------------------+--------------------------------------+-----------+-----------------------+

The above output shows that all 3 nodes are part of the cluster.
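
A couple of additional status variables are handy for a quick health check; on any node, wsrep_cluster_size should report 3 and wsrep_local_state_comment should report Synced:
# mysql -uroot -pPercona@pxc8 -e "SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size';"
# mysql -uroot -pPercona@pxc8 -e "SHOW GLOBAL STATUS LIKE 'wsrep_local_state_comment';"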

v) Verify default parameter values
By default, you will see the below parameter values (along with other default parameters).

# cat /etc/my.cnf
wsrep_provider=/usr/lib64/galera4/libgalera_smm.so
wsrep_sst_method=xtrabackup-v2
pxc_strict_mode=ENFORCING
binlog_format=ROW
innodb_autoinc_lock_mode=2

Step IV. Testing Replication on Percona XtraDB Cluster

To test the replication, let us create a new database on the second node, create a table for that database on the third node, add one record to the table on the first node, and verify on the third or second node.

i) Create a new database on the second node

MySQL@percona2> CREATE DATABASE satyadb;

ii) Create a table on the third node

MySQL@percona3> USE satyadb;
MySQL@percona3> CREATE TABLE employee (emp_id INT PRIMARY KEY, emp_name VARCHAR(50));

iii) Insert records on the first node

MySQL@percona1> INSERT INTO satyadb.employee VALUES (111, 'percona');

iv) Retrieve all the rows from the table, on the second or third node:
MySQL@percona2> SELECT * FROM satyadb.employee;
+--------+----------+
| emp_id | emp_name |
+--------+----------+
|    111 | percona  |
+--------+----------+


Percona XtraDB Cluster 8.0 is based on Percona Server for MySQL 8.0, embedded with the Galera write-set replication API and the Galera replication library (version 4), to form highly available active-active (master-master) replication for MySQL-based database servers.
Percona XtraDB Cluster, a database clustering solution, ensures high availability, prevents downtime and data loss, and provides linear scalability.

Related MySQL Articles:  ProxySQL 2 installation setup   PMM server and client installation