Posted over 3 years ago by MyDBOPS
To copy the data of particular columns of a table to another table or server, one option is to export the data as CSV and import it back into a different table. But when the table is large and only a subset of its data is needed, exporting and importing the required data puts a heavy load on the server because of the large table scan.
To overcome this, we have pt-archiver, which can copy the data from the source table to the destination either as a whole or only for the required columns. We can also do this in a controlled manner, so there is no performance impact even during production hours.
Source table structure:
mysql> show create table source\G
*************************** 1. row ***************************
Table: source
Create Table: CREATE TABLE `source` (
`id` int unsigned NOT NULL AUTO_INCREMENT,
`FIRSTNAME` varchar(30) NOT NULL,
`LASTNAME` varchar(30) NOT NULL,
`email` varchar(50) DEFAULT NULL,
`reg_date` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=11 DEFAULT CHARSET=latin1
1 row in set (0.00 sec)
Destination table structure:
mysql> show create table destination\G
*************************** 1. row ***************************
Table: destination
Create Table: CREATE TABLE `destination` (
`firstname` varchar(30) DEFAULT NULL,
`lastname` varchar(30) DEFAULT NULL,
`email` varchar(50) DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=latin1
1 row in set (0.00 sec)
I have inserted only 10 sample records into the source table using mysql_random_data_load.
mysql> select count(*) from source;
+----------+
| count(*) |
+----------+
| 10 |
+----------+
1 row in set (0.00 sec)
While using pt-archiver to copy only the required columns, we need to make sure we specify the options below, because by default pt-archiver performs a column-level check between the source and the destination.
--columns – the columns that need to be copied
--no-check-columns – skips the column-level check between source and destination
The --columns option is case sensitive: we need to mention the column names in lower case, not upper case. If we mention a column name in upper case, the default value for that column will be inserted instead. Likewise, if any column is not mentioned for the copy process, the default value will be loaded for that column.
[vagrant@centos11 ~]$ pt-archiver --source h=localhost,D=archiver,t=source --no-delete --dest h=localhost,D=archiver,t=destination --no-safe-auto-increment --where '1=1' --limit 1 --txn-size 1 --progress 1 --statistics --no-check-charset --columns firstname,lastname --no-check-columns
perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings:
LANGUAGE = (unset),
LC_ALL = (unset),
LC_CTYPE = "UTF-8",
LANG = "en_US.UTF-8"
are supported and installed on your system.
perl: warning: Falling back to the standard locale ("C").
TIME ELAPSED COUNT
2021-11-23T20:57:12 0 0
2021-11-23T20:57:12 0 1
2021-11-23T20:57:12 0 2
2021-11-23T20:57:12 0 3
2021-11-23T20:57:12 0 4
2021-11-23T20:57:12 0 5
2021-11-23T20:57:12 0 6
2021-11-23T20:57:12 0 7
2021-11-23T20:57:12 0 8
2021-11-23T20:57:12 0 9
2021-11-23T20:57:12 0 10
2021-11-23T20:57:12 0 10
Started at 2021-11-23T20:57:12, ended at 2021-11-23T20:57:12
Source: D=archiver,h=localhost,t=source
Dest: D=archiver,h=localhost,t=destination
SELECT 10
INSERT 10
DELETE 0
Action Count Time Pct
commit 22 0.0318 57.90
select 11 0.0121 22.00
inserting 10 0.0062 11.28
other 0 0.0048 8.82
mysql> select * from destination;
+--------------------------------+--------------------------------+-------+
| firstname | lastname | email |
+--------------------------------+--------------------------------+-------+
| incidunt consectetur aut autem | accusamus et et provident quas | NULL |
| et ab doloribus natus. | veniam tempora odit facilis ea | NULL |
| eveniet aut qui laborum aut er | sequi consectetur eum qui vero | NULL |
| quasi qui natus optio unde. | voluptatem dolorem facere ut q | NULL |
| sunt voluptate qui ut ab a. | nihil et deserunt quis eligend | NULL |
| est ab ducimus odio sapiente a | dolor aut officiis nostrum del | NULL |
| corrupti voluptatem debitis ne | rerum dolor accusantium qui ne | NULL |
| ipsam provident exercitationem | similique rem fuga velit dicta | NULL |
| odit ex quas unde repudiandae | nostrum accusamus officia et. | NULL |
| impedit hic ratione quia minim | sed non beatae perferendis cup | NULL |
+--------------------------------+--------------------------------+-------+
10 rows in set (0.00 sec)
We have gone through the precautions that need to be taken while copying only the required columns using pt-archiver. Also, go through the --dry-run output carefully before executing the actual run.
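For example, a dry run of the same copy could look like the minimal sketch below, using the same connection details as above; --dry-run is a standard pt-archiver option that only prints the queries it would execute and exits without touching any data.
pt-archiver --source h=localhost,D=archiver,t=source --no-delete --dest h=localhost,D=archiver,t=destination --where '1=1' --limit 1 --txn-size 1 --columns firstname,lastname --no-check-columns --dry-run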
|
Posted over 3 years ago by Joshua Otwell
Hey hey! Over the weekend I published another issue of OpenLamp.tech, the newsletter for PHP/MySQL developers. Come and enjoy the great content!!!
As always, I’ve curated some great reads this week for us all. I’d love it if you could help spread the word by sharing these posts with others who will value the newsletter as well.
Interested?
The Newsletter for PHP and MySQL Developers
We have a range of posts in this week’s issue covering:
A huge list of 52 SQL optimization strategies.
Installing WordPress on Oracle Linux.
A look at PHP’s isset() language construct.
Fixing the “too many redirects” error in WordPress.
Don’t wait on me to repost here each Monday. That’s 3 whole days you are waiting to read OpenLamp.tech, and you don’t want to do that, right?
Subscribe (it’s free) and have each new issue delivered directly to your inbox.
Anyways, here is the full issue #5. Enjoy, and thank you for reading!
Are you a Medium member? If so, and if you prefer the Medium platform, you can receive an email notification each time I publish a blog post. Not a member? No worries! Use my sign-up link (I will get a commission at no extra cost to you) and join. I really enjoy reading all the great content there, and I know you will too!!!
Don’t forget that each month, OpenLamp.tech publishes a deep-dive featured piece. Last month’s was 5 MySQL String Functions You Should Know. This month will cover MySQL Database Metadata with CodeIgniter 4 Query Builder.
The Newsletter for PHP and MySQL Developers
Self-Promotion:
If you enjoy the content written here, by all means, share this blog and your favorite post(s) with others who may benefit from or like it as well. Since coffee is my favorite drink, you can even buy me one if you would like!
How can I help you?
Are you thinking of starting up a blog? I use WordPress for my blog. Let’s both save money on the plans offered.
Grab a Gmail HTML Email Signature template from my Etsy shop and make your emails pop and stand out.
Need hosting for your next web application or WordPress site? I use and highly recommend Hostinger. They have great pricing and service.
Disclosure: Some of the services and products links in this post are affiliate links. At no additional cost to you, should you make a purchase by clicking through one of them, I will receive a commission.
The post OpenLamp.tech issue #5 appeared first on Digital Owl's Prose.
|
Posted over 3 years ago by parvez alam
In this post, we will explain how to create your own invoice system with PHP & MySQL. It’s also called a billing management system; we’ll create invoices, list all invoices, and print invoices. I am extending my previous tutorials by adding an invoice management system. What’s an invoice management system? Because the majority of transactions are now done […]
The post Invoice Management System Using PHP & MySQL appeared first on Phpflow.com.
|
Posted over 3 years ago by Michael McLaughlin
In 2009, I showed an example of how to use the MySQL LOAD DATA INFILE command. Last year, I updated the details to reset the secure_file_priv variable that the LOAD DATA INFILE command requires, but you can avoid that approach with a simple Python 3 program like the one in this example. You also can use MySQL Shell’s new parallel table import feature, introduced in 8.0.17, as noted in a comment on this blog post.
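For context, here is a minimal sketch of the LOAD DATA INFILE approach the Python script avoids; the file path is a hypothetical example and must sit under the directory named by secure_file_priv, and the avenger table from step 1 must already exist.
# Load the CSV server-side; this is what requires the secure_file_priv setup.
mysql -ustudent -pstudent sakila -e "LOAD DATA INFILE '/var/lib/mysql-files/avenger.csv' INTO TABLE avenger FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n' (first_name, last_name, avenger_name);"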
The example requires creating an avenger table and an avenger.csv file, writing a readWriteFile.py Python script, running that script, and running a query that validates the insertion of the avenger.csv file’s data into the avenger table. Here is the complete code in five steps, using the sakila demonstration database:
Create the avenger table with the create_avenger.sql script:
-- Conditionally drop the avenger table.
DROP TABLE IF EXISTS avenger;
-- Create the avenger table.
CREATE TABLE avenger
( avenger_id int unsigned PRIMARY KEY AUTO_INCREMENT
, first_name varchar(20)
, last_name varchar(20)
, avenger_name varchar(20))
ENGINE=InnoDB
AUTO_INCREMENT=1001
DEFAULT CHARSET=utf8mb4
COLLATE=utf8mb4_0900_ai_ci;
Create the avenger.csv file with the following data:
Anthony,Stark,Iron Man
Thor,Odinson,God of Thunder
Steven,Rogers,Captain America
Bruce,Banner,Hulk
Clinton,Barton,Hawkeye
Natasha,Romanoff,Black Widow
Peter,Parker,Spiderman
Steven,Strange,Dr. Strange
Scott,Lange,Ant-man
Hope,van Dyne,Wasp
Create the readWriteFile.py Python 3 script:
# Import libraries.
import mysql.connector
from mysql.connector import errorcode
from csv import reader
# ============================================================
# Use a try-catch block to manage the connection.
# ============================================================
try:
    # Open connection.
    cnx = mysql.connector.connect( user='student'
                                 , password='student'
                                 , host='127.0.0.1'
                                 , database='sakila')
    # Create cursor.
    cursor = cnx.cursor()
    # Open file in read mode and pass the file object to reader.
    with open('avenger.csv', 'r') as read_obj:
        csv_reader = reader(read_obj)
        # Declare the dynamic statement.
        stmt = ("INSERT INTO avenger "
                "(first_name, last_name, avenger_name) "
                "VALUES "
                "(%s, %s, %s)")
        # Iterate over each row in the csv using the reader object.
        for row in csv_reader:
            cursor.execute(stmt, row)
        # Commit the writes.
        cnx.commit()
        # Close the cursor.
        cursor.close()
# Handle exceptions and close the connection.
except mysql.connector.Error as e:
    if e.errno == errorcode.ER_ACCESS_DENIED_ERROR:
        print("Something is wrong with your user name or password")
    elif e.errno == errorcode.ER_BAD_DB_ERROR:
        print("Database does not exist")
    else:
        print("Error code:", e.errno)         # error number
        print("SQLSTATE value:", e.sqlstate)  # SQLSTATE value
        print("Error message:", e.msg)        # error message
# Close the connection when the try block completes.
else:
    cnx.close()
Run the readWriteFile.py file:
python3 readWriteFile.py
Query the avenger table:
SELECT * FROM avenger;
It returns:
+------------+------------+-----------+-----------------+
| avenger_id | first_name | last_name | avenger_name |
+------------+------------+-----------+-----------------+
| 1001 | Anthony | Stark | Iron Man |
| 1002 | Thor | Odinson | God of Thunder |
| 1003 | Steven | Rogers | Captain America |
| 1004 | Bruce | Banner | Hulk |
| 1005 | Clinton | Barton | Hawkeye |
| 1006 | Natasha | Romanoff | Black Widow |
| 1007 | Peter | Parker | Spiderman |
| 1008 | Steven | Strange | Dr. Strange |
| 1009 | Scott | Lange | Ant-man |
| 1010 | Hope | van Dyne | Wasp |
+------------+------------+-----------+-----------------+
10 rows in set (0.00 sec)
|
Posted over 3 years ago by Marco Tusa
Percona XtraDB Cluster (PXC) 8.0.25 has introduced a new option to perform online schema modifications: NBO (Non-Blocking Operation).
When using PXC, the cluster relies on the wsrep_OSU_method parameter to define the Online Schema Upgrade (OSU) method the node uses to replicate DDL statements.
Until now, we normally had three options:
Use Total Isolation Order (TOI, the default)
Use Rolling Schema Upgrade (RSU)
Use Percona’s online schema change tool (TOI + PTOSC)
Each method has some positive and negative aspects. TOI will lock the whole cluster from accepting data modifications for the entire time it takes to perform the DDL operation. RSU will misalign the schema definition between the nodes, and in any case, the node performing the DDL operation is still locked. Finally, TOI+PTOSC relies on creating triggers and copying data, so in some cases this can be very impactful.
The new Non-Blocking Operation (NBO) method is meant to help reduce the impact on the cluster and make it easier to perform some DDL operations.
At the moment, only a limited set of operations is supported with NBO:
ALTER INDEX
CREATE INDEX
DROP INDEX
Any other command will result in an error message ER_NOT_SUPPORTED_YET.
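As a quick illustration (a hypothetical sketch reusing the table from the test below), a DDL outside that list should be rejected while the session uses NBO:
# Expected to fail with ER_NOT_SUPPORTED_YET instead of being replicated.
mysql -uroot -D windmills_s -e "SET SESSION wsrep_OSU_method=NBO; ALTER TABLE windmills_test ADD COLUMN note varchar(10);"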
But let us see how it works and what the impact is, while also comparing it with the default method, TOI.
We will work with the following connections:
1 – to perform the DDL
2 – to insert data into the table being altered
3 – to insert data into a different table
4-5 – to check the operations on the other two nodes
PXC must be at least version 8.0.25-15.1.
The table we will modify is:
DC1-1(root@localhost) [windmills_s]>show create table windmills_test\G
*************************** 1. row ***************************
Table: windmills_test
Create Table: CREATE TABLE `windmills_test` (
`id` bigint NOT NULL AUTO_INCREMENT,
`uuid` char(36) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL,
`millid` smallint NOT NULL,
`kwatts_s` int NOT NULL,
`date` date NOT NULL,
`location` varchar(50) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL,
`active` tinyint NOT NULL DEFAULT '1',
`time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
`strrecordtype` char(3) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL,
PRIMARY KEY (`id`),
KEY `IDX_millid` (`millid`,`active`),
KEY `IDX_active` (`id`,`active`),
KEY `kuuid_x` (`uuid`),
KEY `millid_x` (`millid`),
KEY `active_x` (`active`)
) ENGINE=InnoDB AUTO_INCREMENT=8199260 DEFAULT CHARSET=utf8mb3 COLLATE=utf8_bin ROW_FORMAT=DYNAMIC
1 row in set (0.00 sec)
It contains ~5 million rows.
DC1-1(root@localhost) [windmills_s]>select count(*) from windmills_test;
+----------+
| count(*) |
+----------+
| 5002909 |
+----------+
1 row in set (0.44 sec)
The commands:
Connection 1:
ALTER TABLE windmills_test ADD INDEX idx_1 (`uuid`,`active`), ALGORITHM=INPLACE;
ALTER TABLE windmills_test drop INDEX idx_1, ALGORITHM=INPLACE;
Connection 2:
while [ 1 = 1 ];do da=$(date +'%s.%3N');/opt/mysql_templates/PXC8P/bin/mysql --defaults-file=./my.cnf -uroot -D windmills_s -e "insert into windmills_test select null,uuid,millid,kwatts_s,date,location,active,time,strrecordtype from windmills7 limit 1;" -e "select count(*) from windmills_s.windmills_test;" > /dev/null;db=$(date +'%s.%3N'); echo "$(echo "($db - $da)"|bc)";sleep 1;done
Connection 3:
while [ 1 = 1 ];do da=$(date +'%s.%3N');/opt/mysql_templates/PXC8P/bin/mysql --defaults-file=./my.cnf -uroot -D windmills_s -e "insert into windmills8 select null,uuid,millid,kwatts_s,date,location,active,time,strrecordtype from windmills7 limit 1;" -e "select count(*) from windmills_s.windmills_test;" > /dev/null;db=$(date +'%s.%3N'); echo "$(echo "($db - $da)"|bc)";sleep 1;done
Connections 4-5:
while [ 1 = 1 ];do echo "$(date +'%T.%3N')";/opt/mysql_templates/PXC8P/bin/mysql --defaults-file=./my.cnf -uroot -D windmills_s -e "show full processlist;"|egrep -i -e "(windmills_test|windmills_s)"|grep -i -v localhost;sleep 1;done
Operations:
Start the inserts from connections 2 and 3.
Start the monitoring commands in connections 4-5 on the other nodes.
Execute:
For TOI:
DC1-1(root@localhost) [windmills_s]>SET SESSION wsrep_OSU_method=TOI;
For NBO:
DC1-1(root@localhost) [windmills_s]>SET SESSION wsrep_OSU_method=NBO;
For both:
DC1-1(root@localhost) [windmills_s]>ALTER TABLE windmills_test ADD INDEX idx_1 (`uuid`,`active`), ALGORITHM=INPLACE, LOCK=shared;
Let us run it.
Altering a table with TOI:
DC1-1(root@localhost) [windmills_s]>ALTER TABLE windmills_test ADD INDEX idx_1 (`uuid`,`active`), ALGORITHM=INPLACE;
Query OK, 0 rows affected (1 min 4.74 sec)
Records: 0 Duplicates: 0 Warnings: 0
Inserts in the altering table (connection 2):
.450
.492
64.993 <--- Alter blocks all inserts on the table we are altering
.788
.609
Inserts on the other table (connection 3):
.455
.461
64.161 <--- Alter blocks all inserts on all the other tables as well
.641
.483
On the other nodes at the same time of the ALTER we can see:
Id User db Command Time State Info Time_ms Rows_sent Rows_examined
15 system user windmills_s Query 102 altering table ALTER TABLE windmills_test ADD INDEX idx_1 (`uuid`,`active`), ALGORITHM=INPLACE 102238 0 0 <--- time from start
So, in short, we have the whole cluster locked for ~64 seconds. During this period of time, all operations that modify data or structure were on hold.
Let us now try with NBO.
Inserts in the altering table:
.437
.487
120.758 <---- Execution time increase
.617
.510
Inserts on the other table:
.468
.485
25.061 <---- still a metalock, but not locking the other tables for the whole duration
.494
.471
On the other nodes at the same time of the ALTER we can see:
Id User db Command Time State Info Time_ms Rows_sent Rows_examined
110068 system user windmills_s Connect 86 altering table ALTER TABLE windmills_test ADD INDEX idx_1 (`uuid`,`active`), ALGORITHM=INPLACE 120420 0 0
In this case what is also interesting to note is that:
We have a moment of metalock:
110174 pmm 127.0.0.1:42728 NULL Query 2 Waiting for table metadata lock SELECT x FROM information_schema.tables WHERE TABLE_SCHEMA = 'windmills_s' 1486 10 0
110068 system user connecting host windmills_s Connect 111 closing tables ALTER TABLE windmills_test ADD INDEX idx_1 (`uuid`,`active`), ALGORITHM=INPLACE 111000 0 0
The execution time is longer
Summarizing:
                                                TOI          NBO
Time on hold for inserts on the altered table   ~64 sec      ~120 sec
Time on hold for inserts on another table       ~64 sec      ~25 sec
Metadata lock held                              whole time   only at the end
What is happening, what are the differences, and why does it take longer with NBO?
Let us see, at a very high level, how the two work:
TOI: when you issue a DDL like ADD INDEX, a metadata lock is taken on the table, and it is released only at the end of the operation. During this time, you cannot:
Perform DMLs on any cluster node
Alter another table in the cluster
NBO: the metadata lock is taken at the start and at the end, each for a very brief period of time. The ADD INDEX operation then works on each node independently. The lock taken at the end is there so that all the nodes agree on the operation and commit or roll back (using cluster error voting). This final phase costs a bit more time and is what adds a few seconds to the operation’s execution. But during the operation:
You can alter another table (using NBO)
You can continue to insert data, except in the table(s) you are altering.
On a node crash, the operation will continue on the other nodes and, if successful, will persist.
In short, the cluster’s behavior changes significantly when using NBO, offering significant flexibility compared to TOI. The cost in time should not increase linearly with the size of the table, but rather in relation to the single node’s efficiency in performing the ALTER operation.
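Before running the DDL, it can also help to confirm which OSU method the session is actually using; for example:
mysql -uroot -e "SHOW SESSION VARIABLES LIKE 'wsrep_OSU_method';"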
Conclusion
NBO can be significantly helpful in reducing the impact of DDL on the cluster, for now limited to the widely used creation/modification/drop of an index. But in the future … we may expand it.
The feature is still a technology preview, so do not trust it in production, but test it and let us know what you think.
Final comment: another distribution has introduced NBO, but only if you buy the enterprise version.
Percona, which is truly open source in facts, not just words, has implemented NBO in standard PXC, and the code is fully open source. This is not the first such feature, but just another of the many features Percona offers for free while others ask you to buy an enterprise version.
Enjoy the product and let us have your feedback!
Great MySQL to all!
|
Posted over 3 years ago by MySQL Performance Blog
We have a quickstart guide for how to install Percona Distribution for MySQL Operator on minikube. Installing the minimal version works well, as described in the guide. After that, we will have one HAProxy and one Percona XtraDB Cluster (PXC) node to work with.
Minikube provides Kubernetes locally. One can use this local k8s to try the more advanced scenarios, such as the one described here.
Following that guide, everything works well until we get to the part of deploying a cluster with deploy/cr.yaml. Even after that, things seemingly work.
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
cluster1-haproxy-0 0/2 ContainerCreating 0 5s
cluster1-pxc-0 0/3 Init:0/1 0 5s
percona-xtradb-cluster-operator-77bfd8cdc5-rcqsp 1/1 Running 1 62s
That is, until the second pod is being created. The creation of that pod will be stuck forever in a pending state.
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
cluster1-haproxy-0 1/2 Running 0 93s
cluster1-pxc-0 3/3 Running 0 93s
cluster1-pxc-1 0/3 Pending 0 10s
percona-xtradb-cluster-operator-77bfd8cdc5-rcqsp 1/1 Running 1 2m30s
When checking the cluster1-pxc-1 pod with
kubectl describe pod cluster1-pxc-1
the reason becomes clear.
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 66s (x2 over 66s) default-scheduler 0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims.
Warning FailedScheduling 63s default-scheduler 0/1 nodes are available: 1 node(s) didn't match pod affinity/anti-affinity, 1 node(s) didn't match pod anti-affinity rules.
Anti-affinity rules are specified for the different pods in the cluster, which makes sense: normally, one would want to have the different PXC instances in different failure domains, so we have actual fault tolerance. I could have made this work by editing the anti-affinity rules in cr.yaml (see the sketch below), which would have been suitable for testing purposes, but I was wondering if there is a better way to have a more complicated local k8s setup. Kind can give that, and it’s an ideal playground for following the second guide. Alternatively, the anti-affinity rules can be edited, but I wanted to have an easy test environment for a full setup.
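As a side note, here is a minimal sketch of that cr.yaml edit, assuming the default manifest layout where each component carries an antiAffinityTopologyKey setting ("none" is for testing only):
# Locate the anti-affinity setting that spreads pods across hosts.
grep -n -A1 'antiAffinityTopologyKey' deploy/cr.yaml
# Changing its value from "kubernetes.io/hostname" to "none" lets all
# PXC pods be scheduled onto the single minikube node.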
In this example, I am using macOS and Docker Desktop for Mac; kind can be installed via Homebrew.
$ cat kind-config.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
- role: worker
This way, I have one control-plane node and three worker nodes (running kubelet). A redundant control plane is also supported but not needed for this testing. With this, the cluster can be created.
$ kind create cluster --name k8s-playground --config kind-config.yaml
Creating cluster "k8s-playground" ...
✓ Ensuring node image (kindest/node:v1.21.1)
✓ Preparing nodes
✓ Writing configuration
✓ Starting control-plane
✓ Installing CNI
✓ Installing StorageClass
✓ Joining worker nodes
Set kubectl context to "kind-k8s-playground"
You can now use your cluster with:
kubectl cluster-info --context kind-k8s-playground
Have a question, bug, or feature request? Let us know! https://kind.sigs.k8s.io/#community
Each node will be a Docker container.
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6d404954433e kindest/node:v1.21.1 "/usr/local/bin/entr…" About a minute ago Up About a minute k8s-playground-worker2
93a293dfc423 kindest/node:v1.21.1 "/usr/local/bin/entr…" About a minute ago Up About a minute 127.0.0.1:64922->6443/tcp k8s-playground-control-plane
e531e10b0384 kindest/node:v1.21.1 "/usr/local/bin/entr…" About a minute ago Up About a minute k8s-playground-worker
383a89f6d9f8 kindest/node:v1.21.1 "/usr/local/bin/entr…" About a minute ago Up About a minute k8s-playground-worker3
From this point on, kubectl is configured, and we can follow the second guide for the Percona Distribution for MySQL Operator.
After that, we need to wait a while for the cluster to come up.
$ kubectl apply -f deploy/cr.yaml
perconaxtradbcluster.pxc.percona.com/cluster1 created
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
cluster1-haproxy-0 0/2 ContainerCreating 0 4s
cluster1-pxc-0 0/3 Init:0/1 0 4s
percona-xtradb-cluster-operator-d99c748-d5nq6 1/1 Running 0 21s
After a few minutes, the cluster will be running as expected.
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
cluster1-haproxy-0 2/2 Running 0 5m5s
cluster1-haproxy-1 2/2 Running 0 3m20s
cluster1-haproxy-2 2/2 Running 0 2m55s
cluster1-pxc-0 3/3 Running 0 5m5s
cluster1-pxc-1 3/3 Running 0 3m32s
cluster1-pxc-2 3/3 Running 0 119s
percona-xtradb-cluster-operator-d99c748-d5nq6 1/1 Running 0 5m22s
$ kubectl run -i --rm --tty percona-client --image=percona:8.0 --restart=Never -- mysql -h cluster1-haproxy -uroot -proot_password -e "show global status like 'wsrep_cluster_size'"
mysql: [Warning] Using a password on the command line interface can be insecure.
+--------------------+-------+
| Variable_name | Value |
+--------------------+-------+
| wsrep_cluster_size | 3 |
+--------------------+-------+
pod "percona-client" deletedFor that last check, I used the default password from secret.yaml. If you changed that, use the password it’s changed to.
Kind will work on macOS out of the box like this, as a simple solution. To try Percona software in local playgrounds (on Linux or in a Linux virtual machine), you can also check out anydbver, created and maintained by Nickolay Ihalainen.
At the end of the experiments, the kind k8s cluster can be destroyed.
$ kind delete cluster --name k8s-playground
Deleting cluster "k8s-playground" ...
|