Posted over 3 years ago by Joshua Otwell
I recently learned how to use MySQL to generate MySQL statements for several admin-type Data Definition Language (DDL) SQL queries I needed to execute on the server. For many DBAs this is likely nothing new and, in fact, a well-used tool in their kit. However, I mostly focus on developer-related tasks, so it is a fresh-in-my-mind learning experience. Continue reading for more information…
Some backstory for context
I’ve recently been porting an existing vanilla PHP application over to CodeIgniter 4. One of the new features is bootstrapping all of the necessary MySQL CREATE TABLE statements dynamically, based on information parsed from a .fxl file (which is close kin to a .xml file, structure-wise).
Each CREATE TABLE statement is complete with FOREIGN KEY constraint clauses to ensure data integrity between linking tables.
Wanting to perform several smoke tests of the table creation process, I came to a situation in which I needed to DROP the FOREIGN KEY for each table, and then DROP all the tables once each constraint had been removed. With 60-plus tables, executing that many statements by hand is just not feasible.
What do I do?
Google, learn specific queries against the INFORMATION_SCHEMA database along with the CONCAT() function, and generate all the needed SQL statements.
Although I won’t use the tables and data from the application I’m building, I’ll use the familiar sakila practice database for the examples.
Self-Promotion:
If you enjoy the content written here, by all means, share this blog and your favorite post(s) with others who may benefit from or like it as well. Since coffee is my favorite drink, you can even buy me one if you would like!
SQL to SQL
The KEY_COLUMN_USAGE table in the INFORMATION_SCHEMA database is chock-full of goodies. The following query returns all the constraints for all the tables in the ‘sakila’ database:
SELECT
    TABLE_NAME,
    COLUMN_NAME,
    CONSTRAINT_NAME,
    REFERENCED_TABLE_NAME,
    REFERENCED_COLUMN_NAME
FROM
    INFORMATION_SCHEMA.KEY_COLUMN_USAGE
WHERE
    REFERENCED_TABLE_SCHEMA = 'sakila';
Query on the KEY_COLUMN_USAGE table.
Related: The REFERENTIAL_CONSTRAINTS table in the INFORMATION_SCHEMA database is another great source of information on FOREIGN KEYs.
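For example, a quick way to survey the foreign keys through that view might look like the following (a minimal sketch; the columns used are standard ones documented for REFERENTIAL_CONSTRAINTS):

SELECT
    CONSTRAINT_NAME,
    TABLE_NAME,
    REFERENCED_TABLE_NAME,
    UPDATE_RULE,
    DELETE_RULE
FROM
    INFORMATION_SCHEMA.REFERENTIAL_CONSTRAINTS
WHERE
    CONSTRAINT_SCHEMA = 'sakila';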
Suppose, for whatever reason, we want to drop the FOREIGN KEY constraint for each table. That’s a great number of ALTER TABLE statements.
Here we can use a little craftiness and the CONCAT() function to generate all the necessary DDL statements with this SELECT query:
SELECT
    CONCAT('ALTER TABLE ', TABLE_NAME, ' DROP FOREIGN KEY ', CONSTRAINT_NAME, ';') AS DDL
FROM
    INFORMATION_SCHEMA.KEY_COLUMN_USAGE
WHERE
    REFERENCED_TABLE_SCHEMA = 'sakila'
AND
    REFERENCED_TABLE_NAME IS NOT NULL;
Multiple ALTER TABLE statements were created dynamically.
All of the individual ALTER TABLE statements can be saved to a script and run whenever needed.
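Going back to my original scenario, the same pattern can generate the companion DROP TABLE statements once the constraints are gone. A minimal sketch, querying INFORMATION_SCHEMA.TABLES so every base table in the schema is covered:

SELECT
    CONCAT('DROP TABLE ', TABLE_NAME, ';') AS DDL
FROM
    INFORMATION_SCHEMA.TABLES
WHERE
    TABLE_SCHEMA = 'sakila'
AND
    TABLE_TYPE = 'BASE TABLE';

One way to save the generated statements to a script file is to run the query through the mysql client in batch mode and redirect the output (the file name here is arbitrary):

$ mysql -N -B -e "SELECT CONCAT('DROP TABLE ', TABLE_NAME, ';') FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_SCHEMA = 'sakila' AND TABLE_TYPE = 'BASE TABLE'" > drop_tables.sql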
Support my blog by visiting my Tip Jar. Thank you so much! Every bit helps.
Tip Jar
MySQL to MySQL – Adding a column to multiple tables
Based on the same concept, if we needed to add an identical column to all the tables in the schema, we can use the CONCAT() function and a SELECT query against the INFORMATION_SCHEMA.TABLES table to produce the necessary ALTER TABLE statements, as shown below:
SELECT
    CONCAT('ALTER TABLE ', TABLE_NAME, ' ADD COLUMN my_column TEXT DEFAULT NULL', ';') AS DDL
FROM
    INFORMATION_SCHEMA.TABLES
WHERE
    TABLE_SCHEMA = 'sakila'
AND
    TABLE_TYPE = 'BASE TABLE';
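Once the generated statements have been run, a quick sanity check against INFORMATION_SCHEMA.COLUMNS can confirm the column landed in every table (a minimal sketch; my_column is the example column added above):

SELECT
    TABLE_NAME
FROM
    INFORMATION_SCHEMA.COLUMNS
WHERE
    TABLE_SCHEMA = 'sakila'
AND
    COLUMN_NAME = 'my_column';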
Note: Scripting out mass DDL statements like this is a great skill to have. However, you must exercise caution because you are applying changes to a large number of tables in your database, for better or worse!
It’s worth noting that this is my first encounter with the tables in the INFORMATION_SCHEMA database. Please let me know if the example queries are inaccurate or inadequate for their purposes, and whether any alternatives or corrective measures are needed.
What are your favorite scripting tips and tricks?
I’d love to know what your go-to scripting tips are. Please share them in the comments below if you would like and many thanks!
As always, if you have any questions or see any mistakes in the code, please let me know via the comments. Constructive comments help me provide accurate blog posts and are much appreciated. Thank you for reading.
Are you a Medium member? If so, receive an email notification each time I publish a blog post if you prefer the Medium platform. Not a member? No worries! Use my sign-up link (I will get a commission at no extra cost to you) and join. I really enjoy reading all the great content there and I know you will too!!!
Like what you have read? See anything incorrect? Please comment below and thank you for reading!!!
A Call To Action!
Thank you for taking the time to read this post. I truly hope you discovered something interesting and enlightening. Please share your findings here with someone else you know who would get the same value out of it as well.
Visit the Portfolio-Projects page to see blog posts/technical writing I have completed for clients.
To receive email notifications (Never Spam) from this blog (“Digital Owl’s Prose”) for the latest blog posts as they are published, please subscribe (of your own volition) by clicking the ‘Click To Subscribe!’ button in the sidebar on the homepage! (Feel free at any time to review the Digital Owl’s Prose Privacy Policy Page for any questions you may have about: email updates, opt-in, opt-out, contact forms, etc…)
Be sure and visit the “Best Of” page for a collection of my best blog posts.
Josh Otwell has a passion to study and grow as a SQL Developer and blogger. Other favorite activities find him with his nose buried in a good book, article, or the Linux command line. Among those, he shares a love of tabletop RPG games, reading fantasy novels, and spending time with his wife and two daughters.
Disclaimer: The examples presented in this post are hypothetical ideas of how to achieve similar types of results. They are not the utmost best solution(s). The majority, if not all, of the examples provided, are performed on a personal development/learning workstation environment and should not be considered production quality or ready. Your particular goals and needs may vary. Use those practices that best benefit your needs and goals. Opinions are my own.
Promotional: I am making Gmail HTML Email Signature Templates over in my Etsy shop. Make your emails stand out and pop with your very own.
The post Use MySQL to generate MySQL for Admin tasks appeared first on Digital Owl's Prose.
|
Posted over 3 years ago by Vlad Mihalcea
Introduction
In this article, I’m going to show you the RevoGain software architecture. RevoGain is a web application that helps Revolut users calculate their stocks and crypto trading gains, and I started building it at the beginning of September 2021.
Hype vs. Reality
Have you ever been to a software conference and felt disconnected from all those hyped technologies that were presented so beautifully on stage? I certainly have. Whenever I came back from a software conference and looked at my project codebase, I could see a large gap between what’s being... Read More
The post RevoGain Software Architecture appeared first on Vlad Mihalcea.
|
Posted over 3 years ago by MySQL Server Dev Team
The Japanese supercomputer has a mission: Help solve the world's biggest problems. The cloud is essential to making that goal reality.
|
Posted over 3 years ago by Yngve Svendsen
Here’s how SailGP collects, analyzes, and streams racing metrics in real time to help crews deftly navigate their F50 vessels and connect with fans.
|
Posted over 3 years ago by MySQL Performance Blog
If you ever had to deal with performance and/or disk space issues related to temporary tables, I bet you eventually found yourself puzzled. There are many possible scenarios depending on the type of temporary table, settings, and MySQL version used.
We have observed a pretty long evolution in that matter, due to a couple of reasons. One of them was the need to completely eliminate the use of the deprecated MyISAM engine and, at the same time, introduce more performant and reliable alternatives. Another set of improvements was related to InnoDB, where it was necessary to lower the overhead of temporary tables using that engine.
For that reason, I decided to gather them in a sort of summary which may help in troubleshooting their usage. Due to the vast changes between major MySQL releases, I divided the article by version.
MySQL 5.6
(If you are still using this version, you are encouraged to consider upgrading soon, as it has reached EOL.)
User-Created Temporary Tables
When a table is created using the CREATE TEMPORARY TABLE clause, it will use the engine defined by default_tmp_storage_engine (which defaults to InnoDB), if not explicitly defined otherwise, and will be stored inside the directory defined by the tmpdir variable.
An example may look like this:

mysql > create temporary table tmp1 (id int, a varchar(10));
Query OK, 0 rows affected (0.02 sec)
mysql > show create table tmp1\G
*************************** 1. row ***************************
Table: tmp1
Create Table: CREATE TEMPORARY TABLE `tmp1` (
`id` int(11) DEFAULT NULL,
`a` varchar(10) DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=latin1
1 row in set (0.00 sec)

But how do you find the file created on disk that stores this table data? This query may help:

mysql > select table_id,space,name,path from information_schema.INNODB_SYS_DATAFILES join information_schema.INNODB_SYS_TABLES using (space) where name like '%tmp%'\G
*************************** 1. row ***************************
table_id: 21
space: 7
name: tmp/#sql11765a_2_1
path: /data/sandboxes/msb_5_6_51/tmp/#sql11765a_2_1.ibd
1 row in set (0.00 sec)

We don’t see the original table name here. Even by looking at the buffer pool, we still don’t have the real name:

mysql > select TABLE_NAME from information_schema.INNODB_BUFFER_PAGE where table_name like '%tmp%';
+-------------------------+
| TABLE_NAME |
+-------------------------+
| `tmp`.`#sql11765a_2_1` |
+-------------------------+
1 row in set (0.07 sec)

Here comes the extension available in the Percona Server for MySQL 5.6 variant – an additional information_schema table: GLOBAL_TEMPORARY_TABLES. With that one, we can craft a query that provides a bit more information:

mysql > select SPACE,TABLE_SCHEMA,TABLE_NAME,ENGINE,g.NAME,PATH from information_schema.GLOBAL_TEMPORARY_TABLES g LEFT JOIN information_schema.INNODB_SYS_TABLES s ON s.NAME LIKE CONCAT('%', g.name, '%') LEFT JOIN information_schema.INNODB_SYS_DATAFILES USING(SPACE)\G
*************************** 1. row ***************************
SPACE: 16
TABLE_SCHEMA: test
TABLE_NAME: tmp1
ENGINE: InnoDB
NAME: #sql12c75d_2_0
PATH: /data/sandboxes/msb_ps5_6_47/tmp/#sql12c75d_2_0.ibd
*************************** 2. row ***************************
SPACE: NULL
TABLE_SCHEMA: test
TABLE_NAME: tmp3
ENGINE: MEMORY
NAME: #sql12c75d_2_2
PATH: NULL
*************************** 3. row ***************************
SPACE: NULL
TABLE_SCHEMA: test
TABLE_NAME: tmp2
ENGINE: MyISAM
NAME: #sql12c75d_2_1
PATH: NULL
3 rows in set (0.00 sec)

So at least for the InnoDB temp table, we can correlate the exact table name with the file path.
Internal Temporary Tables
These are ones created by MySQL in the process of executing a query. We don’t have any access to such tables, but let’s see how we can investigate their usage.
This type is created in memory (using the MEMORY engine) as long as its size doesn’t exceed either the tmp_table_size or max_heap_table_size variable, and no TEXT/BLOB columns are in use. If such a table has to be stored on disk though, in MySQL 5.6 it will use MyISAM storage, again with tmpdir as the location. A quick example, on a 10M-row sysbench table, of a query producing a big internal temporary table:

mysql > SELECT pad, COUNT(*) FROM sbtest1 GROUP BY pad;

And we can see the related files growing:

$ ls -lh /data/sandboxes/msb_5_6_51/tmp/
total 808M
-rw-rw---- 1 przemek przemek 329M Sep 29 23:24 '#sql_11765a_0.MYD'
-rw-rw---- 1 przemek przemek 479M Sep 29 23:24 '#sql_11765a_0.MYI'

It may be difficult to correlate a particular temp table and its client connection though. The only information I found is:

mysql > select FILE_NAME,EVENT_NAME from performance_schema.file_summary_by_instance where file_name like '%tmp%' \G
*************************** 1. row ***************************
FILE_NAME: /data/sandboxes/msb_5_6_51/tmp/Innodb Merge Temp File
EVENT_NAME: wait/io/file/innodb/innodb_temp_file
*************************** 2. row ***************************
FILE_NAME: /data/sandboxes/msb_5_6_51/tmp/#sql_11765a_0.MYI
EVENT_NAME: wait/io/file/myisam/kfile
*************************** 3. row ***************************
FILE_NAME: /data/sandboxes/msb_5_6_51/tmp/#sql_11765a_0.MYD
EVENT_NAME: wait/io/file/myisam/dfile
3 rows in set (0.00 sec)
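As a side note, a rough way to tell whether a workload is spilling internal temporary tables to disk at all is to compare the session status counters before and after running the query; a minimal sketch using the standard counters (Created_tmp_tables counts all internal temporary tables, Created_tmp_disk_tables only the on-disk ones):

mysql > show session status like 'Created_tmp%tables';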
MySQL 5.7
User-Created Temporary Tables
As earlier, the default_tmp_storage_engine variable decides on the engine used. But two changes happened here. InnoDB temporary tables now use a common dedicated shared tablespace – ibtmp1 – unless the table is compressed. Moreover, we have an additional information_schema view: INNODB_TEMP_TABLE_INFO. Given that, we can get information like below:

mysql > select name, FILE_NAME, FILE_TYPE, TABLESPACE_NAME, SPACE, PER_TABLE_TABLESPACE, IS_COMPRESSED from INFORMATION_SCHEMA.INNODB_TEMP_TABLE_INFO join INFORMATION_SCHEMA.FILES on FILE_ID=SPACE\G
*************************** 1. row ***************************
name: #sql12cf58_2_5
FILE_NAME: ./ibtmp1
FILE_TYPE: TEMPORARY
TABLESPACE_NAME: innodb_temporary
SPACE: 109
PER_TABLE_TABLESPACE: FALSE
IS_COMPRESSED: FALSE
*************************** 2. row ***************************
name: #sql12cf58_2_4
FILE_NAME: /data/sandboxes/msb_ps5_7_33/tmp/#sql12cf58_2_4.ibd
FILE_TYPE: TEMPORARY
TABLESPACE_NAME: innodb_file_per_table_110
SPACE: 110
PER_TABLE_TABLESPACE: TRUE
IS_COMPRESSED: TRUE
2 rows in set (0.01 sec)

But again, to correlate with a table name, the Percona Server for MySQL extension needs to be used:

mysql > select g.TABLE_SCHEMA, g.TABLE_NAME, name, FILE_NAME, FILE_TYPE, TABLESPACE_NAME, SPACE, PER_TABLE_TABLESPACE, IS_COMPRESSED from INFORMATION_SCHEMA.INNODB_TEMP_TABLE_INFO join INFORMATION_SCHEMA.FILES on FILE_ID=SPACE join information_schema.GLOBAL_TEMPORARY_TABLES g using (name)\G
*************************** 1. row ***************************
TABLE_SCHEMA: test
TABLE_NAME: tmp1
name: #sql12cf58_2_5
FILE_NAME: ./ibtmp1
FILE_TYPE: TEMPORARY
TABLESPACE_NAME: innodb_temporary
SPACE: 109
PER_TABLE_TABLESPACE: FALSE
IS_COMPRESSED: FALSE
*************************** 2. row ***************************
TABLE_SCHEMA: test
TABLE_NAME: tmp3
name: #sql12cf58_2_4
FILE_NAME: /data/sandboxes/msb_ps5_7_33/tmp/#sql12cf58_2_4.ibd
FILE_TYPE: TEMPORARY
TABLESPACE_NAME: innodb_file_per_table_110
SPACE: 110
PER_TABLE_TABLESPACE: TRUE
IS_COMPRESSED: TRUE
2 rows in set (0.01 sec)

Alternatively, to also see MyISAM and the related .frm files, we can use:

mysql > SELECT g.TABLE_SCHEMA, g.TABLE_NAME, NAME, f.FILE_NAME, g.ENGINE, TABLESPACE_NAME, PER_TABLE_TABLESPACE, SPACE FROM information_schema.GLOBAL_TEMPORARY_TABLES g join performance_schema.file_instances f ON FILE_NAME LIKE CONCAT('%', g.name, '%') left join INFORMATION_SCHEMA.INNODB_TEMP_TABLE_INFO using (name) left join INFORMATION_SCHEMA.FILES fl on space=FILE_ID order by table_name\G
*************************** 1. row ***************************
TABLE_SCHEMA: test
TABLE_NAME: tmp1
NAME: #sql12cf58_2_5
FILE_NAME: /data/sandboxes/msb_ps5_7_33/tmp/#sql12cf58_2_5.frm
ENGINE: InnoDB
TABLESPACE_NAME: innodb_temporary
PER_TABLE_TABLESPACE: FALSE
SPACE: 109
*************************** 2. row ***************************
TABLE_SCHEMA: test
TABLE_NAME: tmp2
NAME: #sql12cf58_2_6
FILE_NAME: /data/sandboxes/msb_ps5_7_33/tmp/#sql12cf58_2_6.MYD
ENGINE: MyISAM
TABLESPACE_NAME: NULL
PER_TABLE_TABLESPACE: NULL
SPACE: NULL
*************************** 3. row ***************************
TABLE_SCHEMA: test
TABLE_NAME: tmp2
NAME: #sql12cf58_2_6
FILE_NAME: /data/sandboxes/msb_ps5_7_33/tmp/#sql12cf58_2_6.MYI
ENGINE: MyISAM
TABLESPACE_NAME: NULL
PER_TABLE_TABLESPACE: NULL
SPACE: NULL
*************************** 4. row ***************************
TABLE_SCHEMA: test
TABLE_NAME: tmp2
NAME: #sql12cf58_2_6
FILE_NAME: /data/sandboxes/msb_ps5_7_33/tmp/#sql12cf58_2_6.frm
ENGINE: MyISAM
TABLESPACE_NAME: NULL
PER_TABLE_TABLESPACE: NULL
SPACE: NULL
*************************** 5. row ***************************
TABLE_SCHEMA: test
TABLE_NAME: tmp3
NAME: #sql12cf58_2_4
FILE_NAME: /data/sandboxes/msb_ps5_7_33/tmp/#sql12cf58_2_4.frm
ENGINE: InnoDB
TABLESPACE_NAME: innodb_file_per_table_110
PER_TABLE_TABLESPACE: TRUE
SPACE: 110
*************************** 6. row ***************************
TABLE_SCHEMA: test
TABLE_NAME: tmp3
NAME: #sql12cf58_2_4
FILE_NAME: /data/sandboxes/msb_ps5_7_33/tmp/#sql12cf58_2_4.ibd
ENGINE: InnoDB
TABLESPACE_NAME: innodb_file_per_table_110
PER_TABLE_TABLESPACE: TRUE
SPACE: 110
6 rows in set (0.01 sec)
Internal Temporary Tables
For internal temporary tables in 5.7, the behavior of in-memory ones is similar. But the default engine for on-disk temp tables is defined via a new variable: internal_tmp_disk_storage_engine, which now also defaults to InnoDB; the ibtmp1 tablespace is likewise used to store their contents.
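To confirm what a given 5.7 instance will actually use for on-disk internal temporary tables, the relevant variables can be checked directly; a minimal sketch:

mysql > show global variables like 'internal_tmp_disk_storage_engine';
mysql > show global variables like 'default_tmp_storage_engine';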
Insight into this shared temporary tablespace is pretty limited. We can check its size and how much free space is currently available. Here is an example view taken while a heavy query was running:

mysql > select FILE_NAME, FILE_TYPE, TABLESPACE_NAME, ENGINE, TOTAL_EXTENTS, FREE_EXTENTS, EXTENT_SIZE/1024/1024 as 'extent in MB', MAXIMUM_SIZE from INFORMATION_SCHEMA.FILES where file_name like '%ibtmp%'\G
*************************** 1. row ***************************
FILE_NAME: ./ibtmp1
FILE_TYPE: TEMPORARY
TABLESPACE_NAME: innodb_temporary
ENGINE: InnoDB
TOTAL_EXTENTS: 588
FREE_EXTENTS: 1
extent in MB: 1.00000000
MAXIMUM_SIZE: NULL
1 row in set (0.00 sec)

And after the query is finished, we can see most of the space is freed (FREE_EXTENTS):

mysql > select FILE_NAME, FILE_TYPE, TABLESPACE_NAME, ENGINE, TOTAL_EXTENTS, FREE_EXTENTS, EXTENT_SIZE/1024/1024 as 'extent in MB', MAXIMUM_SIZE from INFORMATION_SCHEMA.FILES where file_name like '%ibtmp%'\G
*************************** 1. row ***************************
FILE_NAME: ./ibtmp1
FILE_TYPE: TEMPORARY
TABLESPACE_NAME: innodb_temporary
ENGINE: InnoDB
TOTAL_EXTENTS: 780
FREE_EXTENTS: 764
extent in MB: 1.00000000
MAXIMUM_SIZE: NULL
1 row in set (0.00 sec)

However, the tablespace won’t be truncated unless MySQL is restarted:

$ ls -lh msb_5_7_35/data/ibtmp*
-rw-r----- 1 przemek przemek 780M Sep 30 19:50 msb_5_7_35/data/ibtmp1

To see the writing activity (which may turn out to be much higher for a single query than the total size growth it causes):

mysql > select FILE_NAME, SUM_NUMBER_OF_BYTES_WRITE/1024/1024/1024 as GB_written from performance_schema.file_summary_by_instance where file_name like '%ibtmp%' \G
*************************** 1. row ***************************
FILE_NAME: /data/sandboxes/msb_5_7_35/data/ibtmp1
GB_written: 46.925933837891
1 row in set (0.00 sec)
MySQL 8.0
For simplicity, let’s skip how things worked before 8.0.16 and discuss only how it has worked since then, as the changes in that matter are quite significant:
the internal_tmp_disk_storage_engine variable was removed, and it is no longer possible to use the MyISAM engine for internal temporary tables
the shared ibtmp1 tablespace is no longer used for either temporary table type
a pool of new Session Temporary Tablespaces was introduced to handle both user and internal temporary tables on disk; it is located by default in the main data directory
the new TempTable engine for in-memory tables uses both space in memory and mmapped files on disk (the related variables can be inspected as shown below)
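The TempTable limits mentioned here can be checked directly; a minimal sketch (note that temptable_max_mmap only exists as of MySQL 8.0.23):

mysql > show global variables like 'temptable%';
mysql > show global variables like 'internal_tmp_mem_storage_engine';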
User-Created Temporary Tables
For an example temporary table:

mysql > create temporary table tmp1 (id int, a varchar(10));
Query OK, 0 rows affected (0.00 sec)
mysql > select * from information_schema.INNODB_TEMP_TABLE_INFO;
+----------+----------------+--------+------------+
| TABLE_ID | NAME | N_COLS | SPACE |
+----------+----------------+--------+------------+
| 1089 | #sqlbbeb3_a_12 | 5 | 4243767289 |
+----------+----------------+--------+------------+
1 row in set (0.00 sec)

We can correlate which file from that pool was used by looking at the space number:

mysql > select * from INFORMATION_SCHEMA.INNODB_SESSION_TEMP_TABLESPACES;
+----+------------+----------------------------+-------+----------+-----------+
| ID | SPACE | PATH | SIZE | STATE | PURPOSE |
+----+------------+----------------------------+-------+----------+-----------+
| 10 | 4243767290 | ./#innodb_temp/temp_10.ibt | 81920 | ACTIVE | INTRINSIC |
| 10 | 4243767289 | ./#innodb_temp/temp_9.ibt | 98304 | ACTIVE | USER |
| 0 | 4243767281 | ./#innodb_temp/temp_1.ibt | 81920 | INACTIVE | NONE |
| 0 | 4243767282 | ./#innodb_temp/temp_2.ibt | 81920 | INACTIVE | NONE |
| 0 | 4243767283 | ./#innodb_temp/temp_3.ibt | 81920 | INACTIVE | NONE |
| 0 | 4243767284 | ./#innodb_temp/temp_4.ibt | 81920 | INACTIVE | NONE |
| 0 | 4243767285 | ./#innodb_temp/temp_5.ibt | 81920 | INACTIVE | NONE |
| 0 | 4243767286 | ./#innodb_temp/temp_6.ibt | 81920 | INACTIVE | NONE |
| 0 | 4243767287 | ./#innodb_temp/temp_7.ibt | 81920 | INACTIVE | NONE |
| 0 | 4243767288 | ./#innodb_temp/temp_8.ibt | 81920 | INACTIVE | NONE |
+----+------------+----------------------------+-------+----------+-----------+
10 rows in set (0.00 sec)

But again, there is no way to look up the table name. Fortunately, Percona Server for MySQL still has the GLOBAL_TEMPORARY_TABLES table, so given the three available system views, we can get better information on user-created temporary tables using various engines, like below:

mysql > SELECT SESSION_ID, SPACE, PATH, TABLE_SCHEMA, TABLE_NAME, SIZE, DATA_LENGTH, INDEX_LENGTH, ENGINE, PURPOSE FROM information_schema.GLOBAL_TEMPORARY_TABLES LEFT JOIN information_schema.INNODB_TEMP_TABLE_INFO USING(NAME) LEFT JOIN INFORMATION_SCHEMA.INNODB_SESSION_TEMP_TABLESPACES USING(SPACE)\G
*************************** 1. row ***************************
SESSION_ID: 10
SPACE: 4243767290
PATH: ./#innodb_temp/temp_10.ibt
TABLE_SCHEMA: test
TABLE_NAME: tmp3
SIZE: 98304
DATA_LENGTH: 16384
INDEX_LENGTH: 0
ENGINE: InnoDB
PURPOSE: USER
*************************** 2. row ***************************
SESSION_ID: 13
SPACE: NULL
PATH: NULL
TABLE_SCHEMA: test
TABLE_NAME: tmp41
SIZE: NULL
DATA_LENGTH: 24
INDEX_LENGTH: 1024
ENGINE: MyISAM
PURPOSE: NULL
*************************** 3. row ***************************
SESSION_ID: 13
SPACE: NULL
PATH: NULL
TABLE_SCHEMA: test
TABLE_NAME: tmp40
SIZE: NULL
DATA_LENGTH: 128256
INDEX_LENGTH: 0
ENGINE: MEMORY
PURPOSE: NULL
*************************** 4. row ***************************
SESSION_ID: 13
SPACE: 4243767287
PATH: ./#innodb_temp/temp_7.ibt
TABLE_SCHEMA: test
TABLE_NAME: tmp33
SIZE: 98304
DATA_LENGTH: 16384
INDEX_LENGTH: 0
ENGINE: InnoDB
PURPOSE: USER
4 rows in set (0.01 sec)

Similar to ibtmp1, these tablespaces are not truncated while the session stays connected; they are truncated and returned to the pool when the session disconnects, or on MySQL restart.
From the above, we can see that user connection 10 has one open InnoDB temporary table, and connection 13 has three temporary tables using three different engines.
Internal Temporary Tables
While a heavy query is running in connection 10, we can get the following views:

mysql > show processlist\G
...
*************************** 2. row ***************************
Id: 10
User: msandbox
Host: localhost
db: test
Command: Query
Time: 108
State: converting HEAP to ondisk
Info: SELECT pad, COUNT(*) FROM sbtest1 GROUP BY pad
mysql > select * from performance_schema.memory_summary_global_by_event_name where EVENT_NAME like '%temptable%'\G
*************************** 1. row ***************************
EVENT_NAME: memory/temptable/physical_disk
COUNT_ALLOC: 2
COUNT_FREE: 0
SUM_NUMBER_OF_BYTES_ALLOC: 1073741824
SUM_NUMBER_OF_BYTES_FREE: 0
LOW_COUNT_USED: 0
CURRENT_COUNT_USED: 2
HIGH_COUNT_USED: 2
LOW_NUMBER_OF_BYTES_USED: 0
CURRENT_NUMBER_OF_BYTES_USED: 1073741824
HIGH_NUMBER_OF_BYTES_USED: 1073741824
*************************** 2. row ***************************
EVENT_NAME: memory/temptable/physical_ram
COUNT_ALLOC: 12
COUNT_FREE: 1
SUM_NUMBER_OF_BYTES_ALLOC: 1074790400
SUM_NUMBER_OF_BYTES_FREE: 1048576
LOW_COUNT_USED: 0
CURRENT_COUNT_USED: 11
HIGH_COUNT_USED: 11
LOW_NUMBER_OF_BYTES_USED: 0
CURRENT_NUMBER_OF_BYTES_USED: 1073741824
HIGH_NUMBER_OF_BYTES_USED: 1073741824
2 rows in set (0.00 sec)
mysql > select * from INFORMATION_SCHEMA.INNODB_SESSION_TEMP_TABLESPACES where id=10\G
*************************** 1. row ***************************
ID: 10
SPACE: 4243767290
PATH: ./#innodb_temp/temp_10.ibt
SIZE: 2399141888
STATE: ACTIVE
PURPOSE: INTRINSIC
*************************** 2. row ***************************
ID: 10
SPACE: 4243767289
PATH: ./#innodb_temp/temp_9.ibt
SIZE: 98304
STATE: ACTIVE
PURPOSE: USER
2 rows in set (0.00 sec)

From the above, we can see the query created a huge temporary table that first exceeded the temptable_max_ram variable and continued growing in a mmapped file (still the TempTable engine); but as temptable_max_mmap was also reached, the table had to be converted to an on-disk InnoDB intrinsic table. The same pool of temporary InnoDB tablespaces is used in this case, but we can see the purpose information, depending on whether the table is external (user-created) or internal.
The mmapped file is not visible in the file system as it is in deleted state, but it can be watched with lsof:

mysqld 862655 przemek 52u REG 253,3 133644288 52764900 /data/sandboxes/msb_ps8_0_23/tmp/mysql_temptable.8YIGV8 (deleted)

It is important to know here that, as long as the mmapped space has not been exceeded, the Created_tmp_disk_tables counter is not incremented even though a file is created on disk.
Also, in Percona Server for MySQL’s extended slow log, the size of temporary tables is not accounted for when the TempTable engine is used: https://jira.percona.com/browse/PS-5168 – it shows “Tmp_table_sizes: 0”.
In some use cases, problems have been reported with the TempTable engine. If needed, it is possible to switch back to the old MEMORY engine via the internal_tmp_mem_storage_engine variable.
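A minimal sketch of that switch, at session scope (the variable accepts TempTable or MEMORY):

mysql > set session internal_tmp_mem_storage_engine = MEMORY;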
References
https://www.percona.com/blog/2017/12/04/internal-temporary-tables-mysql-5-7/
https://dev.mysql.com/worklog/task/?id=7682
https://dev.mysql.com/doc/refman/5.7/en/innodb-information-schema-temp-table-info.html
https://bugs.mysql.com/bug.php?id=96236
https://www.percona.com/doc/percona-server/8.0/diagnostics/slow_extended.html#other-information
https://dev.mysql.com/doc/refman/8.0/en/internal-temporary-tables.html
Complete the 2021 Percona Open Source Data Management Software Survey
Have Your Say!
|
Posted over 3 years ago by Joshua Otwell
Hey friends, over the weekend (Friday, actually) I published another issue of my weekly newsletter, OpenLamp.tech. Continue reading for more information…
This issue is full of great curated reads, as well as a specific question I’ve asked readers and the community: I’m looking for suggestions on planned migrations for upgrades to my WordPress site.
The Newsletter for PHP and MySQL Developers
In this week’s newsletter, there’s information on:
Here’s the full issue for you to read. You can always subscribe (it’s free) and have the weekly issue of OpenLamp.tech delivered directly to your inbox. Just think, no more waiting for me to post it here each Monday.
Visit the sign-up form below to subscribe…
The Newsletter for PHP and MySQL Developers
The post OpenLamp.tech Issue #2 appeared first on Digital Owl's Prose.
|
Posted over 3 years ago by Alena Subotina
The article gives a deep insight into how to restore a MySQL database from a backup file using the mysqldump utility and third-party backup and restore tools, such as dbForge Studio for MySQL, MySQL Workbench, and phpMyAdmin. Contents: Use the mysql command to restore a database from the command line · Create the database · Restore the database […]
The post How to Restore a MySQL Database with Command Line or Restore Tools appeared first on Devart Blog.
|
Posted over 3 years ago by PlanetScale
The power of PlanetScale within a beautiful GUI — now you can connect to development branches, switch between them, and even create deploy requests in Arctype. Read the full story.
|
Posted over 3 years ago by MyDBOPS
Exporting a table with the MySQL native utility is a time-consuming process since it is single-threaded. With a smaller table size that is affordable, but exporting a larger data set with the native utility consumes a much greater amount of time. The main bottleneck of the MySQL native utility is its single thread.
The MySQL Shell utility makes the process easier. It exports the table with a parallel load, we can import the data back with parallel threads, and it also provides the current progress status during export/import.
The util.exportTable() utility, introduced in Shell version 8.0.22, exports the data with parallel threads in a controlled manner. We can store the data either locally or in an Oracle Cloud Infrastructure Object Storage bucket.
We will look at the compression ratio, along with the time taken, for native MySQL vs. the Shell utility.
Features:
Compression
Progress status
Supported output formats – csv, csv-unix, tsv
Controlled process (maxRate)
Output file to local storage or an Oracle Cloud Infrastructure Object Storage bucket
Example:
MySQL localhost JS > util.exportTable("classicmodels.employees","file:///home/vagrant/employees.txt")
Initializing - done
Gathering information - done
Running data dump using 1 thread.
NOTE: Progress information uses estimated values and may not be accurate.
Writing table metadata - done
Starting data dump
100% (23 rows / ~23 rows), 0.00 rows/s, 0.00 B/s
Dump duration: 00:00:00s
Total duration: 00:00:00s
Data size: 1.66 KB
Rows written: 23
Bytes written: 1.66 KB
Average throughput: 1.66 KB/s
The dump can be loaded using:
util.importTable("file:///home/vagrant/employees.txt", {
"characterSet": "utf8mb4",
"schema": "classicmodels",
"table": "employees"
})
MySQL localhost JS >
We should make sure the directory already exists. By default, it generates the file with fieldsTerminatedBy set to TAB. At the end, it provides the import command that needs to be used when restoring the data back.
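As a side note, the suggested util.importTable() call can also be extended for a parallel restore; a minimal sketch based on the output above (the threads value here is arbitrary):

MySQL localhost JS > util.importTable("file:///home/vagrant/employees.txt", {
    "characterSet": "utf8mb4",
    "schema": "classicmodels",
    "table": "employees",
    "threads": 4
})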
Export as CSV:
To export the file as CSV, we need to use the dialect option, which determines the output format.
MySQL localhost JS > util.exportTable("classicmodels.employees","file:///home/vagrant/employees.csv",{dialect:"csv"})
Initializing - done
Gathering information - done
Running data dump using 1 thread.
NOTE: Progress information uses estimated values and may not be accurate.
Writing table metadata - done
Starting data dump
100% (23 rows / ~23 rows), 0.00 rows/s, 0.00 B/s
Dump duration: 00:00:00s
Total duration: 00:00:00s
Data size: 1.96 KB
Rows written: 23
Bytes written: 1.96 KB
Average throughput: 1.96 KB/s
The dump can be loaded using:
util.importTable("file:///home/vagrant/employees.csv", {
"characterSet": "utf8mb4",
"dialect": "csv",
"schema": "classicmodels",
"table": "employees"
})
MySQL localhost JS >
Compression:
There are two compression methods: zstd and gzip. By default, compression is disabled. We can enable it by adding the compression option.
MySQL localhost JS > util.exportTable("classicmodels.employees","file:///home/vagrant/employees.zstd",{compression:"zstd"})
Initializing - done
Gathering information - done
Running data dump using 1 thread.
NOTE: Progress information uses estimated values and may not be accurate.
Writing table metadata - done
Starting data dump
100% (23 rows / ~23 rows), 0.00 rows/s, 0.00 B/s uncompressed, 0.00 B/s compressed
Dump duration: 00:00:00s
Total duration: 00:00:00s
Uncompressed data size: 1.66 KB
Compressed data size: 0 bytes
Compression ratio: 1659.0
Rows written: 23
Bytes written: 0 bytes
Average uncompressed throughput: 1.66 KB/s
Average compressed throughput: 0.00 B/s
The dump can be loaded using:
util.importTable("file:///home/vagrant/employees.zstd", {
"characterSet": "utf8mb4",
"schema": "classicmodels",
"table": "employees"
})
MySQL localhost JS >
maxRate:
While exporting the table, we need to take care of the load as well; the export process should not become the bottleneck for other processes. To make it a controlled process, we can set the maximum number of bytes per second per thread with the maxRate option.
MySQL localhost JS > util.exportTable("classicmodels.employees","file:///home/vagrant/employees.csv",{dialect:"csv",maxRate:"10",fieldsOptionallyEnclosed: true, fieldsTerminatedBy: ",", linesTerminatedBy: "\n", fieldsEnclosedBy: '"'})
Initializing - done
Gathering information - done
Running data dump using 1 thread.
NOTE: Progress information uses estimated values and may not be accurate.
Writing table metadata - done
Starting data dump
100% (23 rows / ~23 rows), 0.00 rows/s, 0.00 B/s
Dump duration: 00:00:00s
Total duration: 00:00:00s
Data size: 1.94 KB
Rows written: 23
Bytes written: 1.94 KB
Average throughput: 1.94 KB/s
The dump can be loaded using:
util.importTable("file:///home/vagrant/employees.csv", {
"characterSet": "utf8mb4",
"dialect": "csv",
"fieldsEnclosedBy": "\"",
"fieldsOptionallyEnclosed": true,
"fieldsTerminatedBy": ",",
"linesTerminatedBy": "\n",
"schema": "classicmodels",
"table": "employees"
})
MySQL localhost JS >
Export time – Native vs. Shell utility
For testing the export time taken by native MySQL vs. the Shell utility, I used the lab environment below.
MySQL version: 5.7.35
Shell version: 8.0.27
Table size: 5.16 GB
Note: Data loading was done using sysbench.
Native method:
I exported the data from the table with single-threaded native MySQL. The execution time was 3 min and 13 sec, and the file is around 5 GB.
mysql> select * from sbtest1 INTO OUTFILE '/var/lib/mysql-files/sample_native.csv' FIELDS TERMINATED BY ',' ENCLOSED BY '"' LINES TERMINATED BY '\n';
Query OK, 26216172 rows affected (3 min 13.74 sec)
[root@centos11 mysql-files]# ls -ltrh
total 5.1G
-rw-rw-rw-. 1 mysql mysql 5.1G Nov 15 17:03 sample_native.csv
Shell utility:
The same table export done using the Shell utility – util.exportTable – completed in 2 min and 33 sec.
MySQL localhost JS > util.exportTable("sbtest.sbtest1","file:///home/vagrant/Sample_utility.csv",{dialect:"csv"})
Initializing - done
Gathering information - done
Running data dump using 1 thread.
NOTE: Progress information uses estimated values and may not be accurate.
Writing table metadata - done
Starting data dump
101% (26.22M rows / ~25.88M rows), 139.50K rows/s, 27.35 MB/s
Dump duration: 00:02:33s
Total duration: 00:02:33s
Data size: 5.31 GB
Rows written: 26216172
Bytes written: 5.31 GB
Average throughput: 34.63 MB/s
The dump can be loaded using:
util.importTable("file:///home/vagrant/Sample_utility.csv", {
"characterSet": "utf8mb4",
"dialect": "csv",
"schema": "sbtest",
"table": "sbtest1"
})
MySQL localhost JS >
[vagrant@centos11 ~]$ ls -ltrh Sample_utility.csv
-rw-r-----. 1 root root 5.0G Nov 15 17:08 Sample_utility.csv
Time taken: native MySQL – 3 min and 13 sec; Shell utility – 2 min and 33 sec
Export file size: native MySQL – 5.1 GB; Shell utility – 5 GB
Compression ratio
While performing the export of a larger table, we need to take care of the export file size as well. So I have compared the compression ratio for the zstd and gzip methods.
gzip method:
MySQL localhost JS > util.exportTable("sbtest.sbtest1","file:///home/vagrant/Sample_utility.gz",{compression:"gzip"})
Initializing - done
Gathering information - done
Running data dump using 1 thread.
NOTE: Progress information uses estimated values and may not be accurate.
Writing table metadata - done
Starting data dump
101% (26.22M rows / ~25.88M rows), 122.02K rows/s, 24.26 MB/s uncompressed, 12.52 MB/s compressed
Dump duration: 00:03:49s
Total duration: 00:03:50s
Uncompressed data size: 5.18 GB
Compressed data size: 2.68 GB
Compression ratio: 1.9
Rows written: 26216172
Bytes written: 2.68 GB
Average uncompressed throughput: 22.52 MB/s
Average compressed throughput: 11.64 MB/s
The dump can be loaded using:
util.importTable("file:///home/vagrant/Sample_utility.gz", {
"characterSet": "utf8mb4",
"schema": "sbtest",
"table": "sbtest1"
})
MySQL localhost JS >
[vagrant@centos11 ~]$ ls -ltrh Sample_utility.gz
-rw-r-----. 1 root root 2.5G Nov 15 17:14 Sample_utility.gz
zstd method:
MySQL localhost JS > util.exportTable("sbtest.sbtest1","file:///home/vagrant/employees.zstd",{compression:"zstd"})
Initializing - done
Gathering information - done
Running data dump using 1 thread.
NOTE: Progress information uses estimated values and may not be accurate.
Writing table metadata - done
Starting data dump
101% (26.22M rows / ~25.88M rows), 165.84K rows/s, 32.40 MB/s uncompressed, 14.50 MB/s compressed
Dump duration: 00:02:38s
Total duration: 00:02:38s
Uncompressed data size: 5.18 GB
Compressed data size: 2.32 GB
Compression ratio: 2.2
Rows written: 26216172
Bytes written: 2.32 GB
Average uncompressed throughput: 32.61 MB/s
Average compressed throughput: 14.63 MB/s
The dump can be loaded using:
util.importTable("file:///home/vagrant/employees.zstd", {
"characterSet": "utf8mb4",
"schema": "sbtest",
"table": "sbtest1"
})
MySQL localhost JS >
[vagrant@centos11 ~]$ ls -ltrh employees.zstd
-rw-r-----. 1 vagrant vagrant 2.2G Nov 15 17:50 employees.zstd
Execution time: gzip – 3 min and 50 sec; zstd – 2 min and 38 sec
Export file size: gzip – 2.5 GB; zstd – 2.2 GB
With the MySQL Shell utility, we have exported the tables with parallel threads along with compression.
|