News

Posted about 2 years ago by PlanetScale
Learn how to build a Laravel CRUD application, connect it to a MySQL database, and seed it with data.
Posted about 2 years ago by Frederic Descamps
In the previous article we saw how to easily deploy Apache Zeppelin on OCI with MySQL using Terraform. In this article we will see how we can accelerate queries by enabling MySQL HeatWave.

Enabling HeatWave

If you used a MySQL Database Service shape compatible with HeatWave (MySQL.HeatWave.VM.Standard.E3), using the same Resource Manager Stack, you have the possibility to enable HeatWave and deploy a HeatWave Cluster for your MySQL instance. The first thing to do is to edit the previously applied stack. As we used a HeatWave-compatible shape, we have the option to deploy a HeatWave Cluster, so we select it. Save and Apply… the HeatWave cluster will be added. If you have a MySQL Shell session open (on the Compute instance) to your database, you can also check that everything has been deployed successfully. You can see some details in OCI's dashboard too.

Loading Data to HeatWave

If you are familiar with MySQL HeatWave, you might already know how to load data into HeatWave; if not, it's very simple to find the easiest way to perform this operation. Just edit the HeatWave Cluster and click on "Estimate Node Count". After the estimate has been generated, select the airportdb schema. The result shows whether more nodes are recommended, along with a statement to run. After entering that command in MySQL, the data is loaded into HeatWave and Zeppelin will use HeatWave every time a query qualifies for it.

I've added another query in Zeppelin that executes very quickly with HeatWave. If the same query is run without HeatWave, instead of taking less than 1 second, it takes more than 4 minutes. I also tried the same query in Zeppelin without HeatWave and stopped it after 38 hours; I have no idea why the query stayed in the Executing state in the processlist. If you know the reason, let me know. We tried to execute it using JDBC and MySQL Connector/J and it also took several minutes. With HeatWave it's almost instantaneous.

We can also check that the queries were offloaded successfully to HeatWave.

Conclusion

It is very easy to add HeatWave to an existing MySQL Database Service instance if it runs on a supported shape. Using a Resource Manager Stack, this is just one click. And as you can see, Zeppelin can highly benefit from HeatWave when the queries are slow.
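As a sketch of what the generated load statement and the offload check look like (the table name here follows the airportdb example; the exact statement produced by "Estimate Node Count" may differ):

```sql
-- Load a table into the HeatWave (RAPID) secondary engine.
ALTER TABLE airportdb.booking SECONDARY_LOAD;

-- Verify that the optimizer will offload a query to HeatWave:
-- the plan should mention the secondary engine (RAPID).
EXPLAIN SELECT COUNT(*) FROM airportdb.booking;
```

If the EXPLAIN output does not mention the secondary engine, the query did not qualify for offload and will run on InnoDB instead.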
Posted about 2 years ago by Frederic Descamps
Recently, somebody asked for help modifying the structure of a large table. The requested modification involved changing a fixed-length string column to a variable-length string column. If you don't know it yet, MySQL supports multiple algorithms when altering a table (DDL):

COPY: operations are performed on a copy
INPLACE: operations may rebuild the table in place (no copy)
INSTANT: operations only change the metadata in the data dictionary

The operation that interests us today needs to copy the full table, and this was the problem for the user: the MySQL datadir could not store twice the size of that large table. In this blog post I will explain a method that can be used to solve this problem. There are also other methods, like a logical dump and restore for example, but in this article we will work with tablespaces. The user explained that he had plenty of storage on his system, but not in the volume mounted for the MySQL datadir.

The Problem

To simulate the problem, I used the airport-db sample database and we will modify the booking table, which is roughly 5G on disk:

# ls -lh airportdb/booking.ibd
-rw-r-----. 1 mysql mysql 5.1G Feb 14 20:03 airportdb/booking.ibd

I've created a volume for MySQL that is 9.8G:

Filesystem              Size  Used Avail Use% Mounted on
/dev/mapper/vg02-mysql  9.8G  7.5G  1.9G  80% /var/lib/mysql

As you can see, we have only 1.9G free, which won't be enough to copy our large table. If we try to modify the booking table to change the seat column from CHAR(4) to VARCHAR(10), we will see the following error:

mysql> ALTER TABLE booking MODIFY seat VARCHAR(10);
ERROR 1114 (HY000): The table '#sql-e0a4_8' is full

In the MySQL error log we can also see the following messages:

2022-02-14T14:10:23.958456Z 8 [ERROR] [MY-012639] [InnoDB] Write to file ./airportdb/#sql-e0a4_8.ibd failed at offset 1927282688, 1048576 bytes should have been written, only 12288 were written. Operating system error number 28. Check that your OS and file system support files of this size. Check also that the disk is not full or a disk quota exceeded.
2022-02-14T14:10:23.958474Z 8 [ERROR] [MY-012640] [InnoDB] Error number 28 means 'No space left on device'
2022-02-14T14:10:23.958492Z 8 [Warning] [MY-012145] [InnoDB] Error while writing 4194304 zeroes to ./airportdb/#sql-e0a4_8.ibd starting at offset 1925185536
2022-02-14T14:10:24.053932Z 8 [ERROR] [MY-013132] [Server] The table '#sql-e0a4_8' is full!

The Solution

With InnoDB, it's possible to use different types of tablespaces:

system tablespace
file-per-table tablespace
general tablespace

File-per-table tablespaces can be stored in a different place, but this can only be defined at table creation. See Creating Tables Externally in the MySQL Manual. So, this is not an option for us. We will instead create a new general tablespace in a different directory (or volume), and then modify the large table while adding it to that new tablespace. When done, if needed, we can move the tablespace back to the MySQL data directory.
First we need to modify my.cnf to specify where InnoDB will also be able to find tablespaces, by adding the following line under the [mysqld] section:

innodb_directories=/var/lib/mysql2

We then need to restart mysqld:

mysql> restart;

And we can verify that the change was applied successfully:

mysql> select @@innodb_directories;
+----------------------+
| @@innodb_directories |
+----------------------+
| /var/lib/mysql2      |
+----------------------+
1 row in set (0.00 sec)

Now we will create the new tablespace that will be stored in this new directory:

mysql> CREATE TABLESPACE `new_tablespace` ADD DATAFILE '/var/lib/mysql2/nts.ibd' ENGINE=INNODB;

We can now modify the large table and, at the same time, move it to its new tablespace:

mysql> ALTER TABLE booking MODIFY seat VARCHAR(10), TABLESPACE new_tablespace;

/var/lib/mysql2 is a different directory (or a mount point) with enough space to hold the copy of the table (which will be the final table after the ALTER). While the ALTER statement is running, you can also track its progress using the following query:

mysql> SELECT stmt.THREAD_ID, stmt.SQL_TEXT, stage.EVENT_NAME AS State,
              stage.WORK_COMPLETED, stage.WORK_ESTIMATED,
              lpad(CONCAT(ROUND(100*stage.WORK_COMPLETED/stage.WORK_ESTIMATED, 2),"%"),10," ") AS COMPLETED_AT,
              lpad(format_pico_time(stmt.TIMER_WAIT), 10, " ") AS STARTED_AGO,
              lpad(format_pico_time(stmt.TIMER_WAIT/ROUND(100*stage.WORK_COMPLETED/stage.WORK_ESTIMATED,2)*100), 10, " ") AS ESTIMATED_FULL_TIME,
              lpad(format_pico_time((stmt.TIMER_WAIT/ROUND(100*stage.WORK_COMPLETED/stage.WORK_ESTIMATED,2)*100)-stmt.TIMER_WAIT), 10, " ") AS ESTIMATED_REMAINING_TIME,
              current_allocated MEMORY
       FROM performance_schema.events_statements_current stmt
       INNER JOIN sys.memory_by_thread_by_current_bytes mt ON mt.thread_id = stmt.thread_id
       INNER JOIN performance_schema.events_stages_current stage ON stage.THREAD_ID = stmt.THREAD_ID\G
*************************** 1. row ***************************
               THREAD_ID: 51
                SQL_TEXT: ALTER TABLE booking MODIFY seat VARCHAR(10), TABLESPACE new_tablespace
                   State: stage/sql/copy to tmp table
          WORK_COMPLETED: 46393043
          WORK_ESTIMATED: 54185083
            COMPLETED_AT: 85.62%
             STARTED_AGO: 1.31 h
     ESTIMATED_FULL_TIME: 1.52 h
ESTIMATED_REMAINING_TIME: 13.15 min
                  MEMORY: 2.39 MiB

Now we can see the size of the tablespace in the new folder:

# ls -lh mysql2
total 7.4G
-rw-r-----. 1 mysql mysql 7.4G Feb 14 22:12 nts.ibd

We can also see that there is more space in the MySQL datadir, as the previous .ibd file of this table is gone:

Filesystem              Size  Used Avail Use% Mounted on
/dev/mapper/vg02-mysql  9.8G  1.5G  7.9G  16% /var/lib/mysql

Now that the schema is changed as we desired, we still have the option to put it back in the original MySQL datadir if we have enough disk space:

mysql> ALTER TABLE booking TABLESPACE=innodb_file_per_table;

Now the table (more specifically the tablespace) is back in its initial location:

# ls -lh mysql/airportdb/booking.ibd
-rw-r-----. 1 mysql mysql 5.3G Feb 14 22:34 mysql/airportdb/booking.ibd
# ls -lh mysql2/
total 7.4G
-rw-r-----. 1 mysql mysql 7.4G Feb 14 22:34 nts.ibd

The new general tablespace we used on a different disk still exists and consumes unnecessary disk space, so we can delete it (from MySQL!):

mysql> SELECT it.*, PATH FROM INFORMATION_SCHEMA.INNODB_TABLESPACES it
       JOIN INFORMATION_SCHEMA.INNODB_TABLESPACES_BRIEF itb ON itb.SPACE=it.SPACE
       WHERE it.NAME='new_tablespace'\G
*************************** 1. row ***************************
          SPACE: 63
           NAME: new_tablespace
           FLAG: 18432
     ROW_FORMAT: Any
      PAGE_SIZE: 16384
  ZIP_PAGE_SIZE: 0
     SPACE_TYPE: General
  FS_BLOCK_SIZE: 4096
      FILE_SIZE: 7860125696
 ALLOCATED_SIZE: 7860129792
AUTOEXTEND_SIZE: 0
 SERVER_VERSION: 8.0.26
  SPACE_VERSION: 1
     ENCRYPTION: N
          STATE: normal
           PATH: /var/lib/mysql2/nts.ibd

Before deleting it, we verify that there are no tables using it (we use the value of SPACE from the query above, 63):

mysql> SELECT * FROM innodb_tables WHERE space=63;
Empty set (0.0009 sec)

As there are none, we can just drop it and reclaim its disk space:

mysql> DROP TABLESPACE new_tablespace;
Query OK, 0 rows affected (0.3385 sec)

Conclusion

In MySQL 8.0, with some InnoDB tablespace management, it's possible to work around disk space issues, temporarily or permanently. The same technique can of course be used to put some tables on a dedicated disk, to split IOPS for example. Enjoy MySQL 8.0!
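The whole procedure above condenses into a short sequence (a sketch using the directory and tablespace name from this walkthrough; adapt the paths to your own volumes):

```sql
-- Prerequisite in my.cnf, under [mysqld] (restart required):
--   innodb_directories=/var/lib/mysql2

-- 1. Create a general tablespace on the volume with free space.
CREATE TABLESPACE new_tablespace
  ADD DATAFILE '/var/lib/mysql2/nts.ibd' ENGINE=INNODB;

-- 2. Run the copying ALTER and relocate the table in one statement,
--    so the copy is written to the new volume.
ALTER TABLE booking MODIFY seat VARCHAR(10), TABLESPACE new_tablespace;

-- 3. Optionally move the table back to a file-per-table tablespace
--    in the datadir once space has been reclaimed there.
ALTER TABLE booking TABLESPACE = innodb_file_per_table;

-- 4. Drop the now-empty general tablespace to reclaim its disk space.
DROP TABLESPACE new_tablespace;
```

Note that steps 2 and 3 each rewrite the full table, so plan for the extra I/O if you move the table back.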
Posted about 2 years ago by Joshua Otwell
This week's issue of OpenLampTech, the newsletter for PHP/MySQL developers, is once again loaded to the brim with the best content I can curate for your reading pleasure. I absolutely enjoy running and sharing the newsletter, and I hope you are enjoying the content just as much. Thank you!

Do you need to learn MySQL? Just starting and confused about how to query a table and get the data you need? I am creating premium MySQL Beginners content for this specific reason: to help those who want to learn the basics of MySQL but don't know where to start. Learn more about the premium blog posts as I develop and release them.

The Newsletter for PHP and MySQL Developers

We have a fantastic issue this week with articles covering:

WordPress(dot)com vs WordPress(dot)org
Laravel Eloquent tips
The WordPress database prefix
The MySQL COALESCE() function
A newsletter shoutout
Database concurrency
And much more…

Instead of waiting on me to repost here each week, get your very own subscription absolutely free and have the OpenLampTech newsletter delivered directly to your inbox.

Past issues and similar reading:

MySQL ROW_NUMBER() window function
MySQL SHOW TABLES
OpenLampTech issue #12

Like what you have read? See anything incorrect? Please comment below, and thank you for reading!

A Call To Action! Thank you for taking the time to read this post. I truly hope you discovered something interesting and enlightening. Please share your findings here with someone else you know who would get the same value out of it. Visit the Portfolio-Projects page to see blog posts/technical writing I have completed for clients. To receive email notifications (never spam) from this blog ("Digital Owl's Prose") for the latest blog posts as they are published, please subscribe (of your own volition) by clicking the 'Click To Subscribe!' button in the sidebar on the homepage. (Feel free at any time to review the Digital Owl's Prose Privacy Policy page for any questions you may have about email updates, opt-in, opt-out, contact forms, etc.) Be sure to visit the "Best Of" page for a collection of my best blog posts.

Josh Otwell has a passion to study and grow as a SQL developer and blogger. Other favorite activities find him with his nose buried in a good book, an article, or the Linux command line. Among those, he shares a love of tabletop RPG games, reading fantasy novels, and spending time with his wife and two daughters.

Disclaimer: The examples presented in this post are hypothetical ideas of how to achieve similar types of results. They are not the utmost best solution(s). The majority, if not all, of the examples provided are performed on a personal development/learning workstation environment and should not be considered production quality or ready. Your particular goals and needs may vary. Use those practices that best benefit your needs and goals. Opinions are my own.

How can I help you?

Are you thinking of starting up a blog? I use WordPress for my blog. Let's both save money on the plans offered.
Grab a Gmail HTML email signature template from my Etsy shop and make your emails pop and stand out.
Need hosting for your next web application or WordPress site? I use and highly recommend Hostinger. They have great pricing and service.
I enjoy reading Refind: the essence of the web, every morning in your inbox. Subscribe for free. Help me get a premium subscription by signing up yourself with my referral link.
Grab a free pack of mobile Creator wallpapers.
Just getting started or wanting to learn MySQL? Find out about my premium blog posts and MySQL Beginner Series here.

Disclosure: Some of the services and product links in this post are affiliate links. At no additional cost to you, should you make a purchase by clicking through one of them, I will receive a commission.

The post OpenLampTech issue #13 appeared first on Digital Owl's Prose.
Posted about 2 years ago by Frederic Descamps
Apache Zeppelin is a web-based notebook that enables data-driven, interactive data analytics, machine learning, and collaborative documents with SQL, Scala, Python, and R. Today we will see how we can deploy Apache Zeppelin on a Compute instance in Oracle Cloud Infrastructure (OCI) using MySQL Database Service (MDS).

You can manually deploy a compute instance, install Zeppelin on it, and then connect it to MySQL Database Service, or you can use the Terraform modules I'm sharing on GitHub: https://github.com/lefred/oci-zeppelin-mds

Finally, you can also deploy all of this by just clicking on the button below to use OCI's Resource Manager Stack. This is the easiest and recommended way to deploy all the resources you need, from the instances to the VCN and security lists. Start by clicking on this button: Resource Manager

Once you click on the button, after being authenticated to OCI, you will see a similar screen. You need to accept the Oracle Terms of Use; some fields will be automatically filled, and you can click on "Next". We then reach a form to configure some variables. Fill in the required fields. You also have the possibility to deploy a Highly Available setup for your MDS instance or a HeatWave Cluster (if you use a compatible shape). We will see that later; for the moment, just fill in the passwords and click "Next". You will get a review panel, where you can enable applying the stack and click on "Create". Now OCI will create all the resources; Zeppelin will be installed, and MySQL Connector/J will also be installed to be used with Zeppelin.

Connecting to Zeppelin

When the stack is created and applied, you will get some important output information that you need to connect to the environment we just deployed. You just need to use that public IP in your browser and you will reach Zeppelin.

Add the MySQL Interpreter

MySQL Connector/J is installed, but we need to create an interpreter to use it with Zeppelin. We first click on the top right and choose "Interpreter". Then we click on "+Create" and fill in the following information. As you can see, we use the jdbc interpreter group and the driver is com.mysql.jdbc.Driver. We also need to specify the IP of the MDS instance. And finally, we also need to specify MySQL Connector/J as a dependency (/opt/zeppelin/interpreter/mysql/mysql-connector-java.jar).

Create a Notebook

We are now ready to create our first Notebook. I've loaded the example airport-db on my MDS instance using the Compute instance, where MySQL Shell is also installed by the Stack. We need to select our new interpreter, and we can start with a simple query. Excellent! Now we can use different queries to create some visualizations of the data.

Conclusion

As you can see, it's very easy to deploy Apache Zeppelin on OCI with MySQL Database Service as the back-end. In the next article, we will see how we can enable MySQL HeatWave to speed up queries when creating large Notebooks.
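Since the original screenshots are not reproduced in this feed, the interpreter settings described above look roughly like the following (the host IP, user, and password are placeholders; the driver class and jar path are the ones mentioned in the text):

```
default.driver    com.mysql.jdbc.Driver
default.url       jdbc:mysql://<MDS_PRIVATE_IP>:3306/airportdb
default.user      <mysql_user>
default.password  <mysql_password>

Dependencies (artifact):
/opt/zeppelin/interpreter/mysql/mysql-connector-java.jar
```

The interpreter group must be jdbc; once saved, a notebook paragraph bound to this interpreter will send its SQL straight to the MDS instance.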
Posted about 2 years ago by Michael McLaughlin
I always get interesting feedback on some posts. On my test case for discovering the STR_TO_DATE function's behavior, the comment was tragically valid: I failed to clean up after my test case. That was correct, and I should have dropped the param table and the two procedures. While appending the drop statements is the easiest fix, I thought it was an opportunity to have a bit of fun and write another procedure that cleans up test case tables from within the test_month_name procedure. Here's a sample dynamic drop_table procedure that you can use in other MySQL stored procedures:

CREATE PROCEDURE drop_table (table_name VARCHAR(64))
BEGIN
  /* Declare a local variable for the SQL statement. */
  DECLARE stmt VARCHAR(1024);

  /* Build the DROP statement in a session variable. */
  SET @SQL := CONCAT('DROP TABLE ',table_name);

  /* Check whether the table exists. */
  IF EXISTS (SELECT NULL
             FROM   information_schema.tables t
             WHERE  t.table_schema = database()
             AND    t.table_name = table_name) THEN

    /* Dynamically prepare and run the statement. */
    PREPARE stmt FROM @SQL;
    EXECUTE stmt;
    DEALLOCATE PREPARE stmt;
  END IF;
END;
$$

You can now put a call to the drop_table procedure in the test_month_name procedure from the earlier post. For convenience, here's the modified test_month_name procedure with the call on line #33, right before you leave the loop and procedure:

CREATE PROCEDURE test_month_name()
BEGIN
  /* Declare a variable for the fetched month name. */
  DECLARE month_name VARCHAR(9);

  /* Declare a handler variable. */
  DECLARE fetched INT DEFAULT 0;

  /* Cursors must come after variables and before event handlers. */
  DECLARE month_cursor CURSOR FOR
    SELECT m.month_name FROM month m;

  /* Declare a not-found record handler to close a cursor loop. */
  DECLARE CONTINUE HANDLER FOR NOT FOUND SET fetched = 1;

  /* Open the cursor and start a simple loop. */
  OPEN month_cursor;
  cursor_loop:LOOP

    /* Fetch a record from the cursor. */
    FETCH month_cursor INTO month_name;

    /* Place the catch handler for no more rows found
       immediately after the fetch operations. */
    IF fetched = 1 THEN

      /* Fetch the partial strings that fail to find a month. */
      SELECT * FROM param;

      /* Conditionally drop the param table. */
      CALL drop_table('param');

      /* Leave the loop. */
      LEAVE cursor_loop;
    END IF;

    /* Call the subfunction because stored procedures
       do not support nested loops. */
    CALL read_string(month_name);
  END LOOP;
END;
$$

As always, I hope sample code examples help others solve problems.
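A quick way to sanity-check drop_table on its own (scratch_demo is a hypothetical throwaway table; run this in any scratch schema):

```sql
-- Create a scratch table, then let drop_table remove it.
CREATE TABLE scratch_demo (id INT);
CALL drop_table('scratch_demo');

-- Calling it again is safe: the EXISTS check simply skips the DROP,
-- so no "unknown table" error is raised.
CALL drop_table('scratch_demo');
```

This idempotence is the point of the EXISTS guard: the procedure can be called unconditionally at the end of any test case.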
Posted about 2 years ago by Oracle MySQL Group
Recent improvements to MySQL Database Service: Support for version 8.0.28, MySQL version selection, MySQL simplified state management after maintenance events, and MySQL Database Service Data Masking and De-Identification Functions.
Posted about 2 years ago by Oracle MySQL Group
If you plan to deploy a Drupal site to OCI using MySQL Database Service, there is a very easy way to perform that deployment: using OCI's Resource Manager! As you know, security is very important for Oracle in OCI; therefore, exposing the database on the Internet is not allowed...
Posted about 2 years ago by MySQL Performance Blog
Incremental backups of MySQL, specifically for the InnoDB engine, are taken by copying the pages modified since the previous backup. The brute-force method, which scans every page of every tablespace file in the server data directory, is an expensive operation; the time required for incremental backups increases as the datadir size increases. To solve this problem, Percona Server for MySQL introduced a "Changed Page Tracking" feature in 5.6 that enables Percona XtraBackup (PXB) to copy only the modified pages required for incremental backups. See the Percona XtraBackup documentation for more information. Starting with MySQL 8.0.18, a similar page tracking feature is implemented upstream; see this blog post. PXB 8.0.27 supports the upstream MySQL implementation of page tracking and can take advantage of it for incremental backups. Backups using the Percona Server for MySQL "Changed Page Tracking" feature are deprecated, and that support will be removed in a future release.

Cases Where Page Tracking is Useful

When the incremental is small, page tracking can be many times faster, as the server is not required to scan all of the database pages; the backup copies the list of pages provided by the page tracking file. In our test, where one percent of the data was changed after a full backup of 100 GB, an incremental backup took 30 seconds, compared to five minutes without page tracking. As the size of the incremental backup increases, the benefit of page tracking is reduced. If the incremental backup size is less than 50% of the full backup, then page tracking performed better than a full scan. The results may vary based on the type of workload; for example, if your workload consists of new page inserts or random updates of different database pages, then page tracking may take longer.

Prerequisite

To use this feature, the mysqlbackup component must be installed and enabled on the server. The user should do this installation before using the page tracking feature:

INSTALL COMPONENT "file://component_mysqlbackup";

To verify that the component was installed successfully, use:

SELECT COUNT(1) FROM mysql.component WHERE component_urn='file://component_mysqlbackup';

After the component is loaded and active on the server, use the --page-tracking option with a backup, and subsequent incremental backups will use the page tracking data to copy the pages modified since the last backup.

Usage

You can use the --page-tracking option with full or incremental backups with PXB 8.0.27. It serves a dual purpose:

At the start of the backup, it sets up the page tracking data so that the next incremental backup can use page tracking.
During an incremental backup, it calls the mysqlbackup component to get the list of pages modified since the previous backup.

Example of a Full Backup

xtrabackup --backup --target-dir=$FULL_BACKUP --page-tracking

You can see the page tracking information in the log file:

xtrabackup: pagetracking is started on the server with LSN 18084452
xtrabackup: pagetracking: Checkpoint lsn is 18084472 and page tracking start lsn is 18084452

Example of an Incremental Backup

xtrabackup --backup --target-dir=$INC_BACKUP --incremental-basedir=$FULL_BACKUP --page-tracking

You can see the page tracking information in the log file:

xtrabackup: pagetracking is started on the server with LSN 18084748
220120 11:30:30 xtrabackup: pagetracking: calling get pages with start lsn 18084748 and end lsn 18084768
xtrabackup: Using pagetracking feature for incremental backup

Purging Page Tracking on the Server

When we use page tracking with PXB, a file is created under the server's datadir that collects information about changed pages. This file keeps growing until page tracking is deactivated with SELECT mysqlbackup_page_track_set(false);. If the server is stopped and restarted, a new file is opened, but the old file(s) are not purged. As long as page tracking is not deactivated explicitly, the files keep growing. To get rid of old page tracking data that is no longer needed, the following procedure is recommended before every *full* backup:

SELECT mysqlbackup_page_track_set(false);
SELECT mysqlbackup_page_track_purge_up_to(9223372036854775807);
SELECT mysqlbackup_page_track_set(true);

The big number is the highest possible LSN; it causes all page tracking files to be purged. The purge interface was added in MySQL Server version 8.0.28.

Open Issue with Page Tracking

If an index was added to a table after the last checkpoint LSN and it was built in place using an exclusive algorithm, you may get a bad incremental backup with page tracking. More details in PS-8032.
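As a side note on the magic number above: 9223372036854775807 is simply the largest signed 64-bit integer (2^63 - 1), i.e. the highest LSN value the server can ever represent, which is why passing it purges everything. You can confirm this from MySQL itself (a sketch; ~0 is the all-ones unsigned BIGINT, and shifting it right by one clears the sign bit):

```sql
-- Both expressions evaluate to 9223372036854775807,
-- the maximum signed BIGINT and thus the maximum LSN.
SELECT ~0 >> 1, POW(2,63) - 1;
```

Any LSN the server has recorded is therefore guaranteed to be at or below this value, so the purge call removes all page tracking files.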