
News

Posted over 2 years ago
JDBC FDW 0.1.0 released. We have just released a Foreign Data Wrapper for databases with a JDBC interface. This release works with PostgreSQL 13 and 14. The FDW is implemented in C using the JDK library. The existing JDBC FDWs are not well maintained and their features are very limited; jdbc_fdw is released to solve these problems. This release supports the following features:
- PostgreSQL 13.0 and 14.0
- SELECT
- INSERT
- UPDATE
- DELETE
- Push-down of the WHERE clause
- Push-down of aggregate functions
This FDW is developed by Toshiba Software Engineering & Technology Center. Please see the repository for details.
Source repository: https://github.com/pgspider/jdbc_fdw
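Below is a minimal setup sketch for jdbc_fdw, assuming a remote PostgreSQL database reachable over JDBC. The server and user-mapping option names (drivername, url, jarfile, username, password) and the foreign table option (table_name) are assumptions based on common JDBC FDW conventions, not details from this announcement; check the repository README for the exact options.

    -- Load the extension and point a foreign server at a JDBC data source.
    CREATE EXTENSION jdbc_fdw;

    CREATE SERVER jdbc_srv FOREIGN DATA WRAPPER jdbc_fdw
        OPTIONS (drivername 'org.postgresql.Driver',            -- option names assumed
                 url        'jdbc:postgresql://remote:5432/db',
                 jarfile    '/opt/jdbc/postgresql.jar');

    CREATE USER MAPPING FOR CURRENT_USER SERVER jdbc_srv
        OPTIONS (username 'remote_user', password 'secret');

    -- SELECT, INSERT, UPDATE and DELETE work against the foreign table;
    -- WHERE clauses and supported aggregates are pushed down to the remote side.
    CREATE FOREIGN TABLE remote_items (id int, name text)
        SERVER jdbc_srv OPTIONS (table_name 'items');           -- option name assumed

    SELECT count(*) FROM remote_items WHERE id > 100;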
Posted over 2 years ago
I'm pleased to announce new releases of the pg_builder and pg_wrapper packages. The main topic of these releases is support for Postgres 14 and PHP 8.1.

pg_builder version 2.0.0

pg_builder is a query builder for Postgres backed by a partial PHP reimplementation of PostgreSQL's own SQL parser. It supports almost all syntax available in Postgres 14 for SELECT (and VALUES), INSERT, UPDATE, and DELETE queries. With pg_builder it is possible to start with a manually written query, parse it into an Abstract Syntax Tree, add query parts (either as Node objects or as strings) to this tree or remove them, and finally convert the tree back to an SQL string.

Release highlights:
- Support for new Postgres 14 syntax: most keywords can be used as column aliases without AS; the DISTINCT clause for GROUP BY; SEARCH and CYCLE clauses for Common Table Expressions; an alias for the USING clause of JOIN expressions.
- SQL functions with a custom argument format (arguments separated by keywords, keywords as arguments, etc.) are now parsed to specialized Nodes and appear in generated SQL the same way they did in the source: trim(trailing 'o' from 'foo') rather than pg_catalog.rtrim('foo', 'o'). This follows the changes made in Postgres 14 itself.
- No E_DEPRECATED errors when running under PHP 8.1.

Full release notes are available. The package can be downloaded from GitHub or installed with Composer:

$ composer require sad_spirit/pg_builder

pg_builder can be used on its own; using it together with pg_wrapper allows running the built queries with transparent conversion of query parameters to Postgres types.

pg_wrapper version 2.0.0

pg_wrapper provides converters for PostgreSQL data types and an OO wrapper around PHP's native pgsql extension that uses these converters. Conversion of query result fields is done automatically using database metadata; query parameters may require an explicit type specification. For types with a corresponding native PHP type or class, that type is used (text -> string, timestamp -> DateTimeImmutable, hstore -> associative array, etc.). For other types (geometric types, ranges) the package provides custom classes.

Release highlights:
- Full support for multirange types added in Postgres 14, with types\Multirange and its descendants representing the values on the PHP side and converters\containers\MultiRangeConverter transforming the values to and from their DB string representation.
- Support for the changes to the pgsql extension in PHP 8.1: objects are used instead of resources for connections and query results.

Full release notes are available. The package can be downloaded from GitHub or installed with Composer:

$ composer require sad_spirit/pg_wrapper
Posted over 2 years ago
pg_query_rewrite is a PostgreSQL extension that can modify SQL statements received by PostgreSQL before the backend executes them. It aims to be similar to the Rewriter Query Rewrite Plugin in MySQL. Release 0.0.3 of pg_query_rewrite has been released. This is a maintenance release that adds support for PostgreSQL 14.
Documentation: https://github.com/pierreforstmann/pg_query_rewrite/#readme
Download: https://pgxn.org/dist/pg_query_rewrite/0.0.3/
Support: use the GitHub issue tracker at https://github.com/pierreforstmann/pg_query_rewrite/issues
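As a minimal usage sketch, the statements below show how a rewrite rule might be registered. The pgqr_add_rule() function name and the pg_query_rewrite.max_rules setting are assumptions recalled from the extension's README, not facts stated in this announcement, so verify them against the documentation linked above.

    -- postgresql.conf (assumed settings):
    --   shared_preload_libraries = 'pg_query_rewrite'
    --   pg_query_rewrite.max_rules = 10

    CREATE EXTENSION pg_query_rewrite;

    -- Register a rule that rewrites one exact source statement into a target
    -- statement before the backend executes it (function name assumed).
    SELECT pgqr_add_rule('SELECT 1;', 'SELECT 2;');

    -- This statement is now rewritten and returns 2 instead of 1.
    SELECT 1;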
Posted over 2 years ago
We have just released PGSpider v2.0.0. PGSpider is a high-performance SQL cluster engine for distributed big data. PGSpider can access a number of data sources using Foreign Data Wrappers (FDW) and retrieves the distributed data vertically. Usage of PGSpider is the same as PostgreSQL: you can use any client application, such as libpq and psql.

This release improves the following items:
- Publish the full source code (the old version required applying a patch to PostgreSQL)
- Based on PostgreSQL 14.0
- Push down SQL functions in the target list
- Push down JOIN if all tables in a query are located in a single data source
- Change the program name ('pgspider') and the default port number (4813)

PGSpider supports the following features:
- Multi-tenant: users can get records from multiple tables with one SQL statement. If each data source has tables with a similar schema, PGSpider can present them as a single virtual table, which we call a multi-tenant table.
- Parallel processing: PGSpider executes queries and fetches results from child nodes in parallel. PGSpider expands a multi-tenant table into its child tables and creates a new thread for each child table to access the corresponding data source.
- Pushdown: WHERE clauses and aggregate functions are pushed down to child nodes. Previously, pushing AVG, STDDEV and VARIANCE down on multi-tenant tables caused an error; PGSpider can now execute them. JOIN is also pushed down if all tables in a query are located in a single data source.

This is developed by Toshiba Software Engineering & Technology Center. Please see the repository for details, and send us feedback.
Source repository: https://github.com/pgspider/pgspider
Best Regards, TAIGA Katayama
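To illustrate the multi-tenant and pushdown behaviour described in this announcement, here is a minimal, hypothetical query. The table name sensor_data and its columns are invented for the example, and the multi-tenant table setup itself (child foreign tables on each data source) follows the repository README and is not reproduced here.

    -- Hypothetical multi-tenant table "sensor_data", assumed to be backed by
    -- child foreign tables on several data sources. PGSpider expands the query
    -- to every child table, runs them in parallel, and pushes the WHERE clause
    -- and the aggregate down to each node; as of v2.0.0, AVG, STDDEV and
    -- VARIANCE no longer cause an error on multi-tenant tables.
    SELECT device_id, avg(temperature) AS avg_temp
    FROM   sensor_data
    WHERE  recorded_at >= now() - interval '1 day'
    GROUP  BY device_id;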
Posted over 2 years ago
We have just released version 2.1.1 of the Foreign Data Wrapper for GridDB. This release works with PostgreSQL 10, 11, 12, 13 and 14.

This release improves the following items (from 2.0.0):
- Support a get-version feature
- Support push-down of the scalar operators ANY/ALL (ARRAY)
- Support push-down of more functions
- Support keep-connection control and connection cache information
- Support bulk INSERT via the batch_size option on PostgreSQL 14
- Fix some bugs

The FDW supports the following features:
- SELECT, INSERT, UPDATE and DELETE
- Function push-down in the WHERE clause
- LIMIT and OFFSET push-down when there is a LIMIT clause only, or both LIMIT and OFFSET
- ORDER BY push-down
- Aggregation push-down

This is developed by Toshiba Software Engineering & Technology Center. Please see the repository for details.
Source repository: https://github.com/pgspider/griddb_fdw
GridDB is a KVS and time-series database. Please see its repository for details: https://github.com/griddb/griddb_nosql
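A minimal, hypothetical setup sketch for griddb_fdw follows. The server option names (host, port, clustername), the user-mapping options and the example table are assumptions rather than details from this announcement; check the repository README for the exact spelling.

    CREATE EXTENSION griddb_fdw;

    -- Connection option names below are assumed; see the griddb_fdw README.
    CREATE SERVER griddb_srv FOREIGN DATA WRAPPER griddb_fdw
        OPTIONS (host '239.0.0.1', port '31999', clustername 'myCluster');

    CREATE USER MAPPING FOR CURRENT_USER SERVER griddb_srv
        OPTIONS (username 'admin', password 'admin');

    -- The foreign table is assumed to map to a GridDB container of the same name.
    CREATE FOREIGN TABLE events (ts timestamp, device text, value float8)
        SERVER griddb_srv;

    -- Bulk INSERT on PostgreSQL 14: batch_size groups rows into one round trip.
    ALTER FOREIGN TABLE events OPTIONS (ADD batch_size '1000');

    -- WHERE, ORDER BY, LIMIT/OFFSET and aggregates can be pushed down to GridDB.
    SELECT device, max(value) FROM events WHERE ts >= '2022-01-01' GROUP BY device;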
Posted over 2 years ago
We have just released version 2.1.1 of the Foreign Data Wrapper for SQLite. This release works with PostgreSQL 10, 11, 12, 13 and 14.

This release improves the following items:
- Support INSERT/UPDATE with generated columns
- Check for invalid options

Bug fixes:
- Fix an issue when accessing an FTS table on SQLite
- Fix a memory leak

The FDW supports the following key features:
- SELECT, INSERT, UPDATE and DELETE on foreign tables
- WHERE clause push-down
- Aggregate push-down
- ORDER BY push-down
- JOIN push-down (LEFT, RIGHT, INNER)
- LIMIT and OFFSET push-down (when all queried tables are foreign tables)
- Transactions

This is developed by Toshiba Software Engineering & Technology Center. Please see the repository for details.
Source repository: https://github.com/pgspider/sqlite_fdw
Best Regards, TAIGA Katayama
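A minimal, hypothetical setup sketch for sqlite_fdw follows; the database server option (path to the SQLite file), the table option and the example table are assumptions based on the project README, so verify them there.

    CREATE EXTENSION sqlite_fdw;

    -- "database" is assumed to be the path to the SQLite file.
    CREATE SERVER sqlite_srv FOREIGN DATA WRAPPER sqlite_fdw
        OPTIONS (database '/var/data/app.db');

    -- The "table" option name is assumed; see the sqlite_fdw README.
    CREATE FOREIGN TABLE users (id int, name text)
        SERVER sqlite_srv OPTIONS (table 'users');

    -- WHERE clauses, aggregates, ORDER BY, JOINs and LIMIT/OFFSET can be pushed
    -- down to SQLite when all tables in the query are foreign tables.
    SELECT count(*) FROM users WHERE id > 10;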
Posted over 2 years ago
We have just released version 1.1.1 of the Foreign Data Wrapper for InfluxDB. This release works with PostgreSQL 10, 11, 12, 13 and 14.

This release improves the following items (from 1.0.0):
- Support bulk INSERT
- Support the GROUP BY time() and fill() features of InfluxDB

The FDW supports the following features:
- Push-down of some aggregate functions: count, stddev, sum, max, min
- INSERT and DELETE statements
- Bulk INSERT via the batch_size option on PostgreSQL 14 or later
- Push-down of WHERE clauses that include timestamps, intervals and the now() function
- Push-down of LIMIT...OFFSET clauses when there is a LIMIT clause only, or both LIMIT and OFFSET

This is developed by Toshiba Software Engineering & Technology Center. Please see the repository for details.
Source repository: https://github.com/pgspider/influxdb_fdw
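A minimal, hypothetical setup sketch for influxdb_fdw follows. The server and user-mapping option names (host, port, dbname, user, password) and the example measurement are assumptions rather than details from this announcement; check the repository README for the exact options.

    CREATE EXTENSION influxdb_fdw;

    -- Connection option names below are assumed; see the influxdb_fdw README.
    CREATE SERVER influxdb_srv FOREIGN DATA WRAPPER influxdb_fdw
        OPTIONS (host 'http://localhost', port '8086', dbname 'metrics');

    CREATE USER MAPPING FOR CURRENT_USER SERVER influxdb_srv
        OPTIONS (user 'influx', password 'secret');

    -- "time" is the InfluxDB timestamp column; batch_size enables bulk INSERT
    -- on PostgreSQL 14 or later (table option names assumed).
    CREATE FOREIGN TABLE cpu (time timestamp with time zone, host text, usage float8)
        SERVER influxdb_srv OPTIONS (table 'cpu', batch_size '1000');

    -- WHERE clauses containing timestamps, intervals and now(), plus the
    -- count/stddev/sum/max/min aggregates, can be pushed down to InfluxDB.
    SELECT host, max(usage)
    FROM   cpu
    WHERE  time >= now() - interval '1 hour'
    GROUP  BY host;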
Posted over 2 years ago
What is Pgpool-II? Pgpool-II is a tool that adds useful features to PostgreSQL, including:
- connection pooling
- load balancing
- automatic failover
and more.

Minor releases

The Pgpool Global Development Group is pleased to announce the availability of the following versions of Pgpool-II:
- 4.2.7
- 4.1.10
- 4.0.17
- 3.7.22

The purpose of this release is to provide packages for PostgreSQL 14. Please take a look at the release notes. You can download the source code and RPMs.
Posted over 2 years ago
The PostgreSQL Core Team has published its activity report for the period from June 2019 through December 2021. You can find a copy of the activity report here: https://www.postgresql.org/developer/corereports/june2019_december2021/ The Core Team aims to provide transparency into its activities by publishing a summary to the community on a yearly basis. Unfortunately, more than two years have passed since we published the last report. We apologize for this and aim to do better going forward. If you have any questions, please feel free to reach out to the Core Team: [email protected]. Sincerely, The PostgreSQL Core Team
Posted over 2 years ago
What's new in DLE 3.0?

The Postgres.ai team is happy to announce the release of version 3.0 of Database Lab Engine (DLE), the most advanced open-source software for powering development, testing, and troubleshooting environments for fast-growing projects. Database Lab Engine 3.0 gives companies a competitive advantage by enabling the "shift-left testing" approach in software development.

Database Lab Engine is an open-source technology that enables thin cloning for PostgreSQL. Thin clones are exceptionally useful when you need to scale the development process: DLE can manage dozens of independent clones of your database on a single machine, so each engineer or automation process works with their very own database, provisioned in seconds and without extra costs.

Major changes in DLE 3.0:
- UI included in the core distribution; it allows working with a single DLE instance
- Persistent clones: clones now survive DLE (or VM) restarts
- For the "logical" data provisioning mode: the ability to reset a clone's state using a snapshot from a different pool/dataset
- Better logging and simpler configuration
- Improvements for cases when multiple DLEs are running on a single machine
- PostgreSQL 14 support

Starting with version 3.0.0, DLE collects non-personally identifiable telemetry data. This feature is enabled by default but can be switched off; read more in the DLE documentation. Keeping telemetry enabled can be considered your contribution to DLE development, because it helps us make decisions as the open-source product evolves.

Below, we discuss the most requested changes implemented in DLE 3.0. All of them were driven by real-life user experience and invaluable feedback from the growing community of users and contributors.

Worth noting
- Like Database Lab? Give us a GitHub star: https://github.com/postgres-ai/database-lab.
- Action required to migrate from a previous version: if you are running DLE 2.5 or older, please read carefully and follow the Migration notes.
- To get help, reach out to the Postgres.ai team and the growing community of Database Lab users and contributors: https://postgres.ai/contact.

DLE UI: clone large databases in seconds right in your browser

Being open-source software, DLE has always been equipped with an API and a CLI. The UI, however, was initially available only in the form of SaaS: the Database Lab Platform running on Postgres.ai. In response to numerous requests from DLE users, the UI has been integrated into the core distribution of DLE 3.0. This change makes open-source DLE even more attractive, making it easier to use and simplifying adoption in fast-growing companies. You can watch a short video demonstrating the DLE UI. Some users have told us that with the UI in hand, it becomes much easier to explain to colleagues the various use cases where DLE can be very helpful. If you like Database Lab, please take advantage of this change: use the UI to demonstrate the idea of cloning large databases in a few seconds, and discuss how it can influence your software development and testing processes, as well as incident troubleshooting and SQL optimization.

Persistent clones: keep working with your cloned Postgres databases during maintenance

Another feature added to DLE 3.0 is something DLE users have asked about a lot. Before 3.0, any restart of DLE meant the loss of all created clones, so DLE upgrades, VM restarts, and even simple reconfiguration of DLE always required a maintenance window, interrupting work. A partial solution to this problem was the ability to reconfigure DLE without restarts, introduced in DLE 2.0; however, this did not help in the cases of DLE upgrades or VM restarts. With DLE 3.0, this problem is fully solved:
- If you are running DLE 2.5 or older, plan one more maintenance window; it will be the last one needed for upgrades. All subsequent upgrades will keep clones alive.
- If you experience a VM failure (not uncommon in cloud environments), clones will be re-created once the VM is back, keeping the database state. Of course, in the case of a VM restart, DB connections are lost and need to be re-established.
- If you only need to restart DLE itself, all clone containers keep running, and users can continue working with them during the restart without any interruption.

Advanced reset for the "logical" mode

In DLE 2.5, we implemented the ability to reset a clone to any available snapshot, a convenient way for your clones to travel in time quickly. In 2.5, this was supported only for the "physical" provisioning mode (restoring the data directory, PGDATA, from physical backups, or obtaining it from the source using pg_basebackup). In other words, it only worked if you manage Postgres yourself and can copy PGDATA or establish a physical replication connection to your databases, which is not available to users of RDS and other managed Postgres services.

For DLE running in the "logical" data provisioning mode (based on dump/restore, the only option for most managed Postgres cloud offerings such as Amazon RDS), DLE 2.5 provided the ability to operate with multiple copies of PGDATA, which allowed a full refresh without downtime. However, if DLE users were running clones on the "old" PGDATA version, they needed to recreate them to unlock the next full refresh, which was somewhat inconvenient because of unpredictable port allocation. In DLE 3.0, it is now possible to reset a clone's state to any database version (snapshot), even if that version is provided by another copy of PGDATA running on a different pool/dataset. Users can therefore keep a clone running on the same port for a long time, with stable DB credentials (including the port), and when needed, once the full refresh has finished, switch to the freshest database version in seconds. This makes the experience of working with the "logical" mode almost on par with the "physical" one.

Running multiple DLEs on a single machine

Originally, DLE was designed to run on a dedicated machine, physical or virtual. However, many users found this inconvenient: development and testing environments are often subject to budget optimization, so it can make a lot of sense to run multiple DLEs on a single machine, especially if the organization has many smaller databases (a typical case for those who deal with microservice architectures). DLE 3.0 has several improvements that simplify running multiple DLEs on a single machine. In particular, the new configuration option selectedPool allows having a single ZFS pool and running multiple DLEs in such a way that each one has its own dataset. This drastically simplifies free disk space management: instead of fragmented space allocation and dealing with many "free disk space" numbers, the DLE administrator now needs to control just a single number and adjust disk size much less often. We are planning to discuss the aspects of running multiple DLEs on a single machine in a separate article.

Further reading
- DLE 3.0 release notes
- Database Lab documentation
- Tutorial for any database
- Tutorial for Amazon RDS
- Interactive tutorial (Katacoda)

Request for feedback and contributions

Feedback and contributions would be greatly appreciated:
- Database Lab Community Slack
- DLE & DB Migration Checker issue tracker
- Issue tracker of the Terraform module for Database Lab