Posted over 3 years ago
This message is being sent from the Community Code of Conduct Committee.
As part of the community CoC policy, the Committee membership is to be refreshed on an annual basis.
We are seeking up to three (3) volunteers to serve on the Committee for the coming year, October 1, 2021 - September 30, 2022. Members serve a minimum of one year, and a maximum of three years.
We are extending the original call for volunteers issued in August in order to increase the diversity of the candidate pool. We are seeking people who reflect the demographics of the PostgreSQL community, with the goal to have members from multiple countries and varied demographics.
The time commitment for Committee involvement varies, based on internal administrative work and the number of active investigations. We estimate an average of 5 hours per month, but that could increase if there is an increase in the number of incident reports.
You can view the current Committee members here:
https://www.postgresql.org/about/policies/coc_committee/
If you are interested, please complete the questionnaire below, and email your responses to the Committee at [email protected] no later than September 20, 2021, AOE. You can check your local time against AOE at https://time.is/Anywhere_on_Earth
The questionnaire:
Your name:
Current employer:
Current country of residence:
(We ask for employer and residence because one of the goals of the Committee is to have representation from a variety of geographical areas. We also want to avoid a concentration of members from one company.)
1. What interests you about being on the CoC Committee?
2. Have you been on another CoC Committee, or had a similar role at another organization? (Prior experience is not required, it's just helpful to know everyone's background.)
3. What else do you want to tell us about yourself that is helpful for us to know about your potential involvement with the CCoC Committee?
Please be sure to send your reply to the CCoC email listed above, and not as a reply to this message. Thank you.
Posted over 3 years ago
PostgreSQL Weekly News - September 12, 2021
PostgreSQL Product News
pg_dumpbinary 2.5, a program used to dump a PostgreSQL database in binary format, released.
pgBadger v11.6, a PostgreSQL log analyzer and graph tool written in Perl, released.
pgagroal 1.3.0, a high-performance protocol-native connection pool for PostgreSQL, released.
PostgreSQL Jobs for September
https://archives.postgresql.org/pgsql-jobs/2021-09/
PostgreSQL in the News
Planet PostgreSQL: https://planet.postgresql.org/
PostgreSQL Weekly News is brought to you this week by David Fetter
Submit news and announcements by Sunday at 3:00pm PST8PDT to [email protected].
Applied Patches
Michaël Paquier pushed:
Remove some unused variables in TAP tests. Author: Amul Sul Discussion:
https://postgr.es/m/CAAJ_b96xuFh4JZE6p-zhLyDu7q=NbxJfb1z_yeAu6t-MqaBC+Q@mail.gmail.com
https://git.postgresql.org/pg/commitdiff/5fcb23c18fe408bfc6669aa5bca2d21896f9fe90
Move the shared memory size calculation to its own function. This change
refactors the shared memory size calculation in
CreateSharedMemoryAndSemaphores() to its own function. This is intended for
use in a future change related to the setup of huge pages and shared memory
with some GUCs, while useful on its own for extensions. Author: Nathan
Bossart Discussion:
https://postgr.es/m/[email protected]
https://git.postgresql.org/pg/commitdiff/0bd305ee1d427ef29f5fa4fa20567e3b3f5ff792
Clean up some code using "(expr) ? true : false". All the code paths
simplified here were already using a boolean or used an expression that led to
zero or one, making the extra bits unnecessary. Author: Justin Pryzby
Reviewed-by: Tom Lane, Michael Paquier, Peter Smith Discussion:
https://postgr.es/m/[email protected]
https://git.postgresql.org/pg/commitdiff/fd0625c7a9c679c0c1e896014b8f49a489c3a245
Introduce GUC shared_memory_size. This runtime-computed GUC shows the size of
the server's main shared memory area, taking into account the amount of shared
memory allocated by extensions as this is calculated after processing
shared_preload_libraries. Author: Nathan Bossart Discussion:
https://postgr.es/m/[email protected]
https://git.postgresql.org/pg/commitdiff/bd1788051b02cfddcd9ef0e2fd094972f372b8fd
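For illustration, a runtime-computed GUC like this can be inspected from SQL once the server is running; a minimal sketch, assuming a build that includes this commit:

    -- shared_memory_size is computed at startup, after shared_preload_libraries
    -- has been processed, so it includes shared memory requested by extensions.
    SHOW shared_memory_size;
    SELECT setting, unit FROM pg_settings WHERE name = 'shared_memory_size';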
Fix compilation warning in ipci.c. A Size had better use %zu when printed.
Oversight in bd17880, per buildfarm member lapwing.
https://git.postgresql.org/pg/commitdiff/aa37a439db6bd328d68ce815ab9e12467f42493b
Make shared_memory_size a preset option. bd17880 set up that as a memory
parameter, but the docs told a different story. A preset parameter is adapted
here, as this option is compiled at startup time. Reported-by: Fujii Masao
Discussion:
https://postgr.es/m/[email protected]
https://git.postgresql.org/pg/commitdiff/3b231596ccfc6bb6564c63a574e429765c0f775e
Peter Eisentraut pushed:
Improve fix pkg-config files for static linking. Amend
4c2eab3a0dec2eae40892fb525830a5947a398c7 to link against the libraries without
the "_shlib" suffix, since this is meant for static linking.
https://git.postgresql.org/pg/commitdiff/55392bc5b0e0c1a8045627bbc41b4ec7143c4cc7
Fix incorrect format placeholders.
https://git.postgresql.org/pg/commitdiff/bb1412baa5b57652ef69f7e995657d085fd308e4
Disable anonymous record hash support except in special cases. Commit
01e658fa74 added hash support for row types. This also added support for
hashing anonymous record types, using the same approach that the type cache
uses for comparison support for record types: It just reports that it works,
but it might fail at run time if a component type doesn't actually support the
operation. We get away with that for comparison because most types support
that. But some types don't support hashing, so the current state can result
in failures at run time where the planner chooses hashing over sorting,
whereas that previously worked if only sorting was an option. We do, however,
want the record hashing support for path tracking in recursive unions, and the
SEARCH and CYCLE clauses built on that. In that case, hashing is the only
plan option. So to enable that, this commit implements the following approach:
The type cache does not report that hashing is available for the record type.
This undoes that part of 01e658fa74. Instead, callers that require hashing no
matter what can override that result themselves. This patch only touches the
callers to make the aforementioned recursive query cases work, namely the
parse analysis of unions, as well as the hash_array() function. Reported-by:
Sait Talha Nisanci [email protected] Bug: #17158 Discussion:
https://www.postgresql.org/message-id/flat/17158-8a2ba823982537a4%40postgresql.org
https://git.postgresql.org/pg/commitdiff/a3d2b1bbe904b0ca8d9fdde20f25295ff3e21f79
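For illustration, the recursive-query usage the commit keeps working looks roughly like this; the edges table is hypothetical:

    -- The path column added by CYCLE is built from anonymous records; this is the
    -- kind of recursive-query case for which record hashing remains available.
    WITH RECURSIVE walk(id, parent) AS (
        SELECT id, parent FROM edges WHERE id = 1
        UNION ALL
        SELECT e.id, e.parent FROM edges e JOIN walk w ON e.parent = w.id
    ) CYCLE id SET is_cycle USING path
    SELECT * FROM walk;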
Fix typo.
https://git.postgresql.org/pg/commitdiff/7390b6421a98b70554b6b5edea5d6e012dfdbbba
Remove useless casts. Casting the argument of strVal() to (Value *) is
useless, since strVal() already does that. Most code didn't do that anyway;
this was apparently just a style that snuck into certain files. Reviewed-by:
Dagfinn Ilmari Mannsåker [email protected] Reviewed-by: Kyotaro Horiguchi
[email protected] Discussion:
https://www.postgresql.org/message-id/flat/[email protected]
https://git.postgresql.org/pg/commitdiff/cbdf75bf8053f88bbae6b307f34ab057424a370f
Remove Value node struct. The Value node struct is a weird construct. It is
its own node type, but most of the time, it actually has a node type of
Integer, Float, String, or BitString. As a consequence, the struct name and
the node type don't match most of the time, and so it has to be treated
specially a lot. There doesn't seem to be any value in the special construct.
There is very little code that wants to accept all Value variants but nothing
else (and even if it did, this doesn't provide any convenient way to check
it), and most code wants either just one particular node type (usually
String), or it accepts a broader set of node types besides just Value. This
change removes the Value struct and node type and replaces them by separate
Integer, Float, String, and BitString node types that are proper node types
and structs of their own and behave mostly like normal node types. Also, this
removes the T_Null node tag, which was previously also a possible variant of
Value but wasn't actually used outside of the Value contained in A_Const.
Replace that by an isnull field in A_Const. Reviewed-by: Dagfinn Ilmari
Mannsåker [email protected] Reviewed-by: Kyotaro Horiguchi
[email protected] Discussion:
https://www.postgresql.org/message-id/flat/[email protected]
https://git.postgresql.org/pg/commitdiff/639a86e36aaecb84faaf941dcd0b183ba0aba9e9
Fix _equalA_Const. 639a86e36aaecb84faaf941dcd0b183ba0aba9e9 neglected to make
the necessary adjustments to _equalA_Const. Found only via
COPY_PARSE_PLAN_TREES.
https://git.postgresql.org/pg/commitdiff/0ffbe900ce599d204536b9623291e05e965da23e
Fujii Masao pushed:
Fix typo in comments. Author: Hou Zhijie Discussion:
https://postgr.es/m/OS0PR01MB5716E6A6535FDFDC5A1B004194CE9@OS0PR01MB5716.jpnprd01.prod.outlook.com
https://git.postgresql.org/pg/commitdiff/78aa616be74a13156f4fc8db3a131f1fdc2cce47
postgres_fdw: Allow application_name of remote connection to be set via GUC.
This commit adds postgres_fdw.application_name GUC which specifies a value for
application_name configuration parameter used when postgres_fdw establishes a
connection to a foreign server. This GUC setting always overrides
application_name option of the foreign server object. This GUC is useful when
we want to specify our own application_name per remote connection. Previously
application_name of a remote connection could be set basically only via
options of a server object. But which meant that every session connecting to
the same foreign server basically should use the same application_name. Also
if we want to change the setting, we had to execute "ALTER SERVER ... OPTIONS
..." command. It was inconvenient. Author: Hayato Kuroda Reviewed-by:
Masahiro Ikeda, Fujii Masao Discussion:
https://postgr.es/m/TYCPR01MB5870D1E8B949DAF6D3B84E02F5F29@TYCPR01MB5870.jpnprd01.prod.outlook.com
https://git.postgresql.org/pg/commitdiff/449ab6350526e99d33363706b759951ebad7928e
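For illustration, the GUC described above can be set per session; a sketch with a hypothetical foreign table, assuming postgres_fdw is already configured:

    -- Overrides any application_name option defined on the foreign server object.
    SET postgres_fdw.application_name = 'reporting_batch';
    SELECT count(*) FROM remote_orders;  -- the remote session now reports 'reporting_batch'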
postgres_fdw: Revert unstable tests for postgres_fdw.application_name. Commit
449ab63505 added the tests that check that postgres_fdw.application_name GUC
works as expected. But they were unstable and caused some buildfarm members to
report the failure. This commit reverts those unstable tests. Reported-by:
Tom Lane as per buildfarm Discussion:
https://postgr.es/m/[email protected]
https://git.postgresql.org/pg/commitdiff/98dbef90eb29b13079ba3bd260b3c5818904ee86
Fix issue with WAL archiving in standby. Previously, walreceiver always closed
the currently-opened WAL segment and created its archive notification file,
after it finished writing the current segment up and received any WAL data
that should be written into the next segment. If walreceiver exited just
before any WAL data in the next segment arrived at standby, it did not create
the archive notification file of the current segment even though that's known
completed. This behavior could cause WAL archiving of the segment to be
delayed until subsequent restartpoints or checkpoints created its notification
file. To fix the issue, this commit changes walreceiver so that it creates an
archive notification file of a current WAL segment immediately if that's known
completed before receiving next WAL data. Back-patch to all supported
branches. Reported-by: Kyotaro Horiguchi Author: Fujii Masao Reviewed-by:
Kyotaro Horiguchi Discussion:
https://postgr.es/m/[email protected]
https://git.postgresql.org/pg/commitdiff/596ba75cb11173a528c6b6ec0142a282e42b69ec
pgbench: Stop counting skipped transactions as soon as timer is exceeded. When
throttling is used, transactions that lag behind schedule by more than the
latency limit are counted and reported as skipped. Previously, there was the
case where pgbench counted all skipped transactions even if the timer
specified in -T option was exceeded. This could take a very long time to do
that especially when unrealistically high rate setting in -R option caused
quite a lot of transactions that lagged behind schedule. This could prevent
pgbench from ending immediately, and so pgbench could look like it got stuck
to users. To fix the issue, this commit changes pgbench so that it stops
counting skipped transactions as soon as the timer is exceeded. The timer can
make pgbench end soon even when there are lots of skipped transactions that
have not been counted yet. Note that there is no guarantee that all skipped
transactions are counted under -T though there is under -t. This is OK in
practice because it's very unlikely to happen with realistic setting. Also
this is not the issue that this commit newly introduces. There used to be the
case where pgbench ended without counting all skipped transactions since
before. Back-patch to v14. Per discussion, we decided not to bother
back-patch to the stable branches because it's hard to imagine the issue
happens in practice (with realistic setting). Author: Yugo Nagata, Fabien
COELHO Reviewed-by: Greg Sabino Mullane, Fujii Masao Discussion:
https://postgr.es/m/[email protected]
https://git.postgresql.org/pg/commitdiff/9bcbd7c581a5de3b915ef8fe0262e4abd9cb6e59
Tom Lane pushed:
Make timetz_zone() stable, and correct a bug for DYNTZ abbreviations.
Historically, timetz_zone() has used time(NULL) as the reference point for
deciding whether DST is active. That means its result can change
intra-statement, requiring it to be marked VOLATILE (cf. 35979e6c3). But that
definition is pretty inconsistent with the way we deal with timestamps
elsewhere. Let's make it use the transaction start time ("now()") as the
reference point instead. That lets it be marked STABLE, and also saves a
kernel call per invocation. While at it, remove the function's use of
pg_time_t and pg_localtime. Those are inconsistent with the other code in this
area, which indeed created a bug: timetz_zone() delivered completely wrong
answers if the zone was specified by a dynamic TZ abbreviation. (We need to
do something about that in the back branches, but the fix will look different
from this.) Aleksander Alekseev and Tom Lane Discussion:
https://postgr.es/m/CAJ7c6TOMG8zSNEZtCn5SPe+cCk3Lfxb71ZaQwT2F4T7PJ_t=KA@mail.gmail.com
https://git.postgresql.org/pg/commitdiff/388e71af880d579212c2489686903c2cfdea9032
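timetz_zone() is the function behind AT TIME ZONE applied to time with time zone values; for illustration (zone name chosen arbitrarily):

    -- Whether the standard or daylight-saving offset applies is now decided from
    -- the transaction start time (now()) rather than a fresh time(NULL) call,
    -- which is what lets the function be marked STABLE.
    SELECT '12:00:00'::timetz AT TIME ZONE 'America/New_York';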
Fix actively-misleading comments about the contents of struct pg_tm. pgtime.h
documented the PG interpretation of tm_mon right alongside the POSIX
interpretation of tm_year, with no hint that neither comment was correct
throughout our code. Perhaps someday we ought to switch to using two separate
struct definitions to provide a clearer indication of which semantics are in
use where. But I fear the tedium-versus-safety-gain tradeoff would not be
very good. Discussion:
https://postgr.es/m/CAJ7c6TOMG8zSNEZtCn5SPe+cCk3Lfxb71ZaQwT2F4T7PJ_t=KA@mail.gmail.com
https://git.postgresql.org/pg/commitdiff/89dba59590fdd03799a47daf8019890d4324fbcf
Further fix psql query-cancel test. The query to wait for pg_sleep to be
running did no such thing, because the regex pattern it used could match
itself. Report:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=conchuela&dt=2021-09-06%2018%3A00%3A20
https://git.postgresql.org/pg/commitdiff/bd5846e4a9c1338ded5efcef53511f0d71f53f0e
Fix rewriter to set hasModifyingCTE correctly on rewritten queries. If we copy
data-modifying CTEs from the original query to a replacement query (from a DO
INSTEAD rule), we must set hasModifyingCTE properly in the replacement query.
Failure to do this can cause various unpleasantness, such as unsafe usage of
parallel plans. The code also neglected to propagate hasRecursive, though
that's only cosmetic at the moment. A difficulty arises if the rule action is
an INSERT...SELECT. We attach the original query's RTEs and CTEs to the
sub-SELECT Query, but data-modifying CTEs are only allowed to appear in the
topmost Query. For the moment, throw an error in such cases. It would
probably be possible to avoid this error by attaching the CTEs to the top
INSERT Query instead; but that would require a bunch of new code to adjust
ctelevelsup references. Given the narrowness of the use-case, and the need to
back-patch this fix, it does not seem worth the trouble for now. We can
revisit this if we get field complaints. Per report from Greg Nancarrow.
Back-patch to all supported branches. (The test case added here does not fail
before v10, but there are plenty of places checking top-level hasModifyingCTE
in 9.6, so I have no doubt that this code change is necessary there too.)
Greg Nancarrow and Tom Lane Discussion:
https://postgr.es/m/CAJcOf-f68DT=26YAMz_i0+Au3TcLO5oiHY5=fL6Sfuits6r+_w@mail.gmail.com
Discussion:
https://postgr.es/m/CAJcOf-fAdj=nDKMsRhQzndm-O13NY4dL6xGcEvdX5Xvbbi0V7g@mail.gmail.com
https://git.postgresql.org/pg/commitdiff/362e2dcc46195faadd3fa0ba011dd9a8e3829e7a
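For illustration, the shape of statement involved is roughly the following (table and rule names are hypothetical): a data-modifying CTE on a query whose target has a DO INSTEAD rule, so the rewritten query must carry the CTE along:

    CREATE TABLE main_t (a int);
    CREATE TABLE alt_t (a int);
    CREATE TABLE staging (a int);
    CREATE RULE main_redirect AS ON INSERT TO main_t
        DO INSTEAD INSERT INTO alt_t VALUES (NEW.a);
    -- The DELETE in the CTE still runs; the replacement query produced by the rule
    -- must keep hasModifyingCTE set so no unsafe (e.g. parallel) plan is chosen.
    WITH purged AS (DELETE FROM staging RETURNING a)
    INSERT INTO main_t VALUES (1);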
In psql tab completion, offer spelled-out commands not abbreviations. Various
psql backslash commands have both single-letter and long forms, for example \e
and \edit. Previously, tab completion generally offered the single-letter
form but not the long form. It seems more sensible to offer the long form,
because (a) no useful completion can happen when you've already typed the
single letter, and (b) if you're not so familiar with the command set as to
know that, the long form is likely to be less confusing. Haiying Tang,
reviewed by Dagfinn Ilmari Mannsåker and myself Discussion:
https://postgr.es/m/OS0PR01MB61136018064660F095CB57A8FB129@OS0PR01MB6113.jpnprd01.prod.outlook.com
https://git.postgresql.org/pg/commitdiff/7cffa2ed0c9f7f4d96bac7af5284c47e82af5ffa
Fix misleading comments about TOAST access macros. Seems to have been my error
in commit aeb1631ed. Noted by Christoph Berg. Discussion:
https://postgr.es/m/[email protected]
https://git.postgresql.org/pg/commitdiff/490798451a3adc32b71b30e285bd99875d67fa2b
Avoid useless malloc/free traffic around getFormattedTypeName(). Coverity
complained that one caller of getFormattedTypeName() failed to free the
returned string. Which is true, but rather than fixing that one, let's get
rid of this tedious and error-prone requirement. Now that
getFormattedTypeName() caches its result, strdup'ing that result and expecting
the caller to free it accomplishes little except to waste cycles. We do
create a leak in the case where getTypes didn't make a TypeInfo for the type,
but that basically shouldn't ever happen. Back-patch, as commit 6c450a861
was. This isn't a particularly interesting bug fix, but the API change seems
like a hazard for future back-patching activity if we don't back-patch it.
https://git.postgresql.org/pg/commitdiff/072e2f8a62002cb01ed6c4e161442e133509349e
Check for relation length overrun soon enough. We don't allow relations to
exceed 2^32-1 blocks, because block numbers are 32 bits and the last possible
block number is reserved to mean InvalidBlockNumber. There is a check for
this in mdextend, but that's really way too late, because the smgr API
requires us to create a buffer for the block-to-be-added, and we do not want
to have any buffer with blocknum InvalidBlockNumber. (Such a case can trigger
assertions in bufmgr.c, plus I think it might confuse ReadBuffer's logic for
data-past-EOF later on.) So put the check into ReadBuffer. Per report from
Christoph Berg. It's been like this forever, so back-patch to all supported
branches. Discussion:
https://postgr.es/m/[email protected]
https://git.postgresql.org/pg/commitdiff/8481f99896a192e9fd57f5e1a99e255e27680a10
Avoid fetching from an already-terminated plan. Some plan node types don't
react well to being called again after they've already returned NULL.
PortalRunSelect() has long dealt with this by calling the executor with
NoMovementScanDirection if it sees that we've already run the portal to the
end. However, commit ba2c6d6ce overlooked this point, so that persisting an
already-fully-fetched cursor would fail if it had such a plan. Per report
from Tomas Barton. Back-patch to v11, as the faulty commit was. (I've
omitted a test case because the type of plan that causes a problem isn't all
that stable.) Discussion:
https://postgr.es/m/CAPV2KRjd=ErgVGbvO2Ty20tKTEZZr6cYsYLxgN_W3eAo9pf5sw@mail.gmail.com
https://git.postgresql.org/pg/commitdiff/cba79a163267a44205e391137deb543f4f89bc8b
Fix some anomalies with NO SCROLL cursors. We have long forbidden fetching
backwards from a NO SCROLL cursor, but the prohibition didn't extend to cases
in which we rewind the query altogether and then re-fetch forwards. I think
the reason is that this logic was mainly meant to protect plan nodes that
can't be run in the reverse direction. However, re-reading the query output
is problematic if the query is volatile (which includes SELECT FOR UPDATE, not
just queries with volatile functions): the re-read can produce different
results, which confuses the cursor navigation logic completely. Another
reason for disliking this approach is that some code paths will either fetch
backwards or rewind-and-fetch-forwards depending on the distance to the target
row; so that seemingly identical use-cases may or may not draw the "cursor can
only scan forward" error. Hence, let's clean things up by disallowing rewind
as well as fetch-backwards in a NO SCROLL cursor. Ordinarily we'd only make
such a definitional change in HEAD, but there is a third reason to consider
this change now. Commit ba2c6d6ce created some new user-visible anomalies for
non-scrollable cursors WITH HOLD, in that navigation in the cursor result got
confused if the cursor had been partially read before committing. The only
good way to resolve those anomalies is to forbid rewinding such a cursor,
which allows removal of the incorrect cursor state manipulations that
ba2c6d6ce added to PersistHoldablePortal. To minimize the behavioral change
in the back branches (including v14), refuse to rewind a NO SCROLL cursor only
when it has a holdStore, ie has been held over from a previous transaction due
to WITH HOLD. This should avoid breaking most applications that have been
sloppy about whether to declare cursors as scrollable. We'll enforce the
prohibition across-the-board beginning in v15. Back-patch to v11, as
ba2c6d6ce was. Discussion:
https://postgr.es/m/[email protected]
https://git.postgresql.org/pg/commitdiff/c1b7a6c2731241cf5af4c08de54a64fc8999d727
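For illustration of the new restriction, a sketch (exact error wording aside):

    BEGIN;
    DECLARE c NO SCROLL CURSOR FOR SELECT g FROM generate_series(1, 10) AS g;
    FETCH 5 FROM c;
    -- Repositioning before the current row requires a rewind, which is now rejected
    -- for NO SCROLL cursors (across the board in v15; for held-over WITH HOLD
    -- cursors in the back branches).
    MOVE ABSOLUTE 0 IN c;
    ROLLBACK;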
Make pg_regexec() robust against out-of-range search_start. If search_start is
greater than the length of the string, we should just return REG_NOMATCH
immediately. (Note that the equality case should not be rejected, since the
pattern might be able to match zero characters.) This guards various internal
assumptions that the min of a range of string positions is not more than the
max. Violation of those assumptions could allow an attempt to fetch
string[search_start-1], possibly causing a crash. Jaime Casanova pointed out
that this situation is reachable with the new regexp_xxx functions that accept
a user-specified start position. I don't believe it's reachable via any
in-core call site in v14 and below. However, extensions could possibly call
pg_regexec with an out-of-range search_start, so let's back-patch the fix
anyway. Discussion:
https://postgr.es/m/20210911180357.GA6870@ahch-to
https://git.postgresql.org/pg/commitdiff/e757080e041214cf6983e3e77ef01e83f1371d72
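For illustration, the situation is reachable from SQL through the regexp functions that accept a start position (v15 and later); a sketch:

    -- A start position past the end of the string should now simply yield no match
    -- (0) rather than risking an out-of-range fetch inside the regex engine.
    SELECT regexp_instr('abc', 'b', 10);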
Álvaro Herrera pushed:
Add PublicationTable and PublicationRelInfo structs. These encapsulate a
relation when referred from replication DDL. Currently they don't do anything
useful (they're just wrappers around RangeVar and Relation respectively) but
in the future they'll be used to carry column lists. Extracted from a larger
patch by Rahila Syed. Author: Rahila Syed [email protected]
Reviewed-by: Álvaro Herrera [email protected] Reviewed-by: Tomas
Vondra [email protected] Reviewed-by: Amit Kapila
[email protected] Discussion:
https://postgr.es/m/CAH2L28vddB_NFdRVpuyRBJEBWjz4BSyTB=_ektNRH8NJ1jf95g@mail.gmail.com
https://git.postgresql.org/pg/commitdiff/0c6828fa987b791744b9c8685aadf1baa21f8977
Noah Misch pushed:
AIX: Fix missing libpq symbols by respecting SHLIB_EXPORTS. We make each AIX
shared library export all globals found in .o files that originate in the
library. That doesn't include symbols acquired by -lpgcommon_shlib. That is
good on average, but it became a problem for libpq when commit
e6afa8918c461c1dd80c5063a950518fa4e950cd moved five official libpq API symbols
into src/common. Fix this by implementing the SHLIB_EXPORTS mechanism for
AIX, so affected libraries export the same symbols that they export on Linux.
This reintroduces symbols pg_encoding_to_char, pg_utf_mblen,
pg_char_to_encoding, pg_valid_server_encoding, and
pg_valid_server_encoding_id. Back-patch to v13, where the aforementioned
commit first appeared. While a minor release is usually the wrong time to add
or remove symbol exports in libpq or libecpg, we should expect users to want
each documented symbol. Tony Reix Discussion:
https://postgr.es/m/PR3PR02MB6396742E2FC3E77D37A920BC86C79@PR3PR02MB6396.eurprd02.prod.outlook.com
https://git.postgresql.org/pg/commitdiff/8670b9b999adb66e2e063225496962763c4c28de
Revoke PUBLIC CREATE from public schema, now owned by pg_database_owner. This
switches the default ACL to what the documentation has recommended since
CVE-2018-1058. Upgrades will carry forward any old ownership and ACL. Sites
that declined the 2018 recommendation should take a fresh look. Recipes for
commissioning a new database cluster from scratch may need to create a schema,
grant more privileges, etc. Out-of-tree test suites may require such updates.
Reviewed by Peter Eisentraut. Discussion:
https://postgr.es/m/[email protected]
https://git.postgresql.org/pg/commitdiff/b073c3ccd06e4cb845e121387a43faa8c68a7b62
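For illustration, on a freshly initialized cluster a database owner can still grant the old behavior back explicitly; a sketch (the role name is hypothetical):

    -- Ordinary roles no longer have CREATE on "public" by default after this change.
    GRANT CREATE ON SCHEMA public TO app_user;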
Update src/test/kerberos to account for previous commit.
https://git.postgresql.org/pg/commitdiff/2d689f2ee4615629867c4319a35533696cd16589
Amit Kapila pushed:
Log new catalog xmin candidate in LogicalIncreaseXminForSlot(). Similar to
LogicalIncreaseRestartDecodingForSlot() add a debug message to
LogicalIncreaseXminForSlot() reporting a new catalog_xmin candidate. This
just adds additional diagnostic information during logical decoding that can
aid debugging. Author: Ashutosh Bapat Reviewed-by: Masahiko Sawada, Amit
Kapila Discussion:
https://postgr.es/m/CAExHW5usQWbiUz0hHOCu5twS1O9DvpcPojf6sor=8q--VUuMbA@mail.gmail.com
https://git.postgresql.org/pg/commitdiff/4c3478859b7359912d7e99fea702c56b1f53000c
Invalidate relcache for publications defined for all tables. Updates/Deletes
on a relation were allowed even without replica identity after we define the
publication for all tables. This would later lead to an error on subscribers.
The reason was that for such publications we were not invalidating the
relcache and the publication information for relations was not getting
rebuilt. Similarly, we were not invalidating the relcache after dropping of
such publications which will prohibit Updates/Deletes without replica identity
even without any publication. Author: Vignesh C and Hou Zhijie Reviewed-by:
Hou Zhijie, Kyotaro Horiguchi, Amit Kapila Backpatch-through: 10, where it was
introduced Discussion:
https://postgr.es/m/CALDaNm0pF6zeWqCA8TCe2sDuwFAy8fCqba=nHampCKag-qLixg@mail.gmail.com
https://git.postgresql.org/pg/commitdiff/8bd534274099aabf721ca4baef2e8a3a379d7b02
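For illustration, the user-visible effect is roughly the following (table name hypothetical):

    CREATE TABLE t_no_key (a int);                  -- no primary key or replica identity
    CREATE PUBLICATION pub_all FOR ALL TABLES;
    -- With the relcache correctly invalidated, this fails right away with the usual
    -- "no replica identity" error instead of being accepted and breaking subscribers.
    UPDATE t_no_key SET a = 1;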
Heikki Linnakangas pushed:
Fix missing words in comment. Introduced by commit c3928b467a, backpatch to
v14 like that one. Author: Amit Langote Discussion:
https://www.postgresql.org/message-id/CA+HiwqFQgNLS6VGntMcuJV6erBFV425xA6wBVnY=41GK4zC0Bw@mail.gmail.com
https://git.postgresql.org/pg/commitdiff/6ac763f19a6556c43933c5252065a5c81cde094d
Andres Freund pushed:
windows: Only consider us to be running as service if stderr is invalid.
Previously pgwin32_is_service() would falsely return true when postgres is
started from somewhere within a service, but not as a service. That is e.g.
always the case with windows docker containers, which some CI services use to
run windows tests in. When postgres falsely thinks it's running as a service,
no messages are written to stdout / stderr. That can be very confusing and
causes a few tests to fail. To fix, additionally check if stderr is invalid in
pgwin32_is_service(). For that to work in backend processes, pg_ctl is changed
to pass down handles so that postgres can do the same check (otherwise
"default" handles are created). While this problem exists in all branches,
there have been no reports by users, the prospective CI usage currently is
only for master, and I am not a windows expert. So doing the change in only
master for now seems the sanest approach. Author: Andres Freund
[email protected] Reviewed-By: Magnus Hagander [email protected]
Discussion:
https://postgr.es/m/[email protected]
https://git.postgresql.org/pg/commitdiff/76e38b37a5f179d4c9d2865ff31b79130407530b
Magnus Hagander pushed:
Consistently use read-only instead of "read only". This affects one message
and some documentation that used the format "read only", unlike everything
else that used read-only. Backpatch-through: 14 Discussion:
https://postgr.es/m/CABUevExuxKwn0YM3+wdSeQSvK6CRrJ-hewocGVX3R4-xVX4eMw@mail.gmail.com
https://git.postgresql.org/pg/commitdiff/d6c916f020e291b45563d4e76a649e9364cb6f2d
Daniel Gustafsson pushed:
Consistently use "superuser" instead of "super user". The correct nomenclature
for the highest privileged user is superuser and not "super user", this
replaces the few instances where that was used erroneously. No user-visible
changes are done as all changes are in comments, so no back-patching. Author:
Bharath Rupireddy [email protected] Discussion:
https://postgr.es/m/CALj2ACW3snGBD8BAQiArMDS1Y43LuX3ymwO+N8aUg1Hrv6hYNw@mail.gmail.com
https://git.postgresql.org/pg/commitdiff/f7c53bb9e309cfea3206f4ed44b38eb2bd1cf726
Posted over 3 years ago
The PostgreSQL Core Team and the PostgreSQL Community Association of Canada (PGCAC) wish to provide clarification on the ownership and usage of the "PostgreSQL" trademark to address an ongoing issue with a 3rd party organization.
The "PostgreSQL"
... [More]
trademark and other intellectual property and domain names are managed at the request of the PostgreSQL Core Team by the nonprofit PostgreSQL Community Association of Canada (PGCAC). These trademarks are made available for general use under the PostgreSQL Trademark Policy, which you can read at:
https://www.postgresql.org/about/policies/trademarks/
The PostgreSQL trademark policy is modeled after policies implemented by other major open source projects and is designed to be flexible and in the spirit of open source software. This policy is in place to ensure that the PostgreSQL trademarks are not used in ways that may confuse people and to help protect and grow the community brand. Uses of the trademark that are outside of fair use are permitted only through a trademark license issued by PGCAC.
There are additional registered trademarks that use the "PostgreSQL" wordmark but are not used to assume the PostgreSQL brand identity or used to make representations of being the PostgreSQL community. Some of these trademarks were registered prior to the updated PostgreSQL trademark policy; PGCAC has and continues to work with these organizations to ensure trademark compliance.
The PostgreSQL Core Team set up the trademark holdings and policies similar to other aspects of the PostgreSQL project. Through decentralization and fair checks and balances, the trademarks are protected from scenarios such as lack of support or a takeover from a hostile entity. Additionally, it is essential that the trademarks are properly used and defended, otherwise there is a risk that they may be deemed invalid or abandoned by the issuing trademark offices.
In 2020, the PostgreSQL Core Team was made aware that an organization had filed applications to register the "PostgreSQL" and "PostgreSQL Community" trademarks in the European Union and the United States, and had already registered trademarks in Spain. The organization, a 3rd party not-for-profit corporation in Spain called "Fundación PostgreSQL," did not give any indication to the PostgreSQL Core Team or PGCAC that they would file these applications.
When Fundación PostgreSQL’s representative, Álvaro Hernández Tortosa, was contacted about its attempt to register the "PostgreSQL" and "PostgreSQL Community" trademarks in 2020, Fundación PostgreSQL responded that they wished to secure the marks to protect the PostgreSQL brand. However, the registration of the "PostgreSQL" mark by another organization is a violation of the PostgreSQL Trademark Policy as this could lead to user confusion and inconsistent policy and licensing standards. Fundación PostgreSQL was made aware of this in a previous correspondence. This also directly conflicts with the mission of the PGCAC to hold the intellectual property and brand assets of the PostgreSQL project.
When contacted in 2020, Fundación PostgreSQL indicated that they will not withdraw their applications for the "PostgreSQL" and "PostgreSQL Community" trademarks. Fundación PostgreSQL indicated that it would be willing to negotiate with the PGCAC, and while PGCAC made an offer to Fundación PostgreSQL, at the time PGCAC did not receive a response from Fundación PostgreSQL on whether or not the offer was acceptable. Ultimately, PGCAC and PostgreSQL Europe (PGEU), a recognized PostgreSQL nonprofit organization that operates in Europe, elected to file official disputes on the registration of these trademark applications.
In 2021, PGCAC became aware that Fundación PostgreSQL filed additional trademark applications for the "Postgres" trademark in the European Union and United States. Coupled with the original trademark filings, the PostgreSQL Core Team and PGCAC consider this to be a clear violation of the PostgreSQL Trademark Policy. Actions like this put the name and reputation of the PostgreSQL project at risk should an unsanctioned 3rd party take control of PostgreSQL’s trademarks, and could be used to take over domain names and other items.
When presented with an additional cease and desist and the updated terms in which the PostgreSQL Core Team and PGCAC would be willing to settle, Mr. Hernández Tortosa indicated that he would only drop the trademark filings under his terms, which include the weakening of the PGCAC and the potential for outside entities to control the PostgreSQL trademarks.
Due to the risk that these demands would result in loss of control over the trademarks and the aforementioned risks like loss of control over PostgreSQL project resources, the PostgreSQL Core Team and PGCAC view these demands as unacceptable.
The PostgreSQL Core Team and PGCAC still hope for an amicable resolution. However, because these actions infringe upon the "PostgreSQL" and "Postgres" trademarks, the PostgreSQL Core Team and PGCAC will pursue all options until Fundación PostgreSQL surrenders all claims to the "Postgres", "PostgreSQL", and "PostgreSQL Community" trademarks, and any claims to additional trademarks that are in violation of the PostgreSQL Trademark Policy.
The PostgreSQL project has structured its trademark policies and holdings to provide maximum flexibility for people and organizations looking to support PostgreSQL to apply its trademarks. We believe these policies are fair and in line with what other open source communities are doing.
In the spirit of the open source movement, the PostgreSQL community has always tried to operate transparently and fairly, and provide resources for people to adopt, use, and promote PostgreSQL. We continuously look to improve and are very much open to feedback (look no further than the discussions on our mailing lists)!
Posted over 3 years ago
Libreville, Gabon - Sept 8th, 2021
pg_dumpbinary
pg_dumpbinary is a program used to dump a PostgreSQL database with
data dumped in binary format. The resulting dump must be restored
using pg_restorebinary that is provided with this tool.
pg_dumpbinary 2.5 was released today. This new release adds support for restoring data into tables whose structure has been altered.
pg_dumpbinary now creates a file meta-schema..txt per table that stores the table's column list at export time. At restore time, pg_restorebinary reads this file if present, or otherwise obtains the target column list from the dumped pre-data section. The target column list is appended to the COPY FROM command to avoid error messages like "ERROR: row field count is 28, expected 29". This is necessary for tables from upgraded extensions that have gained columns, or if you try to import data only into an existing table that has more columns.
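The mechanism amounts to an ordinary COPY column list; a rough sketch with hypothetical table, column, and file names:

    -- Naming the dumped columns explicitly lets COPY load binary data into a table
    -- that has since gained extra columns, avoiding the row field count error.
    COPY orders (id, customer_id, amount) FROM '/path/to/orders.bin' WITH (FORMAT binary);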
pg_dumpbinary is useful in some situations:
You have bytea values that cannot be exported by pg_dump because the total size of the escape/hex output exceeds 1 GB.
You have a custom type that stores \0 internally in a bytea, but the data is returned as char/varchar/text, which truncates it after the '\0'. In this case pg_dump exports the data in the output type, which results in data loss.
Any other case where the binary format can be useful.
If you are in one of these situations, pg_dumpbinary will help you by dumping the PostgreSQL database in binary format. In all other cases you must use the pg_dump/pg_restore commands distributed with PostgreSQL.
See documentation for a complete description of the features.
Links & Credits
pg_dumpbinary is an open project from LzLabs GmbH.
Any contribution to build a better tool is welcome. You just have to send your ideas, feature requests or patches using the GitHub tools.
Links :
Documentation: https://github.com/lzlabs/pg_dumpbinary/blob/master/README.md
Download: https://github.com/lzlabs/pg_dumpbinary/releases/
Support: use GitHub report tool at https://github.com/lzlabs/pg_dumpbinary/issues
Posted over 3 years ago
The pgagroal community is happy to
announce version 1.3.0.
This release was driven by Junduo Dong (@An-DJ) and his
Google Summer of Code project to enhance monitoring
and metrics of pgagroal.
New features
Grafana 8 dashboard
New Prometheus metrics within client, pooling and internal categories
Various enhancements and bug fixes.
pgagroal
pgagroal is a high-performance protocol-native connection pool for
PostgreSQL.
Features
High performance
Connection pool
Limit connections for users and databases
Prefill support
Remove idle connections
Perform connection validation
Enable / disable database access
Graceful / fast shutdown
Prometheus support
Grafana 8 dashboard
Remote management
Authentication query support
Failover support
Transport Layer Security (TLS) v1.2+ support
Daemon mode
User vault
Learn more on our web site or GitHub.
Follow on Twitter.
pgagroal is released under the
3-clause BSD license, and is sponsored by Red Hat.
Posted over 3 years ago
Libreville, Gabon - September 4th, 2021
pgBadger
pgBadger is a PostgreSQL performance analyzer, built for speed with
fully detailed reports based on your PostgreSQL log files.
pgBadger 11.6 was released today. This release adds support for the new log information in PostgreSQL 14, fixes some issues reported by users over the past seven months, and adds some improvements:
Add detection of Query Id in log_line_prefix, new in PG14.
Add advanced regression tests with db exclusion and the explode
feature.
Improve the speed of report generation when --explode is used, by using multiple processes.
Add --iso-week-number in incremental mode: calendar weeks start on a Monday and respect the ISO 8601 week number, range 01 to 53, where week 1 is the first week that has at least 4 days in the new year.
Add command line option --keep-comments to not remove comments from normalized queries. It can be useful if you want to distinguish between otherwise identical normalized queries.
Skip INFO lines introduced into the PostgreSQL log file by third-party software.
Add compatibility with PostgresPro log files that include the row count and size in bytes following the statement duration.
Parse timestamps containing a 'T' separator, to allow using the timestamps from journalctl.
Improve Windows port.
Backward compatibility:
Expect --iso-week-number to become the default in the next major release, and the --start-monday option to be removed, as weeks will always start on a Monday. The possibility of having weekly reports start on a Sunday (the current default) will be removed to simplify the code.
For the complete list of changes, fixes and acknowledgements, please check out the release notes.
Links & Credits
Credits
I would like to thank all users who submitted patches and users who reported bugs and feature requests; they are all cited in the ChangeLog file.
pgBadger is an open project. Any contribution to build a better tool is
welcome. You just have to send your ideas, feature requests or patches using the GitHub tools or directly to [email protected].
Links
Download: http://pgbadger.darold.net/
Support: use GitHub report tool at https://github.com/darold/pgbadger/issues
Contact: [email protected]
For commercial support, contact [email protected] or see the full list of PostgreSQL companies near you on the PostgreSQL Professional Services page; they all do a great job and most of them can help you.
About pgBadger
pgBadger is a new-generation log analyzer for PostgreSQL, created by Gilles Darold (also the author of ora2pg, the powerful migration tool). pgBadger is a fast and easy tool to analyze your SQL traffic and create HTML5 reports with dynamic graphs. pgBadger is the perfect tool to understand the behavior of your PostgreSQL servers and identify which SQL queries need to be optimized.
Docs, Download & Demo at http://pgbadger.darold.net/
Posted over 3 years ago
PostgreSQL Weekly News - September 5, 2021
PostgreSQL Product News
pg_dbms_job 1.1.0, an extension to create, manage and use Oracle-style DBMS_JOB scheduled jobs, released.
dbForge Data Compare for PostgreSQL v3.4, released.
pgmoneta 0.5.0, a backup and restore system for PostgreSQL, released.
pgspider_ext, an extension to create a cluster engine for distributed data based on PostgreSQL foreign data wrappers, released.
Psycopg 3.0 beta 1, a Python connector for PostgreSQL, released.
postgresql-wheel, a Python package containing an entire compiled PostgreSQL server in a single pip-installable file, released.
PostgreSQL Jobs for September
https://archives.postgresql.org/pgsql-jobs/2021-09/
PostgreSQL in the News
Planet PostgreSQL: https://planet.postgresql.org/
PostgreSQL Weekly News is brought to you this week by David Fetter
Submit news and announcements by Sunday at 3:00pm PST8PDT to [email protected].
Applied Patches
Michaël Paquier pushed:
Add more tab completion support for ALTER TABLE ADD in psql. This includes the
detection of new patterns for various constraint types, with the addition of
USING INDEX for unique indexes of a table on primary keys and unique
constraints. Author: Dagfinn Ilmari Mannsåker Discussion:
https://postgr.es/m/[email protected]
https://git.postgresql.org/pg/commitdiff/d3fa87657860a50643118c682cdbf53f508a3fab
Refactor one use of IDENTIFY_SYSTEM in WAL streaming code of pg_basebackup.
0c013e0 has done a large refactoring to unify all the code paths using
replication commands, but forgot one code path doing WAL streaming that checks
the validity of a cluster connecting to with IDENTIFY_SYSTEM. There is a
generic routine able to handle that, so make use of it in this code path.
This impacts pg_receivewal and pg_basebackup. Author: Bharath Rupireddy
Discussion:
https://postgr.es/m/CALj2ACVKKYUMC8GE72Y7BP9g1batrrq3sEwUh+1_i2krWZC_2Q@mail.gmail.com
https://git.postgresql.org/pg/commitdiff/99709c9b908eba99ecd787c4dc757f71edd98d87
Add tab completion for data types after ALTER TABLE ADD [COLUMN] in psql. This
allows finding data types that can be used for the creation of a new column,
completing d3fa876. Author: Dagfinn Ilmari Mannsåker Discussion:
https://postgr.es/m/[email protected]
https://git.postgresql.org/pg/commitdiff/f2bbadce6b5052337a11a33ea6bd8d8aebe2610a
Add PostgresNode::command_fails_like(). This is useful to test for a command
failure with some default connection parameters associated to a node, in
combination with checks on error patterns expected. This routine will be used
by an upcoming future patch, but could be also plugged into some of the
existing tests. Extracted from a larger patch by the same author. Author:
Ronan Dunklau Discussion:
https://postgr.es/m/5742739.ga3mSNWIix@aivenronan
https://git.postgresql.org/pg/commitdiff/de1d4fef71cb00370aa5f794fe05355c1d063aea
Refactor one conversion of SQLSTATE to string in elog.c. unpack_sql_state()
has been introduced in d46bc44 to refactor the unpacking of a SQLSTATE into a
string, but it forgot one code path when sending error reports to clients that
could make use of it. This changes the code to also use unpack_sql_state()
there, simplifying a bit the code. Author: Peter Smith Discussion:
https://postgr.es/m/CAHut+PuYituuD1-VVZUNcmCQuc3ZzZMPoO57POgm8tnXOkwJAA@mail.gmail.com
https://git.postgresql.org/pg/commitdiff/c4f7a6b87ff350200f4b3afb9fe05e2899161070
doc: Replace some uses of "which" by "that" in parallel.sgml. This makes the
documentation more accurate grammatically. Author: Elena Indrupskaya
Discussion:
https://postgr.es/m/[email protected]
Backpatch-through: 9.6
https://git.postgresql.org/pg/commitdiff/163074ea84efec6ffa4813db43cc956ac5d12565
Amit Kapila pushed:
Refactor sharedfileset.c to separate out fileset implementation. Move fileset
related implementation out of sharedfileset.c to allow its usage by backends
that don't want to share filesets among different processes. After this split,
fileset infrastructure is used by both sharedfileset.c and worker.c for the
named temporary files that survive across transactions. Author: Dilip Kumar,
based on suggestion by Andres Freund Reviewed-by: Hou Zhijie, Masahiko Sawada,
Amit Kapila Discussion:
https://postgr.es/m/[email protected]
https://git.postgresql.org/pg/commitdiff/dcac5e7ac157964f71f15d81c7429130c69c3f9b
Fix incorrect error code in StartupReplicationOrigin().
ERRCODE_CONFIGURATION_LIMIT_EXCEEDED was used for checksum failure, use
ERRCODE_DATA_CORRUPTED instead. Reported-by: Tatsuhito Kasahara Author:
Tatsuhito Kasahara Backpatch-through: 9.6, where it was introduced Discussion:
https://postgr.es/m/CAP0=ZVLHtYffs8SOWcFJWrBGoRzT9QQbk+_aP+E5AHLNXiOorA@mail.gmail.com
https://git.postgresql.org/pg/commitdiff/bad6cef32ca2dd60c8be18f95a7157ced189ec99
Fix the random test failure in 001_rep_changes. The check to test whether the
subscription workers were restarting after a change in the subscription was
failing. The reason was that the test was assuming the walsender started
before it reaches the 'streaming' state and the walsender was exiting due to
an error before that. Now, the walsender was erroring out before reaching the
'streaming' state because it tries to acquire the slot before the previous
walsender has exited. In passing, improve the die messages so that it is
easier to investigate the failures in the future if any. Reported-by: Michael
Paquier, as per buildfarm Author: Ajin Cherian Reviewed-by: Masahiko Sawada,
Amit Kapila Backpatch-through: 10, where this test was introduced Discussion:
https://postgr.es/m/[email protected]
https://git.postgresql.org/pg/commitdiff/8d0138ef51b51ba3e419cf49069eb72a15f74706
Optimize fileset usage in apply worker. Use one fileset for the entire worker
lifetime instead of using separate filesets for each streaming transaction.
Now, the changes/subxacts files for every streaming transaction will be
created under the same fileset and the files will be deleted after the
transaction is completed. This patch extends the BufFileOpenFileSet and
BufFileDeleteFileSet APIs to allow users to specify whether to give an error
on missing files. Author: Dilip Kumar, based on suggestion by Thomas Munro
Reviewed-by: Hou Zhijie, Masahiko Sawada, Amit Kapila Discussion:
https://postgr.es/m/[email protected]
https://git.postgresql.org/pg/commitdiff/31c389d8de915b705ee06c7a6e9246e20f36b9dc
Fujii Masao pushed:
pgbench: Avoid unnecessary measurement of connection delays. Commit 547f04e734
changed pgbench so that it used the measurement result of connection delays in
its benchmark report only when -C/--connect option is specified. But
previously those delays were unnecessarily measured even when that option is
not specified. Which was a waste of cycles. This commit improves pgbench so
that it avoids such unnecessary measurement. Back-patch to v14 where commit
547f04e734 first appeared. Author: Yugo Nagata Reviewed-by: Fabien COELHO,
Asif Rehman, Fujii Masao Discussion:
https://postgr.es/m/[email protected]
https://git.postgresql.org/pg/commitdiff/bfd4567b88496bf9669658e5ab381b296dd9ffe1
pgbench: Fix bug in measurement of disconnection delays. When -C/--connect
option is specified, pgbench establishes and closes the connection for each
transaction. In this case pgbench needs to measure the times taken for all
those connections and disconnections, to include the average connection time
in the benchmark result. But previously pgbench could not measure those
disconnection delays. To fix the bug, this commit makes pgbench measure the
disconnection delays whenever the connection is closed at the end of
transaction, if -C/--connect option is specified. Back-patch to v14. Per
discussion, we concluded not to back-patch to v13 or before because changing
that behavior in stable branches would surprise users rather than providing
benefits. Author: Yugo Nagata Reviewed-by: Fabien COELHO, Tatsuo Ishii, Asif
Rehman, Fujii Masao Discussion:
https://postgr.es/m/[email protected]
https://git.postgresql.org/pg/commitdiff/4dc528bfa7dacee1cfbe2ec59b25039616268b69
Improve tab-completion for CREATE PUBLICATION. Author: Peter Smith
Reviewed-by: Vignesh C Discussion:
https://postgr.es/m/CAHut+Ps-vkmnWAShWSRVCB3gx8aM=bFoDqWgBNTzofK0q1LpwA@mail.gmail.com
https://git.postgresql.org/pg/commitdiff/b0c066297bdac901a13a5a900a197697a026b357
Enhance pg_stat_reset_single_table_counters function. This commit allows
pg_stat_reset_single_table_counters() to reset statistics for a single
relation shared across all databases in the cluster to zero. Bump catalog
version. Author: B Sadhu Prasad Patro Reviewed-by: Mahendra Singh Thalor,
Himanshu Upadhyaya, Dilip Kumar, Fujii Masao Discussion:
https://postgr.es/m/CAFF0-CGy7EHeF=AqqkGMF85cySPQBgDcvNk73G2O0vL94O5U5A@mail.gmail.com
https://git.postgresql.org/pg/commitdiff/e04267844a9bbf97c2e85c919b84dfe498ab0302
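For illustration, a sketch against a build that includes this commit:

    -- Shared catalogs such as pg_database are visible in every database; their
    -- statistics can now be reset with the same function.
    SELECT pg_stat_reset_single_table_counters('pg_database'::regclass);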
Álvaro Herrera pushed:
psql: Fix name quoting on extended statistics. Per our message style
guidelines, for human consumption we quote qualified names as a whole rather
than each part separately; but commits bc085205c8a4 introduced a deviation for
extended statistics and a4d75c86bf15 copied it. I don't agree with this
policy applying to names shown by psql, but that's a poor reason to deviate
from the practice only in two obscure corners, so make said corners use the
same style as everywhere else. Backpatch to 14. The first of these is older,
but I'm not sure we want to destabilize the psql output in the older branches
for such a small thing. Discussion:
https://postgr.es/m/[email protected]
https://git.postgresql.org/pg/commitdiff/a397109114c8569e21ed3c858a16143de28a1897
Report tuple address in data-corruption error message. Most data-corruption
reports mention the location of the problem, but this one failed to. Add it.
Backpatch all the way back. In 12 and older, also assign the
ERRCODE_DATA_CORRUPTED error code as was done in commit fd6ec93bf890 for 13
and later. Discussion:
https://postgr.es/m/[email protected]
https://git.postgresql.org/pg/commitdiff/961dd7565726a507d4551f7ea54ad888fc6ee93a
Revert "Avoid creating archive status ".ready" files too early". This reverts
commit 515e3d84a0b5 and equivalent commits in back branches. This solution to
the problem has a number of problems, so we'll try again with a different
approach. Per note from Andres Freund Discussion:
https://postgr.es/m/[email protected]
https://git.postgresql.org/pg/commitdiff/96b665083eb72570e226cf2d25c960b3acc62040
Daniel Gustafsson pushed:
Prohibit map and grep in void context. map and grep are not intended to be
used as mutators, iterating with side-effects should be done with for or
foreach loops. This fixes the one occurrence of the pattern, and bumps the
perlcritic policy to severity 5 for the map and grep policies. Author:
Dagfinn Ilmari Mannsåker [email protected] Reviewed-by: Michael Paquier
[email protected] Reviewed-by: Andrew Dunstan [email protected]
Reviewed-by: Julien Rouhaud [email protected] Discussion:
https://postgr.es/m/[email protected]
https://git.postgresql.org/pg/commitdiff/bb466c6b0992a1a21c03239a7b0a87ebadd3bee1
Remove superfluous variable assignment. Commit a4205fa00 moved setting conn to
NULL directly after the call to PQfinish, but the original conn = NULL;
remained a few lines down. Fix by removing the superfluous assignment.
Author: Bharath Rupireddy [email protected] Discussion:
https://postgr.es/m/CALj2ACVRiNvMDHYQGiRrGs2Z9dOydfLh2MymEk9i8CSn23UtCg@mail.gmail.com
https://git.postgresql.org/pg/commitdiff/ba1b763102b89bca2711e921cf3083d8487b8c96
Tom Lane pushed:
Fix missed lock acquisition while inlining new-style SQL functions. When
starting to use a query parsetree loaded from the catalogs, we must begin by
applying AcquireRewriteLocks(), to obtain the same relation locks that the
parser would have gotten if the query were entered interactively, and to do
some other cleanup such as dealing with later-dropped columns. New-style SQL
functions are just as subject to this rule as other stored parsetrees;
however, of the places dealing with such functions, only init_sql_fcache had
gotten the memo. In particular, if we successfully inlined a new-style
set-returning SQL function that contained any relation references, we'd either
get an assertion failure or attempt to use those relation(s) sans locks. I
also added AcquireRewriteLocks calls to fmgr_sql_validator and
print_function_sqlbody. Desultory experiments didn't demonstrate any failures
in those, but I suspect that I just didn't try hard enough. Certainly we don't
expect nearby code paths to operate without locks. On the same logic of
it-ought-to-have-the-same-effects-as-the-old-code, call pg_rewrite_query() in
fmgr_sql_validator, too. It's possible that neither code path there needs to
bother with rewriting, but doing the analysis to prove that is beyond my goals
for today. Per bug #17161 from Alexander Lakhin. Discussion:
https://postgr.es/m/[email protected]
https://git.postgresql.org/pg/commitdiff/589be6f6c732a20e2bcaa02560de464ebbd48af2
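For reference, a "new-style" SQL function is one written with an SQL-standard body, which is parsed and stored at definition time; a sketch, with the orders table and placed_at column being hypothetical:

    CREATE FUNCTION recent_orders() RETURNS SETOF orders
    LANGUAGE SQL
    BEGIN ATOMIC
        -- The stored parsetree references the orders table; inlining this function
        -- now acquires the same relation locks the parser would have taken.
        SELECT * FROM orders WHERE placed_at > now() - interval '1 day';
    END;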
Cache the results of format_type() queries in pg_dump. There's long been a
"TODO: there might be some value in caching the results" annotation on
pg_dump's getFormattedTypeName function; but we hadn't gotten around to
checking what it was costing us to repetitively look up type names. It turns
out that when dumping the current regression database, about 10% of the total
number of queries issued are duplicative format_type() queries. However,
Hubert Depesz Lubaczewski reported a not-unusual case where these account for
over half of the queries issued by pg_dump. Individually these queries aren't
expensive, but when network lag is a factor, they add up to a problem. We can
very easily add some caching to getFormattedTypeName to solve it. Since this
is such a simple fix and can have a visible performance benefit, back-patch to
all supported branches. Discussion:
https://postgr.es/m/[email protected]
https://git.postgresql.org/pg/commitdiff/6c450a861f1a928f44c9ae80814ed9a91927c25a
In pg_dump, avoid doing per-table queries for RLS policies. For no
particularly good reason, getPolicies() queried pg_policy separately for each
table. We can collect all the policies in a single query instead, and attach
them to the correct TableInfo objects using findTableByOid() lookups. On the
regression database, this reduces the number of queries substantially, and
provides a visible savings even when running against a local server. Per
complaint from Hubert Depesz Lubaczewski. Since this is such a simple fix and
can have a visible performance benefit, back-patch to all supported branches.
Discussion:
https://postgr.es/m/[email protected]
https://git.postgresql.org/pg/commitdiff/bd3611db5a6f3726094872f59feab426374d2c46
Refactor postgresImportForeignSchema to avoid code duplication. Avoid
repeating fragments of the query we're building, along the same lines as
recent cleanup in pg_dump. I got annoyed about this because aa769f80e broke
my pending patch to change postgres_fdw's collation handling, due to each of
us having incompletely done this same refactoring. Let's finish that job in
hopes of having a more stable base.
https://git.postgresql.org/pg/commitdiff/2dc53fe2a77d8d5f22c656fdf6590198e358a996
Doc: clarify how triggers relate to transactions. Laurenz Albe, per gripe from
Nathan Long. Discussion:
https://postgr.es/m/[email protected]
https://git.postgresql.org/pg/commitdiff/469150a240dd79acbe7d86cb5df869d95f4d6d2d
Fix float4/float8 hash functions to produce uniform results for NaNs. The IEEE
754 standard allows a wide variety of bit patterns for NaNs, of which at least
two ("NaN" and "-NaN") are pretty easy to produce from SQL on most machines.
This is problematic because our btree comparison functions deem all NaNs to be
equal, but our float hash functions know nothing about NaNs and will happily
produce varying hash codes for them. That causes unexpected results from
queries that hash a column containing different NaN values. It could also
produce unexpected lookup failures when using a hash index on a float column,
i.e. "WHERE x = 'NaN'" will not find all the rows it should. To fix,
special-case NaN in the float hash functions, not too much unlike the existing
special case that forces zero and minus zero to hash the same. I arranged for
the most vanilla sort of NaN (that coming from the C99 NAN constant) to still
have the same hash code as before, to reduce the risk to existing hash
indexes. I dithered about whether to back-patch this into stable branches,
but ultimately decided to do so. It's a clear improvement for queries that
hash internally. If there is anybody who has -NaN in a hash index, they'd be
well advised to re-index after applying this patch ... but the misbehavior if
they don't will not be much worse than the misbehavior they had before. Per
bug #17172 from Ma Liangzhu. Discussion:
https://postgr.es/m/[email protected]
https://git.postgresql.org/pg/commitdiff/ce773f230d9b5bb2e0dd23fec4e5462fd99487fe
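To illustrate the invariant being restored (a sketch, not the regression test
itself):

    -- All NaNs compare equal under the float btree semantics...
    SELECT 'NaN'::float8 = -'NaN'::float8;                          -- true
    -- ...so the hash support function must agree; after the fix both
    -- spellings produce the same hash code.
    SELECT hashfloat8('NaN'::float8) = hashfloat8(-'NaN'::float8);  -- true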
In count_usable_fds(), duplicate stderr not stdin. We had a complaint that the
postmaster fails to start if the invoking program closes stdin. That happens
because count_usable_fds expects to be able to dup(0), and if it can't, we
conclude there are no free FDs and go belly-up. So far as I can find, though,
there is no other place in the server that touches stdin, and it's not
unreasonable to expect that a daemon wouldn't use that file. As a simple
improvement, let's dup FD 2 (stderr) instead. Unlike stdin, it *is*
reasonable for us to expect that stderr be open; even if we are configured not
to touch it, common libraries such as libc might try to write error messages
there. Per gripe from Mario Emmenlauer. Given the lack of previous
complaints, I'm not excited about pushing this into stable branches, but it
seems OK to squeeze it into v14. Discussion:
https://postgr.es/m/[email protected]
https://git.postgresql.org/pg/commitdiff/c95ede41b8d47b21d58702fbc519e720f41fdaf1
Fix portability issue in tests from commit ce773f230. Modern POSIX seems to
require strtod() to accept "-NaN", but there's nothing about NaN in SUSv2, and
some of our oldest buildfarm members don't like it. Let's try writing it as
-'NaN' instead; that seems to produce the same result, at least on Intel
hardware. Per buildfarm.
https://git.postgresql.org/pg/commitdiff/fd549145d5d9fba3367cbf7e3d4fc7cb3562feb0
Disallow creating an ICU collation if the DB encoding won't support it.
Previously this was allowed, but the collation effectively vanished into the
ether because of the way lookup_collation() works: you could not use the
collation, nor even drop it. Seems better to give an error up front than to
leave the user wondering why it doesn't work. (Because this test is in
DefineCollation not CreateCollation, it does not prevent
pg_import_system_collations from creating ICU collations, regardless of the
initially-chosen encoding.) Per bug #17170 from Andrew Bille. Back-patch to
v10 where ICU support was added. Discussion:
https://postgr.es/m/[email protected]
https://git.postgresql.org/pg/commitdiff/db2760a84191c329c0cdfaa1dae048c32b0c1752
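As a sketch of the new behavior, assuming a build with ICU support and a
database whose encoding ICU cannot use (such as SQL_ASCII); the collation name
is illustrative:

    -- Previously this silently created a collation that could neither be
    -- used nor dropped; now it fails with an error up front.
    CREATE COLLATION german_icu (provider = icu, locale = 'de-DE');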
Remove arbitrary MAXPGPATH limit on command lengths in pg_ctl. Replace
fixed-length command buffers with psprintf() calls. We didn't have anything
as convenient as psprintf() when this code was written, but now that we do,
there's little reason for the limitation to stand. Removing it eliminates
some corner cases where (for example) starting the postmaster with a whole lot
of options fails. Most individual file names that pg_ctl deals with are still
restricted to MAXPGPATH, but we've seldom had complaints about that limitation
so long as it only applies to one filename. Back-patch to all supported
branches. Phil Krylov Discussion:
https://postgr.es/m/[email protected]
https://git.postgresql.org/pg/commitdiff/87ad491472d6f8620d83ec9db4f515ce303052ac
Minor improvements for psql help output. Fix alphabetization of the output of
"\?", and improve one description. Update PageOutput counts where needed,
fixing breakage from previous patches. Haiying Tang (PageOutput fixes by me)
Discussion:
https://postgr.es/m/OS0PR01MB61136018064660F095CB57A8FB129@OS0PR01MB6113.jpnprd01.prod.outlook.com
https://git.postgresql.org/pg/commitdiff/ac5ea660996ecbbfbe78b881a581132a95d93d26
Further portability tweaks for float4/float8 hash functions. Attempting to
make hashfloat4() look as much as possible like hashfloat8(), I'd figured I
could replace NaNs with get_float4_nan() before widening to float8. However,
results from protosciurus and topminnow show that on some platforms that
produces a different bit-pattern from get_float8_nan(), breaking the intent of
ce773f230. Rearrange so that we use the result of get_float8_nan() for all NaN
cases. As before, back-patch.
https://git.postgresql.org/pg/commitdiff/b30cc0fd6d5d96c63037824c286cec561e092b6f
Tomáš Vondra pushed:
Fix lookup error in extended stats ownership check. When an ownership check on
extended statistics object failed, the code was calling aclcheck_error_type to
report the failure, which is clearly wrong, resulting in cache lookup errors.
Fix by calling aclcheck_error. This issue exists since the introduction of
extended statistics, so backpatch all the way back to PostgreSQL 10. It went
unnoticed because there were no tests triggering the error, so add one.
Reported-by: Mark Dilger Backpatch-through: 10, where extended stats were
introduced Discussion:
https://postgr.es/m/1F238937-7CC2-4703-A1B1-6DC225B8978A%40enterprisedb.com
https://git.postgresql.org/pg/commitdiff/5be8ce82e84f8f3604916c06668990c524f3856d
Rename the role in stats_ext to have regress_ prefix. Commit 5be8ce82e8 added
a new role to the stats_ext regression suite, but the role name did not start
with regress_ causing failures when running with
ENFORCE_REGRESSION_TEST_NAME_RESTRICTIONS. Fixed by renaming the role to start
with the expected regress_ prefix. Backpatch-through: 10, same as the new
regression test Discussion:
https://postgr.es/m/1F238937-7CC2-4703-A1B1-6DC225B8978A%40enterprisedb.com
https://git.postgresql.org/pg/commitdiff/628bc9d13be8c55492aef122c25dbba3b036366d
Don't print extra parens around expressions in extended stats. The code
printing expressions for extended statistics doubled the parens, producing
results like ((a+1)), which is unnecessary and not consistent with how we
print expressions elsewhere. Fixed by tweaking the code to produce just a
single set of parens. Reported by Mark Dilger, fix by me. Backpatch to 14,
where support for extended statistics on expressions was added. Reported-by:
Mark Dilger Discussion:
https://postgr.es/m/20210122040101.GF27167%40telsasoft.com
https://git.postgresql.org/pg/commitdiff/13380e1476490932c7b15530ead1f649a16e1125
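A sketch of where the doubled parentheses showed up (table and statistics
object names are illustrative):

    CREATE TABLE t (a int, b int);
    CREATE STATISTICS s_expr ON (a + 1), b FROM t;
    -- Displaying the object, e.g. in \d t output or via the function below,
    -- now prints the expression as (a + 1) rather than ((a + 1)).
    SELECT pg_get_statisticsobjdef(oid)
    FROM pg_statistic_ext WHERE stxname = 's_expr';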
Identify simple column references in extended statistics. Until now, when
defining extended statistics, everything except a plain column reference was
treated as complex expression. So for example "a" was a column reference, but
"(a)" would be an expression. In most cases this does not matter much, but
there were a couple strange consequences. For example CREATE STATISTICS s
ON a FROM t; would fail, because extended stats require at least two columns.
But CREATE STATISTICS s ON (a) FROM t; would succeed, because that
requirement does not apply to expressions. Moreover, that statistics object is
useless - the optimizer will always use the regular statistics collected for
attribute "a". So do a bit more work to identify those expressions
referencing a single column, and translate them to a simple column reference.
Backpatch to 14, where support for extended statistics on expressions was
introduced. Reported-by: Justin Pryzby Backpatch-through: 14 Discussion:
https://postgr.es/m/20210816013255.GS10479%40telsasoft.com
https://git.postgresql.org/pg/commitdiff/537ca68dbb2463f7b1c44e9466b8fbdd7505b2e1
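The cases from the commit message, spelled out on a hypothetical table t2:

    CREATE TABLE t2 (a int, b int);
    -- Still an error: extended statistics need at least two columns/expressions.
    CREATE STATISTICS s1 ON a FROM t2;
    -- Previously accepted as a one-element "expression" statistics object the
    -- optimizer would never use; now (a) is recognized as a plain column
    -- reference, so this fails with the same error as above.
    CREATE STATISTICS s2 ON (a) FROM t2;
    -- A useful definition: two columns (or a genuine expression plus a column).
    CREATE STATISTICS s3 ON a, b FROM t2;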
John Naylor pushed:
Mark the timestamptz variant of date_bin() as stable. Previously, it was
immutable by lack of marking. This is not correct, since the time zone could
change. Bump catversion Discussion:
https://www.postgresql.org/message-id/CAFBsxsG2UHk8mOWL0tca%3D_cg%2B_oA5mVRNLhDF0TBw980iOg5NQ%40mail.gmail.com
Backpatch to v14, when this function came in
https://git.postgresql.org/pg/commitdiff/543f36b43d835c819c24b2952ac4ecabbdf2b944
Set the volatility of the timestamptz version of date_bin() back to immutable.
543f36b43d was too hasty in thinking that the volatility of date_bin() had to
match date_trunc(), since only the latter references session_timezone. Bump
catversion Per feedback from Aleksander Alekseev Backpatch to v14, as the
former commit was
https://git.postgresql.org/pg/commitdiff/0c6a6a0ab7675e151a6e46d3ec0846bfce9e030a
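A quick illustration of why immutable is defensible here: date_bin() only
buckets an instant relative to an origin and never consults the session time
zone (the timestamps below are arbitrary):

    -- Bucket a timestamptz into 15-minute bins counted from the given origin.
    SELECT date_bin('15 minutes',
                    TIMESTAMPTZ '2021-09-10 14:38:29+00',
                    TIMESTAMPTZ '2021-01-01 00:00:00+00');
    -- The result is the instant 2021-09-10 14:30:00+00 (displayed according
    -- to the session's TimeZone setting), regardless of which zone ran it.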
Peter Geoghegan pushed:
Add historic commit to git-blame-ignore-revs file. Add a historic pgindent
commit that was missed by the initial work done in commit 8e638845.
https://git.postgresql.org/pg/commitdiff/aec5cc9efda00f7f418c9a068c92cab119602084
Remove unneeded old_rel_pages VACUUM state field. The field hasn't been used
since commit 3d351d91, which redefined pg_class.reltuples to be -1 before the
first VACUUM or ANALYZE. Also rename a local variable of the same name
("old_rel_pages"). This is used by relation truncation to represent the
original relation size at the start of the ongoing VACUUM operation. Rename
it to orig_rel_pages, since that's a lot clearer. (This name matches similar
nearby code.)
https://git.postgresql.org/pg/commitdiff/47029f775adf83796fecb5871ce52488996a7969
vacuumlazy.c: Correct prune state comment. Oversight in commit 7ab96cf6b3.
https://git.postgresql.org/pg/commitdiff/6320806ac3792a297f1c4eb149c3ddeda25d3930
Remove obsolete nbtree relation extension comment. Commit 0d1fe9f7 improved
the approach that vacuumlazy.c takes when it encounters an empty heap page.
It no longer acquires the relation extension lock.
https://git.postgresql.org/pg/commitdiff/0f6aa893cb58c2a5a92016914c94865635345a22
VACUUM VERBOSE: Don't report "pages removed". It doesn't make any sense to
report this information, since VACUUM VERBOSE reports on heap relation
truncation directly. This was an oversight in commit 7ab96cf6, which made
VACUUM VERBOSE output a little more consistent with nearby autovacuum-specific
log output. Adjust comments that describe how this is supposed to work in
passing. Also bring truncation-related VACUUM VERBOSE output in line with the
convention established for VACUUM VERBOSE output by commit f4f4a649. Author:
Peter Geoghegan [email protected] Backpatch: 14-, where VACUUM VERBOSE's output
changed.
https://git.postgresql.org/pg/commitdiff/b175b9cde720a81852b70c0c7fbc18c72ff1acc6
Peter Eisentraut pushed:
Fix incorrect format placeholders.
https://git.postgresql.org/pg/commitdiff/590ecd982304dec8599d6ca339903982d39a9a1a
Fix pkg-config files for static linking. Since ea53100d5 (PostgreSQL 12), the
shipped pkg-config files have been broken for statically linking libpq because
libpgcommon and libpgport are missing. This patch adds those two missing
private dependencies (in a non-hardcoded way). Reported-by: Filip Gospodinov
[email protected] Discussion:
https://www.postgresql.org/message-id/flat/[email protected]
https://git.postgresql.org/pg/commitdiff/4c2eab3a0dec2eae40892fb525830a5947a398c7
Make pkg-config files cross-compile friendly. Currently the pc files use
hard-coded paths for "includedir" and "libdir". Example:
    Cflags: -I/usr/include
    Libs: -L/usr/lib -lpq
This is not very fortunate when cross compiling inside a buildroot, where the
includes and libs are inside a staging directory, because this introduces host
paths into the build:
    checking for pkg-config...
    /builder/shared-workdir/build/sdk/staging_dir/host/bin/pkg-config
    checking for PostgreSQL libraries via pkg_config... -L/usr/lib  <----
This commit addresses this by doing the following two things: 1. Instead of
hard coding the paths in "Cflags" and "Libs", "${includedir}" and "${libdir}"
are used. Note: these variables can be overridden on the pkg-config command
line ("--define-variable=libdir=/some/path"). 2. Add the variables "prefix"
and "exec_prefix". If "includedir" and/or "libdir" are using these then
construct them accordingly. This is done because buildroots (for instance
OpenWrt) tend to rename the real pkg-config and call it indirectly from a
script that sets "prefix", "exec_prefix" and "bindir", like so:
    pkg-config.real --define-variable=prefix=${STAGING_PREFIX} \
      --define-variable=exec_prefix=${STAGING_PREFIX} \
      --define-variable=bindir=${STAGING_PREFIX}/bin $@
Example #1: user calls ./configure with "--libdir=/some/lib" and
"--includedir=/some/include":
    prefix=/usr/local/pgsql
    exec_prefix=${prefix}
    libdir=/some/lib
    includedir=/some/include
    Name: libpq
    Description: PostgreSQL libpq library
    Url: http://www.postgresql.org/
    Version: 12.1
    Requires:
    Requires.private:
    Cflags: -I${includedir}
    Libs: -L${libdir} -lpq
    Libs.private: -lcrypt -lm
Example #2: user calls ./configure with no arguments:
    prefix=/usr/local/pgsql
    exec_prefix=${prefix}
    libdir=${exec_prefix}/lib
    includedir=${prefix}/include
    Name: libpq
    Description: PostgreSQL libpq library
    Url: http://www.postgresql.org/
    Version: 12.1
    Requires:
    Requires.private:
    Cflags: -I${includedir}
    Libs: -L${libdir} -lpq
    Libs.private: -lcrypt -lm
Like this the paths can be forced into the staging directory when using a
buildroot setup:
    checking for pkg-config...
    /home/sk/tmp/openwrt/staging_dir/host/bin/pkg-config
    checking for PostgreSQL libraries via pkg_config...
    -L/home/sk/tmp/openwrt/staging_dir/target-mips_24kc_musl/usr/lib
Author: Sebastian Kemper [email protected] Co-authored-by: Peter Eisentraut
[email protected] Discussion:
https://www.postgresql.org/message-id/flat/20200305213827.GA25135%40darth.lan
https://git.postgresql.org/pg/commitdiff/6588d8416e4ef84fd99fb271b63116f207c6c479
Tatsuo Ishii pushed:
Use COPY FREEZE in pgbench for faster benchmark table population. While
populating the pgbench_accounts table, plain COPY was unconditionally used. By
changing it to COPY FREEZE, the time for VACUUM is significantly reduced, thus
the total time of "pgbench -i" is also reduced. This only happens if pgbench
runs against PostgreSQL 14 or later because COPY FREEZE in previous versions
of PostgreSQL does not bring the benefit. Also if partitioning is used, COPY
FREEZE cannot be used. In this case plain COPY will be used too. Author:
Tatsuo Ishii Discussion:
https://postgr.es/m/[email protected]
Reviewed-by: Fabien COELHO, Laurenz Albe, Peter Geoghegan, Dean Rasheed
https://git.postgresql.org/pg/commitdiff/06ba4a63b85e5aa47b325c3235c16c05a0b58b96
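Outside of pgbench, the same technique can be used directly; a minimal sketch
(the file path is hypothetical), relying on COPY FREEZE's requirement that the
target table be created or truncated in the same transaction:

    BEGIN;
    TRUNCATE pgbench_accounts;
    -- Rows are written already frozen, so the follow-up VACUUM has much less
    -- work to do.
    COPY pgbench_accounts FROM '/tmp/pgbench_accounts.dat' WITH (FREEZE);
    COMMIT;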
Posted over 3 years ago
Devart released a new version of dbForge Data Compare for PostgreSQL. The latest edition comes with new connectivity capabilities.
Devart, one of the leading developers of database management software, ALM solutions, and data providers for the most
popular database servers, announced an update of dbForge Data Compare for PostgreSQL, a GUI tool for data comparison and synchronization.
In addition to PostgreSQL and Amazon Redshift, the list of supported connections was extended with the following options:
Connection to IBM Cloud using SSL.
IBM Cloud, developed by IBM, is a suite of cloud computing services that includes both platform-as-a-service (PaaS) and infrastructure-as-a-service (IaaS) offerings. IBM Cloud is ranked fifth among the top cloud providers.
Connection to Amazon Aurora.
Amazon Aurora is a modern relational database service, compatible with MySQL and PostgreSQL. It was developed for the cloud and is managed by Amazon Relational Database Service (RDS).
dbForge Data Compare for PostgreSQL is a powerful, fast, and easy-to-use tool that compares PostgreSQL databases, detects data differences, and generates customizable synchronization scripts and comparison reports.
To learn more about the recent release, see the announcement at https://blog.devart.com/embrace-new-connectivity-opportunities-with-dbforge-data-compare-for-postgresql-v3-4.html.
About Devart
Devart is a multi-product company that develops advanced tools for database development and administration, ALM solutions, data providers for database servers, as well as data integration and backup solutions.
For additional information about Devart, visit https://www.devart.com/.
Posted over 3 years ago
We are immensely proud to release on PyPI the first beta package of Psycopg 3!
Psycopg 3 is a complete rewrite of Psycopg 2, maintaining the same fundamental
libpq wrapper architecture and DB-API interface design, but exposing new
features to better
work with the newer versions of Python and PostgreSQL.
On the Python side, Psycopg 3 allows the use of asyncio-based concurrency and
static typing. Many improvements to the Python interface make the library much
simpler and more idiomatic to use.
On the PostgreSQL side, Psycopg 3 makes use of server-side parameters,
prepared statements, and binary parameters, and offers great support for COPY
operations.
But the most outstanding feature of the project is not a technical one:
Psycopg 3 was made possible by the great generosity of many sponsors, who have funded the development of the
project. Among the many backers, we are especially grateful to Postgres
Professional and Command Prompt, Inc., which have given the most outstanding
support. Many other companies and individuals, each in their own capacity,
have shown concrete support for free software development and progress. We
sincerely hope that you will find this work useful and that you will feel
proud of having contributed to it.
https://www.psycopg.org/sponsors/
Where do we go from here? The hope is that the interface of the adapter will
not change excessively before a definitive 3.0 release: the project has
already been used in a few production environments over the past months, and a
lot of real-world feedback has already helped to improve the interface and
functionality. We invite you to test the project and give us your feedback.
So...
pip install -U pip
pip install --pre psycopg[binary]
Please try it, test it, and let us know how it goes!
For more info you can dive into the docs: start from the install and usage pages; I'm sure you will find your way.
Happy hacking!
Posted over 3 years ago
Hello,
I'd like to announce the first release of postgresql-wheel, a Python package containing an entire compiled PostgreSQL server in a single pip installable file.
https://github.com/michelp/postgresql-wheel
While the pip "wheel" is universally
used to package Python code and compiled shared libraries, the format, like other archive formats (apt, yum, etc.), can store any kind of compiled artifact, such as an entire PostgreSQL server installation.
postgresql-wheel uses the "manylinux" build environments to compile PostgreSQL and include an entire server installation in the wheel. This means you can install a local PostgreSQL server into a virtual environment without any sudo or root privileges.
This package was developed to be useful for test runners that depend on PostgreSQL: just add this wheel as a dependency, then create and destroy any number of databases in your setup and teardown code, with no external PostgreSQL server required!
-Michel