Why is this version's top event also XidInMVCCSnapshot? #1
New issue
Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.
By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.
Already on GitHub? Sign in to your account
Comments
Did you set session_pool_size? In the latest version it defaults to 10, which means there will be only 10 worker backends. Can you check the number of backends spawned by postgres?
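A quick way to check both things; this is a minimal sketch assuming you are running the builtin_pool fork (session_pool_size is its GUC, not a stock PostgreSQL setting; backend_type is available in PostgreSQL 10 and later):

```sql
-- Show the configured pool size (defaults to 10 in the latest version):
SHOW session_pool_size;

-- Count the client-serving backends actually spawned:
SELECT count(*)
FROM pg_stat_activity
WHERE backend_type = 'client backend';
```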
Hi, my test host has 56 cores; what is a comfortable pool size?
It's my mistake, sorry.
knizhnik pushed a commit that referenced this issue on Feb 13, 2019:
The original setup for dependencies of partitioned objects had serious problems:

1. It did not verify that a drop cascading to a partition-child object also cascaded to at least one of the object's partition parents. Now, normally a child object would share all its dependencies with one or another parent (e.g. a child index's opclass dependencies would be shared with the parent index), so that this oversight is usually harmless. But if some dependency failed to fit this pattern, the child could be dropped while all its parents remain, creating a logically broken situation. (It's easy to construct artificial cases that break it, such as attaching an unrelated extension dependency to the child object and then dropping the extension. I'm not sure if any less-artificial cases exist.)

2. Management of partition dependencies during ATTACH/DETACH PARTITION was complicated and buggy; for example, after detaching a partition table it was possible to create cases where a formerly-child index should be dropped and was not, because the correct set of dependencies had not been reconstructed.

Less seriously, because multiple partition relationships were represented identically in pg_depend, there was an order-of-traversal dependency on which partition parent was cited in error messages. We also had some pre-existing order-of-traversal hazards for error messages related to internal and extension dependencies. This is cosmetic to users but causes testing problems.

To fix #1, add a check at the end of the partition tree traversal to ensure that at least one partition parent got deleted. To fix #2, establish a new policy that partition dependencies are in addition to, not instead of, a child object's usual dependencies; in this way ATTACH/DETACH PARTITION need not cope with adding or removing the usual dependencies.

To fix the cosmetic problem, distinguish between primary and secondary partition dependency entries in pg_depend, by giving them different deptypes. (They behave identically except for having different priorities for being cited in error messages.) This means that the former 'I' dependency type is replaced with new 'P' and 'S' types.

This also fixes a longstanding bug that after handling an internal dependency by recursing to the owning object, findDependentObjects did not verify that the current target was now scheduled for deletion, and did not apply the current recursion level's objflags to it. Perhaps that should be back-patched; but in the back branches it would only matter if some concurrent transaction had removed the internal-linkage pg_depend entry before the recursive call found it, or the recursive call somehow failed to find it, both of which seem unlikely.

Catversion bump because the contents of pg_depend change for partitioning relationships.

Patch HEAD only. It's annoying that we're not fixing #2 in v11, but there seems no practical way to do so given that the problem is exactly a poor choice of what entries to put in pg_depend. We can't really fix that while staying compatible with what's in pg_depend in existing v11 installations.

Discussion: https://postgr.es/m/CAH2-Wzkypv1R+teZrr71U23J578NnTBt2X8+Y=Odr4pOdW1rXg@mail.gmail.com
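For context, the new deptypes the commit introduces are visible directly in pg_depend. A minimal sketch of inspecting them, assuming a server build that includes this commit and some existing partitioned relations (nothing here is taken from the commit itself):

```sql
-- List partition dependency entries recorded with the new deptypes:
-- 'P' = primary partition dependency, 'S' = secondary.
SELECT objid::regclass    AS dependent_object,
       refobjid::regclass AS partition_parent,
       deptype
FROM pg_depend
WHERE classid = 'pg_class'::regclass
  AND refclassid = 'pg_class'::regclass
  AND deptype IN ('P', 'S');
```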
knizhnik pushed a commit that referenced this issue on Jul 1, 2019:
…tions.

Commit 3d956d9 added support for update row movement in postgres_fdw. This patch fixes the following issues introduced by that commit:

* When a remote partition chosen to insert routed rows into was also an UPDATE subplan target rel that would be updated later, the UPDATE that used a direct modification plan modified those routed rows incorrectly because those routed rows were visible to the later UPDATE command. The right fix for this would be to have some way in postgres_fdw in which the later UPDATE command ignores those routed rows, but it seems hard to do so with the current infrastructure. For now throw an error in that case.

* When a remote partition chosen to insert routed rows into was also an UPDATE subplan target rel, fmstate created for the UPDATE that used a non-direct modification plan was mistakenly overridden by another fmstate created for inserting those routed rows into the partition. This caused 1) server crash when the partition would be updated later, and 2) resource leak when the partition had been already updated. To avoid that, adjust the treatment of the fmstate for the inserting. As for #1, since we would also have the incorrectness issue as mentioned above, error out in that case as well.

Update the docs to mention that postgres_fdw currently does not handle the case where a remote partition chosen to insert a routed row into is also an UPDATE subplan target rel that will be updated later.

Author: Amit Langote and Etsuro Fujita
Reviewed-by: Amit Langote
Backpatch-through: 11 where row movement in postgres_fdw was added
Discussion: https://postgr.es/m/[email protected]
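To illustrate the case the commit now rejects, here is a hedged sketch; the table, server, and partition names are invented for the example and are not taken from the commit's regression tests:

```sql
-- A partitioned table with one local and one foreign partition.
-- 'loopback' is an assumed postgres_fdw server definition.
CREATE TABLE pt (a int, b text) PARTITION BY LIST (a);
CREATE TABLE pt1 PARTITION OF pt FOR VALUES IN (1);
CREATE FOREIGN TABLE pt2 PARTITION OF pt FOR VALUES IN (2)
  SERVER loopback OPTIONS (table_name 'pt2_base');

-- This UPDATE moves matching rows out of pt1 into the foreign partition
-- pt2, while pt2 is itself an UPDATE subplan target rel to be processed
-- later. After this fix, postgres_fdw raises an error here instead of
-- silently updating the just-routed rows (or crashing on the non-direct
-- modification plan path).
UPDATE pt SET a = 2 WHERE b = 'x';
```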