From: John Morris <john(dot)morris(at)crunchydata(dot)com>
To: Andres Freund <andres(at)anarazel(dot)de>, Andrei Lepikhov <a(dot)lepikhov(at)postgrespro(dot)ru>
Cc: Stephen Frost <sfrost(at)snowman(dot)net>, "reid(dot)thompson(at)crunchydata(dot)com" <reid(dot)thompson(at)crunchydata(dot)com>, Arne Roland <A(dot)Roland(at)index(dot)de>, "pgsql-hackers(at)lists(dot)postgresql(dot)org" <pgsql-hackers(at)lists(dot)postgresql(dot)org>, vignesh C <vignesh21(at)gmail(dot)com>, Justin Pryzby <pryzby(at)telsasoft(dot)com>, Ibrar Ahmed <ibrar(dot)ahmad(at)gmail(dot)com>, stephen(dot)frost <stephen(dot)frost(at)crunchydata(dot)com>
Subject: Re: Add the ability to limit the amount of memory that can be allocated to backends.
Date: 2023-10-31 17:11:26
Message-ID: BYAPR13MB26776A35AB57940680D4CE0EA0A0A@BYAPR13MB2677.namprd13.prod.outlook.com
Lists: pgsql-hackers
Here is an updated patch for tracking Postgres memory usage.
In this new patch, Postgres “reserves” memory, first by updating process-private counters and then, eventually, by updating global counters. If the new GUC variable “max_total_memory” is set, reservations that would exceed the limit are refused and treated as though the kernel had reported an out-of-memory error.
Postgres memory reservations come from multiple sources.
* Malloc calls made by the Postgres memory allocators.
* Static shared memory created by the postmaster at server startup.
* Dynamic shared memory created by the backends.
* A fixed amount (1 MB) of “initial” memory reserved whenever a process starts up.
Each process also maintains an accurate count of its actual memory allocations. The process-private variable “my_memory” holds the total allocations for that process. Since there can be no contention, each process updates its own counters very efficiently.
Pgstat now includes global memory counters. These shared memory counters represent the sum of all reservations made by all Postgres processes. For efficiency, the global counters are only updated when a process's new reservations exceed a threshold, currently 1 MB per process. Consequently, the global reservation counters are approximate totals which may differ from the actual allocation totals by up to 1 MB per process.
The max_total_memory limit is checked whenever the global counters are updated. There is no special error handling when a reservation exceeds the global limit: the failed allocation returns NULL for malloc-style allocations or ENOMEM for shared memory allocations, and Postgres's existing mechanisms for dealing with out-of-memory conditions take over.
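To make the scheme above concrete, here is a minimal standalone C sketch of threshold-based reservation. It is illustrative only, not the patch's code: the names (reserve_memory, RESERVATION_CHUNK, total_reserved, unreported) are invented, and in the patch the global counter lives in shared memory rather than in a file-scope variable.

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    #define RESERVATION_CHUNK (1024 * 1024)  /* update the global counter once per ~1 MB */

    static _Atomic uint64_t total_reserved;  /* cluster-wide total; shared memory in the patch */
    static uint64_t max_total_memory;        /* limit in bytes; 0 means "no limit" */
    static uint64_t my_memory;               /* process-private running total */
    static uint64_t unreported;              /* reserved locally, not yet added to the global */

    /* Returns false if the reservation would push the server past max_total_memory. */
    static bool
    reserve_memory(size_t size)
    {
        my_memory += size;
        unreported += size;

        /* Cheap, contention-free path taken by the vast majority of allocations. */
        if (unreported < RESERVATION_CHUNK)
            return true;

        /* Fold the pending amount into the global counter and check the limit. */
        uint64_t newtotal = atomic_fetch_add(&total_reserved, unreported) + unreported;

        if (max_total_memory != 0 && newtotal > max_total_memory)
        {
            /* Back the update out and fail, as if malloc had returned NULL. */
            atomic_fetch_sub(&total_reserved, unreported);
            my_memory -= size;
            unreported -= size;
            return false;
        }

        unreported = 0;
        return true;
    }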
For sanity checking, pgstat now includes the pg_backend_memory_allocation view showing memory allocations made by the backend process. This view includes a scan of the top memory context, so it compares memory allocations reported through pgstat with actual allocations. The two should match.
Two other views were created as well. pg_stat_global_memory_tracking shows how much server memory has been reserved overall and how much memory remains to be reserved. pg_stat_memory_reservation shows the memory reserved by each server process. Both of these views use pgstat’s “snapshot” mechanism to ensure consistent values within a transaction.
Performance-wise, there was no measurable impact with either pgbench or a simple “SELECT * from series” query.
Attachment: memtrack_v5_adds_memory_tracking_to_postgres.patch (application/octet-stream, 137.7 KB)
From: Andres Freund <andres(at)anarazel(dot)de>
To: John Morris <john(dot)morris(at)crunchydata(dot)com>
Cc: Andrei Lepikhov <a(dot)lepikhov(at)postgrespro(dot)ru>, Stephen Frost <sfrost(at)snowman(dot)net>, "reid(dot)thompson(at)crunchydata(dot)com" <reid(dot)thompson(at)crunchydata(dot)com>, Arne Roland <A(dot)Roland(at)index(dot)de>, "pgsql-hackers(at)lists(dot)postgresql(dot)org" <pgsql-hackers(at)lists(dot)postgresql(dot)org>, vignesh C <vignesh21(at)gmail(dot)com>, Justin Pryzby <pryzby(at)telsasoft(dot)com>, Ibrar Ahmed <ibrar(dot)ahmad(at)gmail(dot)com>, "stephen(dot)frost" <stephen(dot)frost(at)crunchydata(dot)com>
Subject: Re: Add the ability to limit the amount of memory that can be allocated to backends.
Date: 2023-11-04 04:19:00
Message-ID: [email protected]
Lists: pgsql-hackers
Hi,
On 2023-10-31 17:11:26 +0000, John Morris wrote:
> Postgres memory reservations come from multiple sources.
>
> * Malloc calls made by the Postgres memory allocators.
> * Static shared memory created by the postmaster at server startup,
> * Dynamic shared memory created by the backends.
> * A fixed amount (1Mb) of “initial” memory reserved whenever a process starts up.
>
> Each process also maintains an accurate count of its actual memory
> allocations. The process-private variable “my_memory” holds the total
> allocations for that process. Since there can be no contention, each process
> updates its own counters very efficiently.
I think this will introduce measurable overhead in low concurrency cases and
very substantial overhead / contention when there is a decent amount of
concurrency. This makes all memory allocations > 1 MB contend on a single
atomic. Massive amounts of energy have been spent writing multi-threaded
allocators that have far less contention than this - the current state of the
art is to never contend on shared resources on any reasonably common path. This
gives away one of the few major advantages our process model has.
The patch doesn't just introduce contention when limiting is enabled - it
introduces it even when memory usage is just tracked. It makes absolutely no
sense to have a single contended atomic in that case - just have a per-backend
variable in shared memory that's updated. It's *WAY* cheaper to compute the
overall memory usage during querying than to keep a running total in shared
memory.
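A sketch of the per-backend approach described here, assuming a fixed array of per-backend slots kept in shared memory (MAX_BACKENDS, publish_my_memory and total_memory_usage are illustrative names, not Postgres APIs): each backend stores only into its own slot, so the allocation path never touches a contended counter, and the summing cost is paid only by whoever asks for the total.

    #include <stdatomic.h>
    #include <stdint.h>

    #define MAX_BACKENDS 1024                 /* illustrative bound */

    /* One slot per backend; in Postgres this array would live in shared memory. */
    static _Atomic uint64_t backend_mem[MAX_BACKENDS];

    /* Each backend publishes its own usage; no other process ever writes this
     * slot, so there is no contention on the allocation path. */
    static void
    publish_my_memory(int my_backend_id, uint64_t bytes)
    {
        atomic_store_explicit(&backend_mem[my_backend_id], bytes, memory_order_relaxed);
    }

    /* Only a monitoring query (or an occasional limit check) pays for the scan. */
    static uint64_t
    total_memory_usage(void)
    {
        uint64_t total = 0;

        for (int i = 0; i < MAX_BACKENDS; i++)
            total += atomic_load_explicit(&backend_mem[i], memory_order_relaxed);
        return total;
    }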
> Pgstat now includes global memory counters. These shared memory counters
> represent the sum of all reservations made by all Postgres processes. For
> efficiency, these global counters are only updated when new reservations
> exceed a threshold, currently 1 Mb for each process. Consequently, the
> global reservation counters are approximate totals which may differ from the
> actual allocation totals by up to 1 Mb per process.
I see that you added them to the "cumulative" stats system - that doesn't
immediately make sense to me - what you're tracking here isn't an
accumulating counter, it's something showing the current state, right?
> The max_total_memory limit is checked whenever the global counters are
> updated. There is no special error handling if a memory allocation exceeds
> the global limit. That allocation returns a NULL for malloc style
> allocations or an ENOMEM for shared memory allocations. Postgres has
> existing mechanisms for dealing with out of memory conditions.
I still think it's extremely unwise to do tracking of memory and limiting of
memory in one patch. You should work towards an acceptable patch that just
tracks memory usage in as simple and low-overhead a way as possible. Then we
can later build on that.
> For sanity checking, pgstat now includes the pg_backend_memory_allocation
> view showing memory allocations made by the backend process. This view
> includes a scan of the top memory context, so it compares memory allocations
> reported through pgstat with actual allocations. The two should match.
Can't you just do this using the existing pg_backend_memory_contexts view?
> Performance-wise, there was no measurable impact with either pgbench or a
> simple “SELECT * from series” query.
That seems unsurprising - allocations aren't a major part of the work there;
you'd have to regress by a lot for memory allocator changes to show a
significant performance decrease.
> diff --git a/src/test/regress/expected/opr_sanity.out b/src/test/regress/expected/opr_sanity.out
> index 7a6f36a6a9..6c813ec465 100644
> --- a/src/test/regress/expected/opr_sanity.out
> +++ b/src/test/regress/expected/opr_sanity.out
> @@ -468,9 +468,11 @@ WHERE proallargtypes IS NOT NULL AND
> ARRAY(SELECT proallargtypes[i]
> FROM generate_series(1, array_length(proallargtypes, 1)) g(i)
> WHERE proargmodes IS NULL OR proargmodes[i] IN ('i', 'b', 'v'));
> - oid | proname | proargtypes | proallargtypes | proargmodes
> ------+---------+-------------+----------------+-------------
> -(0 rows)
> + oid | proname | proargtypes | proallargtypes | proargmodes
> +------+----------------------------------+-------------+---------------------------+-------------------
> + 9890 | pg_stat_get_memory_reservation | | {23,23,20,20,20,20,20,20} | {i,o,o,o,o,o,o,o}
> + 9891 | pg_get_backend_memory_allocation | | {23,23,20,20,20,20,20} | {i,o,o,o,o,o,o}
> +(2 rows)
This indicates that your pg_proc entries are broken; they need to be fixed rather
than allowed here.
Greetings,
Andres Freund
From: Stephen Frost <sfrost(at)snowman(dot)net>
To: Andres Freund <andres(at)anarazel(dot)de>
Cc: John Morris <john(dot)morris(at)crunchydata(dot)com>, Andrei Lepikhov <a(dot)lepikhov(at)postgrespro(dot)ru>, "reid(dot)thompson(at)crunchydata(dot)com" <reid(dot)thompson(at)crunchydata(dot)com>, Arne Roland <A(dot)Roland(at)index(dot)de>, "pgsql-hackers(at)lists(dot)postgresql(dot)org" <pgsql-hackers(at)lists(dot)postgresql(dot)org>, vignesh C <vignesh21(at)gmail(dot)com>, Justin Pryzby <pryzby(at)telsasoft(dot)com>, Ibrar Ahmed <ibrar(dot)ahmad(at)gmail(dot)com>, "stephen(dot)frost" <stephen(dot)frost(at)crunchydata(dot)com>
Subject: Re: Add the ability to limit the amount of memory that can be allocated to backends.
Date: 2023-11-06 18:02:50
Message-ID: [email protected]
Lists: pgsql-hackers
Greetings,
* Andres Freund (andres(at)anarazel(dot)de) wrote:
> On 2023-10-31 17:11:26 +0000, John Morris wrote:
> > Postgres memory reservations come from multiple sources.
> >
> > * Malloc calls made by the Postgres memory allocators.
> > * Static shared memory created by the postmaster at server startup,
> > * Dynamic shared memory created by the backends.
> > * A fixed amount (1Mb) of “initial” memory reserved whenever a process starts up.
> >
> > Each process also maintains an accurate count of its actual memory
> > allocations. The process-private variable “my_memory” holds the total
> > allocations for that process. Since there can be no contention, each process
> > updates its own counters very efficiently.
>
> I think this will introduce measurable overhead in low concurrency cases and
> very substantial overhead / contention when there is a decent amount of
> concurrency. This makes all memory allocations > 1MB contend on a single
> atomic. Massive amount of energy have been spent writing multi-threaded
> allocators that have far less contention than this - the current state is to
> never contend on shared resources on any reasonably common path. This gives
> away one of the few major advantages our process model has away.
We could certainly adjust the size of each reservation to reduce the
frequency of having to hit the atomic. Specific suggestions about how
to benchmark and see the regression that's being worried about here
would be great. Certainly my hope has generally been that when we do a
larger allocation, it's because we're about to go do a bunch of work,
meaning that hopefully the time spent updating the atomic is minor
overall.
> The patch doesn't just introduce contention when limiting is enabled - it
> introduces it even when memory usage is just tracked. It makes absolutely no
> sense to have a single contended atomic in that case - just have a per-backend
> variable in shared memory that's updated. It's *WAY* cheaper to compute the
> overall memory usage during querying than to keep a running total in shared
> memory.
Agreed that we should avoid the contention when limiting isn't being
used; that's certainly easy to do, and we had actually intended to, but that
seems to have gotten lost along the way. Will fix.
Other than that change inside update_global_reservation though, the code
for reporting per-backend memory usage and querying it does work as
you're outlining above inside the stats system.
That said, I just want to confirm that you would agree that querying the
amount of memory used by every backend, to add it all up to enforce an
overall limit, surely isn't something we're going to want to do during
an allocation and that having a global atomic for that is better, right?
> > Pgstat now includes global memory counters. These shared memory counters
> > represent the sum of all reservations made by all Postgres processes. For
> > efficiency, these global counters are only updated when new reservations
> > exceed a threshold, currently 1 Mb for each process. Consequently, the
> > global reservation counters are approximate totals which may differ from the
> > actual allocation totals by up to 1 Mb per process.
>
> I see that you added them to the "cumulative" stats system - that doesn't
> immediately makes sense to me - what you're tracking here isn't an
> accumulating counter, it's something showing the current state, right?
Yes, this is current state, not an accumulation.
> > The max_total_memory limit is checked whenever the global counters are
> > updated. There is no special error handling if a memory allocation exceeds
> > the global limit. That allocation returns a NULL for malloc style
> > allocations or an ENOMEM for shared memory allocations. Postgres has
> > existing mechanisms for dealing with out of memory conditions.
>
> I still think it's extremely unwise to do tracking of memory and limiting of
> memory in one patch. You should work towards and acceptable patch that just
> tracks memory usage in an as simple and low overhead way as possible. Then we
> later can build on that.
Frankly, while tracking is interesting, the limiting is the feature
that's needed more urgently imv. We could possibly split it up but
there's an awful lot of the same code that would need to be changed and
that seems less than ideal. Still, we'll look into this.
> > For sanity checking, pgstat now includes the pg_backend_memory_allocation
> > view showing memory allocations made by the backend process. This view
> > includes a scan of the top memory context, so it compares memory allocations
> > reported through pgstat with actual allocations. The two should match.
>
> Can't you just do this using the existing pg_backend_memory_contexts view?
Not and still get a number that you can compare to the local backend number,
because the query itself runs, performing allocations and creating new
contexts. We wanted to be able to show that we are accounting correctly and
exactly matching what the memory context system is tracking.
> > - oid | proname | proargtypes | proallargtypes | proargmodes
> > ------+---------+-------------+----------------+-------------
> > -(0 rows)
> > + oid | proname | proargtypes | proallargtypes | proargmodes
> > +------+----------------------------------+-------------+---------------------------+-------------------
> > + 9890 | pg_stat_get_memory_reservation | | {23,23,20,20,20,20,20,20} | {i,o,o,o,o,o,o,o}
> > + 9891 | pg_get_backend_memory_allocation | | {23,23,20,20,20,20,20} | {i,o,o,o,o,o,o}
> > +(2 rows)
>
> This indicates that your pg_proc entries are broken, they need to fixed rather
> than allowed here.
Agreed, will fix.
Thanks!
Stephen
From: Andres Freund <andres(at)anarazel(dot)de>
To: Stephen Frost <sfrost(at)snowman(dot)net>
Cc: John Morris <john(dot)morris(at)crunchydata(dot)com>, Andrei Lepikhov <a(dot)lepikhov(at)postgrespro(dot)ru>, "reid(dot)thompson(at)crunchydata(dot)com" <reid(dot)thompson(at)crunchydata(dot)com>, Arne Roland <A(dot)Roland(at)index(dot)de>, "pgsql-hackers(at)lists(dot)postgresql(dot)org" <pgsql-hackers(at)lists(dot)postgresql(dot)org>, vignesh C <vignesh21(at)gmail(dot)com>, Justin Pryzby <pryzby(at)telsasoft(dot)com>, Ibrar Ahmed <ibrar(dot)ahmad(at)gmail(dot)com>, "stephen(dot)frost" <stephen(dot)frost(at)crunchydata(dot)com>
Subject: Re: Add the ability to limit the amount of memory that can be allocated to backends.
Date: 2023-11-07 19:55:06
Message-ID: [email protected]
Lists: pgsql-hackers
Hi,
On 2023-11-06 13:02:50 -0500, Stephen Frost wrote:
> > > The max_total_memory limit is checked whenever the global counters are
> > > updated. There is no special error handling if a memory allocation exceeds
> > > the global limit. That allocation returns a NULL for malloc style
> > > allocations or an ENOMEM for shared memory allocations. Postgres has
> > > existing mechanisms for dealing with out of memory conditions.
> >
> > I still think it's extremely unwise to do tracking of memory and limiting of
> > memory in one patch. You should work towards and acceptable patch that just
> > tracks memory usage in an as simple and low overhead way as possible. Then we
> > later can build on that.
>
> Frankly, while tracking is interesting, the limiting is the feature
> that's needed more urgently imv.
I agree that we need limiting, but that the tracking needs to be very robust
for that to be usable.
> We could possibly split it up but there's an awful lot of the same code that
> would need to be changed and that seems less than ideal. Still, we'll look
> into this.
Shrug. IMO keeping them together just makes it very likely that neither goes
in.
> > > For sanity checking, pgstat now includes the pg_backend_memory_allocation
> > > view showing memory allocations made by the backend process. This view
> > > includes a scan of the top memory context, so it compares memory allocations
> > > reported through pgstat with actual allocations. The two should match.
> >
> > Can't you just do this using the existing pg_backend_memory_contexts view?
>
> Not and get a number that you can compare to the local backend number
> due to the query itself happening and performing allocations and
> creating new contexts. We wanted to be able to show that we are
> accounting correctly and exactly matching to what the memory context
> system is tracking.
I think creating a separate view for this will be confusing for users, without
really much to show for it. Excluding the current query would be useful for other
cases as well; why don't we provide a way to do that with
pg_backend_memory_contexts?
Greetings,
Andres Freund
From: Stephen Frost <sfrost(at)snowman(dot)net>
To: Andres Freund <andres(at)anarazel(dot)de>
Cc: John Morris <john(dot)morris(at)crunchydata(dot)com>, Andrei Lepikhov <a(dot)lepikhov(at)postgrespro(dot)ru>, "reid(dot)thompson(at)crunchydata(dot)com" <reid(dot)thompson(at)crunchydata(dot)com>, Arne Roland <A(dot)Roland(at)index(dot)de>, "pgsql-hackers(at)lists(dot)postgresql(dot)org" <pgsql-hackers(at)lists(dot)postgresql(dot)org>, vignesh C <vignesh21(at)gmail(dot)com>, Justin Pryzby <pryzby(at)telsasoft(dot)com>, Ibrar Ahmed <ibrar(dot)ahmad(at)gmail(dot)com>, "stephen(dot)frost" <stephen(dot)frost(at)crunchydata(dot)com>
Subject: Re: Add the ability to limit the amount of memory that can be allocated to backends.
Date: 2023-11-07 20:55:48
Message-ID: [email protected]
Lists: pgsql-hackers
Greetings,
* Andres Freund (andres(at)anarazel(dot)de) wrote:
> On 2023-11-06 13:02:50 -0500, Stephen Frost wrote:
> > > > The max_total_memory limit is checked whenever the global counters are
> > > > updated. There is no special error handling if a memory allocation exceeds
> > > > the global limit. That allocation returns a NULL for malloc style
> > > > allocations or an ENOMEM for shared memory allocations. Postgres has
> > > > existing mechanisms for dealing with out of memory conditions.
> > >
> > > I still think it's extremely unwise to do tracking of memory and limiting of
> > > memory in one patch. You should work towards and acceptable patch that just
> > > tracks memory usage in an as simple and low overhead way as possible. Then we
> > > later can build on that.
> >
> > Frankly, while tracking is interesting, the limiting is the feature
> > that's needed more urgently imv.
>
> I agree that we need limiting, but that the tracking needs to be very robust
> for that to be usable.
Is there an issue with the tracking in the patch that you saw? That's
certainly an area that we've tried hard to get right and to match up to
numbers from the rest of the system, such as the memory context system.
> > We could possibly split it up but there's an awful lot of the same code that
> > would need to be changed and that seems less than ideal. Still, we'll look
> > into this.
>
> Shrug. IMO keeping them together just makes it very likely that neither goes
> in.
I'm happy to hear your support for the limiting part of this- that's
encouraging.
> > > > For sanity checking, pgstat now includes the pg_backend_memory_allocation
> > > > view showing memory allocations made by the backend process. This view
> > > > includes a scan of the top memory context, so it compares memory allocations
> > > > reported through pgstat with actual allocations. The two should match.
> > >
> > > Can't you just do this using the existing pg_backend_memory_contexts view?
> >
> > Not and get a number that you can compare to the local backend number
> > due to the query itself happening and performing allocations and
> > creating new contexts. We wanted to be able to show that we are
> > accounting correctly and exactly matching to what the memory context
> > system is tracking.
>
> I think creating a separate view for this will be confusing for users, without
> really much to show for. Excluding the current query would be useful for other
> cases as well, why don't we provide a way to do that with
> pg_backend_memory_contexts?
Both of these feel very much like power-user views, so I'm not terribly
concerned about users getting confused. That said, we could possibly
drop this as a view and just have the functions which are then used in
the regression tests to catch things should the numbers start to
diverge.
Having a way to get the memory contexts which don't include the
currently running query might be interesting too but is rather
independent of what this patch is trying to do. The only reason we
collected up the memory-context info is as a cross-check against the tracking
that we're doing, and while the existing memory-context view is just fine
for a lot of other things, it doesn't work for that specific need.
Thanks,
Stephen
From: Andres Freund <andres(at)anarazel(dot)de>
To: Stephen Frost <sfrost(at)snowman(dot)net>
Cc: John Morris <john(dot)morris(at)crunchydata(dot)com>, Andrei Lepikhov <a(dot)lepikhov(at)postgrespro(dot)ru>, "reid(dot)thompson(at)crunchydata(dot)com" <reid(dot)thompson(at)crunchydata(dot)com>, Arne Roland <A(dot)Roland(at)index(dot)de>, "pgsql-hackers(at)lists(dot)postgresql(dot)org" <pgsql-hackers(at)lists(dot)postgresql(dot)org>, vignesh C <vignesh21(at)gmail(dot)com>, Justin Pryzby <pryzby(at)telsasoft(dot)com>, Ibrar Ahmed <ibrar(dot)ahmad(at)gmail(dot)com>, "stephen(dot)frost" <stephen(dot)frost(at)crunchydata(dot)com>
Subject: Re: Add the ability to limit the amount of memory that can be allocated to backends.
Date: 2023-11-08 17:20:44
Message-ID: [email protected]
Lists: pgsql-hackers
Hi,
On 2023-11-07 15:55:48 -0500, Stephen Frost wrote:
> * Andres Freund (andres(at)anarazel(dot)de) wrote:
> > On 2023-11-06 13:02:50 -0500, Stephen Frost wrote:
> > > > > The max_total_memory limit is checked whenever the global counters are
> > > > > updated. There is no special error handling if a memory allocation exceeds
> > > > > the global limit. That allocation returns a NULL for malloc style
> > > > > allocations or an ENOMEM for shared memory allocations. Postgres has
> > > > > existing mechanisms for dealing with out of memory conditions.
> > > >
> > > > I still think it's extremely unwise to do tracking of memory and limiting of
> > > > memory in one patch. You should work towards and acceptable patch that just
> > > > tracks memory usage in an as simple and low overhead way as possible. Then we
> > > > later can build on that.
> > >
> > > Frankly, while tracking is interesting, the limiting is the feature
> > > that's needed more urgently imv.
> >
> > I agree that we need limiting, but that the tracking needs to be very robust
> > for that to be usable.
>
> Is there an issue with the tracking in the patch that you saw? That's
> certainly an area that we've tried hard to get right and to match up to
> numbers from the rest of the system, such as the memory context system.
There are some details I am pretty sure aren't right - the DSM tracking piece
seems bogus to me. But beyond that: I don't know. There's enough other stuff
in the patch that it's hard to focus on that aspect. That's why I'd like to
merge a patch doing just the tracking, so we can actually collect numbers. If any of
the developers of the patch had focused on polishing that part instead of
focusing on the limiting, it'd have been ready to be merged a while ago, maybe
even in 16. I think the limiting piece is unlikely to be ready for 17.
Greetings,
Andres Freund
From: jian he <jian(dot)universality(at)gmail(dot)com>
To: Andres Freund <andres(at)anarazel(dot)de>
Cc: Stephen Frost <sfrost(at)snowman(dot)net>, John Morris <john(dot)morris(at)crunchydata(dot)com>, Andrei Lepikhov <a(dot)lepikhov(at)postgrespro(dot)ru>, "reid(dot)thompson(at)crunchydata(dot)com" <reid(dot)thompson(at)crunchydata(dot)com>, Arne Roland <A(dot)Roland(at)index(dot)de>, "pgsql-hackers(at)lists(dot)postgresql(dot)org" <pgsql-hackers(at)lists(dot)postgresql(dot)org>, vignesh C <vignesh21(at)gmail(dot)com>, Justin Pryzby <pryzby(at)telsasoft(dot)com>, Ibrar Ahmed <ibrar(dot)ahmad(at)gmail(dot)com>, "stephen(dot)frost" <stephen(dot)frost(at)crunchydata(dot)com>
Subject: Re: Add the ability to limit the amount of memory that can be allocated to backends.
Date: 2023-11-10 09:55:27
Message-ID: CACJufxFdkjzn74CLDbwCAYLDmEWGsehtj_672OHzLhzBSsMO1Q@mail.gmail.com
Lists: pgsql-hackers
Hi.
+static void checkAllocations();
Should this be "static void checkAllocations(void);"?
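For illustration (not from the patch): in C, unlike C++, an empty parameter list declares a function with unspecified arguments, so the compiler will not diagnose calls that pass the wrong arguments; "(void)" declares a true zero-argument prototype. A minimal sketch, with checkAllocations standing in for the real function:

    static void checkAllocations();      /* old style: argument list left unspecified */
    static void checkAllocations(void);  /* prototype: the function takes no arguments */

    static void
    checkAllocations(void)
    {
        /* body elided */
    }

    int
    main(void)
    {
        checkAllocations();
        return 0;
    }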
In PgStatShared_Memtrack there is a lock, but it seems to be neither initialized
nor used. Can you expand on it?
So in the view pg_stat_global_memory_tracking, is the column
"total_memory_reserved" a point-in-time total of the memory the whole
server has reserved/malloced? Will it change every time you query it?
The function pg_stat_get_global_memory_tracking has provolatile => 's'.
Should it be a VOLATILE function?
pg_stat_get_memory_reservation and pg_stat_get_global_memory_tracking
should be proretset => 'f'.
+{ oid => '9891',
+ descr => 'statistics: memory utilized by current backend',
+ proname => 'pg_get_backend_memory_allocation', prorows => '1', proisstrict => 'f',
+ proretset => 't', provolatile => 's', proparallel => 'r',
You declared
+void pgstat_backend_memory_reservation_cb(void);
but there seems to be no definition.
This part seems unnecessary, since you already declared these in
src/include/catalog/pg_proc.dat?
+/* SQL Callable functions */
+extern Datum pg_stat_get_memory_reservation(PG_FUNCTION_ARGS);
+extern Datum pg_get_backend_memory_allocation(PG_FUNCTION_ARGS);
+extern Datum pg_stat_get_global_memory_tracking(PG_FUNCTION_ARGS);
The last sentence is just a plain link, with no explanation. Is something missing?
<para>
+ Reports how much memory remains available to the server. If a
+ backend process attempts to allocate more memory than remains,
+ the process will fail with an out of memory error, resulting in
+ cancellation of the process's active query/transaction.
+ If memory is not being limited (ie. max_total_memory is zero or not set),
+ this column returns NULL.
+ <xref linkend="guc-max-total-memory"/>.
+ </para></entry>
+ </row>
+
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ <structfield>static_shared_memory</structfield> <type>bigint</type>
+ </para>
+ <para>
+ Reports how much static shared memory (non-DSM shared memory) is being used by
+ the server. Static shared memory is configured by the postmaster at
+ at server startup.
+ <xref linkend="guc-max-total-memory"/>.
+ </para></entry>
+ </row>