urllib3 connection pool full using messaging.send_each_for_multicast() #712
I couldn't figure out how to label this issue, so I've labeled it for a human to triage. Hang tight.
Hi @filippotp, Thanks for reporting the issue! According to the answers in https://stackoverflow.com/questions/53765366/urllib3-connectionpool-connection-pool-is-full-discarding-connection, seeing `Connection pool is full, discarding connection` does not mean that requests are being dropped; it only means that once a request completes, its connection is discarded instead of being returned to the already-full pool. The messages themselves are still sent. Here's the reason why those warnings only show up in the logs when `messaging.send_each()` / `messaging.send_each_for_multicast()` is used: Unlike `messaging.send_multicast()`, which sends all the messages in a single batch request, `send_each_for_multicast()` sends a separate HTTP request for each message in parallel threads, so the number of concurrent connections can exceed the default connection pool size of 10 and the extra connections are discarded after use.
So if you only send 10 or fewer messages at a time, the warnings should not appear, and either way no messages are lost because of them.
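For context, a minimal sketch of where the pool size of 10 comes from in plain `requests`/urllib3 (this is not firebase-admin's internal code, and the SDK does not currently expose this knob; the adapter settings below are only illustrative):

```python
# Hedged sketch: how urllib3's per-host connection pool works in a plain
# `requests` session. The firebase-admin SDK manages its own session, so this
# only illustrates where the "Connection pool size: 10" limit comes from.
import requests
from requests.adapters import HTTPAdapter

session = requests.Session()
# The default HTTPAdapter keeps at most 10 connections per host (pool_maxsize=10).
# A larger pool_maxsize lets more concurrent connections to fcm.googleapis.com
# be kept alive instead of being discarded with the warning seen above.
adapter = HTTPAdapter(pool_connections=10, pool_maxsize=100)
session.mount("https://", adapter)
```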
Any updates on letting messaging.send_each and messaging.send_each_for_multicast start fewer threads? It's not obvious to me why starting a thread per token in the multicast message is a good idea. The number of tokens that one wants to send the message to most likely has nothing to do with the desire for concurrency (which depends on system resources, etc.). Note that calling messaging.send_each_for_multicast multiple times (which is what I do now) is a sub-optimal solution. Maybe the SDK can be extended to allow us to configure the max_workers to use when calling messaging.send_each and messaging.send_each_for_multicast?
+1 on @mortenthansen's comment, we got bit by this: sending pushes in batches of 500 messages, only to realise len(input) was used as the pool_size for the ThreadPool. I really can't figure out a good rationale for this. Spawning 500 threads (which I don't think will always happen, since Python's ThreadPool reuses free ones before spawning new ones) isn't cost free and at the very least should be documented as such.
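As a rough illustration of the pattern being criticized (a sketch of the behavior described above, not the SDK's actual source), sizing the thread pool to the number of messages looks like this:

```python
# Hedged sketch of the pattern described above (thread pool sized to the input),
# not the SDK's actual implementation.
from concurrent.futures import ThreadPoolExecutor

def send_all(messages, send_one):
    # max_workers equal to len(messages) means 500 messages -> up to 500 threads,
    # while the underlying HTTP connection pool only keeps ~10 connections alive.
    with ThreadPoolExecutor(max_workers=len(messages)) as executor:
        return list(executor.map(send_one, messages))
```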
@filippotp does this mean that send_each_for_multicast with 500 messages will send all push notifications, regardless of the warnings (i.e. it will not discard 490 messages out of 500 and send only 10 per batch)? Thank you for your answer in advance :)
@milemik I think you are referring to @Klemenceo's comment. Anyway, in my personal case the issue has never occurred again after updating firebase-admin to 6.3.0 and definitively migrating to `messaging.send_each_for_multicast()`.
Yes, sorry, the question was for @Doris-Ge :) Thank you 😄
I have version 6.5.0 and I can see these warnings in the logs of my Django (Celery) project. Also, because of the 500 spawned threads, the workers consume 100% of the machine's CPU, which affects other workers. What can be the solution for this? @milemik @filippotp
Hi @ankit-wadhwani-hb, to be honest I didn't benchmark this, but it does make sense that the threads will use 100% of the CPU; maybe this is expected. What matters is that the CPU goes back to normal after sending push notifications, and hopefully no notifications will be lost. I will get back to you when I test it. Thank you for this notice!
The same thing is happening to me: CPU utilisation goes to 99-100%, and as soon as I restart the process the CPU drops significantly.
Quick update: if I change the batch size from 500 to 10 it works properly, but I'm not sure if it is the right way to do it.
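For anyone landing here, a minimal sketch of that chunking workaround (the chunk size and helper name are illustrative, not SDK constants):

```python
# Hedged sketch of the chunking workaround discussed above.
# CHUNK_SIZE is an illustrative value, not an SDK constant.
from firebase_admin import messaging

CHUNK_SIZE = 10  # keep concurrency at or below the default connection pool size

def send_multicast_in_chunks(tokens, notification):
    responses = []
    for i in range(0, len(tokens), CHUNK_SIZE):
        chunk = tokens[i:i + CHUNK_SIZE]
        message = messaging.MulticastMessage(tokens=chunk, notification=notification)
        responses.append(messaging.send_each_for_multicast(message))
    return responses
```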
@Jay2109 Well, I don't think this is a good fix... It means we need to use more resources when sending push notifications (at least in my logic, more tasks will be triggered 😄).
What solution have you implemented currently?
@Jay2109 @milemik - any other solution? Currently I am batching 10 messages at a time; otherwise it consumes the full server CPU + memory and hampers other services.
The batch of 10 also doesn't hold up for long; after some time the CPU increases again.
Well @ankit-wadhwani-hb, to be honest I'm not sure... I hope the developers of this library are aware of this issue, and I expect to hear some answer from them.
@milemik - Currently there are about 20 scheduled bulk notifications that go daily to 100,000 users. I am using Celery with Python to push the notifications into an SQS queue, and running a supervisor worker that consumes the messages from the queue and sends the notifications.
OK, and how many push notifications do you send in one task? If you are sending them all from a single task, maybe you can optimise by not sending 100,000 in one Celery task but splitting the work into more, smaller tasks (see the sketch below). Anyway, let's wait for someone from the development team to give us some answers 😃
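A minimal sketch of that fan-out idea, assuming a Celery setup like the one described (task names, chunk size, and the notification fields are illustrative assumptions):

```python
# Hedged sketch: fanning a large send out into many small Celery tasks.
# Task name, chunk size, and the send helper are illustrative assumptions.
from celery import shared_task, group
from firebase_admin import messaging

CHUNK_SIZE = 100  # tokens per task; tune to keep CPU and pool usage reasonable

@shared_task
def send_chunk(tokens, title, body):
    message = messaging.MulticastMessage(
        tokens=tokens,
        notification=messaging.Notification(title=title, body=body),
    )
    return messaging.send_each_for_multicast(message).success_count

def fan_out(all_tokens, title, body):
    # One small task per chunk instead of a single task spawning 100,000 threads.
    chunks = [all_tokens[i:i + CHUNK_SIZE] for i in range(0, len(all_tokens), CHUNK_SIZE)]
    return group(send_chunk.s(chunk, title, body) for chunk in chunks)()
```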
I am not able to send 500 in a batch; after some time it eats up my CPU.
Any news? The problem is with send_each... send_multicast worked well!!
If there is any update, or if this is fixed in a beta or in some other language, please update here.
I think this is more of a urllib3 constraint... and there are some ways to update this number... I would like to hear some answer from the maintainers... 😅
2025 and the problem persists...
+1 - I think it is a very small change to allow the caller to set the thread pool size (max_workers).
Hi, I had this issue too. To work around it, I initially handled the requests manually: I built an async solution using httpx.AsyncClient and sent the Firebase notification requests in batches. This significantly reduced CPU usage and improved the sending speed. However, there's now a better solution available directly in the Firebase Admin SDK. A new async feature has been merged (but not officially released yet) that adds a send_each_async method. You can use it by installing the SDK from the specific commit.
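For reference, a rough sketch of the manual async workaround described above, assuming the public FCM HTTP v1 endpoint and a service-account token obtained via google-auth (the project ID, key path, batch size, and helper names are illustrative, not from this thread):

```python
# Hedged sketch of the manual async workaround described above, not the SDK's
# send_each_async. Assumes the public FCM HTTP v1 endpoint and google-auth.
import asyncio
import httpx
from google.oauth2 import service_account
from google.auth.transport.requests import Request

SCOPES = ["https://www.googleapis.com/auth/firebase.messaging"]
PROJECT_ID = "my-project"  # illustrative
FCM_URL = f"https://fcm.googleapis.com/v1/projects/{PROJECT_ID}/messages:send"

def get_access_token(key_path="service-account.json"):
    # Obtain a short-lived OAuth2 token for the FCM v1 API.
    creds = service_account.Credentials.from_service_account_file(key_path, scopes=SCOPES)
    creds.refresh(Request())
    return creds.token

async def send_batch(tokens, title, body, batch_size=50):
    headers = {"Authorization": f"Bearer {get_access_token()}"}
    async with httpx.AsyncClient(headers=headers, timeout=30) as client:
        results = []
        for i in range(0, len(tokens), batch_size):
            chunk = tokens[i:i + batch_size]
            # One POST per token, but only batch_size requests in flight at once.
            tasks = [
                client.post(FCM_URL, json={"message": {
                    "token": t,
                    "notification": {"title": title, "body": body},
                }})
                for t in chunk
            ]
            results += await asyncio.gather(*tasks, return_exceptions=True)
        return results
```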
@horrhamid you are right! We are actively working on optimizing the current send_each APIs. We will track the progress here: #788
I'm using FCM to send multicast messages from my Django app running on Heroku.
Since the `messaging.send_multicast()` function is deprecated, I changed my code to use `messaging.send_each_for_multicast()`, as stated in the deprecation warning.
After pushing the new code to production, I often see multiple warnings in the Heroku application logs when the app sends messages:
WARNING:urllib3.connectionpool:Connection pool is full, discarding connection: fcm.googleapis.com. Connection pool size: 10
Editing my code back to use `messaging.send_multicast()` seems to solve the issue.