HTTP work queue depth exceeded

Hello
Today I got the following error on my Firo masternode.
I do not understand the error.
What could be the reason?
What is tuned or configured incorrectly on my masternode?
Why was this problem resolved automatically without my intervention?
Can this problem happen again?
How can I solve this issue in the future?
What would be the correct value of -rpcworkqueue?
How can I apply -rpcworkqueue?
Can I apply this parameter in the Firo config file?
Thanks for the support.

Log:

[nodename.firo]$ tail -3000 debug.log | grep WARNING | head -n2
2025-02-15 19:45:09 WARNING: request rejected because http work queue depth exceeded, it can be increased with the -rpcworkqueue= setting
2025-02-15 19:45:11 WARNING: request rejected because http work queue depth exceeded, it can be increased with the -rpcworkqueue= setting
[nodename.firo]$ tail -3000 debug.log | grep WARNING | tail -n2
2025-02-15 19:53:07 WARNING: request rejected because http work queue depth exceeded, it can be increased with the -rpcworkqueue= setting
2025-02-15 19:53:08 WARNING: request rejected because http work queue depth exceeded, it can be increased with the -rpcworkqueue= setting

Has this caused crashes on your masternode or a PoSe score increase?

The work queue depth can be increased by adding rpcworkqueue=VALUE to firo.conf. The default value is 16.
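For example, a minimal sketch of the change, assuming the default data directory ~/.firo and the standard firod/firo-cli binaries (adjust paths and service commands for your own setup):

# add to ~/.firo/firo.conf
rpcworkqueue=32

# then restart the daemon so the new value is picked up
firo-cli stop
firod -daemon

Alternatively, the same value can be passed at startup as -rpcworkqueue=32, which is the form the warning message refers to.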

Hello @anwar
Thanks a lot for your help.
No, there was no crash or PoSe score increase. From this point of view everything is fine. I just hope it does not happen again for longer than 8 minutes, which would end in a PoSe ban.
Do you know why this could happen? Was it triggered externally? Do you think it would be good to increase this value to e.g. 32 to protect my masternode in the future?

You can try increasing it to 32.

Not sure what is causing this issue. It cannot be triggered externally unless you have allowed external IP addresses to access the local RPC interface (the P2P networking does not use RPC).
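If you want to double-check that RPC is only reachable locally, a quick sketch, assuming Firo follows the usual Bitcoin Core-style options it inherits:

# lines like these in firo.conf would expose RPC to other hosts; remove or tighten them if present
# rpcallowip=0.0.0.0/0
# rpcbind=0.0.0.0

# show which addresses the daemon is actually listening on
ss -tlnp | grep firod

By default the RPC interface binds only to localhost, so without such entries an external trigger is very unlikely.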

Are you running anything else on the masternode?

Yes, a PIVX MN.
I will increase it to 32 if it happens again.
Thanks @anwar