IMHO it should rather be something like:
1000CPS in ALERT, 1000CPS in NORMAL, 1000CPS in BULK:
- ALERT gets 600CPS
- NORMAL gets 300CPS
- BULK gets 100CPS
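The proposed split above amounts to a fixed weighted allocation. A minimal sketch of the idea in Python (the 60/30/10 weights and the 1000 CPS total come from the example itself; the function name is mine, not anything in the library):

```python
# Hypothetical weighted split: each priority gets a fixed share of the
# total bandwidth when all priorities are saturated.
WEIGHTS = {"ALERT": 6, "NORMAL": 3, "BULK": 1}

def weighted_split(total=1000):
    """Divide `total` CPS among priorities proportionally to WEIGHTS."""
    denom = sum(WEIGHTS.values())
    return {prio: total * w // denom for prio, w in WEIGHTS.items()}
```

With the example numbers this yields 600/300/100 CPS for ALERT/NORMAL/BULK.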
And what if my raid combat addon that's using ALERT priority doesn't need more than 200 CPS, but for some reason GEM or DamageMeters (put any other example here, I'm just naming it because I know it syncs a lot) decides to do a main sync at that moment and asks for 10,000 CPS? Will the ALERT group then be throttled to even less than 200 CPS?
The priority allocations are certainly up for discussion.
What it basically does now is find a common limit that it then applies to all priorities (a priority demanding less than that limit keeps its full demand; the rest are capped at the limit so the total fits in the budget). Another two examples then:
BULK: 10000CPS, NORMAL: 200CPS, ALERT: 400CPS
- Bulk 400
- Normal 200
- Alert 400
BULK: 10000CPS, NORMAL: 300CPS, ALERT: 400CPS
- Bulk 350
- Normal 300
- Alert 350
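The common limit in the examples above can be found with a water-filling (max-min fair) computation. A hedged sketch in Python, assuming a 1000 CPS total budget; the function names are mine, not the actual library code:

```python
def common_limit(demands, total=1000):
    """Find the per-priority cap such that sum(min(demand, cap))
    over all priorities uses up `total` (max-min fair allocation)."""
    remaining = total
    pending = sorted(demands)
    while pending:
        cap = remaining / len(pending)
        if pending[0] <= cap:
            # Smallest demand fits under an even split: grant it fully
            # and redistribute the leftover among the others.
            remaining -= pending.pop(0)
        else:
            # Everyone left is saturated and shares the same cap.
            return cap
    return float("inf")  # total demand fits; no cap needed

def allocate(demands, total=1000):
    cap = common_limit(demands, total)
    return [min(d, cap) for d in demands]
```

Running this on the two examples reproduces the 400/200/400 and 350/300/350 splits shown above.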
Also, in my personal view: any messages with BULK priority should have 0CPS until there's no ALERT queue. I don't want a non-combat guild bank addon to throttle my raid combat addons in the middle of a boss fight.
I don't quite agree. Imagine KLHTM saying "OMG MY MESSAGES ARE SO DAMN IMPORTANT" and putting them in "ALERT" priority. (Pretty likely, every addon author thinks that his own messages are the most important of all).
Suddenly all non-alert traffic is shafted. All of it.
This is why I didn't go with a straight priority override but rather attempt to share available bandwidth over the priorities. But, as I said, this algorithm can certainly be tweaked to allocate more or less bandwidth to different priorities.
My original thinking was simply that a sane author simply WOULDN'T be sending that much alert traffic. So having the ALERT priority limited to 333 CPS in a worst-case scenario (full flood in NORMAL and BULK) would then never pose a problem?
Agree or disagree here?
Algorithm? Hmm... okay, I'll give it a quick try. It will look damn simple; the actual interactions over time are not. I've designed and implemented a commercial-grade network traffic shaper at work, so I've learned a thing or two about how to optimize this stuff :-P
Basically, what happens is that on each timer pulse, I compute 1000*time_since_last, and give an equal slice of it to all priority queues that have data waiting to be transmitted. They then proceed to use up as much of that slice as possible. Possible spillover or unused bandwidth (because the next message was too large to deliver) is kept until the next tick.
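That per-pulse logic could be sketched roughly like this (a Python illustration of the mechanism described above; the `Shaper` class, the `send` callback, and the plain list-of-lists queues are my own scaffolding, not the actual addon code):

```python
import time

class Shaper:
    CPS = 1000  # allowed characters per second

    def __init__(self, queues, send):
        self.queues = queues                  # one message list per priority
        self.allowance = [0.0] * len(queues)  # per-queue spillover budget
        self.send = send
        self.last = time.time()

    def pulse(self, now=None):
        """Called on each timer pulse."""
        now = time.time() if now is None else now
        budget = self.CPS * (now - self.last)  # 1000 * time_since_last
        self.last = now
        # Give an equal slice to every queue that has data waiting.
        active = [i for i, q in enumerate(self.queues) if q]
        if not active:
            return
        slice_ = budget / len(active)
        for i in active:
            # Unused bandwidth from previous ticks carries over.
            self.allowance[i] += slice_
            q = self.queues[i]
            # Deliver messages while they fit in the allowance; a message
            # that is too large waits, and its budget is kept for next tick.
            while q and len(q[0]) <= self.allowance[i]:
                msg = q.pop(0)
                self.allowance[i] -= len(msg)
                self.send(msg)
```

Note how a message that doesn't fit simply stays queued while its slice accumulates, which is the spillover behavior described above.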
Obviously, this is a bit simplistic compared to what I'd do for multi-gigabit shaping over a larger number of queues. Spillover would need to be handled differently, and I would probably want to keep a decaying average ticking to take the spikes out of it. But this works well enough for these purposes and, above all, is very light on CPU and mem crunching.