I have never seen the error on a reloadUI, but I also don't raid in 3.3. Unfortunately my hands are really tied here. There is no way I can write an addon around unknown, weird memory allocation limits. This is a relatively new phenomenon (perhaps since 3.0.x), because Recount used to be much more wasteful with memory without causing the issue.
I do recommend deleting data when logging out. I cannot really do much about other cases without crippling desirable Recount functionality.
You are exactly right. What you want is that after a DC or reloadUI all data is still there. For that reason I cannot delete data.
On data size, I have already gone through multiple passes of optimizing Recount. I will not hide that for me speed > size, and risking poorer in-raid performance for the sake of smaller data is not something I'm willing to do. Skada's data storage isn't particularly more optimized than Recount's; Skada just collects less detail and offers less functionality.
Both Recount and Skada are squeezed for in-raid computational performance in 25-mans. There aren't a lot of spare computational cycles to massage data into taking less space. While there may still be micro-optimizations to be had, I frankly do not have the time to go through that exercise, and it would be a lot of time spent for likely very little gain. And even if I spent the time, there is no guarantee at all that it would fix the issue you describe.
Recount gives users many handles to control data size accumulation. I would suggest using them. Here are the two factors that by far contribute the most to data size accumulation:
1) Time data. Turn this off unless you specifically want to test something.
2) Past fights. The default is 5. I strongly advise against turning this higher unless you can live with the consequences. It is sensible to turn it down even further.
Further, less potent things to do:
3) Disable death tracking for any mobs that you do not regularly check deaths for. If you do not ever check death data, disable this.
4) Disable all data tracking for trash mobs; I think this is the default setting, though.
Things that will help, but only minimally:
5) Disable unused modules. Almost all modules that can currently be disabled have a minimal memory footprint, though, so I only recommend this if one is a purist or desperate.
Unfortunately, to offer functionality, meters have to store the data. There is no real way around it. If people want descriptive death data, I have to store it. So it's up to you whether you can live with less info stored, or with the error Blizzard throws.
Now what I could do is delete data on logout. I would hate to do this, but it would avoid the error message. However, it would stifle functionality in the cases where the data does allocate as intended, such as people retaining data after a DC. Frankly, the good solution is to lobby Blizzard to improve their allocation method, perhaps back to how it used to be.
Would it be possible to do a dump of stats per fight to a flat file (e.g. CSV) to allow longer-term analysis using external tools such as Excel? I would really like to be able to compare my performance on certain bosses over time.
No; it's not possible for an addon to write to any file other than its own saved variables file. You could either have an in-game addon show a frame containing the data in CSV format so you can copy it and paste it into Excel, or you could have an external program that reads data from Recount's saved variables file and appends it to another file in another format. Either way, I very much doubt Elsia has the time or interest to write something like that.
Have you looked at any of the existing external programs that analyze the combat log?
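For what it's worth, the in-game copy/paste approach is not much code. Here is a minimal sketch; GetFightRows() is a hypothetical stand-in for pulling per-fight numbers out of whatever meter data you actually have, and the frame is deliberately bare-bones (Recount's real internals are structured differently):

```lua
-- Rough sketch of a copy/paste CSV export frame (not part of Recount).
-- GetFightRows() is a made-up placeholder returning { boss, player, dps } rows.
local function GetFightRows()
    return {
        { "Saurfang", "PlayerA", 5123.4 },
        { "Saurfang", "PlayerB", 4890.1 },
    }
end

local function BuildCSV()
    local lines = { "boss,player,dps" }
    for _, row in ipairs(GetFightRows()) do
        lines[#lines + 1] = string.format("%s,%s,%.1f", row[1], row[2], row[3])
    end
    return table.concat(lines, "\n")
end

-- A bare multi-line edit box; pre-highlighting the text lets you Ctrl+C it
-- straight into Excel or a text editor.
local frame = CreateFrame("Frame", "RecountCSVExportFrame", UIParent)
frame:SetWidth(400)
frame:SetHeight(300)
frame:SetPoint("CENTER")

local editBox = CreateFrame("EditBox", nil, frame)
editBox:SetMultiLine(true)
editBox:SetFontObject(ChatFontNormal)
editBox:SetAllPoints(frame)
editBox:SetAutoFocus(false)
editBox:SetText(BuildCSV())
editBox:HighlightText()
```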
Now what I could do is delete data on logout. I would hate to do this, but it would avoid the error message. However, it would stifle functionality in the cases where the data does allocate as intended, such as people retaining data after a DC.
You could just add an option to delete data on logout. That would allow those who do collect lots of data to avoid memory allocation errors on login without having to remember to manually delete their data before logging out, while not forcing everyone to lose data when logging out.
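The mechanics of such an opt-in wipe would be tiny. A sketch, assuming a hypothetical RecountDB saved-variables table with a deleteOnLogout flag (these are not Recount's actual names or layout):

```lua
-- Sketch of an opt-in wipe on logout. "RecountDB", "deleteOnLogout",
-- "combatants" and "fights" are placeholder names, not Recount's real data.
local f = CreateFrame("Frame")
f:RegisterEvent("PLAYER_LOGOUT")
f:SetScript("OnEvent", function()
    if RecountDB and RecountDB.deleteOnLogout then
        -- Keep settings, drop only the bulky combat data before it is written out.
        RecountDB.combatants = nil
        RecountDB.fights = nil
    end
end)
```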
It's a waste, though. The memory allocation error does nothing but prevent Recount's SV from loading, so it already acts as a reset. If I add the option, it will prevent the error from popping up for those who don't want the error message, sure, but it will still delete data for everybody.
Unfortunately my time is currently extremely limited. I'd love to research the exact causes of the error (it's a core Lua engine error) and why it can occur for certain SV files when the data clearly fit into the memory it was originally saved from.
If it were clear to me what exactly makes an SV table offensive, there might actually be constructive ways to handle this, or at least some certainty that there really is no good solution. If anybody has deep insight (or actual time to track something like this down), it would be much appreciated.
It's a waste, though. The memory allocation error does nothing but prevent Recount's SV from loading, so it already acts as a reset. If I add the option, it will prevent the error from popping up for those who don't want the error message, sure, but it will still delete data for everybody.
Erm, how does an option force everyone (or anyone) to delete their data? Have it disabled by default. If someone frequently has memory allocation errors, they can enable the option...
If it were clear to me what exactly makes an SV table offensive, there might actually be constructive ways to handle this, or at least some certainty that there really is no good solution. If anybody has deep insight (or actual time to track something like this down), it would be much appreciated.
I'd prefer some help with this over bickering about how best to do band-aid solutions. I have an idea for a band-aid, but I won't be working on this for a week because I have no time right now (frankly I don't even have time to post, but alas).
If in that week there is a way to track down and potentially fix the real issue, that'd be much, much preferable to me. See the quoted text for the original request for help and participation.
Asking how an option forces anyone to do anything is bickering? Okay...
Anyway, if you don't want to make changes to the actual data storage format, and assuming you don't have a way to convince Blizzard to revert their change to the way the game loads tables, then really the only "solution" I can think of is to split the data into more than one table so that no individual table is large enough to hit the (unknown) limit. There's nothing inherently "offensive" about a saved variables table... if you try to define a similarly sized table in a single operation from somewhere other than a saved variables file, you'd probably see the same error. In fact, addons can and do trigger memory allocation errors without saved variables being involved; see the dozens of threads on this and other forums pointing fingers at CowTip.
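To make the splitting idea concrete, here is a sketch. All the names (RecountChunk1..8, MAX_FIGHTS_PER_CHUNK) are invented, each chunk variable would need its own entry under "## SavedVariables:" in the .toc, and whether smaller top-level tables actually dodge the allocation error is exactly the open question:

```lua
-- Sketch: scatter one big fight list across several smaller top-level saved
-- variables on logout, and stitch them back together after loading.
-- RecountChunk1..RecountChunk8 are illustrative names only.
local MAX_FIGHTS_PER_CHUNK = 500
local MAX_CHUNKS = 8

local function SplitForSave(fights)
    for c = 1, MAX_CHUNKS do
        _G["RecountChunk" .. c] = nil          -- clear stale chunks
    end
    local chunk, count = 1, 0
    _G["RecountChunk1"] = {}
    for _, fight in ipairs(fights) do
        if count >= MAX_FIGHTS_PER_CHUNK and chunk < MAX_CHUNKS then
            chunk, count = chunk + 1, 0
            _G["RecountChunk" .. chunk] = {}
        end
        table.insert(_G["RecountChunk" .. chunk], fight)
        count = count + 1
    end
end

local function MergeAfterLoad()
    local fights = {}
    for c = 1, MAX_CHUNKS do
        local part = _G["RecountChunk" .. c]
        if part then
            for _, fight in ipairs(part) do
                table.insert(fights, fight)
            end
        end
    end
    return fights
end
```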
Phanx, I don't know where you picked up the forcing thang... I'm just trying to keep an addon maintained here... Peace, OK?
Anyway, I just did some tests, and Recount works fine with my old stored BT raid night SV. That one corresponds to 20MB of addon memory, and it loads fine without error. Then, to create a very large SV file, I turned all data collection to max and created a whopping 360MB SV just running 5-man instances. This does indeed trigger the error when relogging. However, this is by no means normal or desirable: full time data and death records for every random critter, etc. No one should ever want to run with such settings.
If a normal raid night (say 5 hours' worth) accumulates more than 50MB of addon memory with Recount in a 25-man raid, I'd be worried. As said, my own stored file for a full BT raid night was 20MB (a raid in the 4-5 hour range), and my own expectation for a raid night is in the 20-30MB range. This is with default settings.
Zidomo reported that he gets the issue even early in a raid. I cannot understand how that happens unless his accumulation settings are such that they grow the data very quickly, in which case the fix is to find saner settings. As a side note, this clearly isn't the CowTip issue that people reported, because it has solely to do with loading the SV (in a consistently reproducible way).
Full time data and death records for every random critter, etc. No one should ever want to run with such settings.
Well, I'd have to ask — if those settings shouldn't ever be enabled, why are they available to enable? If a setting is available, you've got to expect that someone is going to enable it, and plan accordingly. If there's just no way to have those settings enabled without bloating the saved variables beyond the size limit, they should be removed.
Alternatively, you could store certain types of data in their own tables. For instance, death records and time data could each have their own table, since those data types aren't really mixed with other data types (as far as I know; I don't use either module) in any view.
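In practice that would just mean registering extra saved variables and letting each module write into its own table. A sketch with invented names (not the modules' real tables):

```lua
-- In the .toc (illustrative names only):
--   ## SavedVariables: RecountDB, RecountDeathDB, RecountTimeDB
-- Each module writes only into its own table, so death records and time data
-- end up as separate top-level tables in the SV file instead of one giant one.
RecountDeathDB = RecountDeathDB or {}

local function RecordDeath(victimName, deathLog)
    RecountDeathDB[victimName] = RecountDeathDB[victimName] or {}
    table.insert(RecountDeathDB[victimName], deathLog)
end
```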
Well, I'd have to ask — if those settings shouldn't ever be enabled, why are they available to enable?
Well, that isn't quite the case, though. In a car you may not want to turn on the wipers when the sun is out, yet the car does not prevent you from doing it anyway. Why the heck does the car allow this?!
Well, the answer is that some settings are situational, but detecting the matching situation is hard and left up to people, so the technology allows it, which means that users can screw up. But that's life.
On the practical matter I'll have to wait for more input from Zidomo to know what's going on.
Frankly I'm not particularly concerned about a normal raid day requiring a delete at the end of the day to avoid an error at login. I'm much more concerned about reloadUIs and disconnects early in a raid day causing the issue. Have you observed this?
The main piece of information of interest is what Recount's addon memory usage is when you see the effect, and how long it took to get there.
I don't get d/c'ed often. The last time it happened was at the end of ICC (what's available of it anyway), after killing Deathbringer Saurfang and watching the scripted event. It took quite a long time for the game to kick me out, as it was writing all the SVs, including Recount's. That was the first or second week after ICC had come out, so it was a long raiding night with many attempts on bosses before finally clearing, which made for a lot of data to write. Plus the game had been running for several hours, which probably didn't help the time it took for WoW to kick me out cleanly.
Anyway, I finally got kicked out to the login screen, tried relogging, and the game crashed. As in wow.exe crashing, not a Lua error. Tried again, same thing. I deleted Recount's SV for that character and I was able to log back in. That's the worst I've ever had. Usually it's a Lua error.
As to memory usage, I guess I could check the memory usage after every boss fight (or after every 5-man), then do a /reloadui and hope for a failure?
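Checking the number Elsia is asking about only takes a couple of API calls; something like the snippet below run after each boss would do (the slash command name is just an example, not anything Recount ships with):

```lua
-- Print Recount's current addon memory usage in MB.
SLASH_RECOUNTMEM1 = "/recountmem"
SlashCmdList["RECOUNTMEM"] = function()
    UpdateAddOnMemoryUsage()                  -- refresh the per-addon numbers
    local kb = GetAddOnMemoryUsage("Recount") -- returned in kilobytes
    DEFAULT_CHAT_FRAME:AddMessage(string.format("Recount memory: %.1f MB", kb / 1024))
end
```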
Frankly I'm not particularly concerned about a normal raid day requiring a delete at the end of the day to avoid an error at login. I'm much more concerned about reloadUIs and disconnects early in a raid day causing the issue. Have you observed this?
The main piece of information of interest is what Recount's addon memory usage is when you see the effect, and how long it took to get there.
Why not concerned? Some/many users may like to review raid/instance data after logging out and coming back. With deleted data, that can't happen.
Also, the main info you need is unfortunately not going to be available. When the error occurs right at logon, the saved variable data is deleted. When the saved variable data is deleted, you are not going to be able to see the memory usage of the data that created the error.
You could, perhaps, do as Shadoweric suggested. A giant amount of testing work, though. What are you trying to determine...the unknown average memory usage cut-off point for regular errors/data deletions? In order to look at redoing how/how much data is stored?
Completing the first two bosses in ICC, I have a saved variables file for a character of over 32 MB. This was doing the bosses and the initial trash. And this is with time data for everything turned off, the fight segment maximum set to only 5 (which, frankly, is often not enough to be useful; 10 is more reasonable), no mob data at all set to be recorded (only players), and no player buffs or debuffs set to be recorded. All four separate Recount modules enabled (GuessedAbsorbs, DeathTrack, Failbot & Threat).
Have memories of seeing saved variables files over 100MB in size for various runs in the past month with the same options. May or may not do more hardcore investigating; depends on what/why/how you need the memory usage statistics.
As a side note, it is annoying to work around these issues. Clearly, tables that can be allocated and written should be loadable, but that's no longer the situation we are given. Ultimately, what we are looking at here is working around a bug that isn't ours, with causes that aren't yet fully understood.
The reason why I am less concerned about resets at the end of raid days is that, while I agree it's nice to have the data the next day, at least there is user choice involved. Hypothetically, if you know that Blizzard just doesn't allow very large parses to be stored anymore, you can plan and inspect the data before logout, and perhaps take screenshots. Yes, this isn't ideal either, but at least one has some options to cope and use the information in sensible ways.
DCs, however, are a different story. There the user has no choice whatsoever, so lost data completely destroys the chance for inspection. There are no tricks to cope with it, so this is just the worst case. To me that is drastically worse, which is why I am much, much more concerned about deletes due to DCs than after logins.
Yes, I do want to get a sense of where the cut-off is, or, if it's not a cut-off based on table size, perhaps which table configurations (likely numeric key gaps) cause the effect.
Ultimately I want to fix the problem if I can. If I cannot fix it, I want to come up with a sensible solution. To be able to do that I first need to understand what causes the problem. This is what I'm asking for input on, so any data points that sharpen the picture of what happens here help.
For example, if you have an SV of size X that causes the problem and you can actually make a copy before logout and report the file size, that'd be very helpful.
Table size matters because it's the one variable that has so far allowed me to reproduce the problem, i.e. small data loads fine, large data does not.
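To make the "numeric key gaps" suspicion concrete, the idea is sparse fight lists like the made-up fragment below ending up in the SV file, since Lua treats gappy arrays differently from dense ones when it rebuilds them at load time:

```lua
-- Made-up fragment of what a gappy fight list in a SV file could look like.
-- Pruning old fights leaves holes in the numeric keys, so the table is no
-- longer a dense array when it is re-created on load.
RecountExampleFights = {
    [1] = { boss = "Fight 1" },
    [2] = { boss = "Fight 2" },
    -- [3] and [4] were pruned as old fights
    [5] = { boss = "Fight 5" },
}
```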
Btw 100MB is large. Is this only since 3.3 or also before? Does the 3.3 combat log report much more info than before?
Since 3.3, with the options & modules as noted. I can look further back with my backed-up saved variables to see if there are any of similar size prior to 3.3, but that will take time.
Elsia, would it be feasible for you to write a dummy addon that auto-fills a table similar in structure to Recount's with random data, saves it, then tries to load it at logon again? That might give you some idea of the cutoff point.
Other than that, I'll see if I can catch a file causing the problem.
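Along the lines of that suggestion, a throwaway test addon could be as small as the sketch below. RecountStressDB is a made-up saved variable (declared in the test addon's own .toc), and the record layout only loosely imitates Recount's:

```lua
-- Throwaway stress test: fill a saved variable with random Recount-like
-- records, then relog or /reload and see whether it loads back without the
-- allocation error. Requires "## SavedVariables: RecountStressDB" in the .toc.
RecountStressDB = RecountStressDB or {}

SLASH_SVSTRESS1 = "/svstress"
SlashCmdList["SVSTRESS"] = function(msg)
    local n = tonumber(msg) or 10000
    for i = 1, n do
        table.insert(RecountStressDB, {
            name = "Mob" .. math.random(500),
            spell = "Spell" .. math.random(300),
            damage = math.random(100000),
            when = GetTime(),
        })
    end
    DEFAULT_CHAT_FRAME:AddMessage(#RecountStressDB .. " test records stored; relog to test loading.")
end
```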
As a Disc Priest, I'd like to see heals + absorbs. I saw the plugin on Curse but it doesn't seem to work. Could whoever's supporting that take a look and fix it?
RecountGuessedAbsorbs works without issue. It only shows absorbs, though, not absorbs + healing, so you will have to do the combined math yourself. And those meter-maids who judge "good healing" on throughput might have a difficult time comprehending it.
One issue with Broker (& FuBar) plugins that allow you to open the Recount frame to the data area of your choice: one or more of them totally ignore any additional modules such as this that you add. So you have to access it directly from within Recount if you use such a mod.
Examples:
FuBar_RecountFu. Provides full direct access to all data pages, including any you add that are not part of the base package. My favorite.
Broker_RecountInfo. Similar to the above, this is the only other one that provides full access to all Recount data pages, including separate modules you add individually. But on some LDB displays, the text is way too large, etc.
Ones that are limited:
Broker_RecountStatistics. Shows you new module mods in the tooltip, but greys them out so that you can't access them. Can only open to the original Recount pages.
Broker_Recount. The most basic: no LDB data info and no direct access to any data pages. Only the Recount frame with the last data page you had open.
cBroker:Recount. Like the above in that it only gives you access to the main Recount frame, not individual data pages. But it provides some data in the LDB display.
On the memory issue: I'm fairly sure I run with the default settings, but I'll double-check that when I can.
What kind of info do you need?
I assumed posting here would be the best way of reporting a bug with that particular plugin; I guess I should probably contact the author directly.