The first method uses 40 bytes for the table t, plus about 8 more bytes per entry, for roughly 120 bytes total. It then creates the final string "1:2:3:4:5:6:7:8:9:10" in a single C function call and assigns the reference to x. The table's memory isn't released unless the table reference is later removed or its contents are wiped. Lua converts the numbers to strings internally.
The second creates the strings "1", "1:2", "1:2:3", and so on, all the way up to "1:2:3:4:5:6:7:8:9:10". The first 9 strings eventually get garbage collected.
The first method is more memory-friendly, and it becomes much faster as x grows larger.
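For reference, a minimal sketch of the two approaches being compared (the original snippet isn't quoted in this thread, so the exact form here is an assumption):

```lua
-- Method 1: collect the pieces in a table, then join once with table.concat.
-- One table allocation plus one final string; no intermediate strings.
local t = {}
for i = 1, 10 do
    t[i] = i                        -- table.concat converts numbers itself
end
local x = table.concat(t, ":")      -- "1:2:3:4:5:6:7:8:9:10"

-- Method 2: build the string by repeated concatenation.
-- Every step allocates a new intermediate string ("1", "1:2", "1:2:3", ...).
local y = "1"
for i = 2, 10 do
    y = y .. ":" .. i
end
```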
Incremental means that it works in small steps, not in a single shot.
The mark and sweep algorithm is designed so that all objects that are collectable at the start of the cycle are collected at the end of the cycle.
Because the GC is incremental, some Lua code may execute between the start and the end of a collection cycle, and any garbage created during that execution won't be collected in the current cycle (but it will be in the next one).
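A small illustration of stepping the collector by hand with Lua's standard collectgarbage API (how much each step actually frees is an implementation detail, so treat the printed number as approximate):

```lua
collectgarbage("collect")                -- force a full cycle as a baseline
local before = collectgarbage("count")   -- current heap size in KB

local garbage = {}
for i = 1, 1000 do
    garbage[i] = { i }                   -- allocate some collectable tables
end
garbage = nil                            -- now unreachable, but not yet freed

collectgarbage("step")                   -- advance the collector one small step
-- Objects that became garbage after the cycle started may survive until
-- the *next* cycle completes.
print(collectgarbage("count") - before)
```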
I've got it seared into my brain that creating temp tables is bad for GC, but I guess it's worth doing to avoid creating tons of useless strings...
Creating a temp table IS bad for GC. But creating a few hundred temporary strings is worse.
Don't be afraid to use temporary tables. If you really want, you can also keep a table as an upvalue and call wipe() on it every time you want to reuse it. Note that calling wipe() a lot is actually worse for performance than just letting the GC run, though.
In general, wipe() is more costly than GC unless the table has more than 256 keys; past that point, GC becomes more expensive. Also note that hash-like tables can never be "shrunk back" after they're allocated, while array-like tables can. But a temp table is unlikely to get that big, so it's usually best to just let it be GC'd.
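A sketch of the reusable-upvalue pattern mentioned above; wipe() here is WoW's alias for table.wipe, and BuildKey is just a made-up example function:

```lua
local buf = {}  -- upvalue: allocated once, reused across calls

local function BuildKey(...)
    wipe(buf)                        -- clear leftovers from the previous call
    for i = 1, select("#", ...) do
        buf[i] = select(i, ...)      -- stash each argument in the array part
    end
    return table.concat(buf, ":")
end

-- BuildKey(1, 2, 3) --> "1:2:3", without allocating a new table each call
```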
The long and the short of it is, try different methods, understand the costs of each (both memory and CPU), and pick the one you're most comfortable with.
That doesn't sound right; see http://shadowed.pastey.net/117344. Maybe I did the test wrong, but the only one that comes close is table.concat without having to wipe/load values into the table.
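Since the pastey link may not survive, here's a hypothetical harness along the same lines in plain Lua (bench and the 10000-iteration count are assumptions about the test, not its actual contents; inside the WoW client you'd time with debugprofilestop() rather than os.clock):

```lua
local function bench(label, fn)
    local start = os.clock()
    for i = 1, 10000 do
        fn()
    end
    print(label, os.clock() - start)
end

bench("concat operator", function()
    local x = "1"
    for i = 2, 10 do x = x .. ":" .. i end
end)

bench("table.concat", function()
    local t = {}
    for i = 1, 10 do t[i] = i end
    local x = table.concat(t, ":")
end)
```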
One thing that might affect your results is that the temporary strings created by the self-concat method ("1", "1:2", etc.) aren't recreated after the first run. Not sure what impact this has, or how Lua manages internal string references like that.
You should probably throw something in there to break it up -- like maybe slip i into the concat as well.
Those weren't run all at once; I just put them all into one pastey. I rechecked the first one after having done a few reloads for other things, and it was still about 0.21-0.23s.
What I was getting at is that on the first iteration you'd end up with the 10 different strings generated by the concat method: "1", "1:2", "1:2:3", and so on.
On the 2nd iteration, those strings already exist, which really wouldn't be the case in a "real world" example where the data would more likely be unique. That seems to me to be a potential source of misleading results. I dunno, maybe I'm wrong, but I would think adding thousands of strings to the string table would have some impact.
And by iteration, I mean the 1 to 10000 loop, not the different techniques.
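To illustrate the suggested fix, something like this would force fresh strings on every pass (the exact arithmetic is just one way to keep the pieces unique):

```lua
for j = 1, 10000 do
    local x = tostring(j)              -- prefix with the outer counter, so
    for i = 2, 10 do                   -- every intermediate string differs
        x = x .. ":" .. (j * 10 + i)   -- across iterations of the outer loop
    end
end
```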