When I first read the API for SetBackdrop, I (naively) assumed that edgeSize controlled the visual thickness of the edges, and the insets were for calibrating the background area accordingly (i.e. not losing some background texture underneath the edge, and not showing some background outside a rounded edge corner).
Recently I discovered my mistake (partially explained here): the edge thickness is effectively "hard-coded" into the edgeFile texture (which is a pity, since it means the thickness can't be configured by the user without drawing a custom texture), and edgeSize is just a hint to the rendering engine about the scale at which that texture was drawn, so it knows how much to stretch the segments. This means that for any given edgeFile there is really only one edgeSize value that doesn't look terrible, and only one inset value that puts the background texture in the right place.
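For context, this is roughly the kind of call I'm talking about; the frame setup is just for illustration, and the numbers are the ones I lifted from the default UI rather than anything I worked out myself:

local f = CreateFrame("Frame", nil, UIParent)
f:SetSize(200, 100)
f:SetPoint("CENTER")
f:SetBackdrop({
    bgFile   = "Interface\\Tooltips\\UI-Tooltip-Background",
    edgeFile = "Interface\\Tooltips\\UI-Tooltip-Border",
    tile = true, tileSize = 16,
    -- edgeSize doesn't make the border thicker or thinner; it tells the client
    -- what size one edge tile was drawn at, so it knows how much to scale it
    edgeSize = 16,
    -- insets pull the background fill inward so it doesn't poke out past the
    -- rounded corners of this particular edge texture
    insets = { left = 4, right = 4, top = 4, bottom = 4 },
})
f:SetBackdropColor(0, 0, 0, 0.8)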
So how do you know what those magic values are? I based my original code on examples that used the tooltip edgeFile with an edgeSize of 16 and an inset of 4, which looks good; any other values look terrible. But what happens when I add LibSharedMedia support? Do I always use edgeSize = 16, inset = 4 for every edgeFile? Is there some way to find out what the correct values are for a given edgeFile?
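The best I've come up with so far is a hand-maintained lookup table keyed by the LibSharedMedia border name, falling back to 16/4 for anything I haven't eyeballed yet. Rough sketch only; the border names and numbers below are placeholders, not measured values:

local LSM = LibStub("LibSharedMedia-3.0")

-- Per-border "magic values"; keys are LSM border names, values are guesses
-- I'd still have to tune by hand for each texture.
local borderParams = {
    ["Blizzard Tooltip"] = { edgeSize = 16, inset = 4 },
    ["Blizzard Dialog"]  = { edgeSize = 32, inset = 11 },
}

local DEFAULT = { edgeSize = 16, inset = 4 }

local function ApplyBackdrop(frame, borderName)
    local p = borderParams[borderName] or DEFAULT
    frame:SetBackdrop({
        bgFile   = "Interface\\Tooltips\\UI-Tooltip-Background",
        edgeFile = LSM:Fetch("border", borderName),
        tile = true, tileSize = 16,
        edgeSize = p.edgeSize,
        insets = { left = p.inset, right = p.inset, top = p.inset, bottom = p.inset },
    })
end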
You can do some disastrous things by fiddling with those two parameters, including making the edges look more like a # than a square. I think you either have to use the size of the edge tiles, which you can compute from the file, or just look and see how it's used in another place. I don't think you can determine it programmatically, but I don't have a lot to base that conclusion on.
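One low-tech way to "look and see" is a throwaway slash command that re-applies the backdrop with whatever numbers you type, so you can eyeball candidates in-game. Untested sketch; MyTestFrame stands in for whatever frame you're experimenting on:

SLASH_EDGETEST1 = "/edgetest"
SlashCmdList["EDGETEST"] = function(msg)
    -- e.g. /edgetest 16 4  -> edgeSize 16, inset 4
    local e, i = msg:match("(%d+)%s+(%d+)")
    e, i = tonumber(e) or 16, tonumber(i) or 4
    MyTestFrame:SetBackdrop({
        bgFile   = "Interface\\Tooltips\\UI-Tooltip-Background",
        edgeFile = "Interface\\Tooltips\\UI-Tooltip-Border",
        tile = true, tileSize = 16,
        edgeSize = e,
        insets = { left = i, right = i, top = i, bottom = i },
    })
end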
Wouldn't you just use an edgeSize that corresponds to the pixel size of the image? If it's 32x32 blocks, use an edgeSize of 32. And the inset is totally dependent on the image content, no? I thought it was the amount of empty blank space (measured in pixels in the original texture) around the texture (so you can't click in the emptiness around a frame and have it register as being inside the frame).
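e.g. for a made-up border texture drawn as 32x32 tiles with roughly 6 transparent pixels baked in around the art, I'd expect something like this (path and numbers invented for illustration):

frame:SetBackdrop({
    bgFile   = "Interface\\Tooltips\\UI-Tooltip-Background",
    edgeFile = "Interface\\AddOns\\MyAddon\\media\\border32",  -- hypothetical 32px-tile texture
    tile = true, tileSize = 16,
    edgeSize = 32,  -- matches the pixel size the tiles were drawn at
    insets = { left = 6, right = 6, top = 6, bottom = 6 },  -- roughly the blank margin
})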