The game I am currently working on runs smoothly in the Pico-8 EXE, but when I run it in Chrome it has significant performance issues, often dropping frames and causing hitches.
Is this a known thing? Is there a way to help with this, or to diagnose it better?
A couple things:
1) I am running at 60 fps.
2) I do a LOT of dynamic allocation and discarding (it's an endless, procedural game).
3) stat(1) returns about 0.3 in the best case and 0.6 in the worst case. Does stat(1) account for the fps target, or is 0.6 at 60fps actually 0.1 over budget?
4) It runs totally smoothly in Edge, but buttons don't work properly (missing release events, I think) and the image is blurry (it's not using point filtering).
Thanks!
Here's the game:
I believe at 60fps the cpu value is doubled, so 1 is equal to 100% at 60fps (not 100% at 30).
Generating garbage during a frame is known to cause trouble in Pico-8 (it's a general issue with most GC'ed languages, and with any program that has hard timing constraints, like games). Unfortunately I can't find the thread about that right now; I think it was about particles. In a nutshell, it's better to reuse 'dead' objects than to free them and allocate new ones.
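A minimal sketch of that reuse idea, as I'd do it in Pico-8 Lua (the names `pool`, `spawn_particle`, and the particle fields are made up for illustration, not from the actual cart):

```lua
-- hypothetical particle pool: dead particles are
-- recycled instead of being handed to the gc
pool = {}

function spawn_particle(x, y)
 -- reuse a dead particle if one exists
 for p in all(pool) do
  if not p.alive then
   p.x, p.y, p.life, p.alive = x, y, 30, true
   return p
  end
 end
 -- only allocate when the pool is exhausted
 local p = {x=x, y=y, life=30, alive=true}
 add(pool, p)
 return p
end

function update_particles()
 for p in all(pool) do
  if p.alive then
   p.life -= 1
   if (p.life <= 0) p.alive = false -- flag it dead, don't del() it
  end
 end
end
```

After a brief warm-up the pool stops growing and steady-state play should make no allocations at all, which is the property that keeps the GC quiet.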
I guess your allocations are mainly for the new wall segments coming in at the bottom (and possibly the smoke particles)? If so, those are perfect candidates for ring-buffer data structures and alive/dead flags, respectively, which would probably go a long way towards reducing the number of temporary allocations you need to make. However, even a single temporary allocation now and then will eventually trigger a garbage collection cycle, so the only way to be totally stutter-proof is to hold on to everything you allocate. Easier said than done.
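For the wall segments, a ring buffer could look something like this (a sketch under my assumptions about the game; `max_segs`, `push_segment`, and the segment fields are hypothetical, not from the actual cart):

```lua
-- hypothetical ring buffer of wall segments: a fixed
-- array plus a head index, so scrolling never allocates
max_segs = 32
segs, head = {}, 0
for i=0,max_segs-1 do
 segs[i] = {y=0, gap_x=64} -- allocate everything up front
end

function push_segment(gap_x)
 -- overwrite the oldest segment instead of add()/del()
 local s = segs[head]
 s.y, s.gap_x = -8, gap_x
 head = (head+1)%max_segs
 return s
end
```

Since the oldest segment has scrolled off-screen by the time it's overwritten, no alive/dead flag is needed here; the fixed size just has to exceed the number of segments visible at once.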
In terms of help diagnosing it further, you could try rendering a graph of the previous x frames' worth of memory and cpu (or logging them to a file and using Excel). Unfortunately it's no longer possible to watch memory get collected using stat(0), but it might still help give you a rough idea of what's going on in there over time.
e.g.
-- tracks the previous 128 frames' worth of cpu/memory
renderstats, logframe, memstats, cpustats = false, 0, {}, {}

function _update()
 -- game goes here

 -- toggle graph rendering on/off with left arrow
 if (btnp(0)) renderstats = not renderstats
end

function _draw()
 cls()
 -- game goes here

 cpustats[logframe] = stat(1) -- doing this here means the graph drawing stuff isn't tracked (kinda)
 memstats[logframe] = stat(0) -- same deal, ish.
 logframe = (logframe+1)%128  -- acts as a ring buffer 'head'.

 if renderstats then
  for x = 0,127 do
   local index = (logframe+x)%128
   pset(x, 63, 8)   -- 100%
   pset(x, 127, 11) -- 0%
   if index <= #cpustats then
    pset(x, 63+(1-cpustats[index])*64, 7)       -- cpu in white
    pset(x, 63+(1-memstats[index]/1024)*64, 12) -- memory in blue
   end
  end
 end
end
Catatafish: Thanks for all that! I actually worked in XNA for a while, which had a notoriously brutal GC; essentially, you could not make any allocations outside of load time. So I definitely get what you're saying :)
I'll probably focus on general performance and on reusing particles to start, and see where that gets me.