A couple of devs are investing time into multicart games (5 or more data carts).
We are all put off by the artificial loading times (e.g. minutes).
Would it be possible for the fat client to either ignore load-time throttling, or only activate throttling in the published version?
I want the multicart multiverse, not loading screens ;)
I'd like to add my voice to this feature request. My Ultima 6 experiment in Pico 8 takes so long to load that developing it is super painful; we're looking at around 40 seconds on every restart.
It'd be nice if it were hidden in the config or behind a command-line switch, though, so regular users don't get confused about performance.
I thought multi-cart loading only added a couple of seconds of simulated swap time, not that long?
That said, I want to make "a multi-cart game" too, and my theoretical plan to get around this is to make the main cart essentially just a game engine that loads the game content from each cart, with each content cart being its own game.
Do you think doing it that way is feasible?
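That engine-plus-data-cart idea could look something like this minimal sketch, assuming a hypothetical layout where each world cart keeps its assets in the standard 0x0000-0x4300 ROM region (cart filenames here are made up):

```lua
-- engine cart: pull a world's data from a data cart into base ram.
-- "world1.p8" etc. are hypothetical filenames.
function load_world(n)
 -- copy the full 0x4300-byte rom region (gfx/map/sfx/music)
 -- from the data cart into the running cart's ram in one call.
 reload(0x0000, 0x0000, 0x4300, "world"..n..".p8")
end

load_world(1) -- engine now runs with world 1's assets
```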
I have a main "hub cart" that runs the title screen and loads into the other carts. So when you finish world 1, it loads the title cart, plays a cutscene there, and then loads the "world 2" cart. You can Ctrl+F for "JOS" to find the relevant loading code, as all the carts use that prefix.
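The hub-cart handoff described above is just chained `load()` calls; a hedged sketch (filenames hypothetical, `jos_goto` is an illustrative helper, not the actual code):

```lua
-- hub cart sketch: hand control to another cart entirely.
function jos_goto(cart)
 -- second argument is the breadcrumb label shown in the
 -- pause menu, letting the player jump back to this cart.
 load(cart, "back to title")
end

-- e.g. at the end of world 1:
-- jos_goto("jos_title.p8")
```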
Are you doing bulk reads, one cart at a time?
Or are you doing partial reads, possibly switching back and forth between carts?
I seem to recall there's extra overhead each time you select a different cart. I may be misremembering, but it's worth checking.
Five carts shouldn't take the better part of a minute to load, unless you have very complex decompression routines that push the limits of the Lua memory heap, forcing garbage collection to run again and again.
Edit: I just wrote a cart that loaded the full 0x4300 bytes from 10 other carts in a row; this is how it looked when I ran it:
Okay, I wrote a better test app that could do multiple reads and randomize them.
Here's doing a full cart per read:
Here's doing 256-byte pages per read, sequential within each cart:
Here's doing 256-byte pages per read, in a random order within each cart:
Here's doing 256-byte pages per read, in a random order across all carts:
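The four tests above just vary the read size and the read order; the worst case, random 256-byte pages across all carts, can be sketched like this (data cart names hypothetical; 0x4300 bytes = 67 pages of 256 bytes):

```lua
-- build the full list of (cart, page) pairs across 10 data carts
reads={}
for c=1,10 do
 for p=0,66 do
  add(reads,{cart="data"..c..".p8",page=p})
 end
end

-- fisher-yates shuffle using pico-8's rnd()
for i=#reads,2,-1 do
 local j=flr(rnd(i))+1
 reads[i],reads[j]=reads[j],reads[i]
end

-- perform the 256-byte reloads in random order,
-- hopping between carts on almost every read
for r in all(reads) do
 reload(r.page*256, r.page*256, 256, r.cart)
end
```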
This is pretty much what I expected:
- There's a bit of overhead per read
- There's a lot of overhead per cart switch
So try to buffer your reads into larger chunks, preferably of size 0x4300, and don't skip back and forth between carts.
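The buffering advice boils down to preferring one full-range `reload()` over many small ones from the same cart, e.g. (filename hypothetical):

```lua
-- slow: 67 separate 256-byte reads, paying per-read overhead
for page=0,66 do
 reload(page*256, page*256, 256, "data1.p8")
end

-- faster: one bulk read of the whole 0x4300-byte region
reload(0x0000, 0x0000, 0x4300, "data1.p8")
```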
Edit: By the way, the hitches you see there aren't my cart's fault; it's the hardware/OS doing that, and it seems to be related to switching carts. If I run that final test but intercept the requests and make them all come from data1.p8, it looks like this:
Notice the cart icon in the corner only comes up and spins once.
@freds72 Yeah, I understood what you and eniko are doing (getting cart data via reload(), right?). My response was to Guard13007 who seemed to want a cart swap via load(), if I understood it correctly...
In the reload() case, I think it would make more sense if the pause occurred at most once per frame, with any subsequent reload() calls in that frame sharing the same pause where possible.