
Hi, I'm currently working on a text-heavy narrative game, and right now my main text table (formatted as a string that I parse into a table at runtime) takes up 41% of the compressed size limit. Is there any way I can slim this down without losing text?
Is it viable to store the strings in a separate cart and then memcpy them into my main cart? Would that stop it running in HTML/BBS/.exe?
Alternatively, I could split the text into distinct sections and store them as a chain of carts that daisy-chain into one another, but to my knowledge that won't work in HTML, and it would fill the BBS up unnecessarily with carts that are only meant to be run by another cart.

P#46351 2017-11-15 23:48 ( Edited 2017-11-17 03:15)

This cart by @dddaaannn may be of use to you: https://www.lexaloffle.com/bbs/?tid=2776

P#46352 2017-11-16 00:59 ( Edited 2017-11-16 05:59)

That's pretty useful, but I can't quite get the tool to work, which is a shame. It has definitely given me a direction to pursue, though.
As a follow-up: is there a way of copying data from cart memory into Lua memory other than peek()? Accessing only a byte at a time seems inefficient.

P#46367 2017-11-16 09:35 ( Edited 2017-11-16 14:35)

I'm not sure I can fully support the compression tech demo as a tool, but if you want to tell me what happened, I can try to help troubleshoot.

The general idea of storing compressed text in cart data both lets you make use of cart data space and lets you spread text efficiently across multiple carts. You can't load Lua between carts the way you can with cart data.
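
To sketch the multi-cart side: reload() can copy a range of another cart's data into your base RAM. A minimal sketch, where "text.p8" is a hypothetical second cart holding the encoded text:

  -- copy 0x1000 bytes starting at 0x0000 (the
  -- gfx region) of text.p8 into the same range
  -- of our own base ram, readable with peek().
  reload(0x0000,0x0000,0x1000,"text.p8")

Whether that runs in an HTML/exe export comes down to the extra cart getting bundled when you export; iirc multi-cart exports and BBS posts both handle this.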

There is no way to transfer cart data to Lua RAM except a byte at a time with peek(). (Naturally, the compression demo builds the LZ dictionary in Lua RAM to facilitate reading compressed strings from cart data.)
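
To illustrate the byte-at-a-time read, here's a minimal sketch (not the demo's code). It assumes each string in cart data ends with a zero byte and that byte values are 1-based indexes into a character lookup string; read_str and lookup are made-up names:

  -- character lookup: byte value n maps to the
  -- nth character of this string; 0 terminates.
  lookup="abcdefghijklmnopqrstuvwxyz .,!?"

  -- read one zero-terminated string from cart
  -- ram starting at addr, one peek() per byte.
  function read_str(addr)
    local s=""
    while true do
      local b=peek(addr)
      if b==0 then return s end
      s=s..sub(lookup,b,b)
      addr+=1
    end
  end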

P#46369 2017-11-16 11:37 ( Edited 2017-11-16 16:37)

hey @capnmarcy

My solution probably isn't as elegant as the Tale of Two Cities one, but here's what I did to compress text for eggnog's 3CJam game:

https://www.lexaloffle.com/bbs/?tid=30188

  1. Split the script into a list of the unique words and a list of the unique 'delimiters' (this is stored in code for Spoopy, but you could certainly put it in gfx/map/sfx/etc. memory instead).
  2. Encode all text as a series of bytes, based on the indexes into the word list above.
  3. Store those bytes in code, with a marker sequence (iirc, we used 0xfff0) to let the loading script know the text for that page was finished.

That worked for storing the IDs for all 131 pages of Spoopy's script (a dictionary of ~1,300 unique words, a little under 20k words total) in parts of the gfx and map data.
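
Not our actual code, but the decode side looks roughly like this. Assumptions for the sketch: word IDs are two bytes each, little-endian, 1-based indexes into the word list, and the sequence 0xfff0 closes a page (read_page, the start address, and the tiny word list are all made up; the real dictionary is ~1,300 entries, and delimiters are handled the same way):

  -- word list; ids are 1-based indexes into it.
  words={"the","cat","sat","on","mat"}

  -- read one page of word ids from cart ram,
  -- starting at addr, until the 0xfff0 marker.
  function read_page(addr)
    local page={}
    while true do
      local lo,hi=peek(addr),peek(addr+1)
      -- 0xfff0 little-endian: lo=0xf0, hi=0xff
      if lo==0xf0 and hi==0xff then
        return page
      end
      add(page,words[lo+hi*256])
      addr+=2
    end
  end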

How much text and how many unique words are you looking at?

P#46373 2017-11-16 12:11 ( Edited 2017-11-16 17:11)

@dddaaannn Yeah, no worries if it's hard to know what's wrong. I got this error trying to run it on both my own test code and the included examples; I left the full message as an issue on the GitHub page:

  TypeError: first arg must be string or tuple of strings, not bytes

@enargy Currently at 17k characters / 824 unique words, so I'll look into your method. Thanks!

P#46395 2017-11-16 20:48 ( Edited 2017-11-17 01:48)

Let me know if you have any questions. We had to mess with the code a bit to get it under the compressed size limit, so it probably isn't the easiest to read.

I can send you a clean version of the relevant functions, though, plus a Python script I wrote to pull out the unique words/delimiters and convert them to bytes for inserting into a PICO-8 cart.
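
The script itself is Python, but the gist of the extraction step, translated into rough Lua for illustration (tokenize and is_letter are made-up names; this ignores capitals and assumes a non-empty script):

  alpha="abcdefghijklmnopqrstuvwxyz"
  function is_letter(c)
    for i=1,#alpha do
      if sub(alpha,i,i)==c then return true end
    end
    return false
  end

  -- walk the script once, splitting it into
  -- alternating word and delimiter tokens, and
  -- dedupe each kind into its own list.
  function tokenize(s)
    local words,delims,seen={},{},{}
    local cur,in_word="",is_letter(sub(s,1,1))
    for i=1,#s do
      local c=sub(s,i,i)
      if is_letter(c)~=in_word then
        if not seen[cur] then
          seen[cur]=true
          add(in_word and words or delims,cur)
        end
        cur,in_word="",not in_word
      end
      cur=cur..c
    end
    if not seen[cur] then
      add(in_word and words or delims,cur)
    end
    return words,delims
  end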

P#46399 2017-11-16 22:15 ( Edited 2017-11-17 03:15)
