I wrote this library to cut down on the number of tokens taken up by large tables of string data, e.g. for dialogue text/translations/etc. It's most helpful if those tables are using a lot of your tokens (i.e. more than 150 or so), since the library itself takes up 139 tokens. But each of your table declarations can be reduced to 5 tokens!
Here's an example of what your code can look like.
Before:
my_table = { hello='world', nested={ some='data\'s nested', inside_of='here' }, 'and', 'indexed', 'data', { as='well' } }
After:
function table_from_string(str)
 local tab = {}
 local key, val, is_on_key
 local function reset() key, val, is_on_key = '', '', true end
 reset()
 local i, len = 1, #str
 while i <= len do
  local char = sub(str, i, i)
  if char == '\31' then
   -- token separator: the first one ends the key, the second ends the value
   if is_on_key then
    is_on_key = false
   else
    tab[tonum(key) or key] = val
    reset()
   end
  elseif char == '\29' then
   -- subtable start: scan ahead for the subtable end character '\30'
   -- (note: this stops at the first '\30', so subtables can't themselves
   -- contain subtables)
   local j, c = i, ''
   while c ~= '\30' do
    j = j + 1
    c = sub(str, j, j)
   end
   tab[tonum(key) or key] = table_from_string(sub(str, i + 1, j - 1))
   reset()
   i = j
  else
   if is_on_key then
    key = key .. char
   else
    val = val .. char
   end
  end
  i = i + 1
 end
 return tab
end

-- the separator bytes \31, \29, \30 are unprintable control characters;
-- they are written below as decimal escapes so they survive copy/paste
my_table = table_from_string(
 '1\031and\0312\031indexed\0313\031data\0314\029as\031well\031\030hello\031world\031nested\029inside_of\031here\031some\031data\'s nested\031\030'
)
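To make the encoded string easier to read, here is a sketch of the format the decoder expects, written with decimal escapes since the real separator bytes are unprintable. This is my own annotation, not part of the library itself:

```lua
-- wire format (my annotation):
--   key \31 value \31      one key/value pair
--   key \29 ... \30        a subtable, encoded recursively between \29 and \30
-- keys pass through tonum(), so '1' becomes a numeric index; values stay strings
flat   = '1\031and\031hello\031world\031'  -- decodes to {'and', hello='world'}
nested = 'a\029x\0311\031\030'             -- decodes to {a={x='1'}}
```

Note that every decoded value comes back as a string, so numeric values would need converting after the fact.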
Clearly it's more helpful if your data table is much larger than this. In my case, my data tables took up almost 200 tokens, and I saved about 50 using this technique. If I go to add more translations in the future, it will save even more.


And no, of course I wouldn't suggest keeping this compressed version as the only record of your data table! For my own workflow, I am setting up an automated script to take my data table from a separate Lua file and periodically update the contents of the p8 file with the stringified version.
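For anyone setting up a similar script, an encoder for this format might look something like the following. This is my own sketch in standard Lua, not part of the posted library, and the name string_from_table is an assumption:

```lua
-- hypothetical encoder producing the string format table_from_string reads:
-- key \31 value \31 for plain pairs, key \29 ... \30 for subtables
function string_from_table(tab)
 local out = ''
 for k, v in pairs(tab) do
  if type(v) == 'table' then
   -- subtables are encoded recursively between \29 and \30
   out = out .. tostring(k) .. '\29' .. string_from_table(v) .. '\30'
  else
   out = out .. tostring(k) .. '\31' .. tostring(v) .. '\31'
  end
 end
 return out
end
```

Numeric keys are written as their decimal text, which the decoder's tonum(key) converts back. Be aware that pairs() order is not stable, so the emitted string may differ between runs while still decoding to the same table.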