
Instead of just raising the script token limit, I propose that the limit itself is fine; it's how tokens are counted that actually needs more polish. Here is one proposal for what to do:

Make commas, parentheses, and periods/colons equivalent to whitespace.

They're really abstractions denoting a specific relationship between points of data, not symbols in and of themselves. If you're calling a function with two arguments, "funcname(arg1, arg2)" is actually 3 useful points of data, "funcname", "arg1", and "arg2", not 6 just because there are arguments at all.
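Just to sketch it (the counts in the comments follow this post's arithmetic, not pico-8's actual counting rules):

    funcname(arg1, arg2)
    -- counted symbol by symbol: funcname ( arg1 , arg2 )  --> 6 tokens
    -- counted as data points:   funcname arg1 arg2        --> 3 tokens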

Look to assembly code for inspiration. At the assembly level, which translates quite directly to bytecode, do you write "function, begin_arguments, argument1, argument_separator, argument2, end_arguments" as that many different individual tokens? I don't think so. :P It's more like "push argument1, push argument2, jump to function", isn't it? Three commands, three script tokens.
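And that's roughly what Lua's own compiler already does with a call; simplified, with register details omitted, the bytecode looks something like:

    funcname(arg1, arg2)
    -- compiles to roughly:
    --   GETGLOBAL funcname
    --   GETGLOBAL arg1
    --   GETGLOBAL arg2
    --   CALL
    -- no instruction exists for "(", "," or ")"; the punctuation only
    -- tells the compiler how the data points relate to each other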

If you want to go further, you could consider doing the same for math operations and the string concatenation operator. In the case of "x = y + 7", you have two things being added together and a third thing holding the resulting value, right? That can still be only 3 tokens, instead of 5. "x += 1" would be two tokens instead of three.
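In other words (hypothetical counts again):

    x = y + 7  -- today:    x = y + 7  --> 5 tokens
               -- proposed: x, y, 7    --> 3 tokens
    x += 1     -- today:    x += 1     --> 3 tokens
               -- proposed: x, 1       --> 2 tokens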

And if you want to make local variables usable, ever, make an exception for the word "local" so it doesn't count against you. :P
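So something like (still hypothetical):

    local lives = 3  -- today:    local, lives, =, 3  --> 4 tokens
                     -- proposed: lives, 3            --> 2 tokens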

At that point, the token limit might actually be too high, and everyone will start reaching the character length limit first, ha ha! But it would still be much better.

Ideally, in my mind at least, all of Lua's syntax would be invisible to the tokenizer (if, then, do, end, {}, etc.), and scripts would count only the variables (and every individual occurrence thereof) and values (numbers, strings, booleans), not caring so much about what you're doing with them, only how many times you need to access them.
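Counting by that ideal rule (purely hypothetical), a line like this would cost almost nothing:

    if btn(0) then x -= 1 end
    -- counted: btn, 0, x, 1                 --> 4 tokens
    -- invisible syntax: if ( ) then -= end  --> 0 tokens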

At that point, we'd want to make long strings count as multiple "tokens" when they're input/used directly, so that you get penalized for using them to store excessive data. Like, say a string costs 1 token per 64 characters, but afterwards you can use the variable you stored it in for only 1 token. Still better than inputting an array directly into a table, but not so overpowered that the character limit becomes more pressing. You know what that would do? Keep the data limits strict (you can raise those with the cart-size ideas being discussed in other threads or whatnot), but open up all the possibilities for what you can do with the data.
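Roughly, with the 64-chars-per-token idea from above (draw_map is just a made-up function for illustration):

    level = "..a 128-character map string.."  -- 2 tokens of data, 1 for level
    draw_map(level)  -- each later use of the variable: 1 token
    draw_map(level)  -- still just 1 token per use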

P#16234 2015-11-04 17:26 ( Edited 2015-11-05 15:41)

i feel so massively in agreement with all of this. it feels like it would make so much more sense to be limited by the scope of your code and not the verbosity of the syntax you're working with.

just for the sake of being exhaustive, i think this should unquestionably apply to brackets as well; the same principle holds. a single array access costs twice as many tokens as the data points it actually uses, and seeing as tables are bread & butter in lua, that feels particularly absurd. (same for dot accessors)
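for instance (counts illustrative, same idea as above):

    t[i]  -- today:       t [ i ]  --> 4 tokens
          -- data points: t, i     --> 2 tokens
    t.x   -- today:       t . x    --> 3 tokens
          -- data points: t, x     --> 2 tokens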

i don't think it's right that pico8 should implicitly arbitrate the kind of code you write as a consequence of syntax. i think that's the wrong kind of challenge/limitation, and it distracts from engaging with the system 'hardware' itself.

P#16255 2015-11-04 23:28 ( Edited 2015-11-05 04:30)

I posted a graph to the picotool thread that roughly indicates how quickly the char limit would become an issue if the token count were reduced: https://www.lexaloffle.com/bbs/?tid=2691 If the new count method resulted in an overall reduction of about 50%, the char count would dominate in pretty much all cases.

I mentioned some Pico-8 token "over"-counting bugs in the other thread. Not as big a gain as changing how they're counted, but notable: https://www.lexaloffle.com/bbs/?tid=2710

I sympathize with the sense of fairness in the proposed counting method, but the end result probably isn't much different from simply doubling the limit. Function calls cost four tokens: you can either make them count as two, or make room for twice as many of them.

P#16256 2015-11-04 23:35 ( Edited 2015-11-05 04:35)

i don't think that's necessarily true. which operations cost more vs. less conditions you to code in certain ways. with double the token count it would still be an excessive use of tokens to iterate over an array or use helper functions, for example. i don't think this is a zero-sum scenario; in fact, i imagine the token-counting adjustment should probably even come with a reduction in the token budget, since it would be a whole new paradigm. there would still be a net gain in the things you are able to program with pico8.

P#16262 2015-11-05 01:46 ( Edited 2015-11-05 06:51)

The entire reason the token count is the preferred way of judging the length of code is that it's a very human limit: it uses smaller numbers to give you a feel for how much you can still do, rather than exactly how many letters you can type. If we change the paradigm and lower the maximum token count accordingly, it will only become that much more useful a measure, even if the character limit becomes more pressing toward the end.

P#16269 2015-11-05 10:41 ( Edited 2015-11-05 15:41)
