
Fellow Piconians:

  • While I think I made a fairly effective attempt at getting 256 colors displayed in Pico-8, there is still the flicker.

https://www.lexaloffle.com/bbs/?pid=102333

I've been thinking about the 32 colors available in Pico-8, where you can only display 16 at a time, and was wondering how I would even begin to approach taking a 24-bit image and converting it to 16 colors chosen from a palette of 32.

It would need to do the following.

  • Take a 128x128 24-bit image and convert it to 32 colors; I can do this in Blitz.

The hard part which I'm not sure how to approach:

  • From those 32 colors, determine a limited set of 16 to use and recode the image so it uses just those 16 colors (some possibly from the extended palette, some not), chosen based on which are closest to the original color set.

It's not so difficult to convert an image to a fixed palette, but to truncate that palette based upon the proximity of colors in a smaller set, my brain just can't wrap around that.

Any ideas ?

P#103992 2022-01-01 05:52

If you know any color space shared by all the colors, then a distance calculation would work. It probably wouldn't be the best approximation without knowledge of how human eyes tend to work (on average, since all human eyes are slightly to very different).

To use the most common color space in games as an example, sRGB is just 3 8bit values. If you imagine each color as a point in 3d space, then the colors to prioritize would just be whichever ones have the least average distance to the closest color in the actual image. Since that'd be really difficult and probably not optimal to do all at once, you could trim the options by first checking which colors actually occur most often, then settle the second half or so of the palette one color at a time using distance to the colors that are still leftover. sRGB is also useful in that it's what the eyedropper in any image program will get you, even if it's not the most accurate.
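The trimming idea above can be sketched in Python (illustrative only, not Pico-8 code; the function name, the half-and-half split, and the farthest-point heuristic for the remaining slots are my assumptions about one reasonable way to do it):

```python
from collections import Counter

def dist2(a, b):
    # squared Euclidean distance in sRGB space (no sqrt needed for comparisons)
    return sum((x - y) ** 2 for x, y in zip(a, b))

def pick_palette(pixels, size=16, seed_fraction=0.5):
    # Seed the palette with the most frequent colors, then fill the
    # remaining slots greedily with whichever leftover color is
    # currently farthest from everything already in the palette.
    counts = Counter(pixels)
    by_freq = [c for c, _ in counts.most_common()]
    n_seed = min(len(by_freq), max(1, int(size * seed_fraction)))
    palette = by_freq[:n_seed]
    leftover = by_freq[n_seed:]
    while len(palette) < size and leftover:
        worst = max(leftover, key=lambda c: min(dist2(c, p) for p in palette))
        palette.append(worst)
        leftover.remove(worst)
    return palette
```

Swapping "farthest from the palette" for "least average distance to the leftovers" is the same family of greedy heuristic; which one looks better would need testing on real images.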

That said, I notice the 256 color attempt is using dithering. For that it'd probably also be a good idea to do some form of check for which colors are close to the line connecting 2 other colors in color space, as that would indicate which ones work best for dithering.

Some math in case you're not familiar with it: the generic formula for the distance between two points in any vector space is the square root of the sum of the squared differences of their coordinates. In 3-dimensional color space, that's sqrt(dr*dr + dg*dg + db*db), where dr, dg, db are the differences in each channel. For the distance between a point and a line, the formula is the magnitude of the cross product of the line's direction vector with the vector from any point on the line to the isolated point, divided by the magnitude of the direction vector. That's kinda complicated though. If you want to just see if they're close, checking the level of collinearity is probably a better idea. That can be done by checking if the vector from one end to the point and the vector from the point to the other end are in about the same direction. In 3-dimensional color space, using the line (r1, g1, b1),(r2, g2, b2) and the point (r3, g3, b3): if the dot product ((r3-r1)*(r2-r3) + (g3-g1)*(g2-g3) + (b3-b1)*(b2-b3)) is positive and its square is very close in value to ((r3-r1)^2 + (g3-g1)^2 + (b3-b1)^2) * ((r2-r3)^2 + (g2-g3)^2 + (b2-b3)^2), then the color (r3,g3,b3) can be easily dithered using a combination of (r1,g1,b1) and (r2,g2,b2). (The normal way to calculate this would be dot product divided by magnitudes compared to 1, but that requires unnecessary square roots and division; by Cauchy-Schwarz, the squared dot product equals the product of the squared magnitudes exactly when the vectors are parallel.) How close they would need to be is something that would need testing and such to figure out, though. From there, the actual fill pattern would need to be derived by linear interpolation. That is, by checking the distance from (r1,g1,b1) to (r3,g3,b3) divided by the distance from (r1,g1,b1) to (r2,g2,b2) to see how much weight each color should have.

Edit: Just realized I forgot to mention why checking whether the dot product is positive is useful despite the comparison value always being positive. If the dot product in the same-direction check is negative, it means the third color point isn't between the other two. In that case dithering is still a good idea, but to represent one of the first 2 colors rather than the third color.
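Here's the collinearity check and interpolation weight from above as a Python sketch (not Pico-8 code; the function name and the tolerance value are mine, and the tolerance would need tuning by eye as noted):

```python
import math

def between_on_line(p1, p2, p3, tol=0.02):
    # Is color p3 approximately on the segment p1-p2 in RGB space?
    # Returns (is_between, weight) where weight is the interpolation
    # factor toward p2, i.e. how heavily p2 should appear in the dither.
    v1 = [b - a for a, b in zip(p1, p3)]  # p3 - p1
    v2 = [b - a for a, b in zip(p3, p2)]  # p2 - p3
    dot = sum(x * y for x, y in zip(v1, v2))
    m1 = sum(x * x for x in v1)  # squared magnitude of v1
    m2 = sum(x * x for x in v2)  # squared magnitude of v2
    if dot <= 0 or m1 == 0 or m2 == 0:
        return False, 0.0  # negative dot: p3 isn't between p1 and p2
    # Cauchy-Schwarz: dot^2 == m1*m2 exactly when v1 and v2 are parallel
    if dot * dot < (1 - tol) * m1 * m2:
        return False, 0.0
    # linear interpolation weight: |p3-p1| / |p1-p2|
    return True, math.dist(p1, p3) / math.dist(p1, p2)
```

For example, mid-gray between black and white comes back as ditherable with a weight near 0.5, while pure red on that same line does not.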

P#104007 2022-01-01 10:18 ( Edited 2022-01-01 10:35)

@dw817

32 choose 16 is only about 600M, so an exhaustive approach may be feasible. :)

More seriously - what about being greedy? Take the 16 most prevalent colors after quantizing down to 32. You could just use that palette as-is, or you could use that as a starting point for further optimization. For example, you could try doing pairwise swaps. Try every possible way to swap a color in the palette with one not in the palette; repeat this until no swap improves the image. You could also try that swapping starting from a random palette, but it's probably faster if you start from the greedy one.
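The pairwise-swap refinement could look something like this in Python (a sketch under my own naming, not anyone's actual implementation; `image_error` is the brute-force "snap every pixel to its nearest palette color" cost):

```python
def dist2(a, b):
    # squared Euclidean distance in sRGB space
    return sum((x - y) ** 2 for x, y in zip(a, b))

def image_error(pixels, palette):
    # total squared error when every pixel snaps to its nearest palette color
    return sum(min(dist2(p, c) for c in palette) for p in pixels)

def refine_by_swaps(pixels, palette, unused):
    # Try every swap of a palette color with a non-palette color,
    # keeping any swap that lowers total image error; repeat until
    # no swap improves the image.
    palette, unused = list(palette), list(unused)
    best = image_error(pixels, palette)
    improved = True
    while improved:
        improved = False
        for i in range(len(palette)):
            for j in range(len(unused)):
                palette[i], unused[j] = unused[j], palette[i]
                err = image_error(pixels, palette)
                if err < best:
                    best = err
                    improved = True  # keep this swap and keep scanning
                else:
                    palette[i], unused[j] = unused[j], palette[i]  # revert
    return palette
```

With 16 palette slots and 16 unused colors that's 256 error evaluations per pass, which is why starting from the greedy palette (fewer passes to converge) beats starting from a random one.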

@kimiyoribaka

> It probably wouldn't be the best approximation without knowledge of how human eyes tend to work

When I need an easy-to-use color space in which Euclidean distance is a good approximation for perceptual distance, I've found that Lab is usually a good place to start.
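For reference, the standard sRGB-to-Lab conversion (D65 white point) in Python; the constants are the published ones, though in practice a library would be the safer choice:

```python
def srgb_to_lab(r, g, b):
    # sRGB (0-255) -> CIELAB, so Euclidean distance in the result
    # roughly approximates perceptual distance.
    def lin(c):
        # undo the sRGB gamma curve
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    rl, gl, bl = lin(r), lin(g), lin(b)
    # linear RGB -> XYZ (sRGB matrix, D65)
    x = 0.4124 * rl + 0.3576 * gl + 0.1805 * bl
    y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl
    z = 0.0193 * rl + 0.1192 * gl + 0.9505 * bl
    # normalize by the D65 reference white
    xn, yn, zn = x / 0.95047, y / 1.0, z / 1.08883
    def f(t):
        return t ** (1 / 3) if t > 0.008856 else 7.787 * t + 16 / 116
    fx, fy, fz = f(xn), f(yn), f(zn)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)
```

L runs 0 (black) to 100 (white); a and b are the color axes. Distances in this space would drop into the greedy/swap approaches above unchanged.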

P#104008 2022-01-01 10:36 ( Edited 2022-01-01 20:47)

One algo for color quantization is Median Cut. I played around with it for Depict, but always thought the results came out too muddy and so I left it out as an option, since hand choosing the colors seemed to produce better results. Perhaps I wasn't understanding the algo correctly or perhaps there was a flaw in my implementation.

Google color quantization and you should find advice on Median Cut as well as other algos.
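For anyone googling, the core of Median Cut is small; here's a minimal Python sketch of the textbook algorithm (assumed from the standard description, not from Depict's source):

```python
def median_cut(pixels, n_colors):
    # Repeatedly split the box with the widest channel range at its
    # median, then average each box into one palette entry.
    def channel_range(box, ch):
        vals = [p[ch] for p in box]
        return max(vals) - min(vals)

    def spread(box):
        return max(channel_range(box, ch) for ch in range(3))

    boxes = [list(pixels)]
    while len(boxes) < n_colors:
        i = max(range(len(boxes)), key=lambda k: spread(boxes[k]))
        box = boxes[i]
        if spread(box) == 0:
            break  # every remaining box is a single color
        ch = max(range(3), key=lambda c: channel_range(box, c))
        box = sorted(box, key=lambda p: p[ch])
        mid = len(box) // 2
        boxes[i:i + 1] = [box[:mid], box[mid:]]
    # palette entry = mean color of each box
    return [tuple(sum(p[c] for p in box) // len(box) for c in range(3))
            for box in boxes if box]
```

The muddiness complaint is a known weakness: averaging each box pulls every palette entry toward gray, which is one reason hand-picked palettes can beat it.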

P#104040 2022-01-01 20:45

I was trying out Depict, @bikibird. I especially like the "Atkinson" method for this picture:

https://images.unsplash.com/photo-1438761681033-6461ffad8d80?ixlib=rb-1.2.1&ixid=MnwxMjA3fDB8MHxzZWFyY2h8MXx8cG9ydHJhaXR8ZW58MHx8MHx8

Results being:

Is there a Pico-8 example program of Atkinson dither ?

P#104043 2022-01-01 21:50 ( Edited 2022-01-01 21:56)

@dw817, not that I'm aware of. In fact, finding out the filter matrix for Atkinson took a lot of deep googling, as I recall. Technically, this is error diffusion. You can google Floyd-Steinberg and get the basics of error diffusion pretty easily. It's neat. The Atkinson error diffusion matrix is available in the Depict source code: github Ctrl-F Atkinson should take you right to it.
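For reference, the Atkinson matrix spreads 1/8 of the quantization error to six neighbors (so only 6/8 of the error is diffused, which is what gives it its punchy contrast). A grayscale 1-bit Python sketch (illustrative, not Pico-8 code and not Depict's implementation):

```python
# neighbor offsets (dx, dy) that each receive error/8
ATKINSON = [(1, 0), (2, 0), (-1, 1), (0, 1), (1, 1), (0, 2)]

def atkinson_dither(gray, w, h):
    # gray is a row-major list of 0-255 values; returns 0/255 per pixel.
    px = [float(v) for v in gray]  # mutable working copy
    out = []
    for y in range(h):
        for x in range(w):
            old = px[y * w + x]
            new = 255.0 if old >= 128 else 0.0
            out.append(int(new))
            err = (old - new) / 8.0
            for dx, dy in ATKINSON:
                nx, ny = x + dx, y + dy
                if 0 <= nx < w and 0 <= ny < h:
                    px[ny * w + nx] += err
    return out
```

Floyd-Steinberg is the same loop with a different (fully error-preserving) set of offsets and weights, which is why it's the usual starting point for learning this.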

By the way that's Atkinson as in Bill Atkinson, famed programmer from the early days of Apple. There's a set of images he made for the Apple II using the "Atkinson Dither" that are very cool. You would think those would be easy to find on the Internet, but I'm not having any luck.

The Atkinson dither is great for portraits, as your image attests, but I've found with Depict that you really can't predict which method is going to give the best results. It really depends on the base image. They're all great, each under the right circumstances. Although I have not found the right circumstance for the 10 shade dither; that one always looks awful.

You may like this video: Error Diffusion Dithering

P#104046 2022-01-01 22:23 ( Edited 2022-01-01 22:52)
