Okay, I know this sounds ridiculous, but hear me out for a second:
The "POCKULUS CHIP" is just a piece of plastic you attach to the front of the PocketC.H.I.P. case to split the screen in half, so that one eye sees the left half of the screen and the other eye sees the right half. Unlike the Oculus Rift, the Pockulus does not require any fisheye rendering, nor does it include any head tracking. It has more in common with a View-Master than with modern VR.
That means all it would take for the Pico-8 to have "VR support" with the Pockulus is a change to how Pico-8 uses its screen space: draw the scene twice, split far enough apart to use the 480x272 pixel display evenly. That could mean either calling the _draw() function twice per frame with an argument saying whether the left or right view is being rendered, or using one of the already existing secret "stretched 1/2 resolution" screen modes so that the entire 128x128 Pico-8 screen holds two stretched images, one for the left eye and one for the right. You can use clip() to constrain rendering to one side or the other. Let the cartridge developer figure out how they want to create depth perception.
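A minimal sketch of the clip() approach, assuming the cart keeps all its drawing in a draw_scene() function (a hypothetical name) and that a small horizontal camera offset per eye is enough for a stereo effect:

```lua
-- stereo render sketch for pico-8.
-- draw_scene() and eye_sep are assumptions, not a real api.
eye_sep=2 -- horizontal camera offset per eye, in pixels

function _draw()
 cls()
 -- left eye: clip to the left half, nudge camera one way
 clip(0,0,64,128)
 camera(eye_sep,0)
 draw_scene()
 -- right eye: clip to the right half; -64 moves the view
 -- into the right half, and the eye offset goes the other way
 clip(64,0,64,128)
 camera(-64-eye_sep,0)
 draw_scene()
 clip()   -- reset clip region
 camera() -- reset camera
end
```

The two views end up diverging by 2*eye_sep pixels; how much separation feels right would be up to the cart developer, as the post says.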
I know the Pico-8 display on the PocketC.H.I.P. is probably already scaled up 2x to 256x256 to make better use of the 480x272 display, and two 256-wide views won't fit side by side in 480 pixels (minus a few pixels in the middle that the Pockulus screen separator would cover up), so there would really only be about 120 Pico-8 pixels of width per eye at 2x scale (so either 120x64 or 60x128 if using the screen-splitting modes, probably?), but that loss would be fine for "three dee!".
NextThingCo shows the Pockulus as a 3D stereoscopic device (playing Virtual Boy games), not a VR device with motion tracking. With Pico-8 you could do stereoscopic 2D relief effects. You don't need a lot of power for this: just two pictures. I played around with this when I was young... (no, I was not living with dinosaurs).
Here's a stab at Red/Green 3D virtual reality.
Once you get past the flicker, it actually works reasonably well with my 3 buck glasses from Amazon.
Charlie_says's "3d Wars" https://www.lexaloffle.com/bbs/?tid=2506 inspired this.
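One guess at how a flickery red/green effect like this could work (this is a sketch, not necessarily the poster's actual code): draw the left-eye view all in red on even frames and the right-eye view all in green on odd frames, slightly shifted apart, and let persistence of vision merge them through the glasses — hence the flicker.

```lua
-- red/green flicker-3d sketch for pico-8.
-- draw_scene() is a hypothetical stand-in for the cart's drawing.
frame=0
eye_sep=1

function _draw()
 cls()
 frame=frame+1
 pal()
 if frame%2==0 then
  -- left eye: remap all colors to red, shift one way
  for c=1,15 do pal(c,8) end
  camera(eye_sep,0)
 else
  -- right eye: remap all colors to green, shift the other way
  for c=1,15 do pal(c,11) end
  camera(-eye_sep,0)
 end
 draw_scene()
 pal()
 camera()
end
```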
I think what would be better is just using a Raspberry Pi (not the oldest models, such as the Zero or the Pi 2).
Make a GPIO wiring setup that feeds accelerometer values into individual GPIO pins, and have PICO-8 constantly peek() those values from a function inside _update60(), storing them in variables. Then find some way to translate those variables into 3D camera values.
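PICO-8 does expose GPIO pins as memory-mapped bytes at 0x5f80..0x5f8f, readable with peek(). A sketch of the polling side, assuming the accelerometer's three axes are wired onto the first three pins (the pin assignment and the scaling are assumptions):

```lua
-- poll accelerometer axes from gpio pins (pin mapping assumed)
accel={x=0,y=0,z=0}

function read_accel()
 -- each gpio pin is one byte, starting at 0x5f80
 accel.x=peek(0x5f80) -- pin 0: x axis
 accel.y=peek(0x5f81) -- pin 1: y axis
 accel.z=peek(0x5f82) -- pin 2: z axis
end

function _update60()
 read_accel()
 -- map raw 0..255 readings onto camera angles;
 -- the centering and scale here are guesses to be tuned
 cam_yaw=(accel.x-128)/128
 cam_pitch=(accel.y-128)/128
end
```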
Make two identical carts whose only difference is the local camera's axis position, so the pair actually gives a feeling of depth. Possibly use another two GPIO pins to transfer game variables back and forth too!
Finally, when you're ready to test, write an .sh script on the Pi to launch both carts side by side.
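A sketch of such a launcher script. The cart names are hypothetical, the pico8 binary path is an assumption, and actually placing the two windows side by side would be left to the window manager (e.g. with wmctrl or xdotool after launch):

```shell
#!/bin/sh
# launch left-eye and right-eye carts at the same time.
# cart names and the ./pico8 path are assumptions.
./pico8 -windowed 1 -width 320 -height 272 -run lefteye.p8 &
./pico8 -windowed 1 -width 320 -height 272 -run righteye.p8 &
wait
```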
I recommend, while testing:
- (almost a requirement) putting the screen into a Google Cardboard headset strapped to your head.
- Having black borders around the game screen.
- Applying a fisheye effect of some sort to the carts, so the image doesn't look flat and hurt your eyes through the lenses of a cardboard headset.
- If a fisheye effect seems too costly in tokens or RAM, maybe look into getting/making Raspberry Pi screen-overlay software that does it for you.