I've become quite enamored with the prospects of VR with the Oculus Rift. Shortly after their Kickstarter I picked up a DK1 and started experimenting, mixing my work rendering interior designs with creating stereo panoramas for viewing in the Rift.
It was exciting to see spaces that were designed as still images for web and print being easily reused as stereo pairs of panoramas, giving you a view to explore in the Rift. Learning the proper method took quite a bit of work, digging around forums and the web to find out how to create a correct stereo panoramic rendering. In the end I settled on a slice method: rotating each eye's camera around a center point, rendering the scene in slices, and later stitching them together. This method may prove even more useful once headsets ship with higher screen resolutions, where you can appreciate the detail of a pre-rendered image over a real-time rendering engine.
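The core of the slice method is just the camera math: each eye sits half an interpupillary distance from the rig's center, and the whole pair is rotated around that center for every slice so the stereo baseline stays correct for each viewing direction. Here is a minimal sketch of that idea in Python; the function name, slice count, and `ipd` value are my own illustrative assumptions, not details from any particular renderer.

```python
import math

def stereo_slice_cameras(num_slices, ipd=0.064):
    """Sketch of per-slice camera placement for slice-rendered stereo panoramas.

    Each eye is offset half the IPD from the rig center, perpendicular to
    the slice's viewing direction, so every slice is rendered with the
    correct stereo separation for the direction it faces.
    (Illustrative only; num_slices and ipd=0.064 m are assumed values.)
    """
    half = ipd / 2.0
    cameras = []
    for i in range(num_slices):
        yaw = 2.0 * math.pi * i / num_slices      # viewing direction of this slice
        dx, dz = math.cos(yaw), math.sin(yaw)     # "right" vector for this direction
        cameras.append({
            "slice": i,
            "yaw_degrees": math.degrees(yaw),
            "left_eye": (-half * dx, -half * dz),  # x/z offsets from rig center
            "right_eye": (half * dx, half * dz),
        })
    return cameras
```

Each entry gives the yaw to point the cameras at and the two eye offsets; you would render a narrow vertical strip from each eye position at each yaw and stitch the strips into the left and right panoramas.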
But with the announcement of the DK2 and its positional tracking hardware, I'm now questioning how much further pre-rendered content will be able to go. I can see the need for subtle positional tracking in the Rift to really push the realism and sell the image you're looking at, but I don't see how that can easily be applied to pre-rendered content. So I'm starting to switch gears and focus on 3D modeling/texturing methods that will make it easier to shift these assets from being rendered for print to being imported into a game engine for real-time interaction.
I'm quite excited about these prospects and hope to share my journey through this site & blog.