Anti-aliasing

Just wanted to leave a quick post for everyone & myself about something that seemed to work well in my most recent tinkering.

With Unity:

Rendering Path: Forward. (Since everything I'm doing at the moment is baked externally, there's no need for any special lighting/shadows/etc. Possibly this doesn't matter, but I read on the forums that the quality-settings anti-aliasing (MSAA) only works with the Forward path.)

Quality:

Anti Aliasing: 8x Multi Sampling
Shadows: disabled

Then for overkill, I put FXAA on each camera (LeftEyeAnchor & RightEyeAnchor).
http://forum.unity3d.com/threads/fxaa-fast-approximate-anti-aliasing.97023/
I've been doing this because, although 8x Multi Sampling alone was enough at work, my setup at home was still giving me jaggy edges. Adding FXAA to the cameras resolved this.
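For reference, here's a rough sketch of the same settings applied from a script (Unity 4.x-era API). The FXAA part is only sketched in a comment, because the component's class name depends on which script you grab from that forum thread; swap in the actual name from the package you import.

```csharp
using UnityEngine;

// Rough sketch of the anti-aliasing setup applied from code.
public class VRAntiAliasingSetup : MonoBehaviour
{
    public Camera leftEyeAnchor;   // assign LeftEyeAnchor in the Inspector
    public Camera rightEyeAnchor;  // assign RightEyeAnchor in the Inspector

    void Awake()
    {
        // 8x MSAA from the Quality settings (valid values are 0, 2, 4 or 8).
        QualitySettings.antiAliasing = 8;

        // Crude way to kill shadows, since everything is baked anyway.
        QualitySettings.shadowDistance = 0f;

        // MSAA only applies to the Forward rendering path, so force it per eye camera.
        leftEyeAnchor.renderingPath = RenderingPath.Forward;
        rightEyeAnchor.renderingPath = RenderingPath.Forward;

        // Layer FXAA on top of MSAA for each eye. The class name below is a
        // placeholder -- use whatever component the forum script actually provides.
        // leftEyeAnchor.gameObject.AddComponent<FXAA>();
        // rightEyeAnchor.gameObject.AddComponent<FXAA>();
    }
}
```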

Maybe someone can chime in with a definitive approach to smoothing aliasing in Unity for VR.

Workflow Exploration; 3dsmax to Unity3d, updated!

Here's my latest method. I've also included some new tools, which are linked below. This post is pretty text heavy, but I hope it might help some people. I think this method is a great compromise between getting a space VR-ready and still delivering the rendered stills that clients ask for:

My work is done on a 2013 Retina MacBook Pro running Windows 7, which I use as my benchmark for performance. If things run decently on my setup, then I assume most others should have a buttery-smooth experience. (Specs: 2.8 GHz, 16 GB memory, GeForce GT 650M.)

 

1. First step: optimize all geometry. Reduce TurboSmooth iterations, and reduce curve interpolation on rendered splines and/or sweeps. Sometimes I'll just apply an Optimize modifier to reduce poly count. I try to keep the total scene poly count to 500k - 1 mil.

 

2. Check for flipped normals. I've been running into this problem when dealing with real-world projects. Sometimes objects have been mirrored and instanced, or just built too quickly without double-checking the normals. You can sometimes get away with this within 3dsmax since it can render back-faces. Unfortunately, in Unity you'll see right through them :)

 

3. Combine. At this point I combine the entire scene into a single object. This is done so that the elements can be broken up by material name, which also gives you a multi/sub-object material for reviewing materials, renaming, etc. Combining keeps the number of objects you have to deal with lower, and makes for an easier shader setup in Unity. You can further break the objects up once the baked textures are created. In addition, it's also a good time to check for sub-object selections and deselect them!

4. Unwrap. I have a maxscript tool that a friend built for me to speed up this step: Tik_uv_unwrap_and_pack_v003.ms, download here. It does a beautiful job. One thing to note is that objects cannot have sub-object selections; these will cause issues in the unwrap (since only the selected polys get unwrapped). So be sure to check for selections once the scene is combined into a single mesh. I then do a basic Unwrap UVW 'flatten' within map channel 5 or 10. Then I run UV-Packer on the same channel, which does the best job of optimizing UV space and packing those flattened UVs. Finally, I collapse everything.

 

5. Bake! Everything should be ready to go. A couple of important things to note: make sure you are using regular cameras to preview the lighting. You can use VRay exposure control to match a VRay camera's exposure so that it works with Render To Texture. Also make sure you're baking your gamma into the render; for me, this simply means changing my color mapping mode to 'color mapping & gamma'. Also make sure output gamma is 2.2. (These settings might be different depending on how you render, so test with a single object until the bake matches a VRay render.)

In the Render To Texture panel, check 'color mapping' & select the desired elements to bake. Currently I only bake the VRayCompleteMap.

Note: VRay displacement won't bake out, so you'll need to place the texture being used into an actual Displace modifier, subdivide the geometry, then collapse & optimize it to be suitable for baking.

6. Export. Prep the file for export once everything has baked. If I net-render the baking, I'll run Render To Texture again on my local machine with 'skip existing files' selected. At that point, I'll also choose 'Save Source/Create New Baked (Standard:Blinn)' under Baked Material, and set the target map slot to diffuse color. (Note: setting the wire color of all the objects to white before doing this will set the ambient color of the Blinn material to white, which will make things easier in Unity when imported.)

Run the bake again.  It will skip all the objects since they've baked already, but it will put the textures into standard materials. I'll also click 'Update Baked Materials' for good measure. Then select 'keep baked materials' and clear the shell. Now you're left with a scene that has all standard materials, with white diffuse color, and the baked texture.

Optional: At this point I also use a script to load the diffuse textures into the viewport so they're visible. http://www.scriptspot.com/3ds-max/scripts/turn-viewport-maps-onoff

Shuffle the baked UVs (channel 10, or whatever channel you used) into channel 1. Channel 1 is the UV channel Unity uses.
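Once the FBX is imported later (step 7), here's a throwaway way to sanity-check that the shuffle actually took. This is just my own quick check, not part of the workflow tools above:

```csharp
using UnityEngine;

// Drop this on the imported model to confirm the baked UVs actually arrived
// in the first UV channel after the channel shuffle in max.
public class BakedUVCheck : MonoBehaviour
{
    void Start()
    {
        foreach (MeshFilter mf in GetComponentsInChildren<MeshFilter>())
        {
            Mesh mesh = mf.sharedMesh;
            if (mesh == null) continue;

            // mesh.uv is the channel Unity samples for the main texture;
            // if it's empty, the shuffle in 3dsmax didn't work.
            Debug.Log(mf.name + ": uv count = " + mesh.uv.Length +
                      ", uv2 count = " + mesh.uv2.Length);
        }
    }
}
```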

Now you can export everything as an FBX.  Note: I embed media, and for units scale I uncheck automatic and set it to centimeters.

 

7. Import into Unity! The FBX should load automatically if you export it into your Assets folder. The model should also have all the baked textures on all its materials. The shaders will be default Unity diffuse shaders. I use a script that my friend made to select all the materials and switch their shaders to Unlit/Texture: UnlitDefaultShaderPostProcessorPlusShaderConverterTool_1.1.unitypackage, download here. There's also a post processor in there that's supposed to automatically convert all shaders to Unlit/Texture; you can delete it if you wish. You should now be in a pretty decent starting place to preview the space in Unity.
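If you'd rather not grab the package, here's a minimal sketch of the same idea as an editor script. This is my own illustration, assuming all the converter needs to do is re-point materials at the built-in Unlit/Texture shader; it isn't the contents of the linked unitypackage:

```csharp
using UnityEngine;
using UnityEditor;

// Place in an Editor folder. Walks the selected objects and points
// every material at the built-in Unlit/Texture shader.
public class ConvertToUnlitTexture
{
    [MenuItem("Tools/Convert Selection To Unlit Texture")]
    static void Convert()
    {
        Shader unlit = Shader.Find("Unlit/Texture");

        foreach (GameObject go in Selection.gameObjects)
        {
            foreach (Renderer r in go.GetComponentsInChildren<Renderer>())
            {
                foreach (Material m in r.sharedMaterials)
                {
                    if (m != null) m.shader = unlit;
                }
            }
        }
    }
}
```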

The next steps I take are to isolate and work with materials that should be reflective. These need special shaders that will accept cubemaps, bump maps, etc. You might then go back into 3dsmax and isolate the objects with these materials so that they can be separate geometry for use with the correct probes. You might also re-bake those specific objects to obtain additional maps for bump/spec/gloss/whatever. I recommend looking at Cubemapper and Reflection Manager. Unity 5 should also have a great set of reflection tools included, so look out for that!
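As an example of the kind of shader swap I mean, here's a small sketch using one of Unity's legacy built-in Reflective shaders. The shader and property names ('Reflective/Bumped Diffuse', '_Cube') are the stock legacy ones and may differ from whatever Cubemapper or Reflection Manager sets up for you:

```csharp
using UnityEngine;

// Swaps an object's material to a legacy built-in Reflective shader
// and feeds it a cubemap, e.g. one rendered with Cubemapper.
public class MakeReflective : MonoBehaviour
{
    public Cubemap environmentCubemap;  // assign in the Inspector

    void Start()
    {
        Renderer r = GetComponent<Renderer>();
        r.material.shader = Shader.Find("Reflective/Bumped Diffuse");
        r.material.SetTexture("_Cube", environmentCubemap);
    }
}
```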

I'm also eyeing Amplify Texture as a way to bring in larger baked maps; however, I haven't seen performance hits from 2k textures yet, so I've been slow to adopt it into my process.

Let me know if you have any questions, either by email or by leaving a comment. Have fun!

Unity3d & Oculus Rift / Accurate Player Height in the Rift

I've had several people tell me they feel too big, or that the space seems small to them. So I finally spent an evening doing some R&D on the issue.

Basically: the 'Skin Width' on the OVRPlayerController should be set to 0.001 (the default is 0.08).
I believe this is adding some padding beneath (and all around) the player, which adds unwanted height. So here's hoping that someday Oculus adjusts this default in their OculusUnityIntegration package.
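If you'd rather enforce this from code than remember to change it in the Inspector, something like the following should work. This is just a sketch; it assumes your Unity version exposes skinWidth on CharacterController via script (if not, set Skin Width to 0.001 directly in the Inspector):

```csharp
using UnityEngine;

// Attach alongside the OVRPlayerController, which carries a CharacterController.
public class FixPlayerSkinWidth : MonoBehaviour
{
    void Awake()
    {
        CharacterController cc = GetComponent<CharacterController>();
        if (cc != null)
        {
            cc.skinWidth = 0.001f;  // the default of 0.08 pads the player upward
        }
    }
}
```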

Here are my notes from the evening:

Player stats -
. 6'2" player height (set in the Oculus config)
. Actual real-life eye-to-floor measurement: 5'9" (I stepped on a tape measure and checked the height in a mirror)

I built a little 3d measuring tape prefab, 7 feet high, with markers at each foot, as well as a ruler graphic with 12 inch marks per foot. I dropped this into my scene, which was modeled to actual scale, so everything should be accurate!
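If you want to throw together something similar without modeling it, a quick script in this spirit could spawn a marker every foot. The object names and sizes below are my own invention; the original prefab was modeled by hand:

```csharp
using UnityEngine;

// Spawns a simple marker every foot up to 7 feet so you can
// eyeball your view height in the Rift against a known scale.
public class MeasuringTape : MonoBehaviour
{
    const float FootInMeters = 0.3048f;

    void Start()
    {
        for (int foot = 1; foot <= 7; foot++)
        {
            GameObject marker = GameObject.CreatePrimitive(PrimitiveType.Cube);
            marker.name = foot + " ft";
            marker.transform.parent = transform;
            marker.transform.localScale = new Vector3(0.3f, 0.005f, 0.02f);
            marker.transform.localPosition = new Vector3(0f, foot * FootInMeters, 0f);
        }
    }
}
```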

Fired up the demo. Press the space bar, and the eye height reads as 1.776 m, or about 5'10".

My view of the ruler, looking straight forward, was at a height of about 6'1"

With the 'skin width' set to .001, I fired up the demo again. Eye height was now about 5'9.5".

So that's a 4.5" difference, which matters a lot when viewing things up close. That basically puts you in heels :D

Then I ran a few other tests, just playing around with the Character Controller's height & radius. Basically you don't want to mess with these, but I did find that changing the height there had an effect on the overall height in the demo. The default height of 2 m had given me the 6'1" viewing height. A height of 1.5 m brought me down to about 5', and a height of 1 m brought me down to 4'1.5".

I was also curious about what effect the positional camera had on the player's height. As soon as the demo launches and the camera turns on, it starts calculating position, which can turn into a small or large shift in the actual sense of viewing height by the time the headset is on your face. What I found was that as long as you reset the view by pressing F2, you could get back to the correct viewing height, although doing so sometimes locked up the positional tracking, which would (or wouldn't) come unstuck after a bit of voodoo swaying from side to side.

I also played around with the OVRCameraController, which I use less often. Basically it has an option for 'use player eye height', which shifts the player's view above the actual placement of the camera prefab in the scene. In my case it was raised about 2'10" off the ground, from a camera placed at 0. I'm guessing this is supposed to simulate a player's sitting height, although for myself I feel it would need to come down 5-6 inches. Turning off 'use player eye height' has the viewer look from the exact position of the OVRCameraController.

I hope this helps some people, since it had been driving me a bit crazy. And now I need to update my demo!