Screen Render Options | Forces the engine to render a Canvas multiple times with different resolutions and/or cameras
The Screen Render Options Node can be used on a Screen or Composition Projector Node to force the Ventuz Engine to render a Canvas multiple times with different resolutions and/or cameras. The Engine can render the content at the highest resolution and filter the result for lower resolution outputs, which saves system resources.
The CameraOverride can make use of the following settings:
Setting | Description |
---|---|
TrackedCamera | For Head Mounted Displays or Camera Tracking Systems |
MatrixCamera | Allows the use of a custom view matrix. The zero and identity matrix presets can be used or modified manually, or the Matrix property can be connected to a matrix coming from, for example, a Matrix Node or a Script (see the sketch below the table) |
SetExtension | Enables a POV (Point of View) or Eyepoint to be visualized. This can be used to give the impression of an extended view, as if looking through a window, when the POV or camera moves from one place to another |
ZNear | Sets where the Z depth view starts. This will clip objects close to the camera |
ZFar | Sets where the Z depth view stops. This will clip objects far from the camera |
X/Y/Z | Sets the viewpoint of the camera in 3D space |
RenderFilter | Can be set to exclude parts of the hierarchy from rendering - the filter enumeration can be selected on the output property in the Render Setup Editor |
OverrideWidth/Height | Changes the output resolution |
PrevisWidth/Height | Changes the previsualization resolution for the Previs rendering inside the Ventuz Designer as well as for the Previs Screen output in the Render Setup Editor |
Keep this setting as low as possible to prevent performance issues when working with high resolution outputs.
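Regarding the MatrixCamera setting above: as a rough illustration of what a custom view matrix may contain, here is a minimal look-at construction (a sketch in Python; the math is standard, but the row/column conventions the engine expects are an assumption to verify):

```python
import math

def look_at(eye, target, up=(0.0, 1.0, 0.0)):
    """Build a right-handed look-at view matrix as nested lists.
    Standard construction for illustration only; the row/column
    conventions must match what the consuming node expects."""
    def sub(a, b): return [a[i] - b[i] for i in range(3)]
    def norm(v):
        l = math.sqrt(sum(c * c for c in v))
        return [c / l for c in v]
    def cross(a, b):
        return [a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0]]
    def dot(a, b): return sum(a[i] * b[i] for i in range(3))

    f = norm(sub(list(target), list(eye)))  # forward axis
    s = norm(cross(f, list(up)))            # right axis
    u = cross(s, f)                         # corrected up axis
    return [
        [ s[0],  s[1],  s[2], -dot(s, eye)],
        [ u[0],  u[1],  u[2], -dot(u, eye)],
        [-f[0], -f[1], -f[2],  dot(f, eye)],
        [ 0.0,   0.0,   0.0,   1.0],
    ]
```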
Spherical projection | Rendering will be warped to produce the correct result for the spherical display, and touch coordinates are warped the other way around to match the screen. |
Projection for touch defaults to "same as rendering". Some hardware uses a different projection for touch than for rendering.
The Pufferfish property group will automatically set up different projections as required by the hardware; it only needs to be placed in the rendering slot to do that.
Often you need a square aspect for projection on a 16:9 display, but touch data comes in the 0..1 TUIO range relative to the 16:9 display, so it must be stretched. The StretchedTouchSquareRender option does the right thing without having to set up the projection twice.
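As a rough sketch of what that stretch involves (the coordinate conventions and helper name are assumptions for illustration, not the engine's API):

```python
def tuio_to_square(u, v, screen_w=16.0, screen_h=9.0):
    """Map a TUIO 0..1 touch coordinate on a 16:9 screen into the
    0..1 range of a square render region centered on that screen.
    Hypothetical helper for illustration only."""
    x = u * screen_w                      # to screen units
    offset = (screen_w - screen_h) / 2.0  # square is screen_h wide, centered
    return (x - offset) / screen_h, v     # v is already square-relative
```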
It is possible to apply the sphere mapping on a per-layer basis as a camera override.
To do this, set the Projection property in the camera node to spherical projection. This will give you most of the options found in the Previs Scene Screen Render Options.
Unfortunately, touch is not supported in this mode, as the magic that makes spherical touch work happens while processing the render setup, not on a per-layer basis. The RenderOffsetAuto feature does not work either (see Reprojection below).
All property groups provide SphereAngleOut and LensAngleOut as output properties.
For many projection models the values are the same as the input property LensAngle. The purpose is to keep the output property correct while switching the projection model, so that bindings are not broken.
This is the traditional world-map projection that plots degrees of longitude and latitude at equal distances, putting a full world map in a 2:1 aspect ratio. It should not be confused with the Mercator projection used by map services like Google Maps, which gives more space to the pole regions.
LensAngleOut is meaningless and the same as SphereAngle.
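To make the mapping concrete, a minimal sketch of the equirectangular relation (axis conventions are assumptions for illustration, not the engine's internal code):

```python
import math

def equirectangular_uv(direction):
    """Map a unit view direction to texture coordinates in a 2:1
    equirectangular image: equal spacing in longitude and latitude."""
    x, y, z = direction
    lon = math.atan2(x, z)                # longitude, -pi..pi
    lat = math.asin(max(-1.0, min(1.0, y)))  # latitude, -pi/2..pi/2
    return lon / (2.0 * math.pi) + 0.5, lat / math.pi + 0.5
```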
This projection has the north pole at the center and the south pole stretched all around the edge of a circular image.
LensAngleOut is set to the same as SphereAngle.
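A sketch of the polar layout (an equidistant angle-to-radius mapping is assumed here for illustration):

```python
import math

def polar_uv(direction):
    """Map a unit view direction to a circular image with the north
    pole (+y assumed) at the center and the south pole on the rim."""
    x, y, z = direction
    theta = math.acos(max(-1.0, min(1.0, y)))  # 0 at north pole, pi at south
    r = 0.5 * theta / math.pi                  # radius grows linearly with angle
    phi = math.atan2(x, z)                     # direction around the pole
    return 0.5 + r * math.cos(phi), 0.5 + r * math.sin(phi)
```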
Fisheye is a variant of the polar projection that is based on the fact that many spherical displays are implemented by a video projector projecting through a fisheye lens onto a screen. In this configuration, the projector is usually purposefully located below or above the center of the sphere, along the optical axis, for mechanical reasons or to achieve a better pixel distribution for the projected image.
While there is no reason to deliberately move the projector / lens off the optical axis, that of course happens in practice and can be compensated for. With these factors alone it is often possible to calibrate a display quite satisfyingly, although there are usually further lens distortions not captured by this model.
LensAngleOut is calculated from SphereAngle and LensCenter, and reflects the minimum lens angle required to fully cover the screen. The input property LensAngle must be set to the actual lens angle, which will usually be slightly larger.
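For orientation, the classic equidistant fisheye model maps the angle from the optical axis linearly to image radius. A simplified sketch (the engine's actual model with LensCenter and off-axis compensation is more involved):

```python
import math

def fisheye_uv(direction, lens_angle_deg=180.0):
    """Equidistant fisheye: image radius is proportional to the angle
    between the view direction and the optical axis (+z assumed).
    Real lenses add further distortion terms not modeled here."""
    x, y, z = direction
    theta = math.acos(max(-1.0, min(1.0, z)))       # angle off the axis
    r = 0.5 * theta / (math.radians(lens_angle_deg) / 2.0)
    phi = math.atan2(y, x)
    return 0.5 + r * math.cos(phi), 0.5 + r * math.sin(phi)
```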
Specialized nodes that simplify setting up directly supported products. The parameters can be copied from the device datasheet.
Some projections have the "aspect" property.
These projections create an image that is circular in some way. For projection you want this image to be centered on the screen, which may be 16:9 and not 1:1. But the touch device will usually deliver touches in a TUIO 0..1 coordinate system which is then spread to screen pixels in the non-1:1 aspect ratio, so for touch you might prefer the input to be stretched.
This property group allows controlling this:
The default mode for rendering is FullSphere, rendering the 3D scene in six directions from the center at a 90° field of view. This is optimal for displays near the full 360°, as it minimizes distortions, but it requires six render steps, which is costly on the CPU and GPU.
For displays with a smaller angle it is enough to render the scene once with a modified FOV angle: the OneFace option. One can easily imagine how things at the edge get distorted a lot; as we approach 180° with a virtual camera in the center of the sphere, distortions become infinite. If we double the resolution, we can go from 90° to 127° while maintaining the same quality for the pixels at the pole. This is a big performance win because we render into only one target of twice the resolution instead of six targets, so there are fewer render calls and fewer total pixels. This is worth doing up to about 150°; beyond that the distortions become too severe, and the approach breaks apart completely near 180°.
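The 90° to 127° figure can be checked from the tangent of the half-angle, which determines how wide the render target must be for a given quality at the image center (where the pole ends up):

```python
import math

# The image half-width of a perspective render grows with tan(fov/2):
# at 90 deg, tan(45 deg) = 1. Doubling the resolution affords
# tan(fov/2) = 2 at the same per-pixel quality at the center.
fov = 2.0 * math.degrees(math.atan(2.0))
print(f"{fov:.1f} deg")  # ~126.9 deg -- the '127' quoted above
```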
But that's with the camera at the center of the sphere. If we put the camera at the bottom of the sphere, we can render close to 360° without extreme distortions in latitude: the only real waste is the region of the south pole being stretched across the edge of the screen, making this impractical beyond 180°.
The RenderOffset property allows controlling the position of the virtual camera independently of the position of the physical projector, so one can use the optimal virtual camera position for minimizing distortion while rendering, and a different position for the physical projector to match the device with its practical limitations. A value of 100% moves the camera to the bottom of the sphere, which seems to be the optimal position for improving pixel utilization.
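The geometric reason the bottom position works so well is the inscribed angle theorem: for a camera on the sphere surface, a point seen at angle a from the sphere center appears at only a/2 from the camera's axis. A minimal 2D check of that relation (illustration only, not engine code):

```python
import math

def angle_at_bottom_camera(polar_angle_deg, radius=1.0):
    """Angle between the up axis and a sphere point at the given polar
    angle (measured from the top, at the center), as seen by a camera
    at the bottom of the sphere. Comes out as exactly half the input."""
    a = math.radians(polar_angle_deg)
    p = (radius * math.sin(a), radius * math.cos(a))  # point on the circle
    cam = (0.0, -radius)                              # camera at the bottom
    dx, dy = p[0] - cam[0], p[1] - cam[1]
    return math.degrees(math.atan2(dx, dy))

print(round(angle_at_bottom_camera(120.0), 6))  # 60.0 -- half the central angle
```

This halving of the required frustum angle is why the bottom position can cover display angles far beyond what a centered OneFace camera can.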
The OneFaceParallel option changes the camera to an orthogonal projection onto the sphere from outside, which may be another option to choose. In this case the RenderOffsetY is not used.
Both off-center options need to know how large the sphere is, so the RenderRadius must be specified. For a camera at the center of the sphere, the distance does not matter: you can scale the sphere and the objects on top of the sphere as large as you like, as everything will always project correctly to the center of the sphere. But with an off-center virtual camera, the projection is only correct for objects located on the sphere at the specified radius. As long as objects are reasonably near the radius this poses no problem. It may even look better than what happens if you correctly project something like a rectangle onto a sphere. Check out the Reprojection feature to learn more about dealing with that.
Moving the virtual camera only changes how the rendered image looks. The (intended) distortion from the off-center virtual camera is corrected while the mapping is calculated. As long as all geometry is on the sphere, the distortion is not noticeable in the final image, except for a hopefully better distribution of rendered pixels. The "DebugRendering" property may be used to see how the rendered image looks without the mapping applied, which can give some insight into pixel distribution and into distortions caused by objects not being on the sphere.