
Scene Render Pass Patch

The Scene Render Pass patch renders a scene object and any of its children. You can render up to four textures (render targets) with a single Scene Render Pass.

To specify how many textures the Scene Render Pass should render, right-click on the patch and select Number of outputs.

Inputs

Size: Set the size of the texture output. For fixed sizing mode, this value is in pixels. For relative sizing mode, this value is a factor relative to the screen, camera or background.

Sizing Mode: Define whether the size of the output texture is fixed or relative to the screen, camera or background.

Color Channel: The selected combination of color channels. Choose from RGBA, RGB, RG or A.

Data Type: Modify the texture's data type depending on the precision level needed. Choose Unsigned Byte or Half Float.

Default Color: The background color used if no texture is connected to the Background input.

Background: Connect to the output from another render pass, a texture or a visual shader network.

Scene Object: Connect to the output from a scene object to render the object and any of its children. Every object in the Scene panel can be rendered by connecting a patch representing the Device scene object to the Scene Object input. Patches representing individual objects can also be connected.

Tags: Filter the rendered scene objects by their tags. When one or more tags are specified, the Scene Render Pass patch only renders objects matching those tags. Leave this field blank to apply no filtering.

Outputs

Texture: The rendered texture.

Texture 2: Optional. The texture rendered to the second output.

Texture 3: Optional. The texture rendered to the third output.

Texture 4: Optional. The texture rendered to the fourth output.

If a light is not the child of an object being rendered in the Scene panel, it won’t affect objects being rendered as part of the pipeline.

Example: single render target

In the example below we’ve already used the Shader Render Pass patch to add a blurred appearance to the effect. A Scene Render Pass patch renders 3D text in front of the blurred effect, unaffected by the blur:

Below, you can see a 3D text object listed in the Scene panel as 3dText0 and visible in the Viewport. It’s not yet visible in the Simulator because it hasn’t been connected to the render pipeline using patches:

A patch representing the Device object in the Scene panel and the Scene Render Pass patch will render the text.

The patch representing the Device object will pass all child objects to the Scene Render Pass patch. When connected to a patch representing the Screen Output property of the Device, the text will render in the scene. Here’s how the graph would look in the Patch Editor:

The Scene Render Pass has been added between the Shader Render Pass and Screen Output patches to combine the children of the Device with the blur effect created by the Shader Render Pass patch.

Example: multiple render targets

In this example, we'll use multiple render targets to create a glow effect using a single Scene Render Pass patch and a blur processor.

First create a shader with two outputs. One should output a color and the other should output a brighter color. For example:

#import <lights>

// Material parameters
struct PhongMaterialParameters {
  vec3 emission;
  vec3 ambientFactor;
  vec3 diffuseFactor;
  vec3 specularFactor;
  float shininess;
  float occlusion;
};

vec3 applyPhong(
    std::LightData light,
    vec3 normal,
    vec3 view,
    PhongMaterialParameters material) {
  vec3 reflected = -reflect(light.toLightDirection, normal);

  float LdotN = dot(light.toLightDirection, normal);
  float RdotV = max(dot(reflected, view), 0.0);

  float diffuseFactor = max(LdotN, 0.0);
  vec3 diffuse = material.diffuseFactor * (light.intensity * diffuseFactor);

  float specularFactor = pow(RdotV, material.shininess) * step(0.0, LdotN); // do not light backface
  vec3 specular = material.specularFactor * (light.intensity * specularFactor);

  return material.occlusion * diffuse + specular;
}


vec4 shade(
    optional<std::Texture2d> diffuseTexture,
    optional<std::Texture2d> normalTexture,
    optional<std::Texture2d> specularTexture,
    float smoothness,
    optional<std::Texture2d> emissiveTexture) {
  float shininess = mix(1.0, 100.0, pow(smoothness * 0.01, 2.0)); // non-linear mapping from [0,100] to [1,100]

  // Attributes
  vec2 uv = std::getVertexTexCoord();
  optional<vec3> sampledNormal = normalTexture.sample(uv).xyz * 2.0 - 1.0;
  optional<vec3> mappedNormal = normalize(std::getTangentFrame() * sampledNormal);
  vec3 localNormal = mappedNormal.valueOr(std::getVertexNormal());
  vec4 localPosition = std::getVertexPosition();

  // Material parameters
  vec4 diffuseAndOpacity = diffuseTexture.sample(uv).valueOr(vec4(1.0));
  vec4 specularAndShininess = specularTexture.sample(uv).valueOr(vec4(1.0));
  PhongMaterialParameters material;
  material.emission = emissiveTexture.sample(uv).rgb.valueOr(vec3(0.0));
  material.ambientFactor = diffuseAndOpacity.rgb;
  material.diffuseFactor = diffuseAndOpacity.rgb;
  material.specularFactor = specularAndShininess.rgb;
  material.shininess = clamp(specularAndShininess.a * shininess, 1.0, 100.0);
  material.occlusion = 1.0;

  // Camera-space normal, position, and view
  vec3 csNormal = normalize(fragment(std::getNormalMatrix() * localNormal));
  vec4 csPosition = fragment(std::getModelViewMatrix() * localPosition);
  vec3 csView = normalize(-csPosition.xyz); // csCamera is at vec3(0,0,0)

  // color
  vec3 color = material.emission + material.ambientFactor * std::getAmbientLight().rgb;
  if (std::getActiveLightCount() > 0) color += applyPhong(std::getLightData0(csPosition.xyz), csNormal, csView, material);
  if (std::getActiveLightCount() > 1) color += applyPhong(std::getLightData1(csPosition.xyz), csNormal, csView, material);
  if (std::getActiveLightCount() > 2) color += applyPhong(std::getLightData2(csPosition.xyz), csNormal, csView, material);
  if (std::getActiveLightCount() > 3) color += applyPhong(std::getLightData3(csPosition.xyz), csNormal, csView, material);
  return vec4(color, diffuseAndOpacity.a);
}

void main(optional<std::Texture2d> diffuseTexture, optional<std::Texture2d> normalTexture, optional<std::Texture2d> specularTexture, float smoothness, optional<std::Texture2d> emissiveTexture, out vec4 Color, out vec4 BrightColor) {
  vec4 color = shade(diffuseTexture, normalTexture, specularTexture, smoothness, emissiveTexture);
  float brightness = dot(color.xyz, vec3(0.2126, 0.7152, 0.0722)); // perceived luminance (Rec. 709 weights)
  Color = color;
  BrightColor = mix(vec4(0.0), color, step(1.0, brightness)); // keep only pixels at least as bright as 1.0
}
        

In this shader example we use the Phong material demonstrated here.

Next, add a 2D text object to your scene. Create a material for it and then assign the shader to the material.

In this example, the shader is called Glow:

Insert a Scene Render Pass patch and set the number of outputs to 2. Drag the Device scene object into the Patch Editor and connect it to the Scene Render Pass patch. The scene object is now being rendered by the render pass.

Render Target 1 will contain the result of Color from the above shader and Render Target 2 will contain the result of BrightColor. This is because Color is our first output and BrightColor is our second output.
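
As a stripped-down illustration of that mapping (a sketch only, not the Phong shader above), the out parameters of a shader's main function feed the render targets in the order they are declared:

// Sketch: the first out parameter fills Texture (Render Target 1),
// the second fills Texture 2 (Render Target 2).
void main(optional<std::Texture2d> diffuseTexture, out vec4 Color, out vec4 BrightColor) {
  vec2 uv = std::getVertexTexCoord();
  Color = diffuseTexture.sample(uv).valueOr(vec4(1.0)); // written to the first render target
  BrightColor = Color * 2.0;                            // written to the second render target
}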

To create a Glow effect we need to blur the BrightColor render target. We can do that by downloading a blur patch from the AR Library. Alternatively we can create our own blur, but we won’t be doing that in this tutorial.
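
If you are curious what a hand-rolled version involves, a very rough single-pass box blur in SparkSL might look like the sketch below. It is illustrative only: texelSize (the size of one texel in UV space, i.e. 1 divided by the texture resolution) is an assumed parameter you would have to supply yourself, and the AR Library blur is considerably more refined.

// Rough 3x3 box blur sketch, not the AR Library patch.
// texelSize is assumed to be 1.0 / resolution of the incoming texture.
void main(optional<std::Texture2d> brightTexture, vec2 texelSize, out vec4 Color) {
  vec2 uv = std::getVertexTexCoord();
  vec4 sum = vec4(0.0);
  for (int x = -1; x <= 1; x++) {
    for (int y = -1; y <= 1; y++) {
      vec2 offset = vec2(float(x), float(y)) * texelSize;
      sum += brightTexture.sample(uv + offset).valueOr(vec4(0.0));
    }
  }
  Color = sum / 9.0; // average of the nine samples
}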

Connect the patches as follows:

The result should look similar to this:
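
A common way to finish a glow like this is to add the blurred highlights back on top of the base render before sending it to Screen Output. Expressed as a SparkSL sketch rather than patches (the parameter names are illustrative and this may differ from the exact blend used in the tutorial's graph):

// Additive composite sketch: base scene color plus blurred bright pass.
void main(optional<std::Texture2d> baseTexture, optional<std::Texture2d> blurredBrightTexture, out vec4 Color) {
  vec2 uv = std::getVertexTexCoord();
  vec4 base = baseTexture.sample(uv).valueOr(vec4(0.0));
  vec4 glow = blurredBrightTexture.sample(uv).valueOr(vec4(0.0));
  Color = vec4(base.rgb + glow.rgb, base.a); // additive blend produces the glow
}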

Example: Filtering objects by tag

Use the Tags field in the Scene Render Pass patch to select which objects from the same scene hierarchy to render to the screen. Before using this function you need to tag your objects. Learn how.

In the example below, we added two different text objects to the scene, with two different tags. We labelled:

  • The pink text, Text1.
  • The green text, Text2.

In the Tags field, entering Text1 renders the pink text to the screen. Entering Text2 renders the green text to the screen.

Adding flexibility

The Tags field offers flexibility in a few different ways. You can:

  • Add multiple tags to the field separated by spaces. In the above example, adding Text1 and Text2, separated by a space, would render both text objects.
  • Add the suffix .subtree to a tag in this field to render the tagged object and all its child objects. In the example above, using the tag Text1.subtree would render the pink text and any child objects.

The Tags field also lets you use the same Scene Render Pass patch to render objects that live under both the Device and the Plane Tracker. You don’t need to specify which parent the object lives under (Device or Plane Tracker). If the tagged object exists, it will be rendered, wherever it is in the scene hierarchy.
