The Camera Depth Texture

In Spark AR Studio, the distance of each pixel from the camera can be extracted as a texture, known as the camera depth texture. You can use the depth data contained in this texture to create effects that respond to depth, such as post-processing and lighting effects.

The camera depth texture can also be used to occlude AR effects based on the real world, making virtual objects accurately appear behind real world objects.

Camera depth is enabled for the back camera only.

Creating the camera depth texture

To extract the camera depth texture:

  1. Select the Camera in the Scene panel.
  2. In the Inspector, click + to the right of Camera Depth Extraction.

The Inspector with camera depth texture highlighted

In the Assets panel, you'll see a texture called cameraDepthTexture0:

The camera depth texture patch

Dragging the texture from the Assets panel into the Patch Editor creates a camera depth patch that you can use as an input to other patches.

The patch outputs the usual RGBA values of a texture asset patch. The depth data itself is stored in the R value, which has its own separate port.

You can either extract the depth data from the R port directly, or use an Unpack patch to isolate the R value from the RGBA port. The patch also outputs information on Tracking Quality.
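The unpacking step simply selects the first channel of each RGBA sample. Here's a conceptual sketch in plain Python rather than Spark AR's patch system (the pixel value is hypothetical, purely for illustration):

```python
# Conceptual sketch (not Spark AR API): isolating the R channel,
# as the Unpack patch does with the texture's RGBA output.
def unpack_r(rgba):
    """Return the R component, where the depth data is stored."""
    r, g, b, a = rgba
    return r

pixel = (0.42, 0.0, 0.0, 1.0)  # hypothetical sampled RGBA value
depth = unpack_r(pixel)
print(depth)  # 0.42
```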

Depth Color Overlay Template

To help you get started, we’ve provided a Depth Color Overlay template in the Spark AR Welcome Screen. This template blends a texture (animated and controlled by a block) with the camera depth texture to create a colorful, sonar-like pulse effect:

The effect is created with patches and shaders. For example, the remapDepthShader is a shader code asset we programmed using Spark AR's shading language, SparkSL. If you're not familiar with SparkSL, you can reuse this asset in your own projects to normalize depth texture values.

Inside the Template

Once inside the template, open the Patch Editor to see how the camera depth texture is used in this effect. It’s helpful to focus on three key parts of the overall patch graph:

  • Camera Depth Utilities.
  • Color Overlay Block Outputs.
  • Render Pipeline.

The other parts of the graph expose the depth tracking quality (0 means not tracking, 2 means high quality) and which camera is active (Front/Back). These properties control which instructions are displayed on screen, prompting the user to trigger the effect. We’ve also added an interactive slider via the Patch Editor.

Camera Depth Utilities

Patches in the camera depth utilities box

Let’s look at how some of the patches in Camera Depth Utilities are set up to extract and remap/normalize the depth data from cameraDepthTexture0. Normalizing depth values is a useful operation for many depth-based effects.

In our setup, the orange remapDepthShader remaps the raw values from the R_F32 float depth texture to a 0-1 range, using the provided near and far distances as min and max values.
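The remapping itself is a simple linear normalization. Here it is sketched in Python rather than SparkSL (the actual asset is a SparkSL shader; `near` and `far` stand in for the provided distance inputs):

```python
def remap_depth(raw_depth, near, far):
    """Linearly remap a raw depth value to the 0-1 range,
    using near and far as the min and max values, and
    clamping anything outside that interval."""
    t = (raw_depth - near) / (far - near)
    return min(max(t, 0.0), 1.0)

print(remap_depth(0.5, 0.0, 2.0))  # 0.25: a quarter of the way from near to far
```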

The Blur patch group smooths the depth texture, a common step that reduces noise in the depth data. The output of this part of the graph is the Remapped Depth texture.
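The exact kernel the Blur patch group uses isn't shown here, but conceptually a blur averages each depth sample with its neighbours. A minimal one-dimensional box-blur sketch in Python (the real patch operates on a 2D texture; this just illustrates the averaging):

```python
def box_blur_1d(values, radius=1):
    """Average each sample with its neighbours within the given
    radius; windows are clipped at the edges of the list."""
    n = len(values)
    out = []
    for i in range(n):
        lo = max(0, i - radius)
        hi = min(n, i + radius + 1)
        window = values[lo:hi]
        out.append(sum(window) / len(window))
    return out

# A sharp spike gets spread across its neighbours:
print(box_blur_1d([0.0, 1.0, 0.0]))
```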

The Color Overlay

The purple patch representing the colorOverlay block outputs the Animation Speed, Color Gradient and Opacity properties of the texture. This information is then passed to the patches in the Render Pipeline box.

The Render Pipeline

The patches in the render pipeline box apply the colorful, animated gradient texture across the depth of the scene.

Patches in the render pipeline box

If we simply wanted to create a colorful, animated overlay, we could mix the Color Gradient, Animation Speed and Opacity properties of the texture with cameraTexture0 and output this to the Device. Our graph would then look like this:

Instead, because we want to apply this texture across the depth of the scene, we also need to input the Remapped Depth texture.

We do this by using the Blend patch to create a new texture (Remapped Depth + texture + cameraTexture0). We then Mix this new texture with cameraTexture0:
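The Mix patch performs a standard linear interpolation, and using the Remapped Depth value as the factor weights the overlay per pixel by depth. Here's a sketch in Python with hypothetical single-channel sample values (the template operates on full RGBA textures, and the Blend patch's exact blend mode isn't detailed here):

```python
def mix(a, b, t):
    """Linearly interpolate between a and b by factor t,
    as the Mix patch does: t=0 gives a, t=1 gives b."""
    return a * (1.0 - t) + b * t

# Hypothetical per-pixel sample values, for illustration only:
camera = 0.2   # cameraTexture0 sample
overlay = 0.9  # animated gradient texture sample
depth = 0.5    # Remapped Depth sample, used as the mix factor

blended = mix(camera, overlay, depth)
print(blended)  # 0.55: the overlay contributes more where depth is higher
```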

Configuring Camera Depth Texture Availability

The availability of the camera depth texture can be configured in two ways. Select cameraDepthTexture0 in the Assets panel, then choose one of the options from the Availability dropdown in the Inspector:

  • Only supported devices: your effect will only be delivered to devices that can compute depth estimation.
  • All devices: your effect will be delivered to all devices. Fallback depth data is used if the device can’t estimate depth.


The camera depth capability can only be used for an Instagram sharing experience, on compatible devices.

On iOS, camera depth only works on the following phones:

  • iPhone 12 Pro.
  • iPhone 12 Pro Max.
  • iPhone 13 Pro.
  • iPhone 13 Pro Max.

On Android, camera depth works on:

Note that on Android it takes around 3 seconds of initialization time to get accurate depth from the device, or longer if the user keeps the device completely still, since depth estimation on Android relies on device motion.
