Multiplane tracking enables you to create effects where multiple planes, both vertical and horizontal, are identified and tracked in the scene.
There are two ways to access this capability:
- Use the patch editor to create effects that place objects from the Scene panel onto vertical and horizontal planes. This is the best method for simple effects where only one or two objects are being placed.
- Use the scripting API to create effects that place instances of dynamically instantiated blocks onto vertical and horizontal planes. This is the best method for effects where many different points need to be tracked, or multiple instances of an object need to be placed.
Spark AR Studio includes two templates to help you get started with this capability. One makes use of the Patch Editor and the other uses the scripting API. Both demonstrate best practice for instructions and plane visualizations that help users understand your effect and place objects accurately.
Patch template
The patch template demonstrates how to place a single object into a user’s environment using patches.
To get started, import an asset to the Assets panel (try the Asset Library if you are looking to download a 3D model). Once it's there, drag it onto the dragHere object in the Scene panel, then delete the deleteMe object.

The patch graph contains some distinct areas of functionality. These have been split into sections and separated with comment boxes:
- The Screen Tap, Hit Test and Point Trackable patches are used to identify and track a selected plane following a screen tap. Once the tap is made, the patches calculate position and rotation and adjust the 3D model accordingly. This part of the graph also controls the object's visibility.
- Your 3D object will not be visible until it is placed onto a plane. The patch graph contains a debug pulse which adjusts the visibility manually and allows you to see your object while making changes in Studio. Remember to turn it off before testing or publishing your effect!
- Your 3D object can be rotated and scaled manually thanks to the Screen Pinch patch in the Scale group. This group also contains the Animation patch that makes your object bounce into the scene, rather than suddenly appear.
- The render pipeline handles the output from the blocks containing the instruction and tracking visualization functionality. Their output is merged with the device camera texture and the 3D object to create the images that the user will see on their screen.
Patch-based multiplane implementations are an easy way to create multiplane world effects without having to write code. However, there are some limitations:
- Patches support a single object only, though that object can still be anchored to both vertical and horizontal planes.
- In patches, the tracking state is not exposed. This means you will need to use the API to instruct users to move their phone to initiate tracking, and to hide the instructions once tracking has initialized. The patch template includes a pre-built example, and a minimal scripting sketch follows this list.
- In patches, detected surfaces aren't exposed either, so the API is needed to detect them and dynamically instantiate the plane visualization. The patch template includes an example of this as well.
- Patches don’t support drag to move; they use tap-to-place only.
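For orientation, here is roughly what the instruction-binding side of that pre-built example boils down to. This is a minimal sketch, not the template's actual code: the instruction token is an assumption (check the list of valid instruction tokens in the documentation), the driving boolean is a placeholder, and the Instruction capability must be enabled in the project's properties.

```javascript
// Minimal sketch: show a "move your phone" style instruction until
// tracking is ready. The token name below is an assumption; look up
// the exact instruction tokens in the Spark AR documentation.
const Instruction = require('Instruction');
const Reactive = require('Reactive');

// In a real effect this boolean would be driven by your tracking-ready
// logic; here it is a constant placeholder signal for illustration.
const trackingNotReady = Reactive.val(true);

Instruction.bind(trackingNotReady, 'move_phone_slowly');
```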
Scripting template
The scripting template uses dynamic instantiation to create objects in the scene and attach them to instantiated planes. Because the objects are dynamically instantiated, they must be wrapped in a block.
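For a sense of the underlying pattern, a minimal dynamic instantiation sketch is below. It is not the template's code: the parent object name is hypothetical, the block name object1 comes from the template, and dynamic instantiation must be enabled in the project's capabilities.

```javascript
// Sketch of dynamic block instantiation. 'object1' is one of the
// template's blocks; 'placementParent' is a hypothetical scene object.
const Scene = require('Scene');
const Blocks = require('Blocks');

(async function () {
  const [parent, block] = await Promise.all([
    Scene.root.findFirst('placementParent'), // hypothetical parent
    Blocks.instantiate('object1'),
  ]);
  // Attach the new block instance to the scene so it renders.
  await parent.addChild(block);
})();
```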
This is how the scripting template looks when used on a device.

To get started with customizing the scripting template:
- Replace objectTexture1, objectTexture2 and objectTexture3 in the Assets panel with textures that represent the objects being placed into the scene. These textures populate the UI Picker that is used to select an object. Make sure the textures keep their name by using the Replace option.
- Next, you'll add your 3D objects. Double-click the object1 block to open it, then replace the 3D asset by dragging a new asset onto the dragHere object and deleting deleteMe from the Scene panel. Remember that object1 should match objectTexture1 from the previous step.
- Save and close the block.
- Do the same for object2 and object3.
For best results, scale the object in each block consistently with the other objects in the scene. Each block contains a hidden cube that you can use as a size reference.
If your object has animation, you may want to use a second hidden cube to define the full bounds of the animation. This helps scale the placement indicator appropriately.
The scripts use findUsingPattern to search for picker textures, and ('object' + index) to select objects. This means you can easily extend the effect to include more options by adding an additional texture and a corresponding block; just be sure to stick to the existing naming conventions.
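A condensed sketch of that lookup-and-spawn pattern is below, assuming the texture and block names described above. The template's actual code does considerably more (placement, tracking and selection), and the NativeUI Picker capability must be enabled in the project.

```javascript
// Sketch: find the picker textures by name pattern, show them in the
// native UI picker, and instantiate the matching block on selection.
const Textures = require('Textures');
const Blocks = require('Blocks');
const NativeUI = require('NativeUI');

Textures.findUsingPattern('objectTexture*').then((textures) => {
  // Sort by name so objectTexture1..3 line up with blocks object1..3.
  textures.sort((a, b) => a.name.localeCompare(b.name));

  const picker = NativeUI.picker;
  picker.configure({
    selectedIndex: 0,
    items: textures.map((t) => ({image_texture: t})),
  });
  picker.visible = true;

  picker.selectedIndex.monitor().subscribe((event) => {
    // Picker index 0 maps to the block named 'object1', and so on.
    Blocks.instantiate('object' + (event.newValue + 1));
    // (The template also parents, places and tracks the new instance.)
  });
});
```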
There are also three optional behaviors in the scripting template that can be enabled or disabled in the script. They are set as boolean constants in the Main.js file (sketched after this list):
- Camera scaling, which is on by default, maintains the same object scale in screen space during pre-placement and movement. This means the object is actually scaled up when moving further away and scaled down when moving closer. This helps mitigate some of the artifacts from the tracking algorithms, and makes objects easier to see and manipulate. Disable this feature by switching CAMERA_DISTANCE_SCALING to false.
- Auto-rotate, which is off by default, automatically aligns the object's Y rotation to the surface it's on when placing and dragging. This is helpful for effects that place something flat against a wall, like a poster, but it may not suit every type of experience. Switch AUTO_ROTATE to true to enable this feature.
- Auto-place, which is off by default, skips the pre-placement ghosting step and places the object immediately once selected. While pre-placement mode offers extra precision when placing objects, auto-place has less friction at the increased risk of users placing objects on unintended surfaces. Switch AUTO_PLACE to true to enable this feature.
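With the defaults described above, the switches would read along these lines near the top of Main.js (the exact declaration style in the template may differ):

```javascript
// Optional behaviors, toggled in Main.js.
const CAMERA_DISTANCE_SCALING = true; // keep on-screen size constant with distance
const AUTO_ROTATE = false;            // align Y rotation to the surface on place/drag
const AUTO_PLACE = false;             // skip the pre-placement ghosting step
```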
Instructions and UX considerations
The multiplane templates include features designed to make your effects easy to use and understand for users. This is particularly important for world AR effects, where a range of different options and interactions are usually available.
The following UX features can be found in both the patch and script templates:
- Tracking animation — The templates include an animated instruction telling the user to move the phone to begin the experience. This allows tracking to initialize and the first planes to be detected.
- Grid planes — Once tracking is initialized, the scene will populate with grids that visualize planes detected with a high confidence level. Estimated planes and infinite planes are also used, but aren’t visualized as a grid.
- Manipulations — If you're letting users place objects into their environment, it's best practice to also let them move and rotate those objects afterwards. The included manipulations are drag or tap to move (tap only in the patch template), pinch to scale, and two-finger rotation. Because it can be difficult to manipulate small or distant objects, the scripting template lets users select an object with a tap and then perform their chosen gesture anywhere on screen. A minimal scripting sketch of the pinch and rotate gestures follows this list.
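Pinch and rotate handlers in Spark AR typically follow one pattern: pin the transform's last value, then combine it with the gesture signal. The sketch below shows the idea; placedObject is a hypothetical scene object standing in for whatever your effect manipulates, and the Touch Gestures capability must be enabled in the project.

```javascript
// Sketch of pinch-to-scale and two-finger rotation on a placed object.
// 'placedObject' is a placeholder name, not part of the templates.
const Scene = require('Scene');
const TouchGestures = require('TouchGestures');

Scene.root.findFirst('placedObject').then((obj) => {
  // Pinch to scale: multiply the last pinned scale by the gesture's scale.
  TouchGestures.onPinch().subscribe((gesture) => {
    const lastScale = obj.transform.scaleX.pinLastValue();
    obj.transform.scaleX = gesture.scale.mul(lastScale);
    obj.transform.scaleY = gesture.scale.mul(lastScale);
    obj.transform.scaleZ = gesture.scale.mul(lastScale);
  });

  // Two-finger rotation: the rotation signal is in radians; negate it
  // so the object follows the direction of the fingers.
  TouchGestures.onRotate().subscribe((gesture) => {
    const lastRotY = obj.transform.rotationY.pinLastValue();
    obj.transform.rotationY = gesture.rotation.mul(-1).add(lastRotY);
  });
});
```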
The scripting template has some additional UX features that aren’t available in the patch template:
- Picker UI — The native picker UI is used to select new objects to add to the scene, as well as delete existing ones.
- Placement guide — When an object is ready to be placed, it will appear ghosted with a Tap to place instruction. The object will be projected onto the nearest detected surface, and a tap will lock it into place.
- Selection state — When an object is tapped, it will enter a selected state where it can be manipulated or deleted. Objects can also be directly manipulated, during which they will be temporarily selected, and deselected on release. The grid and object placement indicator will also highlight while the object is selected.
- Fallback errors — If the tracking state is not ideal, additional instructions are shown to the user. These can be triggered by low light, a lack of feature points in the scene, or excessive device motion.
Experiment with the script and feature sets found in the two templates. Use them as the foundation for your own effect by swapping out assets or borrowing functionality and code snippets to create your own project.
Previewing and compatibility
As with other capabilities, multiplane effects can be previewed on Instagram, Facebook and the Spark AR Player apps. Note that multiplane tracking has the following device requirements:
- iOS — iOS 11+, iPhone 6S and above
- Android — Android 7.0; most high-end devices
Android doesn’t support estimated planes or infinite planes, so the experience on Android devices is based purely on high-confidence planes.
Scripting API
Scripted multiplane effects are made using Spark’s WorldTracking API. This API is based on the concept of trackables.
A trackable is something that the underlying tracking model can identify and track. Typically, trackables will be one of two types:
- Plane trackables — the planes detected by the tracking model.
- Point trackables — a single point in 3D space, added by creators, to which virtual content can be attached.
The multiplane scripting template contains many examples. The Object.js file includes everything related to creating, placing and destroying the objects themselves. TrackingViz.js includes the behavior for spawning the plane visualization.
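As a rough orientation only, subscribing to newly detected trackables might look something like the sketch below. The WorldTracking member names used here are assumptions rather than confirmed API; treat the reference docs linked below as authoritative.

```javascript
// Sketch only: the WorldTracking member names below are assumptions.
// Confirm the exact signatures in the WorldTracking reference docs.
const WorldTracking = require('WorldTracking');
const Diagnostics = require('Diagnostics');

// Hypothetical: react as the tracking model detects new trackables,
// e.g. to spawn a plane visualization as TrackingViz.js does.
WorldTracking.onTrackableAdded.subscribe((trackable) => {
  Diagnostics.log('Detected a new trackable');
});
```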
See the reference docs for the WorldTracking API for more information and examples.