Everything you need to know to create an interactive augmented reality experience.
Here are some examples of how you can add simple logic and interactivity to your effect, using the Patch Editor.
The simplest thing you can do is bind the movement of a scene object to a face tracker, so that the object moves with the face. To do this, insert a face tracker patch, then go to your object's properties and click the dot next to Rotation. You should now see a face tracker patch and an object patch in the Patch Editor.
Click the Rotation port on the face tracker patch and drag to the port on the object patch. Once the patches are connected, click Run to see it in action. Whenever your head moves, your object should move with it.
You can set an animation to begin and end when triggered by specific actions on a face. In the example below, the object is set to appear if someone opens their mouth. Then, if they lean their head to the right, the object changes position. If they close their mouth, the object disappears.
By connecting the Face Tracker, Head Rotation and Mouth Open patches via the Tracking Data ports, we're telling Spark AR Studio that it should be using the information from the face tracker to look for an open mouth or a leaned head.
Both Mouth Open and Head Rotation produce boolean signals, which means the gesture is either happening or not happening. Patches like Mouth Openness produce scalar signals instead.
If you want to use a boolean signal to start an animation, you'll need to use the Pulse patch to transform the signal into a discrete event.
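The boolean-to-event conversion the Pulse patch performs can be sketched in plain JavaScript (this is an illustration of the patch logic, not the Spark AR scripting API; the function names are ours):

```javascript
// Sketch of what the Pulse patch does: it watches a boolean signal
// and fires a discrete event only on the frame the value changes.
function makePulse() {
  let previous = false;
  // Returns "turnedOn"/"turnedOff" on a transition, or null otherwise.
  return function update(current) {
    const event =
      current && !previous ? "turnedOn" :
      !current && previous ? "turnedOff" : null;
    previous = current;
    return event;
  };
}

const mouthOpenPulse = makePulse();
mouthOpenPulse(false); // null: no event yet
mouthOpenPulse(true);  // "turnedOn" -> start the animation
mouthOpenPulse(true);  // null while the mouth stays open
mouthOpenPulse(false); // "turnedOff" -> e.g. hide the object
```

An animation driven this way starts once per gesture, rather than restarting on every frame the mouth is open.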
You can use logic to make your animation react to a specific set of conditions. For example, you can create a patch graph that makes a ball drop from top to bottom if the head is tilted in either direction. To do this, use the Or patch to indicate that the animation should occur if either action happens.
Here, the Or patch is placed after Head Rotation so that it can take both the inputs from the face tracker and trigger the movement if the head is leaned left or right.
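The condition this graph evaluates can be sketched in plain JavaScript (an illustration of the patch logic, not the Spark AR scripting API; the function name is ours):

```javascript
// Sketch of the Or patch's role in this graph: the ball should drop
// if the head is leaned left OR leaned right.
function shouldDropBall(leanedLeft, leanedRight) {
  return leanedLeft || leanedRight;
}

shouldDropBall(true, false);  // true  -> the drop animation runs
shouldDropBall(false, false); // false -> nothing happens
```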
You can use interactions such as a tap to make your effect respond to specific actions on the screen. In this example, we've used the Object Tap patch to make an object change position when someone taps directly on it. You can also use Screen Tap, Pan, Pinch or Rotate to trigger or control interactivity in your effect.
The Scale output in Screen Pinch is a number that represents the scale of the pinch, starting at one. The Pack patch combines this single value into a vector, which is then applied to the scale axes of the null object.
As a result, the object will get bigger or smaller when someone pinches the screen.
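What the Pack patch does here can be sketched in plain JavaScript (an illustration of the patch logic, not the Spark AR scripting API; the function name is ours):

```javascript
// Sketch of the pinch-to-scale graph: Screen Pinch outputs a single
// number (1 at the start of the gesture), and the Pack patch builds
// a vector from it so the same value drives all three scale axes.
function packScale(pinchScale) {
  return { x: pinchScale, y: pinchScale, z: pinchScale };
}

packScale(2);   // { x: 2, y: 2, z: 2 }   -> object doubles in size
packScale(0.5); // { x: 0.5, y: 0.5, z: 0.5 } -> object shrinks
```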
The 2D Offset port in the Screen Pan patch reports how far the user's fingers have moved across the device screen from their initial position. It returns an X and a Y value, because the device screen is a 2D surface.
The Divide patch manipulates the input from the Screen Pan patch. In this case, we've divided the input by 10. If we didn't do this, the object would move far too much.
The Divide patch can't be connected directly to the 3D Position port on the object, because its output is a Vec2 signal and the 3D Position port expects a Vec3. The Unpack and Pack patches convert the information from the Divide patch into a signal that a Vec3 port can receive.
The output of the Pack patch is connected to a port representing the Position of a 3D object.
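The divide-then-pack chain can be sketched in plain JavaScript (an illustration of the patch logic, not the Spark AR scripting API; the function name and the fixed z value are ours):

```javascript
// Sketch of the pan graph: divide the 2D screen offset to damp the
// movement, then pack the result into a Vec3 for the 3D Position
// port. The divisor of 10 matches the example in the text.
function panToPosition(offset2D, divisor, z) {
  // Divide: damp each component of the 2D offset.
  const damped = { x: offset2D.x / divisor, y: offset2D.y / divisor };
  // Unpack + Pack: rebuild the two components as a Vec3.
  return { x: damped.x, y: damped.y, z: z };
}

panToPosition({ x: 100, y: 50 }, 10, 0); // { x: 10, y: 5, z: 0 }
```

Without the division, a small finger movement in screen units would translate into a very large movement of the object.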
The Screen Rotate patch detects the rotation of the screen. We've set the other input of the Multiply patch to -1. This inverts the direction of the rotation, so the user can rotate the object in a more natural way.
The output of the Pack patch is connected to a port representing the Rotation of the 3D object.
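The inversion the Multiply patch performs can be sketched in plain JavaScript (an illustration of the patch logic, not the Spark AR scripting API; the function name is ours):

```javascript
// Sketch of the rotation graph: multiply the screen-rotation value
// by -1 so the object turns opposite to the raw gesture direction.
function invertRotation(screenRotation) {
  return screenRotation * -1;
}

invertRotation(0.5);  // -0.5
invertRotation(-1.2); // 1.2
```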
In the example patch graph below, each facial gesture triggers a different plane to become visible, creating an effect that can cycle through interactions that are tied to specific facial gestures.
You can also use face gestures to control aspects of your effect. Here, we've used Smile to trigger different hats to appear, but you could also use Blink, Eyebrows Lowered, Eyebrows Raised, Right Eye Closed, Left Eye Closed or Smile. These patches must be connected to a Face Tracker patch to work properly.
We've also used Counter to control when each hat appears. Counter allows you to track inputs, in this case smiles, and their count. We've set a maximum count to 3 here, which corresponds to three hat options we've added to the effect. Each hat is matched with a count number from 1 to 3, which triggers whether it's visible in the scene or not.
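The Counter logic can be sketched in plain JavaScript (an illustration of the patch logic, not the Spark AR scripting API; the names are ours, and we've assumed the count wraps from the maximum of 3 back to 1 as described above):

```javascript
// Sketch of the Counter patch driving three hats: each smile
// increments the count, wrapping at the maximum, and only the hat
// whose number matches the current count is visible.
function makeHatCounter(maxCount) {
  let count = 0;
  return {
    smile() { count = (count % maxCount) + 1; return count; },
    isHatVisible(hatNumber) { return hatNumber === count; },
  };
}

const hats = makeHatCounter(3);
hats.smile(); // count = 1 -> hat 1 visible
hats.smile(); // count = 2 -> hat 2 visible
hats.isHatVisible(2); // true
```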
The Runtime patch tracks the number of seconds that have passed since your effect started to run. One way you can use Runtime is to control how long something appears on the screen.
Here, Runtime connected with Offset tells the effect to check how long it has been running and compare that against an offset we define. We've used Less Than to set the offset to 3 seconds, which means the text will only be visible while the runtime is less than 3 seconds.
For this example, we've also used Screen Tap to reset the timer, so that the text reappears when someone taps the screen. After 3 seconds, it will disappear again.
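The runtime comparison with a tap-driven reset can be sketched in plain JavaScript (an illustration of the patch logic, not the Spark AR scripting API; the names are ours):

```javascript
// Sketch of the runtime graph: the text is visible while less than
// 3 seconds have passed since the last reset point, and a screen
// tap moves that reset point to the current runtime.
function makeTimedVisibility(durationSeconds) {
  let offset = 0; // runtime value captured at the last reset
  return {
    tap(runtime) { offset = runtime; },
    isVisible(runtime) { return runtime - offset < durationSeconds; },
  };
}

const text = makeTimedVisibility(3);
text.isVisible(1); // true  (1s since the effect started)
text.isVisible(4); // false (more than 3s have passed)
text.tap(4);       // a tap resets the timer
text.isVisible(5); // true again (only 1s since the tap)
```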
It's a good idea to add instructions when people need to interact with an effect to make it work. Find out more about adding Instructions.
This graph causes 2 different instructions to play: one telling the user to flip the camera, and the other telling them to tap the screen.
These instructions are particularly useful for effects that use the back camera and touch interactions.
The Camera Info patch signals whether the front or back camera is active.
The Option Picker contains two instruction tokens. Flip is the first instruction in the Option Picker, corresponding to the 0 value in the If Then Else patch. Tap is the second option, corresponding to the 1 value.
We've used the If Then Else patch to tell the Option Picker to play the Flip instruction if the front camera is active (0), and the Tap instruction when the back camera is active (1).
The Option Picker is then connected to the Token port in the Instruction patch - to determine which instruction shows.
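The If Then Else and Option Picker combination can be sketched in plain JavaScript (an illustration of the patch logic, not the Spark AR scripting API; "flip_camera" and "tap_screen" are placeholder token names, and the function name is ours):

```javascript
// Sketch of the camera-instruction graph: Camera Info reports which
// camera is active, If Then Else maps that to index 0 or 1, and the
// Option Picker turns the index into an instruction token.
function pickInstruction(frontCameraActive) {
  const index = frontCameraActive ? 0 : 1;        // If Then Else
  const tokens = ["flip_camera", "tap_screen"];   // Option Picker
  return tokens[index];
}

pickInstruction(true);  // "flip_camera" (front camera active)
pickInstruction(false); // "tap_screen"  (back camera active)
```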
We've used the Pulse, Animation and Switch patches to determine when the instructions are shown in the effect, and for how long. The Pulse patch detects when the effect starts to play, or the camera switches. The Duration input in the Animation patch tells the instruction to play for 5 seconds, based on a signal from the Pulse patch.
The Completed port is connected to the Turn Off port in the Switch patch, telling the instructions to turn off after 5 seconds. Switch is then connected to the Enabled port in the Instructions patch, to determine when instructions show in the effect.