FaceTrackingModule Overview

FaceTrackingModule

Enables the tracking of faces in three-dimensional space and exposes classes that describe key points of a detected face.

Importing this module automatically enables the Face Tracking capability within the project's Properties.

For two-dimensional face tracking, see the FaceTracking2D module.

Example

//============================================================================
// The following example demonstrates how to control the rotation and scale
// of an object using face rotation and mouth openness.
//
// Project setup:
// - Insert a plane into the Scene
//
// Required project capabilities:
// - Face Tracking (automatically added when the FaceTracking module is imported)
//
//============================================================================


// Load in the required modules
const Scene = require('Scene');
const FaceTracking = require('FaceTracking');

(async function() { // Enable async/await in JS [part 1]

  // Locate the plane in the scene
  const plane = await Scene.root.findFirst('plane0');

  // Create a reference to a detected face
  const face = FaceTracking.face(0);


  //==========================================================================
  // Control the rotation of the plane with the rotation of the face
  //==========================================================================

  // Create references to the transforms of the plane and face
  const planeTransform = plane.transform;
  const faceTransform = face.cameraTransform;

  // Bind the rotation of the face to the rotation of the plane
  planeTransform.rotationX = faceTransform.rotationX;
  planeTransform.rotationY = faceTransform.rotationY;
  planeTransform.rotationZ = faceTransform.rotationZ;


  //==========================================================================
  // Control the scale of the plane with mouth openness
  //==========================================================================

  // Create a reference to the mouth openness and amplify the signal using
  // the mul() and add() methods
  const mouthOpenness = face.mouth.openness.mul(4).add(1);

  // Bind the mouthOpenness signal to the x and y-axis scale signal of
  // the plane
  planeTransform.scaleX = mouthOpenness;
  planeTransform.scaleY = mouthOpenness;

})(); // Enable async/await in JS [part 2]

Properties

count
(get) count: ScalarSignal
(set) (Not Available)

The number of faces currently tracked in the scene, as a ScalarSignal.
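As a minimal sketch of how the count signal might be used, the snippet below logs a message whenever the number of tracked faces changes. It assumes the standard Diagnostics module is available in the project:

// Load the required modules
const FaceTracking = require('FaceTracking');
const Diagnostics = require('Diagnostics');

// Log a message whenever the number of tracked faces changes
FaceTracking.count.monitor().subscribe((event) => {
  Diagnostics.log('Tracked faces: ' + event.newValue);
});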

Methods

face
face(index: number): Face

Returns a Face object from the array of detected faces.
* index - the index of the Face object to retrieve from the array.
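For illustration, a Face reference can be created for each index up to the number of faces the project is configured to track; a Face object's signals only produce meaningful values while that face is actually being tracked. The sketch below assumes a maximum of 5 tracked faces, but the actual limit depends on the project's Face Tracking capability settings:

const FaceTracking = require('FaceTracking');

// Assumed maximum number of simultaneously tracked faces (project-dependent)
const MAX_FACES = 5;

// Create a reference to each potentially tracked face
const faces = [];
for (let i = 0; i < MAX_FACES; i++) {
  faces.push(FaceTracking.face(i));
}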

Classes

Cheek
Exposes key points of the cheek of a detected Face object. Key points are returned in the detected face's local coordinate system. Use Face.cameraTransform.applyToPoint() to convert the point to the camera's coordinate system.

Chin
Exposes key points of the chin of a detected Face object. Key points are returned in the detected face's local coordinate system. Use Face.cameraTransform.applyToPoint() to convert the point to the camera's coordinate system.

Eye
Exposes details and key points of the eye of a detected Face object. Key points are returned in the detected face's local coordinate system. Use Face.cameraTransform.applyToPoint() to convert the point to the camera's coordinate system.

Eyebrow
Exposes key points of the eyebrow of a detected Face object. Key points are returned in the detected face's local coordinate system. Use Face.cameraTransform.applyToPoint() to convert the point to the camera's coordinate system.

Face
Exposes details and key points of a three-dimensionally tracked face.

Forehead
Exposes key points of the forehead of a detected Face object. Key points are returned in the detected face's local coordinate system. Use Face.cameraTransform.applyToPoint() to convert the point to the camera's coordinate system.

Mouth
Exposes details and key points of the mouth of a detected Face object. Key points are returned in the detected face's local coordinate system. Use Face.cameraTransform.applyToPoint() to convert the point to the camera's coordinate system.

Nose
Exposes key points of the nose of a detected Face object. Key points are returned in the detected face's local coordinate system. Use Face.cameraTransform.applyToPoint() to convert the point to the camera's coordinate system.
