In our JavaScript app, after importing TensorFlow.js, we can load our model. The following code is JavaScript; its syntax is similar to Python's:
import * as tf from '@tensorflow/tfjs';
// tf.loadModel was deprecated; tf.loadLayersModel is the current API
const model = await tf.loadLayersModel(MOBILENET_MODEL_PATH);
We will also use a library called face-api.js to extract faces:
import * as faceapi from 'face-api.js';
await faceapi.loadTinyFaceDetectorModel(DETECTION_MODEL_PATH);
Once both models are loaded, we can start processing images from the user:
const video = document.getElementById('video');
const detection = await faceapi.detectSingleFace(video, new faceapi.TinyFaceDetectorOptions());
if (detection) {
  const faceCanvases = await faceapi.extractFaces(video, [detection]);
  const values = await predict(faceCanvases[0]);
}
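The `predict` function called above is our own, not part of face-api.js. A minimal sketch of the two pure steps it needs, assuming a MobileNet-style classifier whose input pixels are normalized to [-1, 1] (the helper names `normalizePixels` and `topK` are hypothetical):

```javascript
// Hypothetical helpers for a predict(canvas) implementation; the names and
// the [-1, 1] normalization range are assumptions, not face-api.js APIs.

// Map raw RGBA canvas pixels (0..255) to RGB floats in [-1, 1],
// the input range MobileNet-style models typically expect.
function normalizePixels(rgba) {
  const rgb = new Float32Array((rgba.length / 4) * 3);
  for (let i = 0, j = 0; i < rgba.length; i += 4) {
    rgb[j++] = rgba[i] / 127.5 - 1;     // R
    rgb[j++] = rgba[i + 1] / 127.5 - 1; // G
    rgb[j++] = rgba[i + 2] / 127.5 - 1; // B (alpha channel dropped)
  }
  return rgb;
}

// Pick the k highest-scoring entries from the model's output probabilities.
// `values` is a plain array, e.g. Array.from(await output.data()).
function topK(values, k) {
  return values
    .map((probability, classIndex) => ({ classIndex, probability }))
    .sort((a, b) => b.probability - a.probability)
    .slice(0, k);
}
```

In practice, `predict` would turn the face canvas into a tensor (for example with `tf.browser.fromPixels`), run `model.predict`, and pass the resulting probability array to `topK`.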
Here, we grab a frame from the video element displaying the user's webcam feed. The face-api.js library will attempt to detect a face in this frame. If it detects a face, the part of the...