For our example app, we will work with bitmap images. A bitmap can be thought of as a matrix of raw pixels, and it is a format that most image libraries on Android accept. We obtain this bitmap from textureView, the view that displays the live video feed from the camera:
Bitmap bitmap = textureView.getBitmap(previewSize.getHeight() / 4, previewSize.getWidth() / 4);
We do not capture the bitmap at full resolution. Instead, we divide each dimension by 4 (a factor picked by trial and error). Choosing a size that is too large would make face detection very slow, increasing the overall inference time of our pipeline.
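As a point of reference, the capture step can be guarded so that we only grab a frame once the TextureView surface is ready. This is a minimal sketch, assuming textureView and previewSize are fields initialized during camera setup; the isAvailable() check is our addition, not part of the original snippet:

import android.graphics.Bitmap;
import android.view.TextureView;

// Sketch: grab a quarter-resolution frame from the preview surface.
// textureView and previewSize are assumed to be fields set up
// during camera initialization; the factor of 4 matches the text above.
private Bitmap captureDownscaledFrame() {
    if (!textureView.isAvailable()) {
        return null; // surface not ready yet; skip this frame
    }
    return textureView.getBitmap(previewSize.getHeight() / 4,
                                 previewSize.getWidth() / 4);
}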
We then build a Frame from the bitmap. This wrapper is required to pass the image to faceDetector:
Frame frame = new Frame.Builder().setBitmap(bitmap).build();
SparseArray<Face> faces = faceDetector.detect(frame);
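The snippet above assumes that faceDetector has already been constructed. With the Mobile Vision API, that setup might look like the following sketch; the specific flags (fast mode, tracking disabled) are assumptions for illustration, not necessarily the app's settings:

import android.content.Context;
import com.google.android.gms.vision.face.FaceDetector;

// Sketch: build a Mobile Vision face detector.
// context is assumed to be the hosting Activity or Application context.
FaceDetector faceDetector = new FaceDetector.Builder(context)
        .setMode(FaceDetector.FAST_MODE)   // trade some accuracy for speed
        .setTrackingEnabled(false)         // we detect frame by frame
        .build();

if (!faceDetector.isOperational()) {
    // The native detector models may still be downloading the first
    // time the app runs; handle this case before calling detect().
}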
Then, for each Face in faces, we can crop the user's face out of the bitmap. The cropFaceInBitmap helper function, provided in the GitHub repository, ...
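The actual implementation ships with the repository; purely as an illustration, a helper along these lines could clamp each face's bounding box to the bitmap and extract that region. The method body below is our sketch, not the repository code:

import android.graphics.Bitmap;
import android.graphics.PointF;
import android.util.SparseArray;
import com.google.android.gms.vision.face.Face;

// Hypothetical stand-in for the repository's cropFaceInBitmap helper:
// clamp the detected bounding box to the bitmap, then crop that region.
static Bitmap cropFaceInBitmap(Face face, Bitmap bitmap) {
    PointF pos = face.getPosition(); // top-left corner of the face box
    int x = Math.max(0, (int) pos.x);
    int y = Math.max(0, (int) pos.y);
    int width = Math.min((int) face.getWidth(), bitmap.getWidth() - x);
    int height = Math.min((int) face.getHeight(), bitmap.getHeight() - y);
    if (width <= 0 || height <= 0) {
        return null; // box lies entirely outside the bitmap
    }
    return Bitmap.createBitmap(bitmap, x, y, width, height);
}

// Usage: iterate the SparseArray returned by detect().
for (int i = 0; i < faces.size(); i++) {
    Bitmap faceBitmap = cropFaceInBitmap(faces.valueAt(i), bitmap);
    // ... pass faceBitmap to the rest of the pipeline
}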