TITLE: Detecting Faces with MediaPipe Face Detector (JavaScript)
DESCRIPTION: This snippet initializes the MediaPipe Face Detector task by loading the necessary WASM files and a pre-trained model. It then performs face detection on an HTML image element, returning the detected face locations and presence. Requires the @mediapipe/tasks-vision library.
SOURCE: https://github.com/google-ai-edge/mediapipe/blob/master/mediapipe/tasks/web/vision/README.md#_snippet_0
LANGUAGE: JavaScript
CODE:
```
const vision = await FilesetResolver.forVisionTasks(
  "https://cdn.jsdelivr.net/npm/@mediapipe/tasks-vision/wasm"
);
const faceDetector = await FaceDetector.createFromModelPath(vision,
  "https://storage.googleapis.com/mediapipe-models/face_detector/blaze_face_short_range/float16/1/blaze_face_short_range.tflite"
);
const image = document.getElementById("image") as HTMLImageElement;
const detections = faceDetector.detect(image);
```
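As a usage sketch (not part of the source README), the result returned by `detect()` can be inspected as follows; the field names assume the `FaceDetectorResult` shape published with @mediapipe/tasks-vision (`detections`, `boundingBox`, `categories`).
```
// Sketch: log each detected face's bounding box and confidence score.
for (const detection of detections.detections) {
  const box = detection.boundingBox;            // originX/originY/width/height in pixels
  if (!box) continue;                           // boundingBox is optional in the typings
  const score = detection.categories[0]?.score; // detection confidence
  console.log(`Face at (${box.originX}, ${box.originY}), ` +
              `${box.width}x${box.height}, score=${score}`);
}
```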
----------------------------------------
TITLE: Detecting Faces in Static Images with MediaPipe Face Detection (Python)
DESCRIPTION: This Python snippet demonstrates how to perform face detection on a list of static image files using MediaPipe. It initializes the `FaceDetection` model, processes each image, converts color formats, draws detected faces, and saves the annotated images. It requires OpenCV (`cv2`) and MediaPipe (`mp_face_detection`, `mp_drawing`).
SOURCE: https://github.com/google-ai-edge/mediapipe/blob/master/docs/solutions/face_detection.md#_snippet_1
LANGUAGE: Python
CODE:
```
IMAGE_FILES = []
with mp_face_detection.FaceDetection(
    model_selection=1, min_detection_confidence=0.5) as face_detection:
  for idx, file in enumerate(IMAGE_FILES):
    image = cv2.imread(file)
    # Convert the BGR image to RGB and process it with MediaPipe Face Detection.
    results = face_detection.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))

    # Draw face detections of each face.
    if not results.detections:
      continue
    annotated_image = image.copy()
    for detection in results.detections:
      print('Nose tip:')
      print(mp_face_detection.get_key_point(
          detection, mp_face_detection.FaceKeyPoint.NOSE_TIP))
      mp_drawing.draw_detection(annotated_image, detection)
    cv2.imwrite('/tmp/annotated_image' + str(idx) + '.png', annotated_image)
```
----------------------------------------
TITLE: Importing MediaPipe Face Detection and Drawing Utilities in Python
DESCRIPTION: This snippet imports the essential libraries for utilizing MediaPipe's Face Detection solution in Python. It includes `cv2` for image processing, the main `mediapipe` library, and specific modules for face detection and drawing utilities to facilitate visualization of detection results.
SOURCE: https://github.com/google-ai-edge/mediapipe/blob/master/docs/solutions/face_detection.md#_snippet_0
LANGUAGE: Python
CODE:
```
import cv2
import mediapipe as mp
mp_face_detection = mp.solutions.face_detection
mp_drawing = mp.solutions.drawing_utils
```
----------------------------------------
TITLE: Real-time Face Detection from Webcam with MediaPipe Face Detection (Python)
DESCRIPTION: This Python snippet shows how to perform real-time face detection from a webcam feed using MediaPipe. It captures video frames, processes them with the `FaceDetection` model, draws annotations, and displays the output. It handles frame reading, color conversion, and performance optimization by marking images as non-writeable. It requires OpenCV (`cv2`) and MediaPipe.
SOURCE: https://github.com/google-ai-edge/mediapipe/blob/master/docs/solutions/face_detection.md#_snippet_2
LANGUAGE: Python
CODE:
```
cap = cv2.VideoCapture(0)
with mp_face_detection.FaceDetection(
    model_selection=0, min_detection_confidence=0.5) as face_detection:
  while cap.isOpened():
    success, image = cap.read()
    if not success:
      print("Ignoring empty camera frame.")
      # If loading a video, use 'break' instead of 'continue'.
      continue

    # To improve performance, optionally mark the image as not writeable to
    # pass by reference.
    image.flags.writeable = False
    image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
    results = face_detection.process(image)

    # Draw the face detection annotations on the image.
    image.flags.writeable = True
    image = cv2.cvtColor(image, cv2.COLOR_RGB2BGR)
    if results.detections:
      for detection in results.detections:
        mp_drawing.draw_detection(image, detection)
    # Flip the image horizontally for a selfie-view display.
    cv2.imshow('MediaPipe Face Detection', cv2.flip(image, 1))
    if cv2.waitKey(5) & 0xFF == 27:
      break
cap.release()
```
----------------------------------------
TITLE: Detecting and Drawing Face Landmarks on Static Images using MediaPipe Face Mesh (Python)
DESCRIPTION: This Python example demonstrates how to use MediaPipe Face Mesh to detect and draw face landmarks on static images. It initializes the `FaceMesh` model with `static_image_mode=True` and processes a list of images, converting them to RGB, then drawing the face tessellation, contours, and irises on the annotated image. It requires OpenCV (`cv2`) and MediaPipe (`mediapipe`) as dependencies.
SOURCE: https://github.com/google-ai-edge/mediapipe/blob/master/docs/solutions/face_mesh.md#_snippet_0
LANGUAGE: Python
CODE:
```
import cv2
import mediapipe as mp
mp_drawing = mp.solutions.drawing_utils
mp_drawing_styles = mp.solutions.drawing_styles
mp_face_mesh = mp.solutions.face_mesh

# For static images:
IMAGE_FILES = []
drawing_spec = mp_drawing.DrawingSpec(thickness=1, circle_radius=1)
with mp_face_mesh.FaceMesh(
    static_image_mode=True,
    max_num_faces=1,
    refine_landmarks=True,
    min_detection_confidence=0.5) as face_mesh:
  for idx, file in enumerate(IMAGE_FILES):
    image = cv2.imread(file)
    # Convert the BGR image to RGB before processing.
    results = face_mesh.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))

    # Print and draw face mesh landmarks on the image.
    if not results.multi_face_landmarks:
      continue
    annotated_image = image.copy()
    for face_landmarks in results.multi_face_landmarks:
      print('face_landmarks:', face_landmarks)
      mp_drawing.draw_landmarks(
          image=annotated_image,
          landmark_list=face_landmarks,
          connections=mp_face_mesh.FACEMESH_TESSELATION,
          landmark_drawing_spec=None,
          connection_drawing_spec=mp_drawing_styles
          .get_default_face_mesh_tesselation_style())
      mp_drawing.draw_landmarks(
          image=annotated_image,
          landmark_list=face_landmarks,
          connections=mp_face_mesh.FACEMESH_CONTOURS,
          landmark_drawing_spec=None,
          connection_drawing_spec=mp_drawing_styles
          .get_default_face_mesh_contours_style())
      mp_drawing.draw_landmarks(
          image=annotated_image,
          landmark_list=face_landmarks,
          connections=mp_face_mesh.FACEMESH_IRISES,
          landmark_drawing_spec=None,
          connection_drawing_spec=mp_drawing_styles
          .get_default_face_mesh_iris_connections_style())
    cv2.imwrite('/tmp/annotated_image' + str(idx) + '.png', annotated_image)
```
----------------------------------------
TITLE: Stylizing Faces with MediaPipe Face Stylizer (JavaScript)
DESCRIPTION: This snippet initializes the MediaPipe Face Stylizer task by loading the necessary WASM files and a pre-trained model. It then applies a stylization effect to an HTML image element, transforming the appearance of faces within the image. Requires the @mediapipe/tasks-vision library.
SOURCE: https://github.com/google-ai-edge/mediapipe/blob/master/mediapipe/tasks/web/vision/README.md#_snippet_2
LANGUAGE: JavaScript
CODE:
```
const vision = await FilesetResolver.forVisionTasks(
  "https://cdn.jsdelivr.net/npm/@mediapipe/tasks-vision/wasm"
);
const faceStylizer = await FaceStylizer.createFromModelPath(vision,
  "https://storage.googleapis.com/mediapipe-models/face_stylizer/blaze_face_stylizer/float32/1/blaze_face_stylizer.task"
);
const image = document.getElementById("image") as HTMLImageElement;
const stylizedImage = faceStylizer.stylize(image);
```
----------------------------------------
TITLE: HTML Setup for MediaPipe Face Detection Web Application
DESCRIPTION: This HTML snippet provides the basic structure for a web application using MediaPipe Face Detection. It includes necessary script imports for MediaPipe utilities (camera, control, drawing, face detection) from CDN, and defines `video` and `canvas` elements for input and output display. These elements are crucial for real-time video processing and rendering.
SOURCE: https://github.com/google-ai-edge/mediapipe/blob/master/docs/solutions/face_detection.md#_snippet_3
LANGUAGE: HTML
CODE:
```
```
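The code for this entry is empty in this export. A minimal page along the lines of the description might look like the sketch below; the CDN script set mirrors the utilities named above, while the class names and canvas size are assumptions chosen to match the JavaScript examples in this collection.
```
<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8">
  <!-- Assumed CDN imports for the camera, control, drawing, and face detection utilities. -->
  <script src="https://cdn.jsdelivr.net/npm/@mediapipe/camera_utils/camera_utils.js" crossorigin="anonymous"></script>
  <script src="https://cdn.jsdelivr.net/npm/@mediapipe/control_utils/control_utils.js" crossorigin="anonymous"></script>
  <script src="https://cdn.jsdelivr.net/npm/@mediapipe/drawing_utils/drawing_utils.js" crossorigin="anonymous"></script>
  <script src="https://cdn.jsdelivr.net/npm/@mediapipe/face_detection/face_detection.js" crossorigin="anonymous"></script>
</head>
<body>
  <div class="container">
    <!-- Input video stream and output canvas used by the detection script. -->
    <video class="input_video"></video>
    <canvas class="output_canvas" width="1280px" height="720px"></canvas>
  </div>
</body>
</html>
```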
----------------------------------------
TITLE: Real-time Face Detection in Browser with MediaPipe Face Detection (JavaScript)
DESCRIPTION: This JavaScript module implements real-time face detection using MediaPipe in a web browser. It sets up a `FaceDetection` instance, configures options like model and confidence, and defines an `onResults` callback to draw detections on a canvas. A `Camera` utility streams video frames to the detection model, enabling live processing and visualization. It relies on the HTML structure provided previously.
SOURCE: https://github.com/google-ai-edge/mediapipe/blob/master/docs/solutions/face_detection.md#_snippet_4
LANGUAGE: JavaScript
CODE:
```
```
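The code for this entry is also empty in this export. The sketch below follows the shape of the Face Mesh JavaScript example later in this collection and the legacy @mediapipe/face_detection API; the option values, drawing helpers (`drawRectangle`, `drawLandmarks`), and element class names are assumptions.
```
const videoElement = document.getElementsByClassName('input_video')[0];
const canvasElement = document.getElementsByClassName('output_canvas')[0];
const canvasCtx = canvasElement.getContext('2d');

function onResults(results) {
  // Draw the current frame, then overlay the first detected face, if any.
  canvasCtx.save();
  canvasCtx.clearRect(0, 0, canvasElement.width, canvasElement.height);
  canvasCtx.drawImage(results.image, 0, 0, canvasElement.width, canvasElement.height);
  if (results.detections.length > 0) {
    drawRectangle(canvasCtx, results.detections[0].boundingBox,
                  {color: 'blue', lineWidth: 4, fillColor: '#00000000'});
    drawLandmarks(canvasCtx, results.detections[0].landmarks,
                  {color: 'red', radius: 5});
  }
  canvasCtx.restore();
}

const faceDetection = new FaceDetection({locateFile: (file) => {
  return `https://cdn.jsdelivr.net/npm/@mediapipe/face_detection/${file}`;
}});
faceDetection.setOptions({model: 'short', minDetectionConfidence: 0.5});
faceDetection.onResults(onResults);

// Stream webcam frames into the detector.
const camera = new Camera(videoElement, {
  onFrame: async () => {
    await faceDetection.send({image: videoElement});
  },
  width: 1280,
  height: 720
});
camera.start();
```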
----------------------------------------
TITLE: Configuring MediaPipe Face Detection with Camera Input and OpenGL Rendering in Java
DESCRIPTION: This snippet demonstrates the complete setup for real-time face detection using MediaPipe on Android. It initializes FaceDetection with specific options, configures CameraInput to feed frames, and sets up a SolutionGlSurfaceView for OpenGL rendering of the detection results, including logging nose tip coordinates.
SOURCE: https://github.com/google-ai-edge/mediapipe/blob/master/docs/solutions/face_detection.md#_snippet_5
LANGUAGE: Java
CODE:
```
// For camera input and result rendering with OpenGL.
FaceDetectionOptions faceDetectionOptions =
    FaceDetectionOptions.builder()
        .setStaticImageMode(false)
        .setModelSelection(0).build();
FaceDetection faceDetection = new FaceDetection(this, faceDetectionOptions);
faceDetection.setErrorListener(
    (message, e) -> Log.e(TAG, "MediaPipe Face Detection error:" + message));

// Initializes a new CameraInput instance and connects it to MediaPipe Face Detection Solution.
CameraInput cameraInput = new CameraInput(this);
cameraInput.setNewFrameListener(
    textureFrame -> faceDetection.send(textureFrame));

// Initializes a new GlSurfaceView with a ResultGlRenderer instance
// that provides the interfaces to run user-defined OpenGL rendering code.
// See mediapipe/examples/android/solutions/facedetection/src/main/java/com/google/mediapipe/examples/facedetection/FaceDetectionResultGlRenderer.java
// as an example.
SolutionGlSurfaceView<FaceDetectionResult> glSurfaceView =
    new SolutionGlSurfaceView<>(
        this, faceDetection.getGlContext(), faceDetection.getGlMajorVersion());
glSurfaceView.setSolutionResultRenderer(new FaceDetectionResultGlRenderer());
glSurfaceView.setRenderInputImage(true);

faceDetection.setResultListener(
    faceDetectionResult -> {
      if (faceDetectionResult.multiFaceDetections().isEmpty()) {
        return;
      }
      RelativeKeypoint noseTip =
          faceDetectionResult
              .multiFaceDetections()
              .get(0)
              .getLocationData()
              .getRelativeKeypoints(FaceKeypoint.NOSE_TIP);
      Log.i(
          TAG,
          String.format(
              "MediaPipe Face Detection nose tip normalized coordinates (value range: [0, 1]): x=%f, y=%f",
              noseTip.getX(), noseTip.getY()));
      // Request GL rendering.
      glSurfaceView.setRenderData(faceDetectionResult);
      glSurfaceView.requestRender();
    });

// The runnable to start camera after the GLSurfaceView is attached.
glSurfaceView.post(
    () ->
        cameraInput.start(
            this,
            faceDetection.getGlContext(),
            CameraInput.CameraFacing.FRONT,
            glSurfaceView.getWidth(),
            glSurfaceView.getHeight()));
```
----------------------------------------
TITLE: Real-time Face Mesh Detection with MediaPipe JavaScript
DESCRIPTION: This JavaScript code initializes MediaPipe Face Mesh for real-time face landmark detection. It sets up a video stream as input, processes frames using the `FaceMesh` class, and draws the detected landmarks on a canvas. The `onResults` function handles the output, clearing the canvas and drawing the image along with multi-face landmarks using `drawConnectors` from the MediaPipe drawing utilities.
SOURCE: https://github.com/google-ai-edge/mediapipe/blob/master/docs/solutions/face_mesh.md#_snippet_3
LANGUAGE: JavaScript
CODE:
```
const videoElement = document.getElementsByClassName('input_video')[0];
const canvasElement = document.getElementsByClassName('output_canvas')[0];
const canvasCtx = canvasElement.getContext('2d');

function onResults(results) {
  canvasCtx.save();
  canvasCtx.clearRect(0, 0, canvasElement.width, canvasElement.height);
  canvasCtx.drawImage(
      results.image, 0, 0, canvasElement.width, canvasElement.height);
  if (results.multiFaceLandmarks) {
    for (const landmarks of results.multiFaceLandmarks) {
      drawConnectors(canvasCtx, landmarks, FACEMESH_TESSELATION,
                     {color: '#C0C0C070', lineWidth: 1});
      drawConnectors(canvasCtx, landmarks, FACEMESH_RIGHT_EYE, {color: '#FF3030'});
      drawConnectors(canvasCtx, landmarks, FACEMESH_RIGHT_EYEBROW, {color: '#FF3030'});
      drawConnectors(canvasCtx, landmarks, FACEMESH_RIGHT_IRIS, {color: '#FF3030'});
      drawConnectors(canvasCtx, landmarks, FACEMESH_LEFT_EYE, {color: '#30FF30'});
      drawConnectors(canvasCtx, landmarks, FACEMESH_LEFT_EYEBROW, {color: '#30FF30'});
      drawConnectors(canvasCtx, landmarks, FACEMESH_LEFT_IRIS, {color: '#30FF30'});
      drawConnectors(canvasCtx, landmarks, FACEMESH_FACE_OVAL, {color: '#E0E0E0'});
      drawConnectors(canvasCtx, landmarks, FACEMESH_LIPS, {color: '#E0E0E0'});
    }
  }
  canvasCtx.restore();
}

const faceMesh = new FaceMesh({locateFile: (file) => {
  return `https://cdn.jsdelivr.net/npm/@mediapipe/face_mesh/${file}`;
}});
faceMesh.setOptions({
  maxNumFaces: 1,
  refineLandmarks: true,
  minDetectionConfidence: 0.5,
  minTrackingConfidence: 0.5
});
faceMesh.onResults(onResults);

const camera = new Camera(videoElement, {
  onFrame: async () => {
    await faceMesh.send({image: videoElement});
  },
  width: 1280,
  height: 720
});
camera.start();
```
----------------------------------------
TITLE: Detecting Face Landmarks with MediaPipe Face Landmarker (JavaScript)
DESCRIPTION: This snippet initializes the MediaPipe Face Landmarker task, loading the WASM files and a pre-trained model. It then detects facial landmarks on an HTML image element, which can be used for localizing key points and rendering visual effects. Requires the @mediapipe/tasks-vision library.
SOURCE: https://github.com/google-ai-edge/mediapipe/blob/master/mediapipe/tasks/web/vision/README.md#_snippet_1
LANGUAGE: JavaScript
CODE:
```
const vision = await FilesetResolver.forVisionTasks(
  "https://cdn.jsdelivr.net/npm/@mediapipe/tasks-vision/wasm"
);
const faceLandmarker = await FaceLandmarker.createFromModelPath(vision,
  "https://storage.googleapis.com/mediapipe-models/face_landmarker/face_landmarker/float16/1/face_landmarker.task"
);
const image = document.getElementById("image") as HTMLImageElement;
const landmarks = faceLandmarker.detect(image);
```
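As a quick usage sketch (not part of the source README), the returned result exposes a `faceLandmarks` array per the @mediapipe/tasks-vision typings; the field names below are assumptions based on that API.
```
// Sketch: inspect the FaceLandmarkerResult returned by detect() above.
// `faceLandmarks` is assumed to hold one array of normalized landmarks per face.
for (const face of landmarks.faceLandmarks) {
  const noseTip = face[1]; // index 1 is the nose tip in the canonical face mesh
  console.log(`Detected face with ${face.length} landmarks; ` +
              `nose tip at (${noseTip.x.toFixed(3)}, ${noseTip.y.toFixed(3)})`);
}
```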
----------------------------------------
TITLE: Initializing MediaPipe Face Detection for Video Input (Java)
DESCRIPTION: This Java snippet demonstrates the complete setup for MediaPipe Face Detection on Android, enabling video input processing and OpenGL rendering. It initializes the FaceDetection model for video mode, sets up a VideoInput to feed frames, configures a SolutionGlSurfaceView for result visualization, and includes an ActivityResultLauncher to select and process a video from the device's media store.
SOURCE: https://github.com/google-ai-edge/mediapipe/blob/master/docs/solutions/face_detection.md#_snippet_7
LANGUAGE: Java
CODE:
```
// For video input and result rendering with OpenGL.
FaceDetectionOptions faceDetectionOptions =
    FaceDetectionOptions.builder()
        .setStaticImageMode(false)
        .setModelSelection(0).build();
FaceDetection faceDetection = new FaceDetection(this, faceDetectionOptions);
faceDetection.setErrorListener(
    (message, e) -> Log.e(TAG, "MediaPipe Face Detection error:" + message));

// Initializes a new VideoInput instance and connects it to MediaPipe Face Detection Solution.
VideoInput videoInput = new VideoInput(this);
videoInput.setNewFrameListener(
    textureFrame -> faceDetection.send(textureFrame));

// Initializes a new GlSurfaceView with a ResultGlRenderer instance
// that provides the interfaces to run user-defined OpenGL rendering code.
// See mediapipe/examples/android/solutions/facedetection/src/main/java/com/google/mediapipe/examples/facedetection/FaceDetectionResultGlRenderer.java
// as an example.
SolutionGlSurfaceView<FaceDetectionResult> glSurfaceView =
    new SolutionGlSurfaceView<>(
        this, faceDetection.getGlContext(), faceDetection.getGlMajorVersion());
glSurfaceView.setSolutionResultRenderer(new FaceDetectionResultGlRenderer());
glSurfaceView.setRenderInputImage(true);

faceDetection.setResultListener(
    faceDetectionResult -> {
      if (faceDetectionResult.multiFaceDetections().isEmpty()) {
        return;
      }
      RelativeKeypoint noseTip =
          faceDetectionResult
              .multiFaceDetections()
              .get(0)
              .getLocationData()
              .getRelativeKeypoints(FaceKeypoint.NOSE_TIP);
      Log.i(
          TAG,
          String.format(
              "MediaPipe Face Detection nose tip normalized coordinates (value range: [0, 1]): x=%f, y=%f",
              noseTip.getX(), noseTip.getY()));
      // Request GL rendering.
      glSurfaceView.setRenderData(faceDetectionResult);
      glSurfaceView.requestRender();
    });

ActivityResultLauncher<Intent> videoGetter =
    registerForActivityResult(
        new ActivityResultContracts.StartActivityForResult(),
        result -> {
          Intent resultIntent = result.getData();
          if (resultIntent != null) {
            if (result.getResultCode() == RESULT_OK) {
              glSurfaceView.post(
                  () ->
                      videoInput.start(
                          this,
                          resultIntent.getData(),
                          faceDetection.getGlContext(),
                          glSurfaceView.getWidth(),
                          glSurfaceView.getHeight()));
            }
          }
        });
Intent pickVideoIntent = new Intent(Intent.ACTION_PICK);
pickVideoIntent.setDataAndType(MediaStore.Video.Media.INTERNAL_CONTENT_URI, "video/*");
videoGetter.launch(pickVideoIntent);
```
----------------------------------------
TITLE: Processing Image Input with MediaPipe Face Detection in Android
DESCRIPTION: This snippet demonstrates how to initialize MediaPipe Face Detection for static image mode, set up listeners for processing results and errors, and integrate with an ActivityResultLauncher to select and send images from the device gallery for detection. It shows how to draw detection results on a custom ImageView.
SOURCE: https://github.com/google-ai-edge/mediapipe/blob/master/docs/solutions/face_detection.md#_snippet_6
LANGUAGE: Java
CODE:
```
// For reading images from gallery and drawing the output in an ImageView.
FaceDetectionOptions faceDetectionOptions =
    FaceDetectionOptions.builder()
        .setStaticImageMode(true)
        .setModelSelection(0).build();
FaceDetection faceDetection = new FaceDetection(this, faceDetectionOptions);

// Connects MediaPipe Face Detection Solution to the user-defined ImageView
// instance that allows users to have the custom drawing of the output landmarks
// on it. See mediapipe/examples/android/solutions/facedetection/src/main/java/com/google/mediapipe/examples/facedetection/FaceDetectionResultImageView.java
// as an example.
FaceDetectionResultImageView imageView = new FaceDetectionResultImageView(this);
faceDetection.setResultListener(
    faceDetectionResult -> {
      if (faceDetectionResult.multiFaceDetections().isEmpty()) {
        return;
      }
      int width = faceDetectionResult.inputBitmap().getWidth();
      int height = faceDetectionResult.inputBitmap().getHeight();
      RelativeKeypoint noseTip =
          faceDetectionResult
              .multiFaceDetections()
              .get(0)
              .getLocationData()
              .getRelativeKeypoints(FaceKeypoint.NOSE_TIP);
      Log.i(
          TAG,
          String.format(
              "MediaPipe Face Detection nose tip coordinates (pixel values): x=%f, y=%f",
              noseTip.getX() * width, noseTip.getY() * height));
      // Request canvas drawing.
      imageView.setFaceDetectionResult(faceDetectionResult);
      runOnUiThread(() -> imageView.update());
    });
faceDetection.setErrorListener(
    (message, e) -> Log.e(TAG, "MediaPipe Face Detection error:" + message));

// ActivityResultLauncher to get an image from the gallery as Bitmap.
ActivityResultLauncher<Intent> imageGetter =
    registerForActivityResult(
        new ActivityResultContracts.StartActivityForResult(),
        result -> {
          Intent resultIntent = result.getData();
          if (resultIntent != null && result.getResultCode() == RESULT_OK) {
            Bitmap bitmap = null;
            try {
              bitmap =
                  MediaStore.Images.Media.getBitmap(
                      this.getContentResolver(), resultIntent.getData());
              // Please also rotate the Bitmap based on its orientation.
            } catch (IOException e) {
              Log.e(TAG, "Bitmap reading error:" + e);
            }
            if (bitmap != null) {
              faceDetection.send(bitmap);
            }
          }
        });
Intent pickImageIntent = new Intent(Intent.ACTION_PICK);
pickImageIntent.setDataAndType(MediaStore.Images.Media.INTERNAL_CONTENT_URI, "image/*");
imageGetter.launch(pickImageIntent);
```
----------------------------------------
TITLE: Real-time Face Mesh Detection with Webcam in Python
DESCRIPTION: This Python snippet demonstrates how to perform real-time face mesh detection using MediaPipe and OpenCV. It captures video from a webcam, processes each frame to detect facial landmarks, and then draws the tessellation, contours, and iris connections on the image before displaying it. It handles frame reading, color conversion, and display.
SOURCE: https://github.com/google-ai-edge/mediapipe/blob/master/docs/solutions/face_mesh.md#_snippet_1
LANGUAGE: Python
CODE:
```
drawing_spec = mp_drawing.DrawingSpec(thickness=1, circle_radius=1)
cap = cv2.VideoCapture(0)
with mp_face_mesh.FaceMesh(
    max_num_faces=1,
    refine_landmarks=True,
    min_detection_confidence=0.5,
    min_tracking_confidence=0.5) as face_mesh:
  while cap.isOpened():
    success, image = cap.read()
    if not success:
      print("Ignoring empty camera frame.")
      # If loading a video, use 'break' instead of 'continue'.
      continue

    # To improve performance, optionally mark the image as not writeable to
    # pass by reference.
    image.flags.writeable = False
    image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
    results = face_mesh.process(image)

    # Draw the face mesh annotations on the image.
    image.flags.writeable = True
    image = cv2.cvtColor(image, cv2.COLOR_RGB2BGR)
    if results.multi_face_landmarks:
      for face_landmarks in results.multi_face_landmarks:
        mp_drawing.draw_landmarks(
            image=image,
            landmark_list=face_landmarks,
            connections=mp_face_mesh.FACEMESH_TESSELATION,
            landmark_drawing_spec=None,
            connection_drawing_spec=mp_drawing_styles
            .get_default_face_mesh_tesselation_style())
        mp_drawing.draw_landmarks(
            image=image,
            landmark_list=face_landmarks,
            connections=mp_face_mesh.FACEMESH_CONTOURS,
            landmark_drawing_spec=None,
            connection_drawing_spec=mp_drawing_styles
            .get_default_face_mesh_contours_style())
        mp_drawing.draw_landmarks(
            image=image,
            landmark_list=face_landmarks,
            connections=mp_face_mesh.FACEMESH_IRISES,
            landmark_drawing_spec=None,
            connection_drawing_spec=mp_drawing_styles
            .get_default_face_mesh_iris_connections_style())
    # Flip the image horizontally for a selfie-view display.
    cv2.imshow('MediaPipe Face Mesh', cv2.flip(image, 1))
    if cv2.waitKey(5) & 0xFF == 27:
      break
cap.release()
```
----------------------------------------
TITLE: HTML Setup for MediaPipe Face Mesh JavaScript
DESCRIPTION: This HTML snippet sets up the basic page structure for a MediaPipe Face Mesh application. It includes necessary script imports from the MediaPipe CDN for camera utilities, control utilities, drawing utilities, and the Face Mesh solution itself. It also defines a video element for input and a canvas element for output, which are essential prerequisites for the JavaScript logic.
SOURCE: https://github.com/google-ai-edge/mediapipe/blob/master/docs/solutions/face_mesh.md#_snippet_2
LANGUAGE: HTML
CODE:
```
```
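The code for this entry is likewise empty in this export (only stripped-markup residue remained). A minimal page consistent with the description, using the class names referenced by the Face Mesh JavaScript snippet earlier in this collection, might look like this sketch; CDN paths and sizes are assumptions.
```
<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8">
  <!-- Assumed CDN imports for camera, control, and drawing utilities plus Face Mesh. -->
  <script src="https://cdn.jsdelivr.net/npm/@mediapipe/camera_utils/camera_utils.js" crossorigin="anonymous"></script>
  <script src="https://cdn.jsdelivr.net/npm/@mediapipe/control_utils/control_utils.js" crossorigin="anonymous"></script>
  <script src="https://cdn.jsdelivr.net/npm/@mediapipe/drawing_utils/drawing_utils.js" crossorigin="anonymous"></script>
  <script src="https://cdn.jsdelivr.net/npm/@mediapipe/face_mesh/face_mesh.js" crossorigin="anonymous"></script>
</head>
<body>
  <div class="container">
    <!-- Elements referenced by the Face Mesh JavaScript snippet. -->
    <video class="input_video"></video>
    <canvas class="output_canvas" width="1280px" height="720px"></canvas>
  </div>
</body>
</html>
```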
----------------------------------------
TITLE: Processing Static Images with MediaPipe Face Mesh (Java)
DESCRIPTION: This snippet illustrates how to use MediaPipe Face Mesh for processing static images from the device gallery. It configures `FaceMeshOptions` for static image mode, connects the solution to a `FaceMeshResultImageView` for custom drawing, and uses an `ActivityResultLauncher` to select an image from the gallery, convert it to a Bitmap, and send it for processing. It also includes result and error listeners.
SOURCE: https://github.com/google-ai-edge/mediapipe/blob/master/docs/solutions/face_mesh.md#_snippet_5
LANGUAGE: Java
CODE:
```
// For reading images from gallery and drawing the output in an ImageView.
FaceMeshOptions faceMeshOptions =
    FaceMeshOptions.builder()
        .setStaticImageMode(true)
        .setRefineLandmarks(true)
        .setMaxNumFaces(1)
        .setRunOnGpu(true).build();
FaceMesh faceMesh = new FaceMesh(this, faceMeshOptions);

// Connects MediaPipe Face Mesh Solution to the user-defined ImageView instance
// that allows users to have the custom drawing of the output landmarks on it.
// See mediapipe/examples/android/solutions/facemesh/src/main/java/com/google/mediapipe/examples/facemesh/FaceMeshResultImageView.java
// as an example.
FaceMeshResultImageView imageView = new FaceMeshResultImageView(this);
faceMesh.setResultListener(
    faceMeshResult -> {
      int width = faceMeshResult.inputBitmap().getWidth();
      int height = faceMeshResult.inputBitmap().getHeight();
      NormalizedLandmark noseLandmark =
          faceMeshResult.multiFaceLandmarks().get(0).getLandmarkList().get(1);
      Log.i(
          TAG,
          String.format(
              "MediaPipe Face Mesh nose coordinates (pixel values): x=%f, y=%f",
              noseLandmark.getX() * width, noseLandmark.getY() * height));
      // Request canvas drawing.
      imageView.setFaceMeshResult(faceMeshResult);
      runOnUiThread(() -> imageView.update());
    });
faceMesh.setErrorListener(
    (message, e) -> Log.e(TAG, "MediaPipe Face Mesh error:" + message));

// ActivityResultLauncher to get an image from the gallery as Bitmap.
ActivityResultLauncher<Intent> imageGetter =
    registerForActivityResult(
        new ActivityResultContracts.StartActivityForResult(),
        result -> {
          Intent resultIntent = result.getData();
          if (resultIntent != null && result.getResultCode() == RESULT_OK) {
            Bitmap bitmap = null;
            try {
              bitmap =
                  MediaStore.Images.Media.getBitmap(
                      this.getContentResolver(), resultIntent.getData());
              // Please also rotate the Bitmap based on its orientation.
            } catch (IOException e) {
              Log.e(TAG, "Bitmap reading error:" + e);
            }
            if (bitmap != null) {
              faceMesh.send(bitmap);
            }
          }
        });
Intent pickImageIntent = new Intent(Intent.ACTION_PICK);
pickImageIntent.setDataAndType(MediaStore.Images.Media.INTERNAL_CONTENT_URI, "image/*");
imageGetter.launch(pickImageIntent);
```
----------------------------------------
TITLE: Configuring MediaPipe Face Mesh for Camera Input (Java)
DESCRIPTION: This snippet demonstrates how to set up MediaPipe Face Mesh for real-time camera input on Android. It configures `FaceMeshOptions` for GPU processing and real-time mode, initializes `CameraInput` to feed frames to the solution, and sets up a `SolutionGlSurfaceView` for OpenGL rendering of results. It also includes error and result listeners for logging and rendering updates.
SOURCE: https://github.com/google-ai-edge/mediapipe/blob/master/docs/solutions/face_mesh.md#_snippet_4
LANGUAGE: Java
CODE:
```
// For camera input and result rendering with OpenGL.
FaceMeshOptions faceMeshOptions =
    FaceMeshOptions.builder()
        .setStaticImageMode(false)
        .setRefineLandmarks(true)
        .setMaxNumFaces(1)
        .setRunOnGpu(true).build();
FaceMesh faceMesh = new FaceMesh(this, faceMeshOptions);
faceMesh.setErrorListener(
    (message, e) -> Log.e(TAG, "MediaPipe Face Mesh error:" + message));

// Initializes a new CameraInput instance and connects it to MediaPipe Face Mesh Solution.
CameraInput cameraInput = new CameraInput(this);
cameraInput.setNewFrameListener(
    textureFrame -> faceMesh.send(textureFrame));

// Initializes a new GlSurfaceView with a ResultGlRenderer instance
// that provides the interfaces to run user-defined OpenGL rendering code.
// See mediapipe/examples/android/solutions/facemesh/src/main/java/com/google/mediapipe/examples/facemesh/FaceMeshResultGlRenderer.java
// as an example.
SolutionGlSurfaceView<FaceMeshResult> glSurfaceView =
    new SolutionGlSurfaceView<>(
        this, faceMesh.getGlContext(), faceMesh.getGlMajorVersion());
glSurfaceView.setSolutionResultRenderer(new FaceMeshResultGlRenderer());
glSurfaceView.setRenderInputImage(true);

faceMesh.setResultListener(
    faceMeshResult -> {
      NormalizedLandmark noseLandmark =
          faceMeshResult.multiFaceLandmarks().get(0).getLandmarkList().get(1);
      Log.i(
          TAG,
          String.format(
              "MediaPipe Face Mesh nose normalized coordinates (value range: [0, 1]): x=%f, y=%f",
              noseLandmark.getX(), noseLandmark.getY()));
      // Request GL rendering.
      glSurfaceView.setRenderData(faceMeshResult);
      glSurfaceView.requestRender();
    });

// The runnable to start camera after the GLSurfaceView is attached.
glSurfaceView.post(
    () ->
        cameraInput.start(
            this,
            faceMesh.getGlContext(),
            CameraInput.CameraFacing.FRONT,
            glSurfaceView.getWidth(),
            glSurfaceView.getHeight()));
```
----------------------------------------
TITLE: Running Face Detection with MediaPipe on Coral TPU (Shell)
DESCRIPTION: This command executes the MediaPipe face detection model on a desktop with a Coral TPU. It uses the `face_detection_desktop_live.pbtxt` calculator graph configuration file to define the processing pipeline. The `GLOG_logtostderr=1` prefix ensures that logs are output to standard error.
SOURCE: https://github.com/google-ai-edge/mediapipe/blob/master/mediapipe/examples/coral/README.md#_snippet_13
LANGUAGE: Shell
CODE:
```
GLOG_logtostderr=1 ./face_detection_tpu --calculator_graph_config_file \
    mediapipe/examples/coral/graphs/face_detection_desktop_live.pbtxt
```
----------------------------------------
TITLE: Configuring and Running MediaPipe Face Mesh with Video Input - Android Java
DESCRIPTION: This snippet demonstrates how to initialize MediaPipe Face Mesh for video input on Android. It configures `FaceMeshOptions` for GPU usage and landmark refinement, sets up `VideoInput` to feed frames to the Face Mesh solution, and integrates `SolutionGlSurfaceView` for OpenGL rendering of results. It also shows how to listen for results, render them, and handle video selection from the device's media store using `ActivityResultLauncher`.
SOURCE: https://github.com/google-ai-edge/mediapipe/blob/master/docs/solutions/face_mesh.md#_snippet_6
LANGUAGE: Java
CODE:
```
// For video input and result rendering with OpenGL.
FaceMeshOptions faceMeshOptions =
    FaceMeshOptions.builder()
        .setStaticImageMode(false)
        .setRefineLandmarks(true)
        .setMaxNumFaces(1)
        .setRunOnGpu(true).build();
FaceMesh faceMesh = new FaceMesh(this, faceMeshOptions);
faceMesh.setErrorListener(
    (message, e) -> Log.e(TAG, "MediaPipe Face Mesh error:" + message));

// Initializes a new VideoInput instance and connects it to MediaPipe Face Mesh Solution.
VideoInput videoInput = new VideoInput(this);
videoInput.setNewFrameListener(
    textureFrame -> faceMesh.send(textureFrame));

// Initializes a new GlSurfaceView with a ResultGlRenderer instance
// that provides the interfaces to run user-defined OpenGL rendering code.
// See mediapipe/examples/android/solutions/facemesh/src/main/java/com/google/mediapipe/examples/facemesh/FaceMeshResultGlRenderer.java
// as an example.
SolutionGlSurfaceView<FaceMeshResult> glSurfaceView =
    new SolutionGlSurfaceView<>(
        this, faceMesh.getGlContext(), faceMesh.getGlMajorVersion());
glSurfaceView.setSolutionResultRenderer(new FaceMeshResultGlRenderer());
glSurfaceView.setRenderInputImage(true);

faceMesh.setResultListener(
    faceMeshResult -> {
      NormalizedLandmark noseLandmark =
          faceMeshResult.multiFaceLandmarks().get(0).getLandmarkList().get(1);
      Log.i(
          TAG,
          String.format(
              "MediaPipe Face Mesh nose normalized coordinates (value range: [0, 1]): x=%f, y=%f",
              noseLandmark.getX(), noseLandmark.getY()));
      // Request GL rendering.
      glSurfaceView.setRenderData(faceMeshResult);
      glSurfaceView.requestRender();
    });

ActivityResultLauncher<Intent> videoGetter =
    registerForActivityResult(
        new ActivityResultContracts.StartActivityForResult(),
        result -> {
          Intent resultIntent = result.getData();
          if (resultIntent != null) {
            if (result.getResultCode() == RESULT_OK) {
              glSurfaceView.post(
                  () ->
                      videoInput.start(
                          this,
                          resultIntent.getData(),
                          faceMesh.getGlContext(),
                          glSurfaceView.getWidth(),
                          glSurfaceView.getHeight()));
            }
          }
        });
Intent pickVideoIntent = new Intent(Intent.ACTION_PICK);
pickVideoIntent.setDataAndType(MediaStore.Video.Media.INTERNAL_CONTENT_URI, "video/*");
videoGetter.launch(pickVideoIntent);
```
----------------------------------------
TITLE: Compiling Coral Face Detection Example for ARMHF
DESCRIPTION: This `make` command compiles the MediaPipe face detection example for ARMHF platforms (e.g., Raspberry Pi) by executing the build process within a Docker container. It encapsulates the cross-compilation steps into a single, convenient command.
SOURCE: https://github.com/google-ai-edge/mediapipe/blob/master/mediapipe/examples/coral/README.md#_snippet_8
LANGUAGE: bash
CODE:
```
make -C mediapipe/examples/coral \
    PLATFORM=armhf \
    DOCKER_COMMAND="make -C mediapipe/examples/coral BAZEL_TARGET=mediapipe/examples/coral:face_detection_tpu build" \
    docker
```
----------------------------------------
TITLE: Compiling Face Detection for Coral USB with Bazel
DESCRIPTION: This command compiles the MediaPipe face detection example for Coral USB devices using Bazel. It includes flags for optimized compilation, portable DarwiNN (Edge TPU runtime) support, GPU disablement, enabling Edge TPU USB support, and linking the `libusb-1.0.so` library, which is a prerequisite for USB device communication.
SOURCE: https://github.com/google-ai-edge/mediapipe/blob/master/mediapipe/examples/coral/README.md#_snippet_0
LANGUAGE: bash
CODE:
```
bazel build \
    --compilation_mode=opt \
    --define darwinn_portable=1 \
    --define MEDIAPIPE_DISABLE_GPU=1 \
    --define MEDIAPIPE_EDGE_TPU=usb \
    --linkopt=-l:libusb-1.0.so \
    mediapipe/examples/coral:face_detection_tpu
```
----------------------------------------
TITLE: Importing MediaPipe and Face Mesh Solution
DESCRIPTION: This Python code imports the MediaPipe library, aliasing it as `mp`, and then accesses the `face_mesh` solution from `mp.solutions`. This prepares the environment to utilize MediaPipe's pre-built face mesh capabilities.
SOURCE: https://github.com/google-ai-edge/mediapipe/blob/master/docs/getting_started/python.md#_snippet_2
LANGUAGE: Python
CODE:
```
import mediapipe as mp
mp_face_mesh = mp.solutions.face_mesh
```
----------------------------------------
TITLE: Cross-Compiling Face Detection for ARM32 in Docker
DESCRIPTION: This Bazel command, executed within the ARM32 Docker environment, cross-compiles the MediaPipe face detection example. It specifies the crosstool, GCC compiler, ARMv7a CPU architecture, and enables Coral USB Edge TPU support, along with linking `libusb`.
SOURCE: https://github.com/google-ai-edge/mediapipe/blob/master/mediapipe/examples/coral/README.md#_snippet_3
LANGUAGE: bash
CODE:
```
bazel build \
    --crosstool_top=@crosstool//:toolchains \
    --compiler=gcc \
    --cpu=armv7a \
    --define darwinn_portable=1 \
    --define MEDIAPIPE_DISABLE_GPU=1 \
    --define MEDIAPIPE_EDGE_TPU=usb \
    --linkopt=-l:libusb-1.0.so \
    mediapipe/examples/coral:face_detection_tpu
```
----------------------------------------
TITLE: Cross-Compiling Face Detection for ARM64 in Docker
DESCRIPTION: This Bazel command, run inside the ARM64 Docker environment, cross-compiles the MediaPipe face detection example. It configures the build with the appropriate crosstool, GCC compiler, aarch64 CPU architecture, and includes support for Coral USB Edge TPU and `libusb` linking.
SOURCE: https://github.com/google-ai-edge/mediapipe/blob/master/mediapipe/examples/coral/README.md#_snippet_4
LANGUAGE: bash
CODE:
```
bazel build \
    --crosstool_top=@crosstool//:toolchains \
    --compiler=gcc \
    --cpu=aarch64 \
    --define darwinn_portable=1 \
    --define MEDIAPIPE_DISABLE_GPU=1 \
    --define MEDIAPIPE_EDGE_TPU=usb \
    --linkopt=-l:libusb-1.0.so \
    mediapipe/examples/coral:face_detection_tpu
```
----------------------------------------
TITLE: Building MediaPipe Face Detection AAR (Bazel)
DESCRIPTION: This specific Bazel command builds the `mediapipe_face_detection.aar` file, referencing the `mediapipe_aar` target defined for this project. It uses the same optimization and linking flags as the generic build command, ensuring the AAR is optimized for Android. The output confirms the successful generation of the AAR.
SOURCE: https://github.com/google-ai-edge/mediapipe/blob/master/docs/getting_started/android_archive_library.md#_snippet_2
LANGUAGE: bash
CODE:
```
bazel build -c opt --strip=ALWAYS \
    --host_crosstool_top=@bazel_tools//tools/cpp:toolchain \
    --fat_apk_cpu=arm64-v8a,armeabi-v7a \
    --legacy_whole_archive=0 \
    --features=-legacy_whole_archive \
    --copt=-fvisibility=hidden \
    --copt=-ffunction-sections \
    --copt=-fdata-sections \
    --copt=-fstack-protector \
    --copt=-Oz \
    --copt=-fomit-frame-pointer \
    --copt=-DABSL_MIN_LOG_LEVEL=2 \
    --linkopt=-Wl,--gc-sections,--strip-all \
    //mediapipe/examples/android/src/java/com/google/mediapipe/apps/aar_example:mediapipe_face_detection.aar
```
----------------------------------------
TITLE: Defining MediaPipe AAR Target for Face Detection (Bazel)
DESCRIPTION: This snippet defines a `mediapipe_aar` target in a Bazel BUILD file. It specifies the name of the AAR and lists the required calculator dependencies, in this case, `mobile_calculators` for MediaPipe Face Detection. This custom target is essential for generating an AAR that includes only the necessary MediaPipe components for a specific project.
SOURCE: https://github.com/google-ai-edge/mediapipe/blob/master/docs/getting_started/android_archive_library.md#_snippet_0
LANGUAGE: Bazel
CODE:
```
load("//mediapipe/java/com/google/mediapipe:mediapipe_aar.bzl", "mediapipe_aar")

mediapipe_aar(
    name = "mediapipe_face_detection",
    calculators = ["//mediapipe/graphs/face_detection:mobile_calculators"],
)
```
----------------------------------------
TITLE: Adding MediaPipe Android Solution APIs to Gradle Dependencies
DESCRIPTION: This snippet demonstrates how to add MediaPipe Android Solution APIs to an Android Studio project's `build.gradle` file. It includes the core solution library and optional libraries for Face Detection, Face Mesh, and Hands, allowing developers to easily incorporate these functionalities into their Android applications. These dependencies are fetched from Google's Maven Repository.
SOURCE: https://github.com/google-ai-edge/mediapipe/blob/master/docs/getting_started/android_solutions.md#_snippet_0
LANGUAGE: Gradle
CODE:
```
dependencies {
    // MediaPipe solution-core is the foundation of any MediaPipe Solutions.
    implementation 'com.google.mediapipe:solution-core:latest.release'
    // Optional: MediaPipe Face Detection Solution.
    implementation 'com.google.mediapipe:facedetection:latest.release'
    // Optional: MediaPipe Face Mesh Solution.
    implementation 'com.google.mediapipe:facemesh:latest.release'
    // Optional: MediaPipe Hands Solution.
    implementation 'com.google.mediapipe:hands:latest.release'
}
```
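Because these artifacts are served from Google's Maven Repository, the project also needs that repository declared; a minimal sketch follows (the exact block placement depends on the Gradle version and whether settings-level dependency resolution is used).
```
allprojects {
    repositories {
        // Google's Maven Repository hosts the MediaPipe solution artifacts.
        google()
        mavenCentral()
    }
}
```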
----------------------------------------
TITLE: Detecting Holistic Body Landmarks with MediaPipe Holistic Landmarker (JavaScript)
DESCRIPTION: This snippet initializes the MediaPipe Holistic Landmarker task, loading the WASM files and a pre-trained model. It combines pose, face, and hand landmark detection to provide a complete set of human body landmarks from an HTML image element. Requires the @mediapipe/tasks-vision library.
SOURCE: https://github.com/google-ai-edge/mediapipe/blob/master/mediapipe/tasks/web/vision/README.md#_snippet_5
LANGUAGE: JavaScript
CODE:
```
const vision = await FilesetResolver.forVisionTasks(
  "https://cdn.jsdelivr.net/npm/@mediapipe/tasks-vision/wasm"
);
const holisticLandmarker = await HolisticLandmarker.createFromModelPath(vision,
  "https://storage.googleapis.com/mediapipe-models/holistic_landmarker/holistic_landmarker/float16/1/hand_landmark.task"
);
const image = document.getElementById("image") as HTMLImageElement;
const landmarks = holisticLandmarker.detect(image);
```
----------------------------------------
TITLE: Configuring Camera Facing and Starting Camera (Java)
DESCRIPTION: This code determines the camera facing direction (front or back) based on application metadata and then initializes the camera using `cameraHelper.startCamera()`. It sets up the camera for preview without directly displaying the output to a `SurfaceTexture`.
SOURCE: https://github.com/google-ai-edge/mediapipe/blob/master/docs/getting_started/hello_world_android.md#_snippet_25
LANGUAGE: Java
CODE:
```
CameraHelper.CameraFacing cameraFacing =
    applicationInfo.metaData.getBoolean("cameraFacingFront", false)
        ? CameraHelper.CameraFacing.FRONT
        : CameraHelper.CameraFacing.BACK;
cameraHelper.startCamera(this, cameraFacing, /*unusedSurfaceTexture=*/ null);
```
----------------------------------------
TITLE: Adding Camera Facing Metadata to AndroidManifest (XML)
DESCRIPTION: This XML snippet adds a `meta-data` tag within the `<application>` block of `AndroidManifest.xml`. It defines a placeholder `cameraFacingFront` which allows specifying the default camera (front or back) via build-time configuration, avoiding code changes.
SOURCE: https://github.com/google-ai-edge/mediapipe/blob/master/docs/getting_started/hello_world_android.md#_snippet_20
LANGUAGE: XML
CODE:
```
...
```
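The manifest entry itself is elided above; based on the description, it would look roughly like this sketch, where the `${cameraFacingFront}` placeholder is an assumption that pairs with the `manifest_values` shown in the BUILD-file snippet that follows.
```
<application>
    ...
    <!-- Sketch: camera-facing flag read at runtime via applicationInfo.metaData;
         the placeholder value is assumed to be substituted from manifest_values. -->
    <meta-data
        android:name="cameraFacingFront"
        android:value="${cameraFacingFront}" />
</application>
```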
----------------------------------------
TITLE: Configuring Camera Facing in Bazel BUILD File (Bazel)
DESCRIPTION: This Bazel snippet modifies the `manifest_values` attribute within the `helloworld` android binary rule in the `BUILD` file. It sets `cameraFacingFront` to `False`, indicating that the back camera should be used by default for the application.
SOURCE: https://github.com/google-ai-edge/mediapipe/blob/master/docs/getting_started/hello_world_android.md#_snippet_21
LANGUAGE: Bazel
CODE:
```
manifest_values = {
    "applicationId": "com.google.mediapipe.apps.basic",
    "appName": "Hello World",
    "mainActivity": ".MainActivity",
    "cameraFacingFront": "False",
},
```
----------------------------------------
TITLE: Processing Static Images with MediaPipe Holistic in Python
DESCRIPTION: This snippet demonstrates how to use MediaPipe Holistic to process a list of static images. It initializes the Holistic model with segmentation and face landmark refinement enabled, then processes each image to detect pose, face, and hand landmarks, and applies segmentation. It outputs annotated images and prints nose coordinates.
SOURCE: https://github.com/google-ai-edge/mediapipe/blob/master/docs/solutions/holistic.md#_snippet_0
LANGUAGE: Python
CODE:
```
import cv2
import mediapipe as mp
import numpy as np
mp_drawing = mp.solutions.drawing_utils
mp_drawing_styles = mp.solutions.drawing_styles
mp_holistic = mp.solutions.holistic

# For static images:
IMAGE_FILES = []
BG_COLOR = (192, 192, 192)  # gray
with mp_holistic.Holistic(
    static_image_mode=True,
    model_complexity=2,
    enable_segmentation=True,
    refine_face_landmarks=True) as holistic:
  for idx, file in enumerate(IMAGE_FILES):
    image = cv2.imread(file)
    image_height, image_width, _ = image.shape
    # Convert the BGR image to RGB before processing.
    results = holistic.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))

    if results.pose_landmarks:
      print(
          f'Nose coordinates: ('
          f'{results.pose_landmarks.landmark[mp_holistic.PoseLandmark.NOSE].x * image_width}, '
          f'{results.pose_landmarks.landmark[mp_holistic.PoseLandmark.NOSE].y * image_height})'
      )

    annotated_image = image.copy()
    # Draw segmentation on the image.
    # To improve segmentation around boundaries, consider applying a joint
    # bilateral filter to "results.segmentation_mask" with "image".
    condition = np.stack((results.segmentation_mask,) * 3, axis=-1) > 0.1
    bg_image = np.zeros(image.shape, dtype=np.uint8)
    bg_image[:] = BG_COLOR
    annotated_image = np.where(condition, annotated_image, bg_image)
    # Draw pose, left and right hands, and face landmarks on the image.
    mp_drawing.draw_landmarks(
        annotated_image,
        results.face_landmarks,
        mp_holistic.FACEMESH_TESSELATION,
        landmark_drawing_spec=None,
        connection_drawing_spec=mp_drawing_styles
        .get_default_face_mesh_tesselation_style())
    mp_drawing.draw_landmarks(
        annotated_image,
        results.pose_landmarks,
        mp_holistic.POSE_CONNECTIONS,
        landmark_drawing_spec=mp_drawing_styles.
        get_default_pose_landmarks_style())
    cv2.imwrite('/tmp/annotated_image' + str(idx) + '.png', annotated_image)
    # Plot pose world landmarks.
    mp_drawing.plot_landmarks(
        results.pose_world_landmarks, mp_holistic.POSE_CONNECTIONS)
```