Human pose estimation from video plays a critical role in various applications such as quantifying physical exercises, sign language recognition, and full-body gesture control. For example, it can form the basis for yoga, dance, and fitness applications. It can also enable the overlay of digital content and information on top of the physical world in augmented reality.
MediaPipe Pose is an ML solution for high-fidelity upper-body pose tracking, inferring 25 2D upper-body landmarks from RGB video frames utilizing our BlazePose research. Current state-of-the-art approaches rely primarily on powerful desktop environments for inference, whereas our method achieves real-time performance on most modern mobile phones, desktops/laptops, in Python, and even on the web. A variant of MediaPipe Pose that performs full-body pose tracking on mobile phones will be included in an upcoming release of ML Kit.
Fig 1. Example of MediaPipe Pose for upper-body pose tracking.
The solution utilizes a two-step detector-tracker ML pipeline, proven to be effective in our MediaPipe Hands and MediaPipe Face Mesh solutions. Using a detector, the pipeline first locates the pose region-of-interest (ROI) within the frame. The tracker subsequently predicts the pose landmarks within the ROI using the ROI-cropped frame as input. Note that for video use cases the detector is invoked only as needed, i.e., for the very first frame and when the tracker could no longer identify body pose presence in the previous frame. For other frames the pipeline simply derives the ROI from the previous frame’s pose landmarks.
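To make the detector-tracker hand-off concrete, here is a minimal sketch of the control flow described above. This is illustrative pseudocode only, not a MediaPipe API: `detect_roi`, `track`, and `next_roi` are hypothetical stand-ins for the detector model, the landmark model, and the ROI derivation step.

```python
from typing import Callable, Iterable

def track_poses(frames: Iterable,
                detect_roi: Callable,  # detector: frame -> ROI, or None if no person
                track: Callable,       # landmark model: (frame, roi) -> landmarks or None
                next_roi: Callable):   # derives the next frame's ROI from landmarks
    """Detector-tracker loop: the detector runs only when tracking is lost."""
    roi = None
    for frame in frames:
        if roi is None:
            # Very first frame, or the tracker lost the pose on the previous frame.
            roi = detect_roi(frame)
            if roi is None:
                yield None  # no person visible in this frame
                continue
        landmarks = track(frame, roi)
        if landmarks is None:
            roi = None  # pose lost; fall back to the detector on the next frame
            yield None
        else:
            roi = next_roi(landmarks)  # skip detection on the next frame
            yield landmarks
```

The key property is that the (comparatively expensive) detector amortizes to near zero cost on video, since the previous frame's landmarks are reused to localize the pose in the next frame.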
The pipeline is implemented as a MediaPipe graph that uses a pose landmark subgraph from the pose landmark module and renders using a dedicated upper-body pose renderer subgraph. The pose landmark subgraph internally uses a pose detection subgraph from the pose detection module.
The detector is inspired by our own lightweight BlazeFace model, used in MediaPipe Face Detection, as a proxy for a person detector. It explicitly predicts two additional virtual keypoints that firmly describe the human body center, rotation and scale as a circle. Inspired by Leonardo’s Vitruvian man, we predict the midpoint of a person’s hips, the radius of a circle circumscribing the whole person, and the incline angle of the line connecting the shoulder and hip midpoints.
Fig 2. Vitruvian man aligned via two virtual keypoints predicted by the BlazePose detector, in addition to the face bounding box.
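To make the alignment in Fig 2 concrete, the sketch below shows how a rotated square ROI could be derived from the two virtual keypoints: the hip midpoint gives the center, the distance to the second keypoint gives the circle radius (scale), and its direction gives the body incline. The padding factor and the sign convention for the rotation are assumptions for illustration; the exact values used inside MediaPipe may differ.

```python
import math
from dataclasses import dataclass

@dataclass
class RotatedRoi:
    cx: float        # ROI center x (hip midpoint)
    cy: float        # ROI center y
    size: float      # side length of the square ROI
    rotation: float  # radians to rotate the crop so the body is upright

def roi_from_virtual_keypoints(mid_hip, circle_edge, scale_margin=1.25):
    """Derive a rotated square ROI from the detector's two virtual keypoints.

    mid_hip: (x, y) midpoint of the hips (body center).
    circle_edge: (x, y) point on the circle circumscribing the whole person.
    scale_margin: hypothetical padding factor around the circle.
    """
    dx = circle_edge[0] - mid_hip[0]
    dy = circle_edge[1] - mid_hip[1]
    radius = math.hypot(dx, dy)          # circle radius encodes the person's scale
    body_angle = math.atan2(dy, dx)      # incline of the body axis
    # Rotate so the body axis points "up" (-y in image coordinates);
    # the sign convention here is an assumption.
    rotation = body_angle - (-math.pi / 2)
    return RotatedRoi(cx=mid_hip[0], cy=mid_hip[1],
                      size=2 * radius * scale_margin, rotation=rotation)
```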
The landmark model currently included in MediaPipe Pose predicts the location of 25 upper-body landmarks (see figure below), each with (x, y, z, visibility), plus two virtual alignment keypoints. Note that the z value should be discarded, as the model is currently not fully trained to predict depth, though this is on our roadmap. The model shares the same architecture as the full-body version that predicts 33 landmarks, described in more detail in the BlazePose Google AI Blog and in this paper.
Fig 3. 25 upper-body pose landmarks.
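Since each landmark's x and y are normalized to [0.0, 1.0] by the image width and height, downstream code typically converts them to pixel coordinates and filters on visibility. A minimal sketch follows; the 0.5 visibility threshold is an arbitrary choice for illustration.

```python
def landmarks_to_pixels(pose_landmarks, image_width, image_height,
                        min_visibility=0.5):
    """Convert normalized pose landmarks to pixel coordinates.

    pose_landmarks: the landmark list returned by the tracker, where each
        entry carries (x, y, z, visibility) with x and y normalized to
        [0.0, 1.0] by image width and height.
    min_visibility: illustrative threshold; landmarks below it are
        reported as None.
    """
    points = []
    for lm in pose_landmarks.landmark:
        if lm.visibility < min_visibility:
            points.append(None)  # likely occluded or out of frame
        else:
            # z is intentionally ignored here: the current model is not
            # fully trained to predict depth.
            points.append((int(lm.x * image_width), int(lm.y * image_height)))
    return points
```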
- Android target: (or download prebuilt ARM64 APK)
- iOS target:
Please first see general instructions for desktop on how to build MediaPipe examples.
- Running on CPU
- Running on GPU
The MediaPipe Python package is available on PyPI and can be installed simply with `pip install mediapipe` on Linux and macOS, as described below and in this colab. If you do need to build the Python package from source, see the additional instructions.
```bash
# Activate a Python virtual environment.
$ python3 -m venv mp_env && source mp_env/bin/activate

# Install MediaPipe Python package.
(mp_env)$ pip install mediapipe

# Run in Python interpreter.
(mp_env)$ python3
>>> import mediapipe as mp
>>> pose_tracker = mp.examples.UpperBodyPoseTracker()

# For image input
>>> pose_landmarks, _ = pose_tracker.run(
...     input_file='/path/to/input/file',
...     output_file='/path/to/output/file')
>>> pose_landmarks, annotated_image = pose_tracker.run(input_file='/path/to/file')

# To print out the pose landmarks, you can simply do "print(pose_landmarks)".
# However, the data points can be more accessible with the following approach.
>>> [print('x is', data_point.x, 'y is', data_point.y, 'z is', data_point.z,
...        'visibility is', data_point.visibility)
...  for data_point in pose_landmarks.landmark]

# For live camera input
# (Press Esc within the output image window to stop the run, or let it self
# terminate after 30 seconds.)
>>> pose_tracker.run_live()

# Close the tracker.
>>> pose_tracker.close()
```
Tip: Use the command `deactivate` to exit the Python virtual environment.
Please refer to these instructions.
- Google AI Blog: BlazePose - On-device Real-time Body Pose Tracking
- Paper: BlazePose: On-device Real-time Body Pose Tracking (presentation)
- Models and model cards