Pose To Detect

A computer marking your posture and drawing a line skeleton over your body. Sounds strange! Where could this possibly be used? Human Activity Recognition is one of the many applications of this pose detection model. Try and think of others!

Working:

Press the start button. The project will run with the values already set. You can alter the parameter values to play around and explore. Each parameter is explained below so that you understand it before you change it!

This message will disappear after 3 seconds.

Instructions:
• Algorithm – Multi pose or single pose.
Input
• Architecture – Can be either MobileNetV1 or ResNet50. It determines which PoseNet architecture to load.
• inputResolution – A number or an Object of type {width: number, height: number}. Defaults to 257.  It specifies the size the image is resized and padded to before it is fed into the PoseNet model. The larger the value, the more accurate the model at the cost of speed. Set this to a smaller value to increase speed at the cost of accuracy. If a number is provided, the image will be resized and padded to be a square with the same width and height. If width and height are provided, the image will be resized and padded to the specified width and height.
• outputStride – Can be one of 8, 16, 32 (Stride 16, 32 are supported for the ResNet architecture and stride 8, 16, 32 are supported for the MobileNetV1 architecture. However if you are using stride 32 you must set the multiplier to 1.0). It specifies the output stride of the PoseNet model. The smaller the value, the larger the output resolution, and more accurate the model at the cost of speed. Set this to a larger value to increase speed at the cost of accuracy.
• Multiplier – Can be one of 1.0, 0.75, or 0.50 (the value is used only by the MobileNetV1 architecture and not by the ResNet architecture). It is the float multiplier for the depth (number of channels) for all convolution ops. The larger the value, the larger the size of the layers, and the more accurate the model at the cost of speed. Set this to a smaller value to increase speed at the cost of accuracy.
• quantBytes – This argument controls the bytes used for weight quantization. The available options are:
  • 4 bytes per float (no quantization). Leads to highest accuracy and original model size (~90MB).
  • 2 bytes per float. Leads to slightly lower accuracy and 2x model size reduction (~45MB).
  • 1 byte per float. Leads to lower accuracy and 4x model size reduction (~22MB).
**By default, PoseNet loads a MobileNetV1 architecture with a 0.75 multiplier. This is recommended for computers with mid-range/lower-end GPUs. A model with a 0.50 multiplier is recommended for mobile. The ResNet architecture is recommended for computers with even more powerful GPUs.**
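
To make the mapping concrete, here is a minimal sketch of how the parameters above feed into the model-loading call, assuming the project uses the TensorFlow.js PoseNet package (@tensorflow-models/posenet); the exact package and default values shown are assumptions, not something this project states.

```ts
import '@tensorflow/tfjs';
import * as posenet from '@tensorflow-models/posenet';

// Load PoseNet with the defaults recommended above for mid-range GPUs:
// MobileNetV1, output stride 16, 257x257 input, 0.75 multiplier, 2-byte weights.
async function loadModel(): Promise<posenet.PoseNet> {
  return posenet.load({
    architecture: 'MobileNetV1',                   // or 'ResNet50' on powerful GPUs
    outputStride: 16,                              // smaller stride = more accurate, slower
    inputResolution: { width: 257, height: 257 },  // larger = more accurate, slower
    multiplier: 0.75,                              // MobileNetV1 only; use 0.50 for mobile
    quantBytes: 2,                                 // ~45MB model, slightly lower accuracy
  });
}
```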
• Single Pose Detection:
• minPoseConfidence – Set the minimum confidence score for the pose. The value can range from 0 to 1.
• minPartConfidence – Set the minimum confidence score for the body part. The value can range from 0 to 1.
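
A sketch of the single-pose path, assuming the camera feed is an HTMLVideoElement and the library's estimateSinglePose call is used; the threshold values and the flipHorizontal choice are illustrative assumptions.

```ts
import * as posenet from '@tensorflow-models/posenet';

async function detectSingle(net: posenet.PoseNet, video: HTMLVideoElement) {
  const minPoseConfidence = 0.15;  // illustrative values; in the project these
  const minPartConfidence = 0.1;   // come from the parameter panel

  // Estimate one pose per frame; flipHorizontal mirrors the webcam input.
  const pose = await net.estimateSinglePose(video, { flipHorizontal: true });

  if (pose.score >= minPoseConfidence) {
    // Keep only the body parts the model is reasonably confident about.
    const parts = pose.keypoints.filter(k => k.score >= minPartConfidence);
    console.log(`pose score ${pose.score.toFixed(2)}, ${parts.length} confident parts`);
  }
}
```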
• Multi Pose Detection:
• maxPoseDetections – Set the maximum number of poses to detect. Defaults to 5.
• minPoseConfidence – Set the minimum confidence score for the pose. The value can range from 0 to 1.
• minPartConfidence – Set the minimum confidence score for the body part. The value can range from 0 to 1.
• nmsRadius – Non-maximum suppression part distance. It needs to be strictly positive. Two parts suppress each other if they are less than nmsRadius pixels away. Defaults to 30.
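
And the corresponding multi-pose sketch; mapping minPartConfidence onto the library's scoreThreshold option is my reading of the parameters above, not something this project confirms.

```ts
import * as posenet from '@tensorflow-models/posenet';

async function detectMultiple(net: posenet.PoseNet, video: HTMLVideoElement) {
  const minPoseConfidence = 0.15;
  const minPartConfidence = 0.1;

  const poses = await net.estimateMultiplePoses(video, {
    flipHorizontal: true,
    maxDetections: 5,                  // maxPoseDetections above
    scoreThreshold: minPartConfidence, // part-level confidence cut-off
    nmsRadius: 30,                     // parts closer than 30px suppress each other
  });

  // Poses below the pose-level confidence threshold are simply not drawn.
  const visible = poses.filter(p => p.score >= minPoseConfidence);
  console.log(`${visible.length} pose(s) above the confidence threshold`);
}
```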
Output
• showVideo – Tick this if you want to see the video being captured by the camera.
• showSkeleton – Tick this if you want to see the line skeleton drawn over the detected body.
• showPoints – Tick this if you want to see the points marked on your different body parts.
• showBoundingBox – Tick this if you want to see a box drawn around your detected body pose.
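
The output toggles correspond to what gets drawn onto the canvas each frame. Below is a sketch of that drawing step, assuming a 2D canvas context overlaid on the video; the colours and sizes are arbitrary choices for illustration, and showVideo simply controls whether the raw camera frame is painted underneath this overlay.

```ts
import * as posenet from '@tensorflow-models/posenet';

function drawPose(
  pose: posenet.Pose,
  ctx: CanvasRenderingContext2D,
  opts: { showPoints: boolean; showSkeleton: boolean; showBoundingBox: boolean },
  minPartConfidence = 0.1,
) {
  if (opts.showPoints) {
    // One dot per body part that clears the part-confidence threshold.
    for (const kp of pose.keypoints) {
      if (kp.score < minPartConfidence) continue;
      ctx.beginPath();
      ctx.arc(kp.position.x, kp.position.y, 3, 0, 2 * Math.PI);
      ctx.fillStyle = 'aqua';
      ctx.fill();
    }
  }

  if (opts.showSkeleton) {
    // Pairs of keypoints that are joined by a line in the skeleton.
    for (const [a, b] of posenet.getAdjacentKeyPoints(pose.keypoints, minPartConfidence)) {
      ctx.beginPath();
      ctx.moveTo(a.position.x, a.position.y);
      ctx.lineTo(b.position.x, b.position.y);
      ctx.strokeStyle = 'aqua';
      ctx.stroke();
    }
  }

  if (opts.showBoundingBox) {
    // Tight box around all detected keypoints.
    const { minX, minY, maxX, maxY } = posenet.getBoundingBox(pose.keypoints);
    ctx.strokeStyle = 'red';
    ctx.strokeRect(minX, minY, maxX - minX, maxY - minY);
  }
}
```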

It is recommended to open this project on a laptop/desktop, as it is currently unavailable in mobile view.