Body In Parts

This is a body segmentation project. It can be used to segment an image into pixels that are and are not part of a person, and further into pixels that belong to each of twenty-four body parts.

Working:

Press the start button. The project runs with the values already set. You can alter the parameter values to play around and explore. Each parameter is explained below so that you understand it before you change it!

Instructions:
◆ Camera – default or integrated webcam.
◆ flipHorizontal – Defaults to false. Whether the segmentation & pose should be flipped/mirrored horizontally. Set this to true for videos that are flipped horizontally by default (i.e. a webcam), when you want the segmentation & pose returned in the proper orientation.
◆ Algorithm – person or multi-person.
Input
◆ Architecture – Can be either MobileNetV1 or ResNet50. It determines which BodyPix architecture to load.
◆ internalResolution – Defaults to 'medium' for MobileNet and 'low' for ResNet. The internal resolution used by the model. The larger the internal resolution, the more accurate the model, at the cost of slower prediction times. Available values are 'low', 'medium', 'high', or a positive number.
◆ outputStride – Can be one of 8, 16, or 32 (strides 16 and 32 are supported for the ResNet architecture; strides 8 and 16 are supported for the MobileNetV1 architecture). It specifies the output stride of the BodyPix model. The smaller the value, the larger the output resolution and the more accurate the model, at the cost of speed; a larger value gives a smaller model and faster prediction time but lower accuracy.
◆ multiplier – Can be one of 1.0, 0.75, or 0.50 (used only by the MobileNetV1 architecture, not by ResNet). It is the float multiplier for the depth (number of channels) of all convolution ops. The larger the value, the larger the size of the layers and the more accurate the model, at the cost of speed; a smaller value gives a smaller model and faster prediction time but lower accuracy.
◆ quantBytes – This argument controls the bytes used for weight quantization. The available options are:
• 4 bytes per float (no quantization). Leads to highest accuracy and original model size.
• 2 bytes per float. Leads to slightly lower accuracy and 2x model size reduction.
• 1 byte per float. Leads to lower accuracy and 4x model size reduction.
◆ Estimate – Part map or Segmentation. (The model parameters above are sketched in code below.)
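
The Input parameters above (Architecture, outputStride, multiplier, quantBytes) correspond to the configuration of the underlying BodyPix model. As a rough sketch (not the project's exact code), loading the model with the @tensorflow-models/body-pix package looks roughly like this; the loadNet helper name is purely illustrative, and internalResolution is passed later at estimation time:

    import * as bodyPix from '@tensorflow-models/body-pix';

    // Illustrative helper: load BodyPix with the model parameters described above.
    async function loadNet() {
      return bodyPix.load({
        architecture: 'MobileNetV1', // or 'ResNet50'
        outputStride: 16,            // 8 or 16 for MobileNetV1; 16 or 32 for ResNet50
        multiplier: 0.75,            // used by MobileNetV1 only: 1.0, 0.75, or 0.50
        quantBytes: 2                // 4, 2, or 1 bytes per weight
      });
    }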
Segmentation
◆ segmentationThreshold – Defaults to 0.7. Must be between 0 and 1. For each pixel, the model estimates a score between 0 and 1 that indicates how confident it is that part of a person is displayed in that pixel. This segmentationThreshold is used to convert these values to binary 0s or 1s by determining the minimum value a pixel's score must have to be considered part of a person. In essence, a higher value will create a tighter crop around a person but may result in some pixels that are part of a person being excluded from the returned segmentation mask.
◆ Effect – mask or bokeh (see the sketch after these parameters).
If mask:
◆ Opacity – The opacity when drawing the mask on top of the image. Defaults to 0.7. Should be a float between 0 and 1.
◆ maskBlurAmount – How many pixels to blur the mask by. Defaults to 0. Should be an integer between 0 and 20.
◆ maskBackground - checkbox
If bokeh:
◆ backgroundBlurAmount – How many pixels in the background blur into each other. Defaults to 3. Should be an integer between 1 and 20.
◆ edgeBlurAmount – How many pixels to blur on the edge between the person and the background by. Defaults to 3. Should be an integer between 0 and 20.
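
As a sketch of how these Segmentation parameters are used (assuming a loaded net from bodyPix.load() and video/canvas elements already on the page; the function name is illustrative):

    // Single-person segmentation followed by the mask or bokeh effect.
    async function renderSegmentationEffect(net, video, canvas) {
      const segmentation = await net.segmentPerson(video, {
        flipHorizontal: false,
        internalResolution: 'medium',
        segmentationThreshold: 0.7
      });

      // Effect: mask
      const mask = bodyPix.toMask(segmentation);
      bodyPix.drawMask(canvas, video, mask, 0.7 /* opacity */, 0 /* maskBlurAmount */);

      // Effect: bokeh (blurred background) – use instead of the two mask lines above
      // bodyPix.drawBokehEffect(canvas, video, segmentation, 3 /* backgroundBlurAmount */, 3 /* edgeBlurAmount */);
    }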
Multi-Person Decoding (see the sketch after these parameters)
◆ maxDetections - Defaults to 10. Maximum number of returned individual person detections per image.
◆ scoreThreshold – Defaults to 0.4. Only return individual person detections whose root part score is greater than or equal to this value.
◆ nmsRadius - Defaults to 20. Non-maximum suppression part distance in pixels. It needs to be strictly positive. Two parts suppress each other if they are less than nmsRadius pixels away.
◆ numKeypointForMatching – Set a value between 1 and 17.
◆ refineSteps – Defaults to 10. The number of refinement steps used when assigning the individual person segmentations. It needs to be strictly positive. The larger the value, the higher the accuracy and the slower the inference.
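
A minimal sketch of multi-person decoding with the parameters above (numKeypointForMatching is a project-level control and is not shown; the function name is illustrative):

    // Returns one segmentation object per detected person.
    async function segmentEveryone(net, video) {
      return net.segmentMultiPerson(video, {
        internalResolution: 'medium',
        segmentationThreshold: 0.7,
        maxDetections: 10,
        scoreThreshold: 0.4,
        nmsRadius: 20,
        refineSteps: 10
      });
    }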
Part Map
◆ segmentationThreshold – Defaults to 0.5. Must be between 0 and 1. For each pixel, the model estimates a score between 0 and 1 that indicates how confident it is that part of a person is displayed in that pixel. This segmentationThreshold is used to convert these values to binary 0s or 1s by determining the minimum value a pixel's score must have to be considered part of a person. In essence, a higher value will create a tighter crop around a person but may result in some pixels that are part of a person being excluded from the returned segmentation mask.
◆ Effect – part map, pixelation, or blur body part (see the sketch after these parameters).
◆ Opacity - The opacity when drawing the mask on top of the image. Defaults to 0.9. Should be a float between 0 and 1.
◆ colorScale – rainbow, warm, or spectral.
◆ blurBodyPartAmount - How many pixels in the body part should blend into each other. Defaults to 3. Should be an integer between 1 and 20.
◆ bodyPartEdgeBlurAmount - How many pixels to blur on the edge between the person and the background by. Defaults to 3. Should be an integer between 0 and 20.
◆ showFps - checkbox
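
A hedged sketch of the Part Map pipeline (assuming the same net, video and canvas as above; the project may wire the colorScale and showFps controls differently, and the function name is illustrative):

    async function renderPartMap(net, video, canvas) {
      const partSegmentation = await net.segmentPersonParts(video, {
        internalResolution: 'medium',
        segmentationThreshold: 0.5
      });

      // Colored part map; a different list of part colors can be passed as a second argument.
      const coloredParts = bodyPix.toColoredPartMask(partSegmentation);

      // Effect: part map
      bodyPix.drawMask(canvas, video, coloredParts, 0.9 /* opacity */);

      // Effect: pixelation – the last argument is the pixel cell width
      // bodyPix.drawPixelatedMask(canvas, video, coloredParts, 0.9, 0, false, 10);

      // Effect: blur body part – part ids 0 and 1 are the left and right face
      // bodyPix.blurBodyPart(canvas, video, partSegmentation, [0, 1], 3 /* blurAmount */, 3 /* edgeBlurAmount */);
    }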

This message will be removed in 3 seconds.

It is recommended to open this project on a laptop/desktop, as it is currently unavailable in mobile view.