Integration Manual: Use the “Pose Similarity Comparison” Function to Adapt Motion (Action) Recognition and Detection in Seconds

This week brings a milestone feature update to the “AI Motion Recognition Mini Program Plugin”: the “Pose Similarity Comparison” function. Using this feature can greatly speed up your adaptation of motion (action) recognition and detection. The following walks you through the feature.

1. Make sure to upgrade the plugin version to v1.0.7.

//app.json
{
    "plugins": {
        "aiSport": {
            "version": "1.0.7",
            "provider": "wx6130e578c4a26a1a"
        }
    }
}

2. API Introduction for Pose Similarity Comparison

The “Pose Similarity Comparison” function partitions and comprehensively compares two given groups of “human body key points” and produces a score, which saves you the cumbersome work of configuring detection rules when adapting motion (action) recognition and detection. The function exposes three main objects in the plugin's calc namespace: calc.PoseComparer, calc.PoseComparerResult, and calc.PoseComparerPartItem. For details, please refer to api-docs.
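As a rough sketch of how these objects fit together (assuming the plugin alias "aiSport" configured in app.json above; the comments describe the roles of the objects as introduced here, not their exact fields):

//Obtain the plugin instance via the standard requirePlugin API (alias "aiSport" from app.json)
const AiSports = requirePlugin('aiSport');

//calc.PoseComparer         – the comparator; its compare() method takes two keypoint arrays
//calc.PoseComparerResult   – the result returned by compare(): per-partition items plus an overall score
//calc.PoseComparerPartItem – one partition entry inside the result's items array
const PoseComparer = AiSports.calc.PoseComparer;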

3. Capture a Pose Sample

Before performing pose comparison, you need to capture a key point sample of a standard pose. You can extract the sample through the “Motion Construction Debugging Tool” we provide.

4. Perform Sample Comparison

//Human body key points of the sample (standard) pose
const sample =
 [{y:95.41808288282594,x:214.42673274576924,score:0.51611328125,name:"nose"},
 {y:84.61684727250136,x:221.80983627909686,score:0.7265625,name:"left_eye"},
 {y:87.59059985661885,x:202.12153237356293,score:0.59130859375,name:"right_eye"},
 {y:92.85449529945058,x:234.93538334278358,score:0.814453125,name:"left_ear"},
 {y:99.07546188234281,x:188.58581196413604,score:0.6806640625,name:"right_ear"},
 {y:149.86859452983884,x:271.3040866650822,score:0.7246093153953552,name:"left_shoulder"},
 {y:162.78905492065545,x:158.09624324078422,score:0.82666015625,name:"right_shoulder"},
 {y:236.41516213602512,x:280.8747980656871,score:0.728515625,name:"left_elbow"},
 {y:246.8062369181066,x:156.3188420992395,score:0.55859375,name:"right_elbow"},
 {y:305.46100866896046,x:286.61722490605007,score:0.6591796875,name:"left_wrist"},
 {y:313.80120003234475,x:152.9006975047454,score:0.70849609375,name:"right_wrist"},
 {y:304.5039375289,x:251.342317172392,score:0.87646484375,name:"left_hip"},
 {y:303.68360752741575,x:189.6796075527766,score:0.8740234375,name:"right_hip"},
 {y:431.38422581120494,x:237.66987231438497,score:0.70703125,name:"left_knee"},
 {y:430.01698132540423,x:189.6796075527766,score:0.8017578125,name:"right_knee"},
 {y:529.8258287888553,x:229.19295650242066,score:0.6884765625,name:"left_ankle"},
 {y:534.747908937738,x:201.71134233782658,score:0.578125,name:"right_ankle"}];

//Key points of the current frame: after frame capture and recognition, take keypoints from the human body recognition result
const frame =
 [{y:154.06250001297832,x:258.7499999883252,score:0.728515625,name:"nose"},
 {y:143.12500001305142,x:254.37499998835446,score:0.56298828125,name:"left_eye"},
 {y:143.75001908653357,x:255.937499988344,score:0.69482421875,name:"right_eye"},
 {y:143.984394086532,x:229.99999998851743,score:0.43115234375,name:"left_ear"},
 {y:146.17187501303107,x:236.09374998847667,score:0.4919433891773224,name:"right_ear"},
 {y:201.4062690861481,x:205.9375190621646,score:0.51416015625,name:"left_shoulder"},
 {y:202.03125001265758,x:227.96874998853102,score:0.66259765625,name:"right_shoulder"},
 {y:281.25001908561427,x:234.6874999884861,score:0.26416015625,name:"left_elbow"},
 {y:270.6250190856853,x:254.06249998835656,score:0.278076171875,name:"right_elbow"},
 {y:246.09376908584932,x:289.06249998812257,score:0.1997070610523224,name:"left_wrist"},
 {y:238.43750001241418,x:300.62499998804526,score:0.50927734375,name:"right_wrist"},
 {y:321.5624618648858,x:218.59376906208004,score:0.58154296875,name:"left_hip"},
 {y:323.43750001184594,x:224.06249998855716,score:0.5615234375,name:"right_hip"},
 {y:453.43750001097675,x:217.34376906208837,score:0.6103515625,name:"left_knee"},
 {y:455.6250000109622,x:214.06249998862396,score:0.51416015625,name:"right_knee"},
 {y:572.5000000101808,x:215.31249998861563,score:0.403564453125,name:"left_ankle"},
 {y:593.1250000100429,x:216.0937499886104,score:0.52294921875,name:"right_ankle"}];

//Create a new comparator and perform the comparison
//requirePlugin is the standard Mini Program API for obtaining the plugin (alias "aiSport" as configured in app.json)
const AiSports = requirePlugin('aiSport');
const poseComparer = new AiSports.calc.PoseComparer();
const result = poseComparer.compare(sample, frame);
console.log(result);

// output result
//{items:
// [{key:"head",score:0.4327263684686711,summary:"head deflection similarity"},
// {key:"trunk",score:0.8407704975917485,summary:"trunk shape similarity"},
// {key:"left_hand",score:0.2155245751055277,summary:"Left hand similarity"},
// {key:"right_hand",score:0.21361728579451628,summary:"Left hand similarity"},
// {key:"left_foot",score:0.5147016736506456,summary:"Left foot similarity"},
// {key:"right_foot",score:0.5190758118853293,summary:"right foot similarity"}],
// score:0.5110266728697409 //Overall similarity score
//}

5. Apply the Similarity Result

After obtaining the similarity result, you can judge the overall score, or the score of a specified partition, directly according to the requirements of the movement (action) (a similarity ≥ 0.80 is recommended as a pass). If you have more detailed requirements, you can also configure additional enhanced rules for re-checking; for details, please refer to the relevant body-calc chapters of the integration document.
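For example, a minimal sketch of how the result above could be judged (the 0.80 threshold is the recommendation from this section; the "trunk" partition key is taken from the sample output and is only an illustrative choice):

//Overall judgment: treat similarity ≥ 0.80 as a pass
const PASS_THRESHOLD = 0.80;
const passed = result.score >= PASS_THRESHOLD;

//Partition judgment: check a single partition (here "trunk", chosen only as an example)
const trunkItem = result.items.find(item => item.key === 'trunk');
const trunkPassed = trunkItem && trunkItem.score >= PASS_THRESHOLD;

console.log(passed ? 'Overall pose passed' : 'Overall pose failed', trunkPassed);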

Note: In the current similarity comparison, confidence is relatively higher for front and back views, and slightly lower for side views. We will optimize side-view comparison in the future, so stay tuned.

About the AI Motion Recognition Mini Program Plugin:

This plugin provides your mini program with AI capabilities for human body detection and motion recognition. It currently supports recognition, detection, timing, and count analysis for rope skipping, jumping jacks, push-ups, sit-ups, squats, planks, horse-stance squats, and more, and additional sports types are being added. The plugin's motion recognition engine provides rule-based recognition: you can add a new motion (action) recognition capability by configuring a few simple rules, and complex motion types can also be handled through code extension.