Face recognition liveness detection (open mouth and shake head recognition)

Table of Contents

One: Introduction

Two: Implementation idea analysis

Three: Source code implementation analysis

1. Click the recognition button to open the camera

2. CameraRules class: check camera permission

3. Initialize the page: create the camera view and the mouth-opening and head-shaking data

4. Start recognition with face frame detection

5. Facial feature recognition: determine whether a face is detected

6. After a face is detected, check its position and prompt the action

7. If the position is suitable, check whether the mouth is opened

8. After the mouth-opening check, verify whether the head is shaken

9. After the head-shake check, take a photo with a 3-second countdown

10. After the photo is taken, choose to retake or upload the picture

11. Retake repeats steps 5-9; Upload returns the image data via callback

12. Clean up the data

Four: iFlytek SDK download and configuration

1. SDK download

2. Add system library

3. Set Bitcode

4. User privacy permission configuration

Five: Actual use in the project

1. Download demo

2. Introduce FBYFaceRecognitionViewController into the project

3. Add code to the click event of the recognition button

4. Image callback function



One: Introduction

Recently, after implementing ID card and bank card recognition, the project moved on to face recognition and liveness detection. Face recognition includes face enrollment, face search, face 1:N comparison, and face N:N comparison. Liveness detection is also used in the secure login feature.

As is well known, Alipay uses the Face++ service for its face recognition. In this project, iFlytek's face recognition SDK is wrapped with a secondary encapsulation to implement liveness detection, recognizing two liveness actions: opening the mouth and shaking the head. As far as I know, iFlytek's service is based on Face++, its recognition rate is quite high, and packaged SDKs are available for both iOS and Android.

In practice, to protect user security, many apps supplement conventional account and password login with fingerprint login, gesture login, third-party login (QQ, WeChat, Alipay), and face recognition login. Below I will share how to implement liveness detection for face recognition, which is the most basic building block of face recognition login.

In addition, these blog posts are all drawn from technical summaries in my daily development. When time permits, I will cover each technical point for both iOS and Android, and I will try to attach demos for reference. If you need help with other technologies, please leave a message after the article and I will do my best to help you.

Two: Implementation idea analysis

1. Click the recognition button to open the camera

2. CameraRules class: check camera permission

3. Initialize the page: create the camera view and the mouth-opening and head-shaking data

4. Start recognition with face frame detection

5. Facial feature recognition: determine whether a face is detected

6. After a face is detected, check its position and prompt the action

7. If the position is suitable, check whether the mouth is opened

8. After the mouth-opening check, verify whether the head is shaken

9. After the head-shake check, take a photo with a 3-second countdown

10. After the photo is taken, choose to retake or upload the picture

11. Retake repeats steps 5-9; Upload returns the image data via callback

12. Clean up the data

Three: Source code implementation analysis

According to the analysis of implementation ideas, coding is implemented step by step:

1. Click the recognition button to open the camera

if ([CameraRules isCapturePermissionGranted]) {
    [self setDeviceAuthorized:YES];
} else {
    dispatch_async(dispatch_get_main_queue(), ^{
        NSString *info = @"No camera permission";
        [self showAlert:info];
        [self setDeviceAuthorized:NO];
    });
}

2. CameraRules class: check camera permission

//Detect camera permission
+ (BOOL)isCapturePermissionGranted
{
    if ([AVCaptureDevice respondsToSelector:@selector(authorizationStatusForMediaType:)]) {
        AVAuthorizationStatus authStatus = [AVCaptureDevice authorizationStatusForMediaType:AVMediaTypeVideo];
        if (authStatus == AVAuthorizationStatusRestricted || authStatus == AVAuthorizationStatusDenied) {
            return NO;
        } else if (authStatus == AVAuthorizationStatusNotDetermined) {
            //Ask for permission and wait synchronously on a semaphore
            dispatch_semaphore_t sema = dispatch_semaphore_create(0);
            __block BOOL isGranted = YES;
            [AVCaptureDevice requestAccessForMediaType:AVMediaTypeVideo completionHandler:^(BOOL granted) {
                isGranted = granted;
                dispatch_semaphore_signal(sema);
            }];
            dispatch_semaphore_wait(sema, DISPATCH_TIME_FOREVER);
            return isGranted;
        } else {
            return YES;
        }
    } else {
        return YES;
    }
}

3. Initialize the page: create the camera view and the mouth-opening and head-shaking data

//Create the camera page and the mouth-opening and head-shaking data
[self faceUI];
[self faceCamera];
[self faceNumber];

4. Start recognition with face frame detection

float cx = (left + right) / 2;
float cy = (top + bottom) / 2;
float w = right - left;
float h = bottom - top;
//Swap the centre x/y for the rotated camera frame
float ncx = cy;
float ncy = cx;
CGRect rectFace = CGRectMake(ncx - w / 2, ncy - w / 2, w, h);
if (!isFrontCamera) {
    rectFace = rSwap(rectFace);
    rectFace = rRotate90(rectFace, faceImg.height, faceImg.width);
}
BOOL isNotLocation = [self identifyYourFaceLeft:left right:right top:top bottom:bottom];
if (isNotLocation == YES) {
    return nil;
}

5. Facial feature recognition: determine whether a face is detected

for (id key in keys) {
    id attr = [landmarkDic objectForKey:key];
    if (attr && [attr isKindOfClass:[NSDictionary class]]) {
        if (!isFrontCamera) {
            p = pSwap(p);
            p = pRotate90(p, faceImg.height, faceImg.width);
        }
        if (isCrossBorder == YES) {
            [self delateNumber];
            return nil;
        }
        p = pScale(p, widthScaleBy, heightScaleBy);
        [arrStrPoints addObject:NSStringFromCGPoint(p)];
    }
}

6. After a face is detected, check its position and prompt the action

if (right - left < 230 || bottom - top < 250) {
    self.textLabel.text = @"Too far";
    [self delateNumber];
    isCrossBorder = YES;
    return YES;
} else if (right - left > 320 || bottom - top > 320) {
    self.textLabel.text = @"Too close";
    [self delateNumber];
    isCrossBorder = YES;
    return YES;
} else {
    if (isJudgeMouth != YES) {
        self.textLabel.text = @"Please repeat the mouth-opening action";
        [self tomAnimationWithName:@"openMouth" count:2];
        if (left < 100 || top < 100 || right > 460 || bottom > 400) {
            isCrossBorder = YES;
            isJudgeMouth = NO;
            self.textLabel.text = @"Please adjust your position first";
            [self delateNumber];
            return YES;
        }
    } else if (isJudgeMouth == YES && isShakeHead != YES) {
        self.textLabel.text = @"Please repeat the head-shaking action";
        [self tomAnimationWithName:@"shakeHead" count:4];
        number = 0;
    } else {
        takePhotoNumber += 1;
        if (takePhotoNumber == 2) {
            [self timeBegin];
        }
    }
    isCrossBorder = NO;
}
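Taken together, steps 6 through 9 behave like a small state machine: position check, then mouth opening, then head shaking, then the countdown photo. Below is a minimal, hypothetical Python sketch of that flow; the class and parameter names are illustrative only (the actual project code is Objective-C), though the size thresholds mirror the snippet above.

```python
class LivenessFlow:
    """Hypothetical sketch of the liveness-detection state machine."""

    def __init__(self):
        self.state = "position"

    def on_face(self, width, height, mouth_opened, head_shaken):
        # Step 6: reject faces that are too far away or too close.
        if self.state == "position":
            if 230 <= width <= 320 and 250 <= height <= 320:
                self.state = "open_mouth"
        # Step 7: wait for the mouth-opening action.
        elif self.state == "open_mouth":
            if mouth_opened:
                self.state = "shake_head"
        # Step 8: wait for the head-shaking action.
        elif self.state == "shake_head":
            if head_shaken:
                # Step 9: start the 3-second countdown, then take the photo.
                self.state = "countdown"
        return self.state
```

Each camera frame feeds one `on_face` call; the state only advances when the current action is confirmed, which is why a user must perform the actions in order.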

7. If the position is suitable, check whether the mouth is opened

if (rightX && leftX && upperY && lowerY && isJudgeMouth != YES) {
    number++;
    if (number == 1 || number == 300 || number == 600 || number == 900) {
        mouthWidthF = rightX - leftX < 0 ? abs(rightX - leftX) : rightX - leftX;
        mouthHeightF = lowerY - upperY < 0 ? abs(lowerY - upperY) : lowerY - upperY;
        NSLog(@"%d,%d", mouthWidthF, mouthHeightF);
    } else if (number > 1200) {
        [self delateNumber];
        [self tomAnimationWithName:@"openMouth" count:2];
    }
    mouthWidth = rightX - leftX < 0 ? abs(rightX - leftX) : rightX - leftX;
    mouthHeight = lowerY - upperY < 0 ? abs(lowerY - upperY) : lowerY - upperY;
    NSLog(@"%d,%d", mouthWidth, mouthHeight);
    NSLog(@"Before opening mouth: width=%d, height=%d", mouthWidthF - mouthWidth, mouthHeight - mouthHeightF);
    if (mouthWidth && mouthWidthF) {
        if (mouthHeight - mouthHeightF >= 20 && mouthWidthF - mouthWidth >= 15) {
            isJudgeMouth = YES;
            imgView.animationImages = nil;
        }
    }
}
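In plain terms, the check samples a baseline mouth width and height while the mouth is still closed, then treats the mouth as opened once it becomes at least 20 points taller and 15 points narrower than that baseline. A hypothetical Python sketch of just the comparison (names are illustrative):

```python
def is_mouth_opened(base_w, base_h, cur_w, cur_h):
    # base_w/base_h: mouth size sampled before the action (closed mouth)
    # cur_w/cur_h:   mouth size in the current frame
    # Opening the mouth makes it noticeably taller and slightly narrower.
    return (cur_h - base_h) >= 20 and (base_w - cur_w) >= 15
```

Requiring both a height increase and a width decrease makes the check harder to trip accidentally, for example by simply moving closer to the camera.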

8. After the mouth-opening check, verify whether the head is shaken

if ([key isEqualToString:@"mouth_middle"] && isJudgeMouth == YES) {
    if (bigNumber == 0) {
        firstNumber = p.x;
        bigNumber = p.x;
        smallNumber = p.x;
    } else if (p.x > bigNumber) {
        bigNumber = p.x;
    } else if (p.x < smallNumber) {
        smallNumber = p.x;
    }
    if (bigNumber - smallNumber > 60) {
        isShakeHead = YES;
        [self delateNumber];
    }
}
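The head-shake check simply tracks the extreme horizontal positions of the mouth_middle landmark across frames; once the spread between the largest and smallest x exceeds 60 points, the head is considered shaken. A hypothetical Python sketch of the same idea (the function and threshold here are illustrative):

```python
def detect_shake(xs, threshold=60):
    """Return True once the horizontal spread of a tracked
    landmark (e.g. mouth_middle) exceeds the threshold."""
    smallest = largest = xs[0]
    for x in xs[1:]:
        largest = max(largest, x)
        smallest = min(smallest, x)
        # A wide left-right excursion counts as a head shake.
        if largest - smallest > threshold:
            return True
    return False
```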

9. After the head-shake check, take a photo with a 3-second countdown

if (timeCount >= 1) {
    self.textLabel.text = [NSString stringWithFormat:@"Taking the photo in %ld s", (long)timeCount];
} else {
    [theTimer invalidate];
    theTimer = nil;
    [self didClickTakePhoto];
}

10. After taking the photo, choose to retake or upload the picture

-(void)didClickPhotoAgain
{
    [self delateNumber];
    [self.previewLayer.session startRunning];
    self.textLabel.text = @"Please adjust your position";
    [backView removeFromSuperview];
    isJudgeMouth = NO;
    isShakeHead = NO;
}

11. Select Retake to repeat steps 5-9, or select Upload to return the image data via callback

-(void)didClickUpPhoto
{
    //Photo uploaded successfully
    [self.faceDelegate sendFaceImage:imageView.image];
    [self.navigationController popViewControllerAnimated:YES];
}

12. Clean up the data

-(void)delateNumber
{
    number = 0;
    takePhotoNumber = 0;
    mouthWidthF = 0;
    mouthHeightF = 0;
    mouthWidth = 0;
    mouthHeight = 0;
    smallNumber = 0;
    bigNumber = 0;
    firstNumber = 0;
    imgView.animationImages = nil;
    imgView.image = [UIImage imageNamed:@"shakeHead0"];
}

Four: iFlytek SDK download and configuration

1. SDK download

Because the project uses the iFlytek face recognition SDK, you need to create an application on the iFlytek open platform and download the SDK.

2. Add system library

Add iflyMSC.framework from the lib directory of the SDK package to the project. Also add the other libraries the demo depends on, plus the iOS system libraries required by the SDK.

3. Set Bitcode

Search for Bitcode in Targets - Build Settings, find the corresponding option, and set it to NO.

4. User privacy permission configuration

Configure the user privacy permissions in Info.plist, in particular the camera usage description.
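The original screenshot of the Info.plist settings is not reproduced here, so below is a minimal sketch of the usual entry. iOS refuses camera access unless Info.plist contains a camera usage description; the description string is only an example:

```xml
<key>NSCameraUsageDescription</key>
<string>This app uses the camera for face recognition and liveness detection.</string>
```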

Five: Actual use in the project

1. Download demo

Download the demo and add the FBYFaceData folder from the demo to your project.

2. Introduce FBYFaceRecognitionViewController into the project

#import "FBYFaceRecognitionViewController.h"

3. Add code to the click event of the recognition button

-(void)pushToFaceStreamDetectorVC
{
    FBYFaceRecognitionViewController *faceVC = [[FBYFaceRecognitionViewController alloc] init];
    faceVC.faceDelegate = self;
    [self.navigationController pushViewController:faceVC animated:YES];
}

4. Image callback function

-(void)sendFaceImage:(UIImage *)faceImage
{
    NSLog(@"Image uploaded successfully");
}

-(void)sendFaceImageError
{
    NSLog(@"Image upload failed");
}



Original link: https://blog.csdn.net/Galaxy_0/article/details/129096089
