Prosecution Insights
Last updated: April 19, 2026
Application No. 18/333,357

LIVENESS DETECTION

Status: Non-Final OA (§103)
Filed: Jun 12, 2023
Examiner: TALUKDER, MD K
Art Unit: 2648
Tech Center: 2600 — Communications
Assignee: Tencent Technology (Shenzhen) Company Limited
OA Round: 1 (Non-Final)
Grant Probability: 80% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 6m
Grant Probability With Interview: 94%

Examiner Intelligence

Career Allow Rate: 80%, above average (645 granted / 808 resolved; +17.8% vs TC avg)
Interview Lift: +13.8% (moderate, roughly +14%), based on resolved cases with interview
Typical Timeline: 2y 6m average prosecution; 33 applications currently pending
Career History: 841 total applications across all art units
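The headline figures in this block follow from the raw counts. A minimal sketch of the arithmetic, assuming the interview lift is the plain difference between the with-interview grant rate and the career allow rate (the quoted +13.8% is slightly lower, so the product presumably computes it on a matched subset):

```python
granted, resolved = 645, 808          # career counts from the examiner's record

allow_rate = granted / resolved       # ~0.798, displayed as "80%"
with_interview = 0.94                 # grant rate shown for cases with an interview

# Naive lift: plain difference of the two rates (~0.142; the quoted
# +13.8% is close, but likely computed on a matched subset of cases).
lift = with_interview - allow_rate

print(f"allow rate {allow_rate:.1%}, naive interview lift {lift:+.1%}")
```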

Statute-Specific Performance

§101: 6.0% (-34.0% vs TC avg)
§103: 63.7% (+23.7% vs TC avg)
§102: 18.2% (-21.8% vs TC avg)
§112: 3.6% (-36.4% vs TC avg)
Tech Center averages are estimates, based on career data from 808 resolved cases.
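Because each delta is stated relative to the Tech Center average, the baseline can be back-solved from the figures above (a sketch, assuming delta = examiner rate minus TC average):

```python
# (rate %, delta vs TC avg %) per statute, as listed above
stats = {
    "§101": (6.0, -34.0),
    "§103": (63.7, +23.7),
    "§102": (18.2, -21.8),
    "§112": (3.6, -36.4),
}

# If delta = rate - tc_avg, then tc_avg = rate - delta.
tc_avg = {s: rate - delta for s, (rate, delta) in stats.items()}
print(tc_avg)  # each statute back-solves to ~40.0 (up to float noise)
```

That all four deltas imply the same ~40% baseline suggests they were computed against a single Tech Center figure rather than per-statute averages.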

Office Action

§103
Notice of Pre-AIA or AIA Status

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

2. It would be of great assistance to the Office if all incoming papers pertaining to a filed application carried the following items: i. Application number (checked for accuracy, including series code and serial number). ii. Group art unit number (copied from the most recent Office communication). iii. Filing date. iv. Name of the examiner who prepared the most recent Office action. v. Title of invention. vi. Confirmation number (see MPEP § 503).

Claim Status

3. Applicant elected claims 6-20 for further examination. Claims 1-5 are non-elected claims.

4. The Examiner has pointed out particular references contained in the prior art of record within the body of this action for the convenience of the Applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claim, other passages, paragraphs, and figures may apply. Applicant, in preparing the response, should consider fully the entire reference as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or discussed by the Examiner.

5. Claim interpretation: when multiple limitations are connected with "OR", one of the limitations carries no patentable weight, since both limitations are optional.

Claim Objections

6. Claims 16 and 17 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. Claim 18 is also objected to, since it depends on claim 17.

Claim Rejections - 35 U.S.C. § 103

7. The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 6-11, 13, 15, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over LI (Pub. No. 2019/0095701) in view of Laporta (Pub. No. 2017/0372312).

Regarding claim 6, LI discloses an access system comprising processing circuitry (Fig. 6) configured to:
- obtain a plurality of images of an access object (Fig. 5: acquiring image-401; Para. 4 & 7: face video clips capture a plurality of images; Para. 40 & Fig. 2: multiple images of a human);
- determine corresponding eigenvectors according to the plurality of images of the access object (Fig. 5: calculated eigenvector-404; Para. 117-118: face features are extracted and eigenvectors of the faces are calculated);
- capture an action behavior of the access object according to a relative change between the determined eigenvectors (Para. 73-75 & Fig. 5: capture eye, nose, lips/face edge and compare similarity; the eigenvector is compared with a reference eigenvector of the target object to determine a similarity value, i.e., a relative change);
- determine the access object as a live body in response to capturing the action behavior of the access object (Fig. 5: determine the human face is a living human-407; Para. 120-125: living face/human detection, with the face recognition result outputted); and
- perform identity recognition on the access object when the access object is determined as the live body (Para. 42 & 125 & Fig. 5: the "human face" is determined and the face recognition result is outputted, i.e., identity recognition).

LI does not explicitly disclose configuring an access permission for the access object when the identity recognition is performed successfully, and controlling access by the access object according to the configured access permission. In a similar field of endeavor, Laporta discloses:
- performing identity recognition on the access object when the access object is determined as the live body (Para. 2 & 9 & Fig. 10: authenticating transactions upon verifying a live/real person and verifying the user's personal identification);
- configuring an access permission for the access object when the identity recognition is performed successfully (Para. 7, 9 & 43; Para. 55 & Abstract: when the user is verified, payment request processing is initiated, i.e., an access permission); and
- controlling access by the access object according to the configured access permission (Fig. 10 & Para. 7 & 9: the user is verified and completion of the payment transaction is initiated).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to use Laporta's authentication of a real person for a payment transaction with the living body detection system taught by LI. Doing so would have resulted in effectively authenticating a real person and verifying biometric information for safe, secure, and robust payment processing in a mobile payment processing system.

Regarding claim 7, LI discloses that the processing circuitry includes first processing circuitry and second processing circuitry (Figs. 6-7); the first processing circuitry is configured to obtain the plurality of images of the access object (Fig. 5; Para. 4, 7 & 40; Fig. 2); and the second processing circuitry is configured to determine the corresponding eigenvectors, capture the action behavior, determine the access object as the live body, and perform the identity recognition, on the same mappings cited for claim 6 above. As with claim 6, LI does not explicitly disclose the access-permission limitations; in a similar field of endeavor, Laporta discloses them on the same mappings cited for claim 6 above.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to use Laporta's authentication of a real person for a payment transaction with the living body detection system taught by LI. Doing so would have resulted in effectively authenticating a real person and verifying biometric information for safe, secure, and robust payment processing in a mobile payment processing system.

Regarding claim 8, claim 8 corresponds to claim 7 and is analyzed accordingly. (Note: using multiple processors is an obvious design choice; the functionality of the device is unchanged, absent an unexpected result.)

Regarding claim 9, LI discloses that the processing circuitry is configured to control the access to a restricted area (Para. 20 & 42: the processor performs live body detection for a security door).

Regarding claim 10, LI discloses that the processing circuitry is configured to control the access to perform a payment according to the configured access permission (Para. 78 & 103: face recognition payment system).

Regarding claim 11, LI discloses that the processing circuitry is configured to control the access to a service according to the configured access permission (Abstract; Para. 78 & 103: access to a face recognition payment system).

Regarding claim 13, LI discloses that the processing circuitry is configured to: perform grayscale processing on the plurality of images to obtain a plurality of grayscale images (Para. 39: grayscales in each area of the face images); and input the plurality of grayscale images into a facial key point model to obtain the plurality of key points of a facial feature in the plurality of images (Para. 17 & 39: face feature points; Fig. 2).
Regarding claim 15, LI discloses that the processing circuitry is configured to: for each of the eigenvectors, compare the respective eigenvector with a normal structure interval; and add the respective eigenvector to a feature sequence when the respective eigenvector is within the normal structure interval (Fig. 5; Para. 73-74 & 122).

Regarding claim 19, LI discloses calling a facial recognition model to perform the identity recognition when the access object is determined as the live body (Fig. 5).

Regarding claim 20, LI discloses a method for providing system access, the method comprising: obtaining a plurality of images of an access object; determining corresponding eigenvectors according to the plurality of images of the access object; capturing an action behavior of the access object according to a relative change between the determined eigenvectors; determining, by processing circuitry, the access object as a live body in response to capturing the action behavior; and performing identity recognition on the access object when the access object is determined as the live body, on the same mappings cited for claim 6 above (Figs. 2 & 5; Para. 4, 7, 40, 42, 73-75, 117-118 & 120-125).
LI does not explicitly disclose configuring an access permission for the access object when the identity recognition is performed successfully, and controlling access by the access object according to the configured access permission. In a similar field of endeavor, Laporta discloses these limitations on the same mappings cited for claim 6 above (Fig. 10; Para. 2, 7, 9, 43 & 55; Abstract). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine Laporta's authentication of a real person for a payment transaction with LI's living body detection system, for the same reasons given for claim 6.

Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over LI (Pub. No. 2019/0095701) in view of Laporta (Pub. No. 2017/0372312), and further in view of TSAI et al. (Pub. No. 2013/0051632).

Regarding claim 12, LI discloses that the processing circuitry is configured to perform facial feature recognition on the plurality of images of the access object to obtain a plurality of key points of a facial feature in the plurality of images (Para. 116-117: facial feature points).
LI is silent regarding calculating a structure distance proportion of the facial feature according to the plurality of key points of the facial feature in each of the plurality of images to obtain the eigenvector corresponding to the respective image. TSAI et al. discloses this limitation (Para. 41: eigenvector; Claim 10: comparing distance measurements between facial features of a user in subsequent images to distance measurements between facial features in the images; Fig. 13). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to use the facial feature detection system to identify, recognize, and authenticate a person.

Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over LI (Pub. No. 2019/0095701) in view of Laporta (Pub. No. 2017/0372312), in view of TSAI et al. (Pub. No. 2013/0051632), and further in view of Chen et al. (Pub. No. 2017/00531565).

Regarding claim 14, LI discloses that the facial feature includes at least one of an eye or a mouth (Fig. 2) and an eigenvector (Fig. 5). LI is silent regarding each of the eigenvectors corresponding to at least one of an eye aspect ratio or a mouth aspect ratio (Para. 4 & 53: facial feature ratios). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to use the facial feature detection system to identify, recognize, and authenticate a person.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MD K TALUKDER, whose telephone number is (571) 270-3222. The examiner can normally be reached Mon-Thu from 10 am to 6 pm.
Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Wesley Kim, can be reached at 571-272-7867. The fax number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). For assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/MD K TALUKDER/
Primary Examiner, Art Unit 2648
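The eye and mouth "aspect ratio" eigenvectors at issue in claim 14 are a standard liveness signal: a blink collapses the eye aspect ratio toward zero, and tracking that ratio across frames is one concrete form of the claimed "relative change between the determined eigenvectors." A minimal sketch, using one common landmark convention rather than the formulation of any cited reference:

```python
import math

def eye_aspect_ratio(pts):
    """EAR from six eye landmarks p1..p6: corners (p1, p4), upper lid
    (p2, p3), lower lid (p6, p5). Hypothetical ordering for illustration."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    vertical = dist(pts[1], pts[5]) + dist(pts[2], pts[4])   # lid-to-lid openings
    horizontal = dist(pts[0], pts[3])                        # corner-to-corner width
    return vertical / (2.0 * horizontal)

# Synthetic landmarks: an open eye and a nearly closed (blinking) eye.
open_eye   = [(0, 0), (1, 2), (3, 2), (4, 0), (3, -2), (1, -2)]
closed_eye = [(0, 0), (1, 0.2), (3, 0.2), (4, 0), (3, -0.2), (1, -0.2)]

# The blink drives the ratio sharply down; thresholding the per-frame
# ratio over a video is one way to detect the action behavior.
print(eye_aspect_ratio(open_eye), eye_aspect_ratio(closed_eye))
```

A mouth aspect ratio for detecting an open mouth works the same way, with mouth landmarks in place of eye landmarks.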

Prosecution Timeline

Jun 12, 2023: Application Filed
Mar 21, 2026: Non-Final Rejection, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604637, "DISPLAY DEVICE": granted Apr 14, 2026 (2y 5m to grant)
Patent 12601808, "Beam Alignment Method and Related Device": granted Apr 14, 2026 (2y 5m to grant)
Patent 12602920, "IMAGE RECOGNITION METHOD AND APPARATUS, ELECTRONIC DEVICE, AND COMPUTER-READABLE MEDIUM": granted Apr 14, 2026 (2y 5m to grant)
Patent 12582302, "APPARATUS, SYSTEMS AND METHODS FOR IN VIVO IMAGING": granted Mar 24, 2026 (2y 5m to grant)
Patent 12575733, "STORAGE MEDIUM, IMAGE MANAGEMENT APPARATUS, READING TERMINAL, AND IMAGE MANAGEMENT SYSTEM": granted Mar 17, 2026 (2y 5m to grant)
Study what changed in these cases to get past this examiner; based on the 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 80% (94% with interview, +13.8%)
Median Time to Grant: 2y 6m
PTA Risk: Low
Based on 808 resolved cases by this examiner. Grant probability derived from career allow rate.
