Prosecution Insights
Last updated: April 19, 2026
Application No. 18/399,976

A METHOD FOR DETECTING ANOMALIES IN IDENTITY VERIFICATION

Non-Final OA: §101, §102, §103
Filed: Dec 29, 2023
Examiner: ADU-JAMFI, WILLIAM NMN
Art Unit: 2677
Tech Center: 2600 (Communications)
Assignee: Raritex Trade Ltd.
OA Round: 1 (Non-Final)
Grant Probability: Favorable
Expected OA Rounds: 1-2
Expected Time to Grant: 2y 9m

Examiner Intelligence

Career Allow Rate: 0% (0 granted / 0 resolved; -62.0% vs TC avg). Grants only 0% of cases.
Interview Lift: +0.0% (minimal), among resolved cases with interview.
Avg Prosecution (typical timeline): 2y 9m
Total Applications (career history): 25 across all art units; 25 currently pending.

Statute-Specific Performance

§101: 19.5% (-20.5% vs TC avg)
§102: 28.7% (-11.3% vs TC avg)
§103: 36.8% (-3.2% vs TC avg)
§112: 14.9% (-25.1% vs TC avg)

Tech Center averages are estimates. Based on career data from 0 resolved cases.

Office Action

§101, §102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Objections

Claim 4 is objected to because of the following informalities: the end of Claim 4 is missing a period. Appropriate correction is required.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: "Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title."

Claims 1-8 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The method of claim 1 is directed to a process, one of the statutory categories of invention, and therefore passes Step 1 (Statutory Category, MPEP § 2106.03). However, the following elements of Claim 1 recite steps that can be performed in the human mind or with pen and paper, and therefore fail Step 2A Prong One. These steps constitute mental processes because they describe acts of observation, evaluation, and judgment that a human can practically perform mentally:

- receiving an identity verification request containing image data, wherein the image data contains the face of the person to be verified;
- generating an image data descriptor using a model configured to determine a set of visual features not associated with the person's face in the image data;
- searching for image data descriptors similar to said generated image data descriptor among image data descriptors associated with other persons' verification requests;
- in response to finding at least one similar image data descriptor, marking said person's verification request as anomalous, otherwise marking the request as not anomalous.
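Read as an algorithm rather than a mental process, the four recited steps form a simple descriptor-similarity pipeline. The sketch below is illustrative only: the `descriptor` stand-in is a toy intensity histogram, not the claimed model, and all names and the threshold are hypothetical.

```python
import math

def descriptor(image_pixels):
    # Stand-in for the claimed model: a trivial 2-bin intensity histogram
    # over the image region (the claimed model instead extracts visual
    # features not associated with the person's face).
    lo = sum(1 for p in image_pixels if p < 128) / len(image_pixels)
    return [lo, 1.0 - lo]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def check_request(image_pixels, prior_descriptors, threshold=0.99):
    """Mark a verification request anomalous if its descriptor is similar
    to a descriptor from another person's prior request."""
    d = descriptor(image_pixels)
    similar = any(cosine(d, prior) >= threshold for prior in prior_descriptors)
    return ("anomalous" if similar else "not anomalous"), d
```

A first request with no stored descriptors is marked "not anomalous"; a later request whose descriptor nearly matches a stored one is marked "anomalous".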
Claim 1 fails Step 2A Prong Two because the additional elements beyond the judicial exception, including a generic processor and model, do not integrate the judicial exception into a practical application. The processor is described only as performing ordinary data processing operations, such as receiving data, generating descriptors, and searching a dataset, which are generic computer functions that do not improve the functioning of a computer or any other technology (MPEP § 2106.05(a)). The additional elements are merely generic computer components used as a tool, amounting to instructions for applying the abstract idea (MPEP § 2106.05(f)). The claim also does not impose meaningful limits on the computer components such that the method is tied to a particular machine; the processor and model may operate on any generic computing system (MPEP § 2106.05(b)).

Claim 1 also fails Step 2B, as these generic elements are well-understood, routine, and conventional (WURC), adding nothing significantly more than the abstract idea itself (MPEP § 2106.05(d), MPEP § 2106.07(a)(III)). This can be seen in the specification, which states that "it will be apparent, however, to one skilled in the art that embodiments of the disclosure can be practiced without these specific details," confirming that the implementation of the processor and model relies on activities that would be readily understood and routinely performed by a person of ordinary skill in the art. As claim 5 contains this identical ineligible subject matter, it is also rejected.

Regarding Claim 2, the only additional element beyond the judicial exception is the recitation of a machine learning model with an EfficientNet or ResNet architecture. Claim 2 fails Step 2A Prong Two, as this additional element does not integrate the judicial exception into a practical application.
This additional element does not amount to an improvement in the functioning of a computer or any other technology or technical field because the claim language and the specification do not describe any improvement to how EfficientNet and ResNet function, nor do they provide any technical refinement to the underlying neural network architecture (MPEP § 2106.05(a)). Merely specifying a known algorithmic architecture for a general-purpose computer constitutes nothing more than mere instructions to apply an exception (MPEP § 2106.05(f)). Additionally, the claim remains capable of being performed on any generic processor, and the recited ML architecture does not create a particular machine (MPEP § 2106.05(b)). Finally, the use of EfficientNet and ResNet reflects routine and conventional ML practice, which constitutes insignificant extra-solution activity because the limitation merely implements the abstract idea without adding technological meaning or transformation (MPEP § 2106.05(g)).

Claim 2 also fails Step 2B because the additional elements are well-understood, routine, and conventional (WURC) and add nothing significantly more than the abstract idea itself. This is shown by the publication by Najla Mohamed Salh Kailani et al. ("Comparison between ResNet and EfficientNet for Image Classification - An Analytical Study of Performance and Efficiency"), which states that "Convolutional Neural Networks (CNNs) have driven major progress in this domain [image classification], with ResNet and Efficient Net emerging as two of the most influential architectures" (Kailani: Abstract). As claim 6 contains this identical ineligible subject matter, it is also rejected.

Regarding Claim 3, the only additional element beyond the judicial exception is the recitation of a model that is trained using a Contrastive Learning technique.
Claim 3 fails Step 2A Prong Two, as the additional element beyond the judicial exception does not integrate the judicial exception into a practical application. The additional element does not amount to an improvement in the functioning of a computer or any other technology or technical field because the claim language and the specification do not describe any improvement to contrastive learning itself, nor any modification that enhances model training, computational efficiency, or system architecture (MPEP § 2106.05(a)). Merely specifying a known and widely used machine-learning training procedure for a general-purpose computer constitutes nothing more than mere instructions to apply an exception (MPEP § 2106.05(f)). Additionally, the claim remains capable of being performed in any conventional computing environment capable of executing ML frameworks, meaning that contrastive learning does not limit the claim to a particular machine (MPEP § 2106.05(b)). Finally, the use of contrastive learning reflects routine ML practice, which constitutes insignificant extra-solution activity because the limitation merely implements the abstract idea without adding technological meaning or transformation (MPEP § 2106.05(g)).

Claim 3 also fails Step 2B because the additional elements are WURC and add nothing significantly more than the abstract idea itself. This is shown by the publication by Wolf et al. ("Less is More: Selective reduction of CT data for self-supervised pre-training of deep learning models with contrastive learning improves downstream classification performance"), which states that "self-supervised pre-training of deep learning models with contrastive learning is a widely used technique in image analysis" (Wolf: Abstract). As claim 7 contains this identical ineligible subject matter, it is also rejected.
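For context on the training technique the claim recites: triplet loss, which the §102 analysis below treats as a form of contrastive learning, penalizes an embedding only when a same-class pair is not closer than a different-class pair by a margin. A minimal sketch using squared Euclidean distances (hypothetical inputs, not the applicant's or any reference's training code):

```python
def triplet_loss(anchor, positive, negative, margin=1.0):
    """Contrastive-style objective: pull the anchor toward the positive
    (same-class) embedding and push it away from the negative
    (different-class) embedding, up to a margin."""
    d_pos = sum((a - p) ** 2 for a, p in zip(anchor, positive))  # anchor-positive distance
    d_neg = sum((a - n) ** 2 for a, n in zip(anchor, negative))  # anchor-negative distance
    return max(d_pos - d_neg + margin, 0.0)
```

A triplet whose negative is already farther than the positive by more than the margin contributes zero loss, so training focuses on violating triplets.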
Regarding Claim 4, the only additional element beyond the judicial exception is that the similarity of the descriptors is determined using one of the following metrics: Euclidean Distance, Minkowski Distance, Cosine Similarity, Dot Product. This additional element fails Step 2A Prong One, as it recites an abstract idea (mathematical concepts). Mathematical concepts are defined as mathematical relationships, mathematical formulas or equations, or mathematical calculations. The claim must recite (i.e., set forth or describe) a mathematical concept rather than include limitations that are merely based on math. The claim does not recite any additional elements beyond the judicial exception and therefore fails Step 2A Prong Two and Step 2B. As claim 8 contains this identical ineligible subject matter, it is also rejected.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless -

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-3 and 5-7 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Ruo-yu (CN 112733117A).
Regarding Claim 1, Ruo-yu teaches a method for detecting anomalies in identity verification performed by a processor (Ruo-yu: Description), comprising the following steps:

"According to one embodiment, in order to detect 'human flesh fraudster attacks', an authentication method is proposed, which involves using photos of users (especially their faces) and/or their official identification (ID) documents in the background. The similarity is used to detect groups with similar backgrounds. In most 'human flesh fraudster attacks' cases, the victims are organized and gathered in the same place to complete the fraudulent eKYC processing, so their facial photos and/or their official identification (ID) documents are included in the photos. The background may be similar."

"As shown in FIG. 4, the exemplary computing device 400 includes a processor 404 for executing software routines. Although a single processor is shown for clarity, the computing device 400 may also include a multi-processor system. The processor 404 is connected to a communication facility 406 for communicating with other components of the computing device 400."

Receiving an identity verification request containing image data, wherein the image data contains the face of the person to be verified (Ruo-yu: Description):

"At step 112, a series of new data associated with a particular digital due diligence process is received. Similar to step 104 for preparing training data, a face detection model is used to detect a face from each of a plurality of face photos newly submitted by the user."

Generating an image data descriptor using a model configured to determine a set of visual features not associated with the person's face in the image data (Ruo-yu: Description):

"The face detection model is used to detect human faces, and then the detected human faces are removed from each photo, so that only the background image is retained, as shown in Figure 1C."

"The method 300 also includes a step 306 of using the processing device to generate a background vector for each of the extracted first background image and the extracted second background image using the trained image similarity model."

Searching for image data descriptors similar to said generated image data descriptor among image data descriptors associated with other persons' verification requests (Ruo-yu: Description):

"The image similarity model uses historical background images for training; detects the existence of similar backgrounds from the generated background vectors."

"Similar background data can be automatically generated from historical data without manual input/manual operation."

"In step 116, a clustering algorithm (e.g., DBSCAN, K-means, spectral clustering) is used to detect similar background groups. The existence of a similar background group indicates that a 'human flesh fraudster attack' has occurred."

In response to finding at least one similar image data descriptor, marking said person's verification request as anomalous, otherwise marking the request as not anomalous (Ruo-yu: Description):

"If the presence of a similar background is detected, the processing device 204 triggers an authentication alarm signal."

Regarding Claim 2, Ruo-yu teaches the method according to Claim 1, wherein the model is a machine learning model with an EfficientNet or ResNet architecture (Ruo-yu: Description):

"Therefore, different model architectures can be used to train the image similarity model - the backbone part can be ResNet, IR_SE, etc., and the head part can be softmax, triplet loss, arcFace, etc."

Regarding Claim 3, Ruo-yu teaches the method according to Claim 1, wherein the model is trained using a Contrastive Learning technique (Ruo-yu: Description):

"Therefore, different model architectures can be used to train the image similarity model - the backbone part can be ResNet, IR_SE, etc., and the head part can be softmax, triplet loss, arcFace, etc."

Explanation: Triplet loss is a form of contrastive learning.

Regarding Claim 5, Ruo-yu teaches a method for detecting anomalies in identity verification using identification documents performed by a processor (Ruo-yu: Description), comprising the following steps:

"According to one embodiment, in order to detect 'human flesh fraudster attacks', an authentication method is proposed, which involves using photos of users (especially their faces) and/or their official identification (ID) documents in the background. The similarity is used to detect groups with similar backgrounds. In most 'human flesh fraudster attacks' cases, the victims are organized and gathered in the same place to complete the fraudulent eKYC processing, so their facial photos and/or their official identification (ID) documents are included in the photos. The background may be similar."

"As shown in FIG. 4, the exemplary computing device 400 includes a processor 404 for executing software routines. Although a single processor is shown for clarity, the computing device 400 may also include a multi-processor system. The processor 404 is connected to a communication facility 406 for communicating with other components of the computing device 400."

Receiving an identity verification request comprising image data representing an identification document of the person to be verified (Ruo-yu: Description):

"The background image extraction device 202 can use the identity certification document alignment model to detect the identity certification document in each of the multiple identity certification document photos submitted by the user for a specific digital due diligence process."

Generating a descriptor of the identification document's image data using a model configured to determine at least one of:

- a set of visual features not associated with the person's identification document represented in the identification document's image data;
- a set of visual features not associated with the person's face or other identification document's data pictured on the identification document represented in the image data (Ruo-yu: Description):

"Once the ID alignment model is used to detect the actual identity document in the photo of the identity document, the image of the actual identity document is removed, leaving the background of the photo of the identity document, as shown in Figure 1E."

"In step 114, using the trained image similarity model (i.e., trained in step 110), a background vector is generated for each background extracted from the face photo and the photo of the identification document."

Searching for identification documents' image descriptors similar to said generated descriptor of the identification document's image data among identification documents' image data descriptors belonging to other identity verification requests (Ruo-yu: Description):

"The image similarity model uses historical background images for training; detects the existence of similar backgrounds from the generated background vectors."

"Similar background data can be automatically generated from historical data without manual input/manual operation."

"In step 116, a clustering algorithm (e.g., DBSCAN, K-means, spectral clustering) is used to detect similar background groups. The existence of a similar background group indicates that a 'human flesh fraudster attack' has occurred."

In response to finding at least one similar identification document's image descriptor, marking said identity verification request as anomalous, otherwise marking the request as not anomalous (Ruo-yu: Description):

"If the presence of a similar background is detected, the processing device 204 triggers an authentication alarm signal."

Regarding Claim 6, Ruo-yu teaches the method according to Claim 5, and the additional limitations are met as in the consideration of Claim 2 above.

Regarding Claim 7, Ruo-yu teaches the method according to Claim 5, and the additional limitations are met as in the consideration of Claim 3 above.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 4 and 8 are rejected under 35 U.S.C. 103 as being unpatentable over Ruo-yu (CN 112733117A) in view of San-liang (CN117058769A).

Regarding Claim 4, Ruo-yu teaches the method according to claim 1, but does not teach that the similarity of the descriptors is determined using one of the following metrics: Euclidean Distance, Minkowski Distance, Cosine Similarity, Dot Product.
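The four metrics Claim 4 recites are standard vector comparisons. As a reference point, they can be sketched in a few lines of plain Python (illustrative only, not any party's implementation):

```python
import math

def euclidean(a, b):
    # Straight-line distance between two descriptor vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def minkowski(a, b, p=3):
    # Generalizes Manhattan (p=1) and Euclidean (p=2) distance.
    return sum(abs(x - y) ** p for x, y in zip(a, b)) ** (1.0 / p)

def dot_product(a, b):
    return sum(x * y for x, y in zip(a, b))

def cosine_similarity(a, b):
    # Dot product after normalizing each vector to unit length.
    return dot_product(a, b) / (math.sqrt(dot_product(a, a)) * math.sqrt(dot_product(b, b)))
```

The distance metrics treat smaller values as more similar, while cosine similarity and dot product treat larger values as more similar, so a threshold-based comparison must pick a direction accordingly.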
However, San-liang teaches using cosine similarity as the similarity-calculation metric, stating that the method "calculates the cosine similarity between the high-order background features of the image to be identified and the high-order background feature library of face anomaly attack samples according to the formula" (San-liang: Description). San-liang further emphasizes that the purpose of calculating similarity is to determine whether an input sample is anomalous, stating that "if the similarity is greater than the preset threshold, the image to be recognized is a facial abnormality attack sample" and "if the similarity is not greater than the preset threshold, the image to be recognized is a normal recognition sample" (San-liang: Description). San-liang also teaches that similarity-based anomaly screening improves the robustness and accuracy of identity-verification systems, explaining that its method "improves the recognition accuracy and robustness of face anomaly attacks" and "can be widely used to assist face liveness detection and improve the ability to defend against fake face attacks" (San-liang: Description).

Thus, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Ruo-yu's authentication system to incorporate San-liang's cosine similarity. San-liang teaches that similarity metrics, including cosine similarity, improve anomaly-detection reliability when comparing background-based descriptors, the same type of descriptors generated by Ruo-yu. San-liang's teachings show that its similarity-metric computation directly enhances the decision step already present in Ruo-yu and is presented as a solution to the same problem. San-liang's method requires no inventive modification to adapt its similarity calculation to Ruo-yu's descriptor-comparison step.
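San-liang's quoted decision rule reduces to a threshold test on the best match against a library of known attack features. A minimal sketch, where the function names, library contents, and threshold value are all hypothetical:

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def classify(query, attack_library, threshold=0.8):
    """San-liang's rule as quoted: if similarity to a known attack sample
    exceeds the preset threshold, the image is treated as an abnormality
    attack sample; otherwise it is a normal recognition sample."""
    best = max(cosine(query, ref) for ref in attack_library)
    return "attack sample" if best > threshold else "normal sample"
```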
Regarding Claim 8, Ruo-yu teaches the method according to claim 5, and the additional limitations are met as in the consideration of claim 4 above.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Zhu et al. ("Similarity Retrieval Based on Image Background Analysis") teaches an improved portrait background similarity retrieval method that first segments and removes portraits, then uses an efficient retrieval model to find similar backgrounds, and finally verifies similarity, significantly boosting accuracy and efficiency over traditional methods by focusing on background features.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to WILLIAM ADU-JAMFI, whose telephone number is (571) 272-9298. The examiner can normally be reached M-T 8:00-6:00.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Andrew Bee, can be reached at (571) 270-5183. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/WILLIAM ADU-JAMFI/
Examiner, Art Unit 2677

/ANDREW W BEE/
Supervisory Patent Examiner, Art Unit 2677

Prosecution Timeline

Dec 29, 2023 - Application Filed
Dec 11, 2025 - Non-Final Rejection: §101, §102, §103 (current)


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: Favorable
Median Time to Grant: 2y 9m
PTA Risk: Low

Based on 0 resolved cases by this examiner. Grant probability derived from career allow rate.
