Prosecution Insights
Last updated: April 19, 2026
Application No. 18/431,865

METHOD AND SYSTEM FOR IDENTIFYING SPOOFS OF IMAGES

Non-Final OA §102
Filed
Feb 02, 2024
Examiner
THOMAS, MIA M
Art Unit
2665
Tech Center
2600 — Communications
Assignee
Identy Inc.
OA Round
1 (Non-Final)
Grant Probability: 86% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 0m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 86% (606 granted / 703 resolved) — above average, +24.2% vs TC avg
Interview Lift: +15.7% on resolved cases with interview — strong
Avg Prosecution: 3y 0m
Currently Pending: 12
Total Applications: 715 (across all art units)
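The headline figures above follow from simple arithmetic on the reported career counts. A minimal Python sketch; note the cap at 99% is an assumption (86.2 + 15.7 would exceed 100%, and the dashboard reports 99% with interview), not something the source states:

```python
# Career allow rate from the reported counts (606 granted of 703 resolved).
granted, resolved = 606, 703
allow_rate = 100 * granted / resolved  # percent

# Interview-adjusted grant probability. The +15.7-point lift would push the
# figure past 100%, so we assume (hypothetically) a cap at 99%.
interview_lift = 15.7
with_interview = min(allow_rate + interview_lift, 99.0)

print(round(allow_rate, 1))      # 86.2, displayed as "86%"
print(round(with_interview, 1))  # 99.0, displayed as "99%"
```

This also explains why the "+15.7%" lift yields 99% rather than a number above 100%.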

Statute-Specific Performance

§101: 14.5% (-25.5% vs TC avg)
§103: 43.0% (+3.0% vs TC avg)
§102: 20.5% (-19.5% vs TC avg)
§112: 17.9% (-22.1% vs TC avg)
Tech Center average estimate: ~40% for each statute • Based on career data from 703 resolved cases
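Each statute row pairs the examiner's rate with a delta against the Tech Center average, so the baseline can be recovered as rate minus delta. A short sketch (the shared ~40% baseline is derived from the four rows, not stated explicitly in the source):

```python
# Examiner rate (%) and delta vs Tech Center average (%) per statute,
# taken from the Statute-Specific Performance table.
stats = {
    "§101": (14.5, -25.5),
    "§103": (43.0, +3.0),
    "§102": (20.5, -19.5),
    "§112": (17.9, -22.1),
}

# Recover the implied TC baseline for each statute: rate - delta.
baselines = {s: round(rate - delta, 1) for s, (rate, delta) in stats.items()}
print(baselines)  # every statute implies the same 40.0% baseline
```

All four rows imply the same 40.0% Tech Center baseline, which is consistent with the chart legend describing a single average line.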

Office Action

§102
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Preliminary Amendment

This Office Action is responsive to communications including a preliminary amendment filed on 02/02/2024. Claims 1-14 were originally pending in this application. Claims 1-14 have been amended and claims 15-20 have been added herein. Claims 1-20 are now pending in view of the preliminary amendment. The Applicant respectfully submits that no new matter has been added by these amendments. An Office Action on the merits follows below.

Priority

Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 07/31/2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-4, 13 and 14 are rejected under 35 U.S.C. 102(a)(1) and/or (a)(2) as being anticipated by Rowe (US 20210110018 A1).

Regarding Claim 1: Rowe discloses a method for differentiating a real object in an image from a spoof of the real object (Refer to para [002]; "Embodiments of the present disclosure relate generally to biometric recognition, authentication, and spoof detection, and more specifically to systems and methods for using machine learning for image-based spoof detection."), the method comprising:

obtaining an image comprising at least one object, wherein the at least one object comprises at least one biometric identifier (Refer to para [068]; "The 400 begins at step 402, at which the computer system 104 obtains an input-data set that includes a plurality of images captured of a biometric-authentication subject by the camera system 116."), wherein the at least one object includes one or more of a finger, a fingerprint, a face, or a palm (Refer to para [133]; "one or more embodiments of the present systems and methods are also applicable to other non-contact biometric-authentication contexts such as fingerprint scanning, hand scanning, palm scanning, retinal scanning, iris scanning, periocular scanning, animal authentication, and/or the like.");

extracting information including three-dimensional information (Refer to para [069]; "In some cases, the biometric-authentication subject is related to a spoofing attempt and is a 2D or 3D representation of a face such as the face of the user 102, who is an authorized user of the computer system 104.") and semantic information from the image, wherein the semantic information relates the at least one object in the image to the at least one biometric identifier and/or relates different objects in the image to each other (Refer to para [088 and 089]; "the pluralities of images used in training and in inference can reflect micro characteristics such as pulsatile motion, which relates to a heartbeat or pulse of a biometric-authentication subject as reflected by arteries and/or veins expanding and/or contracting as blood flows through them. Other micro characteristics could also be reflected in the respective pluralities of images of biometric-authentication subjects used for training and used during inference as well.");

merging the extracted three-dimensional information and the extracted semantic information to a combined information; processing the combined information by a classifier (Refer to para [098 and 116]; "disparity data takes the form of a set of identified facial landmarks from the two images. Thus, in at least some such embodiments, after a disparity-data generator (e.g., the disparity-data generator 818) identifies (using, e.g., a deep network) a number of facial landmarks in both a right-camera image and a left-camera image, a combined set of identified landmarks (e.g., landmark coordinates) is input into a classification network (referred to herein as a disparity-data-processing network) that has been trained to predict whether such a set of landmarks corresponds with a real face or rather with a spoof attempt.");

and outputting, by the classifier, a data set which indicates whether the at least one object in the image is the real object or a spoof of the real object (Refer to para [101]; "In various different embodiments, the output of the network could take the form of a binary (real/spoof) value, a scalar confidence percentage, a quantized scored (e.g., 1-5), a result from a finite set of possible results such as {real, spoof, unsure}, an array of confidence scores corresponding to various spoof types, and/or one or more other forms. In some embodiments, an output may not be provided until a certainty determination is made, or until a certain level of confidence is reached, or the like.").

Regarding Claim 2: Rowe discloses that the image is obtained by at least one image sensor of a computing device (Refer to para [132], wherein the camera inherently houses an image sensor; "As but one example, in at least one embodiment, the computer system 104 is used, by way of the camera system 116, to plurality of images of a biometric-authentication subject, and the processing of that plurality of images in order to obtain a spoof-detection result for the biometric-authentication subject is carried out at least in part on the server 108 and/or one or more other computing devices.").

Regarding Claim 3: Rowe discloses that the image sensor comprises a 2D image sensor and/or a 3D image sensor (Refer to para [050]; "It is noted that example configurations in which one or both of the right-side camera 302 and the left-side camera 304 are arranged to capture images in the IR spectrum (perhaps only in the IR spectrum, or perhaps as part of a wideband spectrum) are helpful in detecting spoof attempts in which a 2D image of a face is displayed on a smartphone, tablet, or the like and then held up to the camera system 116 in an authentication attempt.").

Regarding Claim 4: Rowe discloses that extracting the three-dimensional information comprises creating a depth map of the image (Refer to para [116]; "Such embodiments make use of latent information present in the two sets of landmarks without explicitly developing a disparity map, avoiding such intermediate calculations. In other embodiments, a disparity-data generator does perform a stereoscopic analysis on the identified landmarks, and generates a disparity map that is input into a disparity-data-processing network as disparity data (e.g., the disparity data 820).").

Regarding Claim 13: Rowe discloses that each step of the method is carried out on a mobile device (Refer to para [067]; "The description of the method 400 as being carried out by the mobile device 104 is for simplicity of presentation—as stated above, in at least one embodiment, the instructions 208 include an application that is executing on the computer system 104 to carry out the method 400.").

Regarding Claim 14: Rowe discloses a computing device comprising a processor (Refer to "a processor 204" and para [024]; "As used in the present disclosure, a module includes both hardware and instructions. The hardware can include one or more processors, one or more microprocessors, one or more microcontrollers, one or more microchips, one or more application-specific integrated circuits (ASICs), one or more field programmable gate arrays (FPGAs), one or more graphical processing units (GPUs), one or more tensor processing units (TPUs), and/or one or more devices and/or components of any other type deemed suitable by those of skill in the art for a given implementation."); an image sensor (Refer to para [029]; "The camera system 116 can, as described above, be coupled with the computer system 104 via a data connection such as a USB connection. In other embodiments, other types of connections can be used (e.g., Bluetooth). In some cases, the camera system 116 is an embedded component of the computer system 104 or of a peripheral device such as an external monitor. These are just a few examples, as other arrangements can be used as well. The camera system 116 can include a single camera or multiple cameras. Each camera in the camera system 116 can be arranged to be capable of capturing images and/or video. In at least one embodiment, the camera system 116 includes at least one camera capable of capturing images. In some embodiments, the camera system 116 includes multiple cameras capable of capturing images."); and a storage device ("a data storage 206," at para [038]), wherein the storage device includes computer readable instructions that, when executed by the processor (Refer to para [038]; "In at least one embodiment, the data storage 206 contains instructions 208 that are executable by the processor 204 for carrying out various functions described herein as being carried out on or by the computer system 104. As described below, in at least some embodiments, the instructions 208 include an application executable by the processor 204 to carry out such functions."), cause the computing device to perform the method of claim 1 (for the sake of brevity, refer to the rejection of independent claim 1).

Allowable Subject Matter

Claims 5-12 and 15-20 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. The prior art, either singly or in combination, does not teach, disclose, or suggest at least the following claim limitation(s): "…the information extracted from the image is combined by stacking the information into a tensor having at least one channel."

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MIA M THOMAS, whose telephone number is (571) 270-1583. The examiner can normally be reached M-Th 8:30am-4:30pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Stephen (Steve) Koziol, can be reached at (408) 918-7630.

The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

MIA M. THOMAS
Primary Examiner, Art Unit 2665

/MIA M THOMAS/
Primary Examiner, Art Unit 2665

Prosecution Timeline

Feb 02, 2024
Application Filed
Mar 25, 2026
Non-Final Rejection — §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602938
SYSTEM AND METHOD FOR ITEM IDENTIFICATION USING CONTAINER-BASED CLASSIFICATION
2y 5m to grant • Granted Apr 14, 2026
Patent 12597154
IMAGE ANALYSIS METHOD AND CAMERA APPARATUS
2y 5m to grant • Granted Apr 07, 2026
Patent 12590529
BOREHOLE IMAGE INTERPRETATION AND ANALYSIS
2y 5m to grant • Granted Mar 31, 2026
Patent 12586220
SYSTEM AND METHOD FOR CAMERA RE-CALIBRATION BASED ON AN UPDATED HOMOGRAPHY
2y 5m to grant • Granted Mar 24, 2026
Patent 12579220
Visual Attribute Expansion via Multiple Machine Learning Models
2y 5m to grant • Granted Mar 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 86%
With Interview: 99% (+15.7%)
Median Time to Grant: 3y 0m
PTA Risk: Low
Based on 703 resolved cases by this examiner. Grant probability derived from career allow rate.
