Prosecution Insights
Last updated: April 19, 2026
Application No. 18/463,579

SYSTEMS AND METHODS FOR SECURE BIOMETRIC-BASED ELECTRONIC TRANSACTIONS

Status: Final Rejection (§103)
Filed: Sep 08, 2023
Examiner: PARK, EDWARD
Art Unit: 2675
Tech Center: 2600 (Communications)
Assignee: Fidelity Information Services LLC
OA Round: 2 (Final)
Grant Probability: 82% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 9m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 82% (576 granted / 704 resolved); +19.8% vs Tech Center average
Interview Lift: +18.4% for resolved cases with an interview
Avg Prosecution: 2y 9m; 27 applications currently pending
Total Applications: 731 across all art units

Statute-Specific Performance

§101: 16.9% (-23.1% vs TC avg)
§103: 47.3% (+7.3% vs TC avg)
§102: 21.3% (-18.7% vs TC avg)
§112: 6.3% (-33.7% vs TC avg)

TC averages are Tech Center estimates; based on career data from 704 resolved cases.

Office Action

DETAILED ACTION

Contents
Notice of Pre-AIA or AIA Status
Response to Amendment
Response to Arguments
Claim Rejections - 35 USC § 103
Allowable Subject Matter
Conclusion

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

This action is responsive to applicant's amendment and remarks received on 12/16/25. Claims 1-20 are currently pending.

Response to Arguments

Applicant's arguments with respect to claims 1, 13, and 20 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 13, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Tussy (US 11,157,606 B2) in view of Gottemukkula et al. (US 9,390,327 B2).

Regarding claim 1, Tussy discloses a computer-implemented method for performing a facial biometric authentication, the method comprising:

storing a validation image of a user (see col. 12, lines 44-56; user is next prompted to provide a plurality of images of his or her face using a camera 114 on the mobile device 112 (hereinafter, "enrollment images") in step 510);

receiving a vector representation of a raw image of the user, wherein the raw image was processed and a machine learning model was applied to the processed raw image to determine the vector representation of features of the processed raw image (see col. 14, lines 45-60, col. 7, lines 49-65; authentication routing 120 may perform facial recognition on the authentication images to obtain biometric information ("authentication biometrics"). In another embodiment, the mobile device 112 performs facial recognition to obtain the authentication biometrics and sends the authentication biometrics to the server 120 ... Facial recognition as used herein refers to a process that can analyze a face using an algorithm, mapping its facial features, and converting them to biometric data, such as numeric data.);

wherein the vector representation of the validation image was determined by a same type of machine learning model applied to processed raw images (see col. 7, line 61 - col. 8, line 10, col. 11, lines 22-27, col. 15, lines 10-16; The facial recognition module 366 may process the image data to generate facial data (biometric information) and perform a compare function in relation to other facial data to determine a facial match as part of an identity determination ... FIG. 9, by using algorithms to process the characteristics of the face and light striking the face between the different images, the authentication server 120 can determine that the face in the authentication images is three-dimensional, i.e. not a representation on a printed picture or video screen. Where the mobile device 120 sends only the authentication biometrics 120 to the server, the server 120 may validate the realness or three-dimensional aspects of the user imaged by comparing the biometric results of the different images).

Tussy does not teach determining a plurality of distances between the vector representation of the raw image and a vector representation of the validation image with a plurality of different algorithms; and outputting an approval or rejection of a biometric authentication based on the determined distances.

Gottemukkula, in the same field of endeavor, teaches determining a plurality of distances between the vector representation of the raw image and a vector representation of the validation image with a plurality of different algorithms (see col. 17, lines 30-67, col. 4, lines 20-67, col. 5, lines 1-20); and outputting an approval or rejection of a biometric authentication based on the determined distances (see col. 17, lines 1-30, col. 2, lines 1-20, fig. 1).

It would have been obvious (before the effective filing date of the claimed invention) or (at the time the invention was made) to one of ordinary skill in the art to modify Tussy to utilize the cited limitations as suggested by Gottemukkula. The suggestion/motivation for doing so would have been to improve the efficiency and accuracy (see col. 1, lines 40-67). Furthermore, the prior art collectively includes each element claimed (though not all in the same reference), and one of ordinary skill in the art could have combined the elements in the manner explained above using known engineering design, interface, and/or programming techniques, without changing a "fundamental" operating principle of Tussy, while the teaching of Gottemukkula continues to perform the same function as originally taught prior to being combined, in order to produce the repeatable and predictable result.
It is for at least the aforementioned reasons that the examiner has reached a conclusion of obviousness with respect to the claim in question.

Regarding claim 13, Tussy discloses a system for performing a facial biometric authentication, the system comprising:

a memory having processor-readable instructions stored therein (see col. 6, lines 30-45; memory); and at least one processor configured to access the memory and execute the processor-readable instructions to perform operations including (see col. 6, lines 30-45; processor configured to execute instructions stored in memory):

storing a validation image of a user (see col. 12, lines 44-56; user is next prompted to provide a plurality of images of his or her face using a camera 114 on the mobile device 112 (hereinafter, "enrollment images") in step 510);

receiving a vector representation of a raw image of the user, wherein the raw image was processed and a machine learning model was applied to the processed raw image to determine the vector representation of features of the processed raw image (see col. 14, lines 45-60, col. 7, lines 49-65; authentication routing 120 may perform facial recognition on the authentication images to obtain biometric information ("authentication biometrics"). In another embodiment, the mobile device 112 performs facial recognition to obtain the authentication biometrics and sends the authentication biometrics to the server 120 ... Facial recognition as used herein refers to a process that can analyze a face using an algorithm, mapping its facial features, and converting them to biometric data, such as numeric data.);

wherein the vector representation of the validation image was determined by a same type of machine learning model applied to processed raw images (see col. 7, line 61 - col. 8, line 10, col. 11, lines 22-27, col. 15, lines 10-16; The facial recognition module 366 may process the image data to generate facial data (biometric information) and perform a compare function in relation to other facial data to determine a facial match as part of an identity determination ... FIG. 9, by using algorithms to process the characteristics of the face and light striking the face between the different images, the authentication server 120 can determine that the face in the authentication images is three-dimensional, i.e. not a representation on a printed picture or video screen. Where the mobile device 120 sends only the authentication biometrics 120 to the server, the server 120 may validate the realness or three-dimensional aspects of the user imaged by comparing the biometric results of the different images).

Tussy does not teach determining a plurality of distances between the vector representation of the raw image and a vector representation of the validation image with a plurality of different algorithms; and outputting an approval or rejection of a biometric authentication based on the determined distances.

Gottemukkula, in the same field of endeavor, teaches determining a plurality of distances between the vector representation of the raw image and a vector representation of the validation image with a plurality of different algorithms (see col. 17, lines 30-67, col. 4, lines 20-67, col. 5, lines 1-20); and outputting an approval or rejection of a biometric authentication based on the determined distances (see col. 17, lines 1-30, col. 2, lines 1-20, fig. 1).

It would have been obvious (before the effective filing date of the claimed invention) or (at the time the invention was made) to one of ordinary skill in the art to modify Tussy to utilize the cited limitations as suggested by Gottemukkula. The suggestion/motivation for doing so would have been to improve the efficiency and accuracy (see col. 1, lines 40-67).
Furthermore, the prior art collectively includes each element claimed (though not all in the same reference), and one of ordinary skill in the art could have combined the elements in the manner explained above using known engineering design, interface, and/or programming techniques, without changing a "fundamental" operating principle of Tussy, while the teaching of Gottemukkula continues to perform the same function as originally taught prior to being combined, in order to produce the repeatable and predictable result. It is for at least the aforementioned reasons that the examiner has reached a conclusion of obviousness with respect to the claim in question.

Regarding claim 20, Tussy discloses a computer-implemented method for performing a facial biometric authentication, the method comprising:

capturing a raw image of a user (see col. 5, lines 64-67; a user 108 may have a mobile device 112 which may be used to access one or more of the user's accounts via authentication systems. A user 108 may have a mobile device 112 that can capture a picture of the user 108, such as an image of the user's face);

determining, by a preprocessing module, a processed raw image by performing image processing on the raw image (see col. 10, lines 10-30; processor 208 may process image data to perform image recognition, such as in the case of facial detection, item detection, facial recognition, item recognition, or bar/box code reading);

applying, by a machine learning module, a machine learning model to the processed raw image to determine a vector representation of the processed raw image based on features of the processed raw image (see col. 10, lines 30-67; facial detection module 308 is provided to execute facial detection algorithms while a facial recognition module 321 includes software code that recognizes the face or facial features of a user, such as to create numeric values which represent one or more facial features (facial biometric information) that are unique to the user.);

transmitting the vector representation of the processed raw image to a server (see col. 7, lines 10-25; The data including either the image(s), biometric information, or both are sent over the network 116 to the server 120.); and

receiving from the server, based on the vector representation being compared to a benchmark vector, an approval or rejection of a biometric authentication (see col. 7, lines 1-50; Using image processing and image recognition algorithms, the server 120 processes the person's biometric information, such as facial data, and compares the biometric information with biometric data stored in the database 124 to determine the likelihood of a match ... By using facial recognition processing, an accurate identity match may be established. Based on this and optionally one or more other factors, access may be granted, or an unauthorized user may be rejected).

Tussy does not teach determining a plurality of distances between the vector representation of the raw image and a vector representation of the validation image with a plurality of different algorithms (see col. 17, lines 30-67, col. 4, lines 20-67, col. 5, lines 1-20); and outputting an approval or rejection of a biometric authentication based on the determined distances (see col. 17, lines 1-30, col. 2, lines 1-20, fig. 1). Tussy does not teach wherein the comparison is based on a determination of a plurality of distances between the vector representation of the processed raw image and a vector representation of a validation image with a plurality of different algorithms.

Gottemukkula, in the same field of endeavor, teaches wherein the comparison is based on a determination of a plurality of distances between the vector representation of the processed raw image and a vector representation of a validation image with a plurality of different algorithms (see col. 17, lines 30-67, col. 4, lines 20-67, col. 5, lines 1-20).

It would have been obvious (before the effective filing date of the claimed invention) or (at the time the invention was made) to one of ordinary skill in the art to modify Tussy to utilize the cited limitations as suggested by Gottemukkula. The suggestion/motivation for doing so would have been to improve the efficiency and accuracy (see col. 1, lines 40-67). Furthermore, the prior art collectively includes each element claimed (though not all in the same reference), and one of ordinary skill in the art could have combined the elements in the manner explained above using known engineering design, interface, and/or programming techniques, without changing a "fundamental" operating principle of Tussy, while the teaching of Gottemukkula continues to perform the same function as originally taught prior to being combined, in order to produce the repeatable and predictable result. It is for at least the aforementioned reasons that the examiner has reached a conclusion of obviousness with respect to the claim in question.

Claims 2-4 and 14-16 are rejected under 35 U.S.C. 103 as being unpatentable over Tussy (US 11,157,606 B2) with Gottemukkula et al. (US 9,390,327 B2), and further in view of Ali et al. (IETDL: "Age-invariant face recognition system using combined shape and texture features").

Regarding claims 2-4, Tussy with Gottemukkula teaches all elements as mentioned above in claim 1.
Tussy with Gottemukkula does not teach expressly performing a facial recognition algorithm on the raw image; performing a face cropping algorithm on the raw image; determining that the raw image has an approved designation; and upon performing the facial recognition algorithm and the face cropping algorithm on the raw image, saving the raw image as a first processed raw image; upon determining that the image has an approved designation, performing a gray scale algorithm and/or an image reshaping algorithm on the first processed raw image to generate a second processed raw image; and saving the second processed raw image; performing an invariant transformation on the second processed image; and saving an output of the invariant transformation as the processed raw image.

Ali, in the same field of endeavor, teaches performing a facial recognition algorithm on the raw image; performing a face cropping algorithm on the raw image; determining that the raw image has an approved designation; and upon performing the facial recognition algorithm and the face cropping algorithm on the raw image, saving the raw image as a first processed raw image (see section 3; All the face images in the FG-NET database were properly normalised and pre-processed. The pre-processing stage comprised converting the colour input images into 8-bit grey-scale images, locating the eyes manually, normalizing (scaling and rotating) the images geometrically in such a way that the centres of the eyes were localised at predefined positions, cropping the face parts of the images and resizing the cropped area to a standard size of 200 × 200 pixels, and finally, normalising the face images photometrically by eliminating their mean and scaling their pixels to unit variance [62]. A number of examples of the normalized images from the FG-NET database are shown in Fig. 9.);

upon determining that the image has an approved designation, performing a gray scale algorithm and/or an image reshaping algorithm on the first processed raw image to generate a second processed raw image; and saving the second processed raw image (see section 3; All the face images in the FG-NET database were properly normalised and pre-processed. The pre-processing stage comprised converting the colour input images into 8-bit grey-scale images, locating the eyes manually, normalizing (scaling and rotating) the images geometrically in such a way that the centres of the eyes were localised at predefined positions, cropping the face parts of the images and resizing the cropped area to a standard size of 200 × 200 pixels, and finally, normalising the face images photometrically by eliminating their mean and scaling their pixels to unit variance [62]. A number of examples of the normalized images from the FG-NET database are shown in Fig. 9.);

performing an invariant transformation on the second processed image; and saving an output of the invariant transformation as the processed raw image (see sections 1, 2; PC is robust against contrast variations, as well as illumination variations ... It has a number of crucial advantages, namely: it is computationally effective and it is invariant to monotonic grey-level variations. Such advantages qualify it to be ideal for challenging image analysis tasks. A number of researchers have made efforts to improve the LBP using rotation invariant histogram bin selection, multi-resolution feature extraction and image information enrichment. Ojala et al. [28] examined the bins that were robust against rotation variations which resulted in rotation invariant patterns ... LBPV is robust because it exploits the complementary information of local contrast and spatial pattern ... shape and texture features were extracted and normalised separately, and then, their feature vectors were simply concatenated and projected to a PCA subspace for dimensionality reduction).

It would have been obvious (before the effective filing date of the claimed invention) or (at the time the invention was made) to one of ordinary skill in the art to modify Tussy with Gottemukkula to utilize the cited limitations as suggested by Ali. The suggestion/motivation for doing so would have been to enable an overall verification accuracy above 93% (see abstract). Furthermore, the prior art collectively includes each element claimed (though not all in the same reference), and one of ordinary skill in the art could have combined the elements in the manner explained above using known engineering design, interface, and/or programming techniques, without changing a "fundamental" operating principle of Tussy with Gottemukkula, while the teaching of Ali continues to perform the same function as originally taught prior to being combined, in order to produce the repeatable and predictable result. It is for at least the aforementioned reasons that the examiner has reached a conclusion of obviousness with respect to the claim in question.

Regarding claims 14-16, the claims are analyzed as a system that implements the limitations of claims 2-4 (see rejection of claims 2-4).

Claims 5-6 and 17-18 are rejected under 35 U.S.C. 103 as being unpatentable over Tussy (US 11,157,606 B2) with Gottemukkula et al. (US 9,390,327 B2), and further in view of Sun et al. (IEEE: "Deep Learning Face Representation from Predicting 10,000 Classes").

Regarding claims 5-6, Tussy with Gottemukkula teaches all elements as mentioned above in claim 1. Tussy with Gottemukkula does not teach expressly that determining the vector representation of the features of the processed raw image includes a plurality of machine learning models each configured to generate a respective vector, or convolutional neural networks.
Sun, in the same field of endeavor, teaches that determining the vector representation of the features of the processed raw image includes a plurality of machine learning models each configured to generate a respective vector (see abstract, section 3), and convolutional neural networks (see abstract, section 3).

It would have been obvious (before the effective filing date of the claimed invention) or (at the time the invention was made) to one of ordinary skill in the art to modify Tussy with Gottemukkula to utilize the cited limitations as suggested by Sun. The suggestion/motivation for doing so would have been to enable an overall verification accuracy of 97.45% (see abstract). Furthermore, the prior art collectively includes each element claimed (though not all in the same reference), and one of ordinary skill in the art could have combined the elements in the manner explained above using known engineering design, interface, and/or programming techniques, without changing a "fundamental" operating principle of Tussy with Gottemukkula, while the teaching of Sun continues to perform the same function as originally taught prior to being combined, in order to produce the repeatable and predictable result. It is for at least the aforementioned reasons that the examiner has reached a conclusion of obviousness with respect to the claim in question.

Regarding claims 17-18, the claims are analyzed as a system that implements the limitations of claims 5-6 (see rejection of claims 5-6).

Claims 7-8 are rejected under 35 U.S.C. 103 as being unpatentable over Tussy (US 11,157,606 B2) with Gottemukkula et al. (US 9,390,327 B2), and further in view of Apampa et al. (DBLP: "Ensuring Privacy of Biometric Factors in Multi-Factor Authentication Systems").

Regarding claims 7-8, Tussy with Gottemukkula teaches all elements as mentioned above in claim 1. Tussy with Gottemukkula does not teach expressly that the received vector representation of the raw image has been encrypted using an encryption algorithm, or decrypting the vector representation of the raw image prior to determining the distances between the vector representation of the raw image and the vector representation of the validation image.

Apampa, in the same field of endeavor, teaches that the received vector representation of the raw image has been encrypted using an encryption algorithm (see sections 2-3), and decrypting the vector representation of the raw image prior to determining the distances between the vector representation of the raw image and the vector representation of the validation image (see sections 2-3).

It would have been obvious (before the effective filing date of the claimed invention) or (at the time the invention was made) to one of ordinary skill in the art to modify Tussy with Gottemukkula to utilize the cited limitations as suggested by Apampa. The suggestion/motivation for doing so would have been to preserve privacy of users' biometrics (see abstract). Furthermore, the prior art collectively includes each element claimed (though not all in the same reference), and one of ordinary skill in the art could have combined the elements in the manner explained above using known engineering design, interface, and/or programming techniques, without changing a "fundamental" operating principle of Tussy with Gottemukkula, while the teaching of Apampa continues to perform the same function as originally taught prior to being combined, in order to produce the repeatable and predictable result. It is for at least the aforementioned reasons that the examiner has reached a conclusion of obviousness with respect to the claim in question.

Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Tussy (US 11,157,606 B2) with Gottemukkula et al. (US 9,390,327 B2), and further in view of Merkel et al. (US 11,443,551 B2).
Regarding claim 11, Tussy with Gottemukkula teaches all elements as mentioned above in claim 1. Tussy with Gottemukkula does not teach expressly storing a plurality of validation images of the user on a server.

Merkel, in the same field of endeavor, teaches storing a plurality of validation images of the user on a server (see col. 1, line 50 - col. 2, line 10).

It would have been obvious (before the effective filing date of the claimed invention) or (at the time the invention was made) to one of ordinary skill in the art to modify Tussy with Gottemukkula to utilize the cited limitations as suggested by Merkel. The suggestion/motivation for doing so would have been to improve accuracy and efficiency of the recognition process (see col. 6, lines 50-67). Furthermore, the prior art collectively includes each element claimed (though not all in the same reference), and one of ordinary skill in the art could have combined the elements in the manner explained above using known engineering design, interface, and/or programming techniques, without changing a "fundamental" operating principle of Tussy with Gottemukkula, while the teaching of Merkel continues to perform the same function as originally taught prior to being combined, in order to produce the repeatable and predictable result. It is for at least the aforementioned reasons that the examiner has reached a conclusion of obviousness with respect to the claim in question.

Allowable Subject Matter

Claims 9-10, 12, and 19 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Regarding claims 9-10 and 19, none of the references of record alone or in combination suggest or fairly teach wherein determining the plurality of distances between the vector representation of the raw image and the vector representation of the validation image includes: determining a first distance based on a Manhattan distance algorithm; determining a second distance based on a Hamming algorithm; determining a third distance based on a Euclidean distance; and determining a fourth distance based on a Kullback-Leibler divergence algorithm.

Regarding claim 12, none of the references of record alone or in combination suggest or fairly teach determining the plurality of distances between the vector representation of the raw image and the vector representation of the validation image includes: determining the plurality of distances between the vector representation of the raw image and the vector representation of each of the plurality of validation images, the plurality of distances being determined based on at least two of a Manhattan distance algorithm, a Hamming algorithm, a Euclidean distance, or a Kullback-Leibler divergence algorithm; identifying which of the vector representations of the plurality of validation images has a closest set of distances to the vector representation of the raw image; and utilizing the identified vector representation to determine an approval score.

Conclusion

Claims 1-8, 11, 13-18, and 20 are rejected. Claims 9-10, 12, and 19 are objected to as being dependent upon a rejected base claim. Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to EDWARD PARK. The examiner's contact information is as follows: Telephone: (571) 270-1576 | Fax: (571) 270-2576 | Edward.Park@uspto.gov. For email communications, please notate MPEP 502.03, which outlines procedures pertaining to communications via the internet and authorization. A sample authorization form is cited within MPEP 502.03, section II. The examiner can normally be reached M-F, 9-6 CST. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Andrew Moyer, can be reached at (571) 272-9523. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/EDWARD PARK/
Primary Examiner, Art Unit 2666
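The four distance measures recited in claims 9-10 and 19, which the examiner indicated as allowable subject matter, can be sketched as follows. This is an illustrative sketch only, not the applicant's implementation: the binarization step for the Hamming distance and the shift-and-normalize step for the KL divergence are assumptions (those two measures are defined over binary strings and probability distributions, respectively), as is the approve-only-if-every-distance-passes decision rule.

```python
import math

def manhattan(u, v):
    # First distance: L1 (Manhattan) distance between embedding vectors.
    return sum(abs(a - b) for a, b in zip(u, v))

def hamming(u, v, threshold=0.0):
    # Second distance: Hamming distance. It is defined over binary strings,
    # so each real-valued vector is first binarized against a threshold
    # (the binarization step is an assumption of this sketch).
    return sum((a > threshold) != (b > threshold) for a, b in zip(u, v))

def euclidean(u, v):
    # Third distance: L2 (Euclidean) distance.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def kl_divergence(u, v, eps=1e-9):
    # Fourth distance: Kullback-Leibler divergence. It is defined over
    # probability distributions, so each vector is shifted non-negative and
    # normalized to sum to 1 (again an assumption of this sketch).
    def to_dist(w):
        shifted = [x - min(w) + eps for x in w]
        total = sum(shifted)
        return [x / total for x in shifted]
    p, q = to_dist(u), to_dist(v)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

def authenticate(raw_vec, validation_vec, thresholds):
    # Approve only if every distance falls under its metric's cutoff; the
    # AND rule and the cutoff values are illustrative choices, not claim text.
    metrics = {"manhattan": manhattan, "hamming": hamming,
               "euclidean": euclidean, "kl": kl_divergence}
    distances = {name: fn(raw_vec, validation_vec) for name, fn in metrics.items()}
    return all(distances[n] <= thresholds[n] for n in distances), distances
```

With identical vectors every distance evaluates to zero, so any positive thresholds yield approval; in practice the cutoffs would be tuned on labeled genuine/impostor pairs.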
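The preprocessing chain recited in dependent claims 2-4, and echoed in the cited Ali excerpt (grayscale conversion, face cropping, resizing/reshaping, and photometric normalization to zero mean and unit variance), can be sketched as below. This is a minimal illustration over a plain list-of-rows image: the function names, the nearest-neighbor resizing, and the luma weights are assumptions of the sketch, and no face-detection step is modeled.

```python
import math

def to_grayscale(img):
    # Luma-weighted grayscale conversion from rows of (r, g, b) tuples.
    return [[0.299 * r + 0.587 * g + 0.114 * b for (r, g, b) in row] for row in img]

def crop(img, top, left, height, width):
    # Crop a rectangular face region out of the grayscale image.
    return [row[left:left + width] for row in img[top:top + height]]

def resize_nearest(img, out_h, out_w):
    # Nearest-neighbor resize to a standard target shape.
    in_h, in_w = len(img), len(img[0])
    return [[img[i * in_h // out_h][j * in_w // out_w] for j in range(out_w)]
            for i in range(out_h)]

def normalize(img):
    # Photometric normalization: eliminate the mean and scale pixels to
    # unit variance, as in the Ali excerpt quoted in the Office Action.
    pixels = [p for row in img for p in row]
    mean = sum(pixels) / len(pixels)
    var = sum((p - mean) ** 2 for p in pixels) / len(pixels)
    std = math.sqrt(var) or 1.0  # guard against constant images
    return [[(p - mean) / std for p in row] for row in img]
```

Chaining the four steps (grayscale, crop, resize, normalize) yields a fixed-size, photometrically normalized array suitable as input to an embedding model.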

Prosecution Timeline

Sep 08, 2023: Application Filed
Sep 12, 2025: Non-Final Rejection (§103)
Dec 16, 2025: Response Filed
Mar 20, 2026: Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602911: SYSTEMS AND METHODS FOR HANDWRITING RECOGNITION USING OPTICAL CHARACTER RECOGNITION (granted Apr 14, 2026; 2y 5m to grant)
Patent 12602815: WEAKLY PAIRED IMAGE STYLE TRANSFER METHOD BASED ON POSE SELF-SUPERVISED GENERATIVE ADVERSARIAL NETWORK (granted Apr 14, 2026; 2y 5m to grant)
Patent 12597173: AUTOMATIC GENERATION OF AN IMAGE HAVING AN ATTRIBUTE FROM A SUBJECT IMAGE (granted Apr 07, 2026; 2y 5m to grant)
Patent 12594023: METHOD AND DEVICE FOR PROVIDING ALOPECIA INFORMATION (granted Apr 07, 2026; 2y 5m to grant)
Patent 12592000: SYSTEMS AND METHODS FOR PROCESSING DIGITAL IMAGES TO ADAPT TO COLOR VISION DEFICIENCY (granted Mar 31, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 82%
Grant Probability with Interview: 99% (+18.4%)
Median Time to Grant: 2y 9m
PTA Risk: Moderate

Based on 704 resolved cases by this examiner. Grant probability derived from career allow rate.
