Prosecution Insights
Last updated: April 18, 2026
Application No. 17/455,398

HISTORY BASED FACE SEARCHING

Final Rejection — §103
Filed: Nov 17, 2021
Examiner: TAYLOR, MEREDITH IREENE DUPAI
Art Unit: 2671
Tech Center: 2600 — Communications
Assignee: Corsight AI
OA Round: 4 (Final)
Grant Probability: 67% (Favorable)
Expected OA Rounds: 5-6
Time to Grant: 3y 6m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 67% (33 granted / 49 resolved) — above average, +5.3% vs TC avg
Interview Lift: +54.3% — strong (resolved cases with vs. without interview)
Avg Prosecution: 3y 6m (27 applications currently pending)
Career History: 76 total applications across all art units

Statute-Specific Performance

§101: 8.7% (-31.3% vs TC avg)
§103: 56.7% (+16.7% vs TC avg)
§102: 15.8% (-24.2% vs TC avg)
§112: 15.8% (-24.2% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 49 resolved cases
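The "vs TC avg" deltas above are simple differences between the examiner's per-statute allow rate and the Tech Center average estimate; note that the deltas shown imply a flat ~40% TC baseline for every statute (an inference from the numbers in the table, not a published figure). A minimal sketch of that arithmetic:

```python
# Allow rates (%) by rejection statute, from the table above.
# The "vs TC avg" deltas shown imply a flat 40.0% TC estimate (inferred).
examiner_rate = {"§101": 8.7, "§103": 56.7, "§102": 15.8, "§112": 15.8}
tc_average = 40.0

deltas = {s: round(rate - tc_average, 1) for s, rate in examiner_rate.items()}
# deltas == {"§101": -31.3, "§103": 16.7, "§102": -24.2, "§112": -24.2}
```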

Office Action

§103
DETAILED ACTION

Response to Arguments

Applicant has amended claims 1, 6, and 11 and cancelled claims 4, 9, and 14; claims 1-3, 5-8, 10-13, and 15-20 are currently pending. Applicant's arguments filed 10/19/2025 have been fully considered but are not persuasive, as detailed below.

Applicant argues that the "consists essentially of" limitation "significantly restricts what the history database can contain, limiting it to only these four specified components and excluding other elements that would materially affect the basic and novel characteristics of the claimed database structure." When an applicant contends that additional steps or materials in the prior art are excluded by the recitation of "consisting essentially of," the applicant has the burden of showing that the introduction of additional steps or components would materially change the characteristics of the claimed invention. Applicant has not shown how having additional information in a database would materially change the characteristics of the claimed invention. The database is used for searching images, which does not appear to the examiner to limit the database structure; it is well known in the art to query databases using filters to limit a search to only pertinent data. Therefore, this argument is not persuasive, and the limitation "consists essentially of" is construed as equivalent to "comprising," as recommended in MPEP 2111.03 III.
Applicant additionally argues that the combination of Dandekar, Kim, and Ong fails to teach or suggest "appearance metadata related to the person identifier and indicative of number of appearances of face detections belonging to the person identifier during a sequence of face detections recorded in a same camera," and specifically that "Ong consolidates appearances across different locations and cameras, counting appearances across multiple video scenes from different cameras, which is fundamentally different from tracking appearances within a sequence from a single camera as required by the amended claims." Examiner respectfully disagrees: Ong ¶37 describes "video scene analysis for a single location and a single target person according to various embodiments. A video scene comprises video footage from one or more surveillance cameras of a single location at a particular date" (emphasis added). Therefore, appearances from a single (same) camera are considered to be taught.

Applicant argues that "Examiner has not established that one of ordinary skill in the art would have been motivated to modify Dandekar's face recognition system with Ong's multi-camera, multi-location appearance tracking to achieve the claimed single-camera sequence tracking." Examiner respectfully disagrees. Although the technical problem solved in Ong may be distinct from the claimed invention, one of ordinary skill in the art would have been motivated to modify Dandekar and Kim with the teachings of Ong in order to make connections between people who may be associated with each other.

Applicant further argues that the references fail to show certain features of the invention ("The Examiner has alleged that Dandekar teaches the filtering parameter limitations recited in claims 2, 7, and 12. Office Action, page 5. However, Dandekar's filtering is fundamentally different from the claimed filtering of appearance metadata based on filtering parameters.
Dandekar discloses filtering of PCA vectors based on index parameters to narrow down the search space before searching. Dandekar, Column 7, Lines 29-53. This pre-search filtering is applied to reduce the number of feature vectors that need to be searched, not to filter appearance metadata after retrieval as recited in the claims. The claims require 'receiving one or more filtering parameters, wherein the retrieving comprises filtering appearance metadata based on the one or more filtering parameters.' This filtering occurs during or after the retrieval of appearance metadata, not as a pre-search optimization of the search space."). It is noted that the features upon which applicant relies (i.e., that there is a claimed order in the step where "filtering appearance metadata based on the one or more filtering parameters" occurs) are not recited in the rejected claim(s). The claim requires "the retrieving comprises filtering appearance metadata"; the metadata filtered is not claimed as "the appearance metadata" from claim 1, and thus the broadest reasonable interpretation of the claim includes filtering to narrow the search space.

Applicant further argues that the references fail to show certain features of the invention (filtering on appearance metadata that is "indicative of number of appearances of face detections belonging to the person identifier during a sequence of face detections recorded in a same camera"). It is noted that the features upon which applicant relies (i.e., that the filter parameters correspond to metadata indicative of number of appearances of face detections) are not recited in the rejected claim(s). The claim only requires filtering of appearance metadata; claim language of "filtering the metadata" would require the metadata that is filtered to be the data indicative of number of appearances, and would overcome the cited reference. Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims.
See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993).

In response to applicant's argument that Ong does not teach the appearance metadata limitation of claim 3, examiner respectfully disagrees. Examiner agrees the claim requires that an appearance ends based on absence from frames for a preconfigured time period. Ong ¶37 explains that because the preconfigured time period was not met in the example given (absent for 2 minutes instead of the 5-minute threshold), a single appearance is counted. However, if the threshold time were met, a second appearance would be counted. Therefore, the limitation is considered to be taught.

Applicant argues that the "consists essentially of" limitation in claims 5, 10, and 15 "limits the searching to only accessing fields that store reference face signatures, excluding other database fields or operations that would materially affect the basic and novel characteristics of the claimed search process." When an applicant contends that additional steps or materials in the prior art are excluded by the recitation of "consisting essentially of," the applicant has the burden of showing that the introduction of additional steps or components would materially change the characteristics of the claimed invention. Applicant has not shown how comparing query signatures to the database, as it is mapped, differs from the claimed invention or would materially change its characteristics. Therefore, this argument is not persuasive, and the limitation "consists essentially of" is construed as equivalent to "comprising," as recommended in MPEP 2111.03 III.
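The counting scheme the examiner reads onto Ong ¶37 — an absence shorter than the preconfigured threshold extends the current appearance, while a longer absence starts a new one — can be sketched as follows. This is an illustrative reading of the record, not Ong's implementation; the function name is hypothetical and the 300-second threshold mirrors the 5-minute example cited above.

```python
def count_appearances(detection_times, gap_threshold=300.0):
    """Count logical appearances in a sequence of face-detection
    timestamps (seconds) from a single camera.

    A new appearance begins only when the face has been absent longer
    than gap_threshold; shorter absences extend the current appearance
    (cf. the 2-minute absence vs. 5-minute threshold example in Ong).
    """
    if not detection_times:
        return 0
    appearances = 1
    for prev, curr in zip(detection_times, detection_times[1:]):
        if curr - prev > gap_threshold:
            appearances += 1
    return appearances

# A 120 s absence does not end an appearance, but the 420 s gap
# before t=600 exceeds the 300 s threshold and starts a second one:
count_appearances([0, 60, 180, 600])  # -> 2
```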
Applicant argues "that one of ordinary skill in the art would have been motivated to combine Steinberg's error-message-based quality assessment with the claimed system for determining whether to search a history database having the specific 'consists essentially of' structure recited in claim 1." Examiner maintains that the "consists essentially of" structure in claim 1 is construed as equivalent to "comprising," as explained above. Therefore, the reasoning explained in the Non-Final Rejection mailed 7/23/2025 (see page 9) is considered appropriate and the rejection is maintained.

In response to applicant's argument that Ong does not teach the continuity of capture constraint of claim 18, examiner respectfully disagrees. Examiner agrees the claim requires that an appearance ends based on absence from frames for a preconfigured time period. Ong ¶37 explains that because the preconfigured time period was not met in the example given (absent for 2 minutes instead of the 5-minute threshold), a single appearance is counted. However, if the threshold time were met, a second appearance would be counted. Therefore, the limitation is considered to be taught. As such, this action is made FINAL.

Claim Interpretation

The claim language of claim 20 is consistent with Superguide Corp. v. DirecTV Enterprises, Inc., 358 F.3d 870, 875, 69 USPQ2d 1865, 1868 (Fed. Cir. 2004); therefore, under the plain and ordinary meaning of "wherein the quality metric is indicative of at least one of a resolution of the image and sharpness of the image," both "resolution of the image" and "sharpness of the image" are taken to be claimed.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3, 5-8, 10-13, 15, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Dandekar (Patent No. US 10311288 B1) in view of Kim (Patent No. US 10635918 B1) and Ong (Pub. No. US 20200394395 A1).

Regarding claim 11, Dandekar discloses a computerized system for unsupervised signature based forensic search history based face searching, the computerized system comprises a processing circuit that is configured to: (Dandekar Col 12 lines 10-30; a processor that implements a method is disclosed) obtain a query face signature, the query face signature is a face signature of a face of a person, the query face is captured in an image, a size of the query face signature is smaller than a size of visual information of the query face in the image (Dandekar Fig. 3 and Col. 5 lines 34-45; features are extracted from an input image (face signature). They then go through a principal component analysis (PCA) and filter to reduce the number of features that need to be searched (query face). The PCA and filtering make the size of the query face signature smaller than the size of the visual information (see also Col. 6 lines 2-12)); search a history database for at least one similar reference face signature that may be similar to the query face signature (Dandekar Col. 4 lines 35-49 and Fig. 2; query signatures are compared to a database to find a match.)
wherein the history database consists essentially of the reference face signatures, person identifiers, appearance metadata, and a mapping between the reference face signatures, person identifiers, and appearance metadata (Dandekar Col. 3 lines 49-67 and Fig. 1; the database can store photos (with corresponding face signatures), unique identifiers, etc., all being interpreted as metadata); retrieve a person identifier associated with the at least one similar reference face signature, when finding the at least one similar reference face signature; and retrieve appearance metadata related to the person identifier (Dandekar Col. 10 lines 27-39; the identity of the person in the query image is obtained from the user profile of the matched PCA feature vector (face signature). The metadata includes the PCA feature vector and user profile information (see Col. 3 lines 49-67)).

Dandekar does not explicitly disclose wherein each person identifier is associated with a maximal capacity for a number of reference face signatures to be stored in relation to the person identifier; wherein when the maximal capacity is reached and a new reference face signature is received, the new reference face signature replaces a currently stored reference face signature when a score of the new reference face signature is higher than a score of the currently stored reference face signature. Kim, however, discloses wherein each person identifier is associated with a maximal capacity for a number of reference face signatures to be stored in relation to the person identifier (Kim Col. 1 lines 54-64 and Fig. 4; storing multiple images in association with an identifier can be seen in Fig. 4. The limited capacity of the number of images stored is disclosed in Col. 1 lines 54-64.)
wherein when the maximal capacity is reached and a new reference face signature is received, the new reference face signature replaces a currently stored reference face signature when a score of the new reference face signature is higher than a score of the currently stored reference face signature (Kim Fig. 4 and Col. 10 lines 3-26; adding a new image when the preset number (max) of images is reached, by deleting the image with the lowest quality score and replacing it with the new image, is disclosed). It would have been obvious, before the effective filing date of the claimed invention, to one of ordinary skill in the art to modify the system of Dandekar with the teachings of Kim by managing which facial images are kept as examples of an individual in the database by using a quality score, in order to keep the best images for face recognition performance (Kim Col. 10 lines 3-26).

The combination of Dandekar and Kim does not explicitly disclose metadata indicative of number of appearances of face detections belonging to the person identifier during a sequence of face detections recorded in a same camera. Ong, however, discloses metadata indicative of number of appearances of face detections belonging to the person identifier during a sequence of face detections recorded in a same camera (Ong Fig. 5 and ¶37-38, 44-45; a number of logical appearances is output in the consolidated video appearances. These can be for a single location with one camera. Further, ¶54 describes determining individuals that appear in more than a threshold number of scenes, therefore the appearances are counted); and output a response to the query that includes at least a part of the appearance metadata related to the person identifier (Ong Fig. 5 and ¶55; it can be seen that part of the output is an image of the person (appearance metadata)).
It would have been obvious, before the effective filing date of the claimed invention, to one of ordinary skill in the art to modify the system of the combination of Dandekar and Kim with the teachings of Ong by including a count of appearances in metadata, and outputting an image of the query person, in order to make connections between people who may be associated with each other and provide easy visualization of detected people to the user.

Regarding claim 12, the combination of Dandekar, Kim and Ong disclose the claim limitations with regard to claim 11, as discussed above. They further disclose wherein the processing circuit is configured to receive one or more filtering parameters, wherein a retrieval of the person identifier comprises filtering appearance metadata based on the one or more filtering parameters (Dandekar Col. 7 lines 29-53; a filter can be applied pre-search to narrow down the search space. PCA vectors (appearance metadata) are filtered based on index parameters (filtering parameters)).

Regarding claim 13, the combination of Dandekar, Kim and Ong disclose the claim limitations with regard to claim 11, as discussed above. They further disclose wherein the processing circuit is configured to receive one or more filtering parameters, wherein an outputting comprises filtering appearance metadata based on the one or more filtering parameters (Ong ¶37; various timing parameters (filters) that determine the counted number of appearances are disclosed. For instance, the person can even be absent from the frame for a specified period of time and still be counted as a single appearance. It would have been obvious to include an appearance count to make connections between people who may be associated with each other).

Regarding claim 15, the combination of Dandekar, Kim and Ong disclose the claim limitations with regard to claim 11, as discussed above.
They further disclose wherein the search consists essentially of accessing fields of the history database that store reference face signatures (Dandekar Col. 4 lines 35-49 and Fig. 2; query signatures are compared to a database to find a match).

Regarding claims 1-2 and 5, they are the corresponding method claims to claims 11-12 and 15 and are rejected for similar reasons. Regarding claims 6-8 and 10, they are the corresponding non-transitory computer readable medium claims to claims 11-13 and 15 and are rejected for similar reasons.

Regarding claim 3, the combination of Dandekar, Kim and Ong disclose the claim limitations with regard to claim 1, as discussed above. They further disclose wherein an appearance metadata of a person reflects appearances, wherein each appearance begins once a new face is detected in a frame, and ends when the face did not appear in any following frame for a preconfigured time period (Ong ¶37; a preconfigured time of 5 minutes is used as an example. A single appearance is counted, despite the person being absent from the frame for a specified period of time. It would have been obvious to include an appearance count to make connections between people who may be associated with each other).

Regarding claim 18, the combination of Dandekar, Kim and Ong disclose the claim limitations with regard to claim 1, as discussed above. They further disclose wherein an appearance metadata that is related to a person is indicative of one or more appearances of the person, wherein a single appearance of the person spans along a time window during which the captured face appeared multiple times in compliance with a continuity of capture constraint that defines one or more maximal allowable time gaps between times of capture of a captured face of the acquired person in a video stream. (Ong ¶37; counting a block of time as a single appearance for an individual is disclosed.
The person can even be absent from the frame for a specified period of time and still be counted as a single appearance. It would have been obvious to include an appearance count to make connections between people who may be associated with each other.)

Claims 16-17 and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Dandekar (Patent No. US 10311288 B1) in view of Kim (Patent No. US 10635918 B1), Ong (Pub. No. US 20200394395 A1), and Steinberg (Patent No. US 7558408 B1).

Regarding claim 16, the combination of Dandekar, Kim and Ong disclose the claim limitations with regard to claim 1, as discussed above. The combination of Dandekar, Kim and Ong does not explicitly disclose: obtaining quality metadata indicative of a quality of the query face signature; and determining not to perform the searching of the history database when the quality of the query face signature does not exceed a first quality threshold. Steinberg, however, discloses obtaining quality metadata indicative of a quality of the query face signature, and determining not to perform the searching of the history database when the quality of the query face signature does not exceed a first quality threshold (Steinberg Fig. 5(a); if the pose of the face is not oriented towards the camera, an error message is output (i.e. poor quality) rather than the face being searched. Note specifically elements 5020-5050. Col 25 lines 1-26 discloses that the threshold is the facial rotation). It would have been obvious, before the effective filing date of the claimed invention, to one of ordinary skill in the art to modify the system of the combination of Dandekar, Kim and Ong with the teachings of Steinberg by including the orientation check of the face being searched disclosed in Steinberg, in order to indicate to a user when detection cannot be performed (Steinberg Fig. 5(a) elements 5020-5050).
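The quality gate described for claim 16 — compute quality metadata for the query, and skip the history-database search entirely when it falls below a threshold, surfacing an error instead — reduces to a simple guard. The sketch below is illustrative only: the function and database are hypothetical stand-ins, quality is modelled on Steinberg's pose check, and the 45-degree limit comes from the half-profile example cited above.

```python
def gated_search(query_signature, yaw_degrees, history_db, max_yaw=45.0):
    """Skip the history-database search when query quality is too low.

    Quality here is modelled on Steinberg's pose check: a face rotated
    to half profile (~45 degrees) or beyond is rejected with an error
    message instead of being searched.
    """
    if abs(yaw_degrees) >= max_yaw:
        return {"error": "face pose unsuitable; search not performed"}
    # Toy similarity: exact match against stored reference signatures.
    matches = [pid for pid, sig in history_db.items() if sig == query_signature]
    return {"matches": matches}

db = {"person-1": "sig-A", "person-2": "sig-B"}
gated_search("sig-A", yaw_degrees=90, history_db=db)  # error, not searched
gated_search("sig-A", yaw_degrees=10, history_db=db)  # matches person-1
```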
Regarding claim 17, the combination of Dandekar, Kim, Ong and Steinberg disclose the claim limitations with regard to claim 16, as discussed above. They further disclose wherein the quality metadata is determined based on at least one of (a) a pose of the query face that is captured in the image (Steinberg Fig. 5(a); if the pose of the face is not oriented towards the camera, an error message is output (i.e. poor quality). Note specifically elements 5020-5050), (b) a yaw value of the query face that is captured in the image (Steinberg Fig. 5(a) and Col 25 lines 1-26; if the face is rotated 45 degrees in the left/right direction (the yaw direction, from Col 25 lines 43-44), the face is referred to as half profile, and at a 90 degree rotation it is full profile. It is disclosed that these are harder cases for facial identification, and an error message is output if these pose criteria are not met (see Fig. 5(a) element 5050)), or (c) a number of identified facial attributes of the face query (Steinberg Col 24 lines 46-67; relative positions for principal face features need to be calculated. The amount of allowable rotation depends on the visibility of facial features (Col 25 lines 1-39), specifically the eyes, nose and mouth. When the face is rotated too far, at least one eye is cut off from view. The orientation check of Steinberg is performed in order to indicate to a user when detection cannot be performed).

Regarding claim 19, the combination of Dandekar, Kim, Ong and Steinberg disclose the claim limitations with regard to claim 16, as discussed above. They further disclose wherein the quality metadata is determined based on a number of identified facial attributes of the face query (Steinberg Col 24 lines 46-67; relative positions for principal face features need to be calculated. The amount of allowable rotation depends on the visibility of facial features (Col 25 lines 1-39), specifically the eyes, nose and mouth.
When the face is rotated too far, at least one eye is cut off from view. The orientation check of Steinberg is performed in order to indicate to a user when detection cannot be performed.)

Regarding claim 20, the combination of Dandekar, Kim, Ong and Steinberg disclose the claim limitations with regard to claim 16, as discussed above. They further disclose wherein the quality metadata is determined based on a quality metric of the image, wherein the quality metric is indicative of at least one of a resolution of the image (Steinberg Col 40 lines 15-30; a necessary resolution included in a face is calculated. The orientation check of Steinberg is performed in order to indicate to a user when detection cannot be performed) and sharpness of the image (Steinberg Col 32 line 55 - Col 33 line 6; decisions for processing the digital image and/or for adjusting parameters can use sharpness).

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MEREDITH TAYLOR, whose telephone number is (571) 270-5805. The examiner can normally be reached M-Th 7:30-5. The examiner's email is Meredith.taylor@uspto.gov.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Vincent Rudolph, can be reached on (571) 272-8243. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MEREDITH TAYLOR/
Examiner, Art Unit 2671

/VINCENT RUDOLPH/
Supervisory Patent Examiner, Art Unit 2671
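The capacity-management policy the rejection attributes to Kim (a preset maximum number of reference signatures per person identifier; when full, a new signature displaces the lowest-scoring stored one only if its own score is higher) can be sketched as follows. This is an illustrative reading of the record, not Kim's code; the function name and the capacity of 3 are hypothetical.

```python
def add_reference_signature(stored, new_sig, new_score, max_capacity=3):
    """Keep at most max_capacity (signature, score) pairs for one
    person identifier.  When full, the new signature replaces the
    lowest-scoring stored entry only if its own score is higher."""
    stored = list(stored)
    if len(stored) < max_capacity:
        stored.append((new_sig, new_score))
        return stored
    worst = min(range(len(stored)), key=lambda i: stored[i][1])
    if new_score > stored[worst][1]:
        stored[worst] = (new_sig, new_score)
    return stored

refs = []
for sig, score in [("a", 0.4), ("b", 0.9), ("c", 0.7), ("d", 0.5), ("e", 0.8)]:
    refs = add_reference_signature(refs, sig, score)
# Capacity 3: "d" (0.5) displaces "a" (0.4), then "e" (0.8) displaces
# "d" (0.5), leaving the three highest-scoring signatures: e, b, c.
```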

Prosecution Timeline

Nov 17, 2021 — Application Filed
Mar 12, 2024 — Non-Final Rejection (§103)
Aug 16, 2024 — Response Filed
Oct 25, 2024 — Final Rejection (§103)
Apr 07, 2025 — Request for Continued Examination
Apr 08, 2025 — Response after Non-Final Action
Jul 14, 2025 — Non-Final Rejection (§103)
Oct 19, 2025 — Response Filed
Mar 30, 2026 — Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12579602 — END-TO-END CAMERA CALIBRATION FOR BROADCAST VIDEO
Granted Mar 17, 2026 • 2y 5m to grant

Patent 12551299 — SYSTEM AND METHOD OF UTILIZING COMPUTER-AIDED IDENTIFICATION WITH MEDICAL PROCEDURES
Granted Feb 17, 2026 • 2y 5m to grant

Patent 12511724 — IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD, AND MAGNETIC RESONANCE IMAGING DEVICE
Granted Dec 30, 2025 • 2y 5m to grant

Patent 12511888 — COMPUTER-IMPLEMENTED METHOD OF HANDLING AN EMERGENCY INCIDENT, COMMUNICATION NETWORK, AND EMERGENCY PROCESSING UNIT
Granted Dec 30, 2025 • 2y 5m to grant

Patent 12505651 — Image Identification System and Image Identification Method for Identifying Images Based on Divided Training Images
Granted Dec 23, 2025 • 2y 5m to grant
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 67%
With Interview: 99% (+54.3%)
Median Time to Grant: 3y 6m
PTA Risk: High
Based on 49 resolved cases by this examiner. Grant probability derived from career allow rate.
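The headline projections follow from the career counts reported above: 33 grants out of 49 resolved cases gives the ~67% baseline, and the +54.3% interview lift pushes the with-interview estimate to the 99% cap. A sketch of that arithmetic — note that treating the lift as additive and capping at 99% is an assumption about how the dashboard combines the figures, not a documented formula:

```python
grants, resolved = 33, 49                # examiner's career counts
base = 100 * grants / resolved           # ~67.3% career allow rate

lift = 54.3                              # interview lift, assumed additive
with_interview = min(base + lift, 99.0)  # assumed 99% cap, as displayed

round(base)  # -> 67
```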
