Prosecution Insights
Last updated: April 18, 2026
Application No. 18/376,381

SYSTEMS AND METHODS FOR PRESENTING IMAGE CLASSIFICATION RESULTS

Final Rejection: §103, §112, Nonstatutory Double Patenting
Filed: Oct 03, 2023
Examiner: MAY, ROBERT F
Art Unit: 2154
Tech Center: 2100 — Computer Architecture & Software
Assignee: Capital One Services LLC
OA Round: 4 (Final)

Grant Probability: 76% (Favorable)
Predicted OA Rounds: 5-6
Predicted Time to Grant: 3y 3m
Grant Probability With Interview: 99%

Examiner Intelligence

Grants 76% of cases, above average for the Tech Center.
Career Allowance Rate: 76% (216 granted / 286 resolved; +20.5% vs TC average)
Interview Lift: +29.7% (strong), comparing allowance across resolved cases with and without an examiner interview
Typical Timeline: 3y 3m average prosecution; 41 applications currently pending
Career History: 327 total applications across all art units

Statute-Specific Performance

§101: 19.3% (-20.7% vs TC avg)
§103: 45.6% (+5.6% vs TC avg)
§102: 18.0% (-22.0% vs TC avg)
§112: 12.9% (-27.1% vs TC avg)
Tech Center averages are estimates. Based on career data from 286 resolved cases.
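The headline numbers above relate to each other in a simple way. Below is a minimal sketch, assuming the interview lift is the percentage-point difference in allowance rate between resolved cases with and without an interview; the dashboard does not publish its formula, and the 69.3% without-interview rate is back-derived from the 99% with-interview figure, so both the definition and that value are assumptions.

```python
# Hypothetical derivation of the dashboard metrics above. The definition of
# "interview lift" (percentage-point difference in allowance rate between
# resolved cases with and without an interview) is an assumption, not a
# documented formula from this dashboard.

def allowance_rate(granted: int, resolved: int) -> float:
    """Allowance rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

def interview_lift(rate_with: float, rate_without: float) -> float:
    """Percentage-point lift attributed to an examiner interview."""
    return rate_with - rate_without

career = allowance_rate(216, 286)  # 75.5%, rounds to the 76% shown above
print(f"career allowance rate: {career:.1f}%")
# 69.3% is hypothetical, back-derived from the 99% with-interview figure.
print(f"interview lift: {interview_lift(99.0, 69.3):+.1f} points")  # +29.7
```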

Office Action

Rejections: §103, §112, Double Patenting
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

This Action is responsive to the Amendments and Remarks filed on 11/13/2025. Claims 1-3, 5-7, and 9-22 are pending. Claims 1, 2, and 20 are written in independent form.

Priority

Applicant's claim for benefit as a Continuation of 17/106,851, filed 11/30/2020, which is a Continuation of 16/534,375, filed 08/07/2019, is acknowledged.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
2. Claims 1-22 are rejected on the ground of nonstatutory obviousness-type double patenting as being unpatentable over claims 1-19 of U.S. Patent No. 10,885,099 B1. Although the conflicting claims are not identical, they are not patentably distinct from each other because they are substantially similar in scope, they use similar limitations, and they produce the same end results with omission of elements and functions.

Claim 1 of the patent recites: “An apparatus for classifying images, comprising: a camera; at least one storage device storing a set of instructions; and at least one processor coupled to the at least one storage device and the camera, the instructions configuring the at least one processor to perform operations comprising: executing an augmented reality application, the augmented reality application configured for displaying modified video feeds comprising icons or information superimposed on background images; capturing an image by the camera, the captured image being a frame of a video feed displayed in the augmented reality application; identifying attributes of the captured image using a classification model, the classification model comprising a convolutional neural network; identifying first results based on the attributes, the first results being associated with probability scores; selecting a subset of first results based on the probability scores, the first results in the subset having an accumulated probability score greater than a threshold probability score; generating a first graphical user interface, the first graphical user interface comprising: interactive icons corresponding to first results in the subset, at least one input icon, and a first button; receiving a selection of the first button; upon receiving the selection: performing a search to identify second results, the search being based on at least one of selected interactive icons or input in the at least one input icon; and generating a second graphical user interface, different from the first graphical user interface, displaying the second results; determining whether the at least one input icon is empty; transmitting, to a server, the captured image and input in the at least one input icon when the at least one input icon is determined not to be empty; and receiving, from the server, a patch for the classification model, the patch comprising updated model parameters and a classification model exception for the identified attributes.”

The current application, No. 18/376,381, recites a similar: “A system for generating and implementing patches to improve classification model results based on user feedback through input icons, the system comprising: a camera of a mobile computing device; one or more processors of the mobile computing device; and one or more memory devices storing instructions that, when executed by the one or more processors, configure the one or more processors to perform operations comprising: in connection with capturing a first image with the camera, presenting a first graphical user interface comprising a set of icons corresponding to first results comprising object recognition results based on attributes identified in the first image using a classification model stored on the mobile computing device; and obtaining from a server and based on a user selection of the set of icons, a patch comprising model values used to modify the classification model and a classification model exception for the attributes; updating, using the one or more processors of the mobile computing device, the classification model by modifying the classification model using the model values and retraining the classification model to generate an updated classification model; generating a first model result and a second model result based on a second image; and substituting the first model result based on the classification model exception without substituting the second model result.”

It would have been obvious to a person of ordinary skill in the art at the time the invention was made to modify or to omit the additional elements of claims 1-19 of U.S. Patent No. 10,885,099 B1 to arrive at claims 1-22 of the instant application, because the program product comprising a computer readable storage medium having computer readable program code and executable by a computing processor would perform the functions of the computer-implemented method. “Omission of an element and its function in combination is an obvious expedient if the remaining elements perform the same functions as before.” See In re Karlson, 136 USPQ 184 (CCPA 1963). This is an obviousness-type double patenting rejection.
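The “accumulated probability score” selection recited in claim 1 of the ’099 patent quoted above is a cumulative-threshold cutoff: take the highest-scoring results until their combined probability exceeds a threshold. A minimal Python sketch under that reading follows; the function name, labels, and threshold value are hypothetical, since neither the patent nor the application discloses source code.

```python
# Hedged illustration of the claimed selection step: accumulate the
# top-ranked probability scores until the total exceeds the threshold.

def select_results(results: list[tuple[str, float]], threshold: float) -> list[tuple[str, float]]:
    """Return the smallest top-scoring subset whose accumulated probability exceeds threshold."""
    ranked = sorted(results, key=lambda r: r[1], reverse=True)
    subset, accumulated = [], 0.0
    for label, score in ranked:
        subset.append((label, score))
        accumulated += score
        if accumulated > threshold:
            break
    return subset

first_results = [("sedan", 0.46), ("coupe", 0.31), ("truck", 0.12), ("van", 0.07)]
print(select_results(first_results, threshold=0.70))
# [('sedan', 0.46), ('coupe', 0.31)] -- accumulated 0.77 > 0.70
```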
Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 1-22 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. Independent Claims 1, 2, and 20 contain the subject matter “model values used to modify an architecture of the classification model” and “modifying the architecture of the classification model using the model values,” which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.

The written description does not clearly state any modification to the architecture of a classification model. For example, Para. [0032] of the present specification states “the patches may modify parameters of the model to correct inaccuracies and/or improve model configurations” but does not recite any modification of the architecture of the model. The only locations in the written description where a variation of the term “architecture” is recited are: “distributing tasks at different points of the system architecture” (Para. [0033]); “Image normalization module 232 may have an architecture designed for implementation of specific algorithms. For example, image normalization module 232 may include a Simple Risc Computer (SRC) architecture or other reconfigurable computing system” (Para. [0067]); “image feature extraction module 234 may include independent hardware devices with specific architectures designed to improve the efficiency of aggregation or sorting processes” (Para. [0070]); “inventory search system 105 may extract image attributes using techniques as compiled functions that feed-forward data into an architecture to the layer of interest in the neural network” (Para. [0119]); “the selection of the pixel input resize value may be determined by a neural net architecture and the selection of the neural net architecture may be based on a required identification speed” (Para. [0180]); and “The patch with exception of step 1210 may be configured to be in a self-contained package that is identified by name, target application or operating system, processor architecture, and language locale” (Para. [0190]).

It is noted that Applicant does not state in the Remarks dated 11/13/2025 where support for modifying an architecture of the classification model can be found in the specification. For purposes of compact prosecution, the claim limitations referring to modifying the architecture will be interpreted as “obtaining, from a server and based on a user selection of the set of icons, a patch comprising model values used to modify the classification model using the model values and retraining the classification model.” Dependent Claims 3-19 and 21-22 inherit the deficiencies of their parent claims and are therefore rejected for the same reason(s) stated for their parent claims.

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-22 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. Regarding Claims 1, 2, and 20, the limitation “updating…the classification model…to generate an updated classification model” followed by “…generating a first model result and a second model result using the classification model based on a second image;” renders the claims indefinite because, when analyzed under its broadest reasonable interpretation, it is unclear whether the original classification model or the updated classification model is being used to generate the first model result and the second model result, since the generating step is recited after the updating step. For purposes of compact prosecution, the generating limitation is understood as reciting “…generating a first model result and a second model result using the updated classification model based on a second image;”. Dependent Claims 2-19 and 21-22 inherit the deficiencies of their parent claims and are therefore rejected for the same reason(s) stated for their parent claims.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-2, 5-10, 12, 14-17, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Pederson (U.S. Pre-Grant Publication No. 2016/0232765) and further in view of Athsani et al. (U.S. Pre-Grant Publication No. 2013/0044132, hereinafter referred to as Athsani).

Regarding Claim 1: Pederson teaches a system for generating and implementing patches to improve classification model results based on user feedback through input icons, the system comprising:

A camera of a computing device; (Para. [0037])

One or more processors of the computing device; and (Para. [0037])

One or more memory devices storing instructions that, when executed by the one or more processors, configure the one or more processors to perform operations comprising: (Para. [0037])

In connection with capturing a first image with the camera, presenting a graphical user interface comprising a set of icons corresponding to first results comprising object recognition results based on attributes identified in the first image using a classification model stored on the computing device; and Pederson teaches “The computer 22 for the intelligent video/audio observation and identification database system 10 may include an interface between any number of application specific databases 30, 62, which in turn may be coupled with screening and/or searching functions to identify vehicles 70 and/or individuals 56 within the United States” (Para. [0082]), thereby teaching, in connection with capturing a first image (in the intelligent video/audio observation and identification database system), presenting an interface comprising a set of icons identifying a result comprising an object recognition result (vehicles and/or individuals) based on attributes identified in the first image using a classification model (screening and/or searching functions). Pederson further teaches “The system may identify vehicles and individuals entering or exiting the zone through image recognition of the vehicle or individual as compared to prerecorded information stored in a database” (Abstract). Pederson also teaches “Access software is used to communicate with internal databases 30, 62 or external or remote databases, and comparison software is used to review data as related to the external and/or internal databases 30, 62.” (Para. [0040]), thereby teaching using a locally stored model/software on the computing device.

Obtaining, from a server and based on user selection of the set of icons, a patch comprising Pederson teaches “no system is known which enables a security surveillance, or law enforcement offer to either select one of many pre-programmed inquiries based upon profiles, searches, or screening functions in real time to implement a specific customized inquiry of the accumulated database to identify a specific target group of vehicles to receive further investigation” (Para. [0008]).
Pederson teaches “Sensitivity software is also used to establish thresholds and to issue/trigger investigation signals, which may be displayed on the output device or monitor 40, and category software is used to divide data within individual files or images captured by the input devices 12, 18 into coherent segments. In addition, any other software as desired by security and/or law enforcement personnel may be utilized. Individuals will next verify the operational status and accuracy of the computer 22 operation for the intelligent audio/visual observation and identification database system 10 to insure functioning prior to implementation” (Para. [0040]). Pederson further teaches “The computer 22 will generally continue to store data, and therefore update the pattern, as detected by the input devices 12, 18.” and “the computer 22 is engaged in updating activities becomes smarter and more efficient in analyzing risk situations over time” (Para. [0118]).

model values used to modify the classification model and Pederson further teaches “The computer 22 will generally continue to store data, and therefore update the pattern, as detected by the input devices 12, 18.” and “the computer 22 is engaged in updating activities becomes smarter and more efficient in analyzing risk situations over time” (Para. [0118]), as well as “pre-established or customized profile parameters” (Para. [0110]). Therefore, Pederson teaches a script, or instructions, detected by the input devices that performs updating activities, including updating pattern parameters, that makes the model smarter and more efficient in analyzing risk situations over time.

a classification model exception for the attributes; Pederson further teaches “The computer 22 will generally continue to store data, and therefore update the pattern, as detected by the input devices 12, 18.” and “the computer 22 is engaged in updating activities becomes smarter and more efficient in analyzing risk situations over time” (Para. [0118]) and “The computer 22 will then filter, screen, and search groups of data within each priority classification to identify vehicles 56 and/or individuals 56 which satisfy the profile parameters” (Para. [0114]), thereby teaching classification model exceptions via filtering and updating the model, including filter exceptions for improved classification.

Updating, using the one or more processors of the mobile computing device, the classification model by modifying the classification model using the model values and retraining the classification model to generate an updated classification model; and Pederson teaches “Sensitivity software is also used to establish thresholds and to issue/trigger investigation signals, which may be displayed on the output device or monitor 40, and category software is used to divide data within individual files or images captured by the input devices 12, 18 into coherent segments. In addition, any other software as desired by security and/or law enforcement personnel may be utilized. Individuals will next verify the operational status and accuracy of the computer 22 operation for the intelligent audio/visual observation and identification database system 10 to insure functioning prior to implementation” (Para. [0040]). Pederson further teaches “The computer 22 will generally continue to store data, and therefore update the pattern, as detected by the input devices 12, 18.” and “the computer 22 is engaged in updating activities becomes smarter and more efficient in analyzing risk situations over time” (Para. [0118]). By performing updates of the pattern and making the system smarter and more efficient in analyzing risk situations, the model values have been updated and the classification system has been retrained.

Concurrently generating a first model result and a second model result using the updated classification model based on a second image; and Pederson teaches “A first preliminary screening inquiry may identify a vehicle 70 with a falsified license plate 54 and simultaneously a facial recognition system may identify an ethnic background for an individual 56” (Para. [0082]), thereby teaching generating multiple model results concurrently/simultaneously. Pederson further teaches using the most recently updated classification model by teaching “The computer 22 will generally continue to store data, and therefore update the pattern, as detected by the input devices 12, 18.” and “the computer 22 is engaged in updating activities becomes smarter and more efficient in analyzing risk situations over time” (Para. [0118]).

Pederson explicitly teaches all of the elements of the claimed invention as recited above except: a mobile computing device; and substituting the first model result based on the classification model exception without substituting the second model result. However, in the related field of endeavor of obtaining metadata regarding one or more captured images/videos, Athsani teaches:

Wherein the computing device is a mobile computing device; Athsani teaches “a camera enabled mobile device” (Abstract) where “the mobile device further includes processor and a memory” (Para. [0012]).

substituting the first model result based on the classification model exception without substituting the second model result. Athsani teaches “the obtained search results may then be overlaid on the image/video in operation 304. The overlaid image/video may then be presented on the mobile device in operation 306.” (Para. [0047]) where “As the user points the mobile device's camera at one or more objects in one or more scenes, such objects are automatically analyzed by the UAR to identify the one or more objects and then provide meta data regarding the identified objects in the display of the mobile device. The meta data is interactive and allows the user to obtain additional information or specific types of information, such as information that will aid the user in making a decision regarding the identified objects or selectable action options that can be used to initiate actions with respect to the identified objects. The user can utilize the UAR to continuously pass the camera over additional objects and scenes so that the meta data presented in the display of the mobile device is continuously updated.” (Para. [0004]). Athsani further teaches “a user can view a map overlay to facilitate navigation through an unfamiliar scene. The display of the mobile device could present directions on how the user can proceed to a particular destination, and directions could continue to be updated until the user arrives at her destination.” (Para. [0072]), thereby teaching substituting the displayed overlaid content with new/updated results. Athsani further teaches only updating/substituting a portion of the results by teaching “The display of the mobile device could present directions on how the user can proceed to a particular destination, and directions could continue to be updated until the user arrives at her destination.” (Para. [0072]) and “the UAR may then make the reservation and update the calendars of the user (as well as other users if indicated by the user)” (Para. [0057]), where other calendar information is unchanged.

Thus, it would have been obvious to one of ordinary skill in the art, having the teachings of Athsani and Pederson at the time that the claimed invention was effectively filed, to have combined the user augmented reality (UAR) service for a camera-enabled mobile device, as taught by Athsani, with the systems and methods for an intelligent observation and identification database system, as taught by Pederson. One would have been motivated to make such a combination because Athsani teaches “A User Augmented Reality (UAR) service for use with a camera enabled mobile device by a user in order to gain meta-data or decision support information based on visual input received by the camera of the device. The obtained information may then be presented to the user on the mobile device, via the display or an audio output.” (Para. [0029]), and it would have been obvious to a person having ordinary skill in the art that the gain of metadata or decision support information would aid the user in making a decision regarding the identified objects or selectable action options that can be used to initiate actions with respect to the identified objects taught by Pederson.
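For readers tracking the claim language rather than the prior-art mapping, here is a minimal sketch of the claimed patch mechanism: model values modify the on-device classifier, the classifier is retrained, and a classification model exception substitutes the first model result without substituting the second. All names and data shapes (Patch, Classifier, apply_patch) are assumptions for illustration; the claims recite behavior, not an implementation.

```python
# Hedged sketch of the claimed patch-and-exception behavior, under assumed
# data shapes. Not code from the application or any cited reference.

from dataclasses import dataclass, field

@dataclass
class Patch:
    model_values: dict[str, float]  # values used to modify the model
    exception: dict[str, str]       # attribute -> substituted label

@dataclass
class Classifier:
    weights: dict[str, float] = field(default_factory=dict)
    exceptions: dict[str, str] = field(default_factory=dict)

    def apply_patch(self, patch: Patch) -> None:
        self.weights.update(patch.model_values)  # modify the model...
        self.exceptions.update(patch.exception)  # ...record the exception...
        self.retrain()                           # ...and retrain

    def retrain(self) -> None:
        pass  # placeholder for retraining after the modification

    def classify(self, attribute: str, raw_label: str) -> str:
        # Substitute the result only when an exception covers this attribute.
        return self.exceptions.get(attribute, raw_label)

model = Classifier()
model.apply_patch(Patch({"layer1.bias": 0.02}, {"grille:mesh": "Model X"}))
print(model.classify("grille:mesh", "Model S"))  # first result substituted -> "Model X"
print(model.classify("wheel:alloy", "Model S"))  # second result unchanged  -> "Model S"
```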
Regarding Claim 2: All of the limitations herein are similar to some or all of the limitations as recited in Claim 1.

Regarding Claim 5: Athsani and Pederson further teach:

Wherein obtaining the patch comprises performing a search to obtain an intermediate result, Pederson teaches “no system is known which enables a security surveillance, or law enforcement offer to either select one of many pre-programmed inquiries based upon profiles, searches, or screening functions in real time to implement a specific customized inquiry of the accumulated database to identify a specific target group of vehicles to receive further investigation” (Para. [0008]).

updating the graphical user interface to present the intermediate result, Pederson teaches “no system is known which enables a security surveillance, or law enforcement offer to either select one of many pre-programmed inquiries based upon profiles, searches, or screening functions in real time to implement a specific customized inquiry of the accumulated database to identify a specific target group of vehicles to receive further investigation” (Para. [0008]). Therefore, Pederson teaches updating the interface by presenting an interface for another of the “many pre-programmed inquiries based upon profiles, searches or screening functions”.

wherein the updating of the graphical user interface comprises: images associated with the intermediate result; Pederson teaches “the computer may display a predetermined number of likely but differing matches, for example different makes or models of vehicles” (Para. [0065]).

conditions associated with the intermediate result; and Pederson teaches “The intelligent system is utilized to flag discrepancies related to information accessible and processed from a stored and accumulated continuously evolving database of information in order to warn security, surveillance, and/or law enforcement officers as to the existence of a condition warranting further investigation to minimize risk of danger, such as illegal activity or terrorist attacks.” (Para. [0033]).

distances associated with the intermediate result. Pederson teaches “Each database 30 may contain predetermined information, such as license plate registration date, vehicle history and warrant data, standard images and descriptions of vehicles including front, side and rear profile and undercarriage images, vehicle specifications such as height, width, length, total unloaded weight, and any other available vehicle data… Each evolving database 30 is also capable of being updated according to data saved by the system” (Para. [0038]) and “Upon identification of individuals 56 and/or vehicles 70 which satisfy the profile criteria, a communication signal will be generated to advise law enforcement, surveillance, or security zone 50 officers as to the status and location of the individuals 56 and/or vehicles 70 under investigation.” (Para. [0043]).

Regarding Claim 6: Athsani and Pederson further teach:

upon receiving an additional user selection of a first button of the graphical user interface: Athsani teaches “The user can also select the UAR option at any time as a selectable mobile application” such as by hitting a button 706 (Para. [0040]).

determining whether an input icon of the graphical user interface is empty; and Athsani teaches “If the camera is not pointed at a scene, the procedure 200 may again check whether the UAR option has been selected, e.g., whether the option has been turned off” (Para. [0042]).

in response to determining the input icon is not empty, transmitting, to the server, the first image and content in the input icon. Athsani teaches “When the camera is pointed at a scene, such scene may be displayed with overlaid UAR options for selecting an encyclopedia, decision support, or action mode in operation 208.” (Para. [0043]) and “the UAR server 101 includes a contextual information management module 102, a decision management module 104, and an action management module 106, which operate to return contextual, decision support, and/or action information 112, respectively, to the mobile device 108 for displaying on such mobile device.” (Para. [0031]).

Regarding Claim 7: Athsani and Pederson further teach:

wherein the classification model exception is based on content in the set of icons. Pederson further teaches “The computer 22 will generally continue to store data, and therefore update the pattern, as detected by the input devices 12, 18.” and “the computer 22 is engaged in updating activities becomes smarter and more efficient in analyzing risk situations over time” (Para. [0118]) and “The computer 22 will then filter, screen, and search groups of data within each priority classification to identify vehicles 56 and/or individuals 56 which satisfy the profile parameters” (Para. [0114]), thereby teaching classification model exceptions via filtering and updating the model, including filter exceptions for improved classification. Pederson further teaches “the system automatically may establish customized thresholds, filters, and/or tables for a security zone 50 minimizing officer input” and “the use of pre-stored profile queries in conjunction with manual customized queries enhances the performance of the intelligent video/audio observation and identification database system 10 for apprehension and deterrence of terrorist acts within a safety zone 50. The use of pre-stored and/or customized profiles expedites the searching of vehicles 70 and/or facial recognition data by narrowing investigation to priority classifications of individuals 56 and/or vehicles 70” (Para. [0130]).

Regarding Claim 9: Athsani and Pederson further teach:

Wherein updating the classification model comprises, upon receiving the set of model values and the classification model exception via the patch, automatically invoking one or more patch management systems in an operating system to update the classification model. Pederson teaches “Sensitivity software is also used to establish thresholds and to issue/trigger investigation signals, which may be displayed on the output device or monitor 40, and category software is used to divide data within individual files or images captured by the input devices 12, 18 into coherent segments. In addition, any other software as desired by security and/or law enforcement personnel may be utilized. Individuals will next verify the operational status and accuracy of the computer 22 operation for the intelligent audio/visual observation and identification database system 10 to insure functioning prior to implementation” (Para. [0040]). Pederson further teaches “The computer 22 will generally continue to store data, and therefore update the pattern, as detected by the input devices 12, 18.” and “the computer 22 is engaged in updating activities becomes smarter and more efficient in analyzing risk situations over time” (Para. [0118]). By performing updates of the pattern and making the system smarter and more efficient in analyzing risk situations, the model values and updates have been used by computer 22 (the patch management systems in the operating system) and the classification system (model) has been retrained/updated. Pederson also teaches automatically performing steps by teaching “various building control systems 64 may be activated by authorized personnel through voice recognition of vocal commands through the intelligent audio/visual observation and identification database system 10 as received by the transducer and verified with respect to pre-stored data for the authorized person, or which may be automatically opened or activated” (Para. [0120]) and “reassignment of priorities and the storage and recognition of the assigned priorities occurs at the computer 22 to automatically recalibrate the assignment of points or flags for further comparison to a profile” (Para. [0121]).

Regarding Claim 10: Athsani and Pederson further teach:

wherein the set of icons display thumbnails of vehicles identified as preliminary results. Pederson teaches “the computer may display a predetermined number of likely but differing matches, for example different makes or models of vehicles” (Para. [0065]).
Regarding Claim 12: Athsani and Pederson further teach:

upon receiving an additional user selection of a first button of the first graphical user interface, transmitting an error message to the server. Athsani teaches “If the camera is not pointed at a scene, the procedure 200 may again check whether the UAR option has been selected, e.g., whether the option has been turned off. For instance, the user may have turned off the translation function on her mobile device to take a normal photograph or video or utilize some other mobile application, besides the UAR application. If the UAR option is turned off, the UAR service may also be deactivated if needed in operation 203.” (Para. [0042]).

Regarding Claim 14: Athsani and Pederson further teach:

upon receiving a user selection of a first button of the first graphical interface: Athsani teaches “The user can utilize the UAR to continuously pass the camera over additional objects and scenes so that the meta data presented in the display of the mobile device is continuously updated.” (Para. [0004]).

transmitting a repopulate request to the server; Athsani teaches “The user can utilize the UAR to continuously pass the camera over additional objects and scenes so that the meta data presented in the display of the mobile device is continuously updated.” (Para. [0004]), thereby teaching transmitting a repopulate request to update the display.

removing the set of icons from the graphical user interface; and Athsani teaches “The user can utilize the UAR to continuously pass the camera over additional objects and scenes so that the meta data presented in the display of the mobile device is continuously updated.” (Para. [0004]), thereby teaching adding second icons and/or removing icons in the display when changing the objects and scenes dictating the updated display.

displaying second icons in the first graphical user interface. Athsani teaches “The user can utilize the UAR to continuously pass the camera over additional objects and scenes so that the meta data presented in the display of the mobile device is continuously updated.” (Para. [0004]), thereby teaching adding second icons and/or removing icons in the display when changing the objects and scenes dictating the updated display.

Regarding Claim 15: Athsani and Pederson further teach:

upon receiving a user selection of a second button of the graphical user interface: transmitting, to the server, a query for available vehicles without filtering conditions. Pederson further teaches “the system automatically may establish customized thresholds, filters, and/or tables for a security zone 50 minimizing officer input” and “the use of pre-stored profile queries in conjunction with manual customized queries enhances the performance of the intelligent video/audio observation and identification database system 10 for apprehension and deterrence of terrorist acts within a safety zone 50. The use of pre-stored and/or customized profiles expedites the searching of vehicles 70 and/or facial recognition data by narrowing investigation to priority classifications of individuals 56 and/or vehicles 70” (Para. [0130]). Therefore, Pederson teaches transmitting queries with “customized thresholds, filters, and/or tables”, which means that the filter is not mandatory and thus might not be included with the query as a condition.

Regarding Claim 16: Athsani and Pederson further teach:

upon receiving a user selection of a first button of the graphical user interface, presenting an augmented reality application.
Athsani teaches “a method of providing information regarding one or more scenes captured with a camera of a mobile device is disclosed. When a camera of the mobile device is pointed at a scene having one or more object(s), an image or video of the scene is presented in a display of the mobile device, and the image or video is overlaid with a plurality of options for selecting one of a plurality of user augmented reality modes that include an encyclopedia mode, a decision support mode, and an action mode.” (Para. [0005]).

Regarding Claim 17: Athsani and Pederson further teach:

wherein the set of icons corresponds with a first set of results, and Pederson teaches “The computer 22 for the intelligent video/audio observation and identification database system 10 may include an interface between any number of application specific databases 30, 62, which in turn may be coupled with screening and/or searching functions to identify vehicles 70 and/or individuals 56 within the United States” (Para. [0082]), thereby teaching, in connection with capturing a first image (in the intelligent video/audio observation and identification database system), presenting an interface comprising a set of icons identifying a result comprising an object recognition result (vehicles and/or individuals) based on attributes identified in the first image using a classification model (screening and/or searching functions). Pederson further teaches “the computer may display a predetermined number of likely but differing matches, for example different makes or models of vehicles” (Para. [0065]), thereby teaching the results corresponding to different icons representing different makes or models of vehicles.

wherein presenting the graphical user interface comprises: retrieving visualization preferences from a local memory; and Athsani teaches “if the user of the device is concerned about the environment, the decision support information which is presented can be prioritized based on the user's preference for being more interested in that data--e.g., use of green packaging etc.” (Para. [0033]).

determining the first set of results by truncating preliminary search results based on the visualization preferences. Pederson teaches “no system is known which enables a security surveillance, or law enforcement offer to either select one of many pre-programmed inquiries based upon profiles, searches, or screening functions in real time to implement a specific customized inquiry of the accumulated database to identify a specific target group of vehicles to receive further investigation” (Para. [0008]). Pederson further teaches “the computer may display a predetermined number of likely but differing matches, for example different makes or models of vehicles” (Para. [0065]).

Regarding Claim 20: All of the limitations herein are similar to some or all of the limitations as recited in Claim 1.

Claims 3, 11, 13, 18, 19, and 21-22 are rejected under 35 U.S.C. 103 as being unpatentable over Athsani and Pederson, and further in view of Turkelson et al. (U.S. Pre-Grant Publication No. 2020/0193206, hereinafter referred to as Turkelson).
Regarding Claim 3: Athsani and Pederson explicitly teach all of the elements of the claimed invention as recited above except:

The classification model comprises a convolutional neural network; The set of model values comprises a number of layers or a number of nodes for the convolutional neural network; Updating the classification model comprises updating the convolutional neural network based on the set of model values; and Retraining the classification model comprises retraining layers of the convolutional neural network after updating the convolutional neural network using the set of model values.

However, in the related field of endeavor of visual search, Turkelson teaches:

The classification model comprises a convolutional neural network; Turkelson teaches “The scene classification model may be a convolutional neural network (CNN) including a plurality of layers (e.g., 4 or more layers, 5 or more layers, 6 or more layers, 8 or more layers, etc.), which may form a portion of a deep neural network for classifying, or recognizing, a scene” (Para. [0056]).

The set of model values comprises a number of layers or a number of nodes for the convolutional neural network; Turkelson teaches “The scene classification model may be a convolutional neural network (CNN) including a plurality of layers (e.g., 4 or more layers, 5 or more layers, 6 or more layers, 8 or more layers, etc.), which may form a portion of a deep neural network for classifying, or recognizing, a scene” (Para. [0056]). Turkelson further teaches “the trained computer-vision object recognition model having parameters that encode information about a subset of visual features of the object depicted by each image from the training data set. For example, by training the computer-vision object recognition using the training data set, weights and biases of neuron of a neural network (e.g., a convolutional neural network, a discriminative neural network, a region-based convolution neural network, a deep neural network, etc.) may be adjusted. The adjustment of the weights and biases, thus the configurations of the parameters of the object recognition model, enable the object recognition model to recognize objects within input images.” (Para. [0069]).

Updating the classification model comprises updating the convolutional neural network based on the set of model values; and Turkelson teaches “The scene classification model may be a convolutional neural network (CNN) including a plurality of layers (e.g., 4 or more layers, 5 or more layers, 6 or more layers, 8 or more layers, etc.), which may form a portion of a deep neural network for classifying, or recognizing, a scene” (Para. [0056]). Turkelson further teaches “the trained computer-vision object recognition model having parameters that encode information about a subset of visual features of the object depicted by each image from the training data set. For example, by training the computer-vision object recognition using the training data set, weights and biases of neuron of a neural network (e.g., a convolutional neural network, a discriminative neural network, a region-based convolution neural network, a deep neural network, etc.) may be adjusted. The adjustment of the weights and biases, thus the configurations of the parameters of the object recognition model, enable the object recognition model to recognize objects within input images.” (Para. [0069]).

Retraining the classification model comprises retraining layers of the convolutional neural network after updating the convolutional neural network using the set of model values. Turkelson teaches “The scene classification model may be a convolutional neural network (CNN) including a plurality of layers (e.g., 4 or more layers, 5 or more layers, 6 or more layers, 8 or more layers, etc.), which may form a portion of a deep neural network for classifying, or recognizing, a scene” (Para. [0056]). Turkelson further teaches “the trained computer-vision object recognition model having parameters that encode information about a subset of visual features of the object depicted by each image from the training data set. For example, by training the computer-vision object recognition using the training data set, weights and biases of neuron of a neural network (e.g., a convolutional neural network, a discriminative neural network, a region-based convolution neural network, a deep neural network, etc.) may be adjusted. The adjustment of the weights and biases, thus the configurations of the parameters of the object recognition model, enable the object recognition model to recognize objects within input images.” (Para. [0069]).

Thus, it would have been obvious to one of ordinary skill in the art, having the teachings of Turkelson, Athsani, and Pederson at the time that the claimed invention was effectively filed, to have combined the use of input information, such as eye gaze location or touch location, to reduce the search space of an image when detecting objects, as taught by Turkelson, with the user augmented reality (UAR) service for a camera-enabled mobile device, as taught by Athsani, and the systems and methods for an intelligent observation and identification database system, as taught by Pederson. One would have been motivated to make such a combination because Turkelson teaches “some embodiments may leverage an additional channel of information beyond the image itself to improve object detection, object recognition, object selection, or any combination thereof. Some embodiments may use input information, such as touch location or eye gaze location, to reduce the search space of an image (or modulate the amount of computational effort expended in different areas of the image) when detecting objects therein or inferring user intent from images with multiple objects” (Para. [0037]), and it would have been obvious to a person having ordinary skill in the art that narrowing the space/area of interest based on at least eye gaze location would reduce resources used by not needing to identify objects in areas/spaces that are not of interest to the user.
Regarding Claim 11: Turkelson, Athsani, and Pederson further teach:

changing an icon color or an icon transparency in response to the user selection of the set of icons. Turkelson teaches “tap point information (or coordinates of other forms of user input) may be used to enhance or selectively process an image prior to being provided to a server. For instance, enhancement may be performed on-device (e.g., on a computing device) to a portion of an image centralized around the tap point. Such enhancements may include light balance enhancement and shadow removal (e.g., embodiments may transform an image in a raw file format (having a relatively wide color gamut) into a file format in a positive file format (having a narrower color gamut), and tradeoffs in white balance, intensity, and other pixel values may be made to favor areas of an image near (e.g., within a threshold distance of, like less than 10%, less than 20%, or less than 50% of an images width in pixels) a touch location. Additionally, patterns and colors may be detected within a region of the image where the tap point is located, which may be used to select an object from an object ontology.” (Para. [0045]).

Regarding Claim 13: Some of the limitations herein are similar to some or all of the limitations of Claim 5. Turkelson, Athsani, and Pederson further teach:

wherein updating the graphical user interface causes the graphical user interface to present the intermediate result in a ranking based on financing availability. Pederson teaches “the computer may display a predetermined number of likely but differing matches, for example different makes or models of vehicles” (Para. [0065]). Athsani further teaches “a user may view his/her financial information to help decide whether to purchase a particular imaged product” (Para. [0072]). Turkelson teaches “Information about the object may be retrieved by the visual search system and may be provided to the computing device with which the input was detected. For instance, embodiments may access an index keyed to object identifiers (e.g., stock keeping units (SKUs)), and may retrieve and present records related to the object, including a URL of a merchant's website at which the object can be purchased, descriptions of products corresponding to the object, related objects, reviews, and the like” (Para. [0044]).

Regarding Claim 18: Turkelson, Athsani, and Pederson further teach:

wherein the set of icons corresponds with a first set of results, and Pederson teaches “The computer 22 for the intelligent video/audio observation and identification database system 10 may include an interface between any number of application specific databases 30, 62, which in turn may be coupled with screening and/or searching functions to identify vehicles 70 and/or individuals 56 within the United States” (Para. [0082]), thereby teaching, in connection with capturing a first image (in the intelligent video/audio observation and identification database system), presenting an interface comprising a set of icons identifying a result comprising an object recognition result (vehicles and/or individuals) based on attributes identified in the first image using a classification model (screening and/or searching functions). Pederson further teaches “the computer may display a predetermined number of likely but differing matches, for example different makes or models of vehicles” (Para. [0065]), thereby teaching the results corresponding to different icons representing different makes or models of vehicles.
wherein presenting the graphical user interface comprises preselecting a plurality of icons from the set of icons corresponding to a subset of the first set of results based on confidence levels for the subset of the first set of results being within a predetermined range; Pederson teaches “no system is known which enables a security surveillance, or law enforcement offer to either select one of many pre-programmed inquiries based upon profiles, searches, or screening functions in real time to implement a specific customized inquiry of the accumulated database to identify a specific target group of vehicles to receive further investigation” (Para. [0008]). Pederson teaches “the computer may display a predetermined number of likely but differing matches, for example different makes or models of vehicles, in order of rank according to probability of the match, thus allowing an operator or security person to select the actual vehicle and corresponding prerecorded undercarriage image 34 for use in comparing to the observed undercarriage image 36.” (Para. [0065]) thereby teaching a predetermined range for the ranking/confidence level of the match. Turkelson further teaches “initially a scene classification model may classify an image as depicting a winter scene, and may assign a winter classification label to the image with a first confidence level (e.g., a confidence score). The winter classification label and the image may be provided to an object recognition model, which may determine, based on the winter classification label and the image, that a tree is depicted within the image and may assign a tree identification label to the image with a second confidence level (e.g., a confidence score). Subsequently, the tree identification label, the winter classification label, the first and second confidence levels, and the image may be provided back to the scene classification model” (Para. [0061]).Therefore, Person and Tuckelson together teaches generating a first GUI comprising preselecting a set of icons that are a subset of the results and having probabilities of the match within a predetermined range (Pederson) based on confidence levels for the subset (Turkelson). Detecting, during operation of the graphical user interface, an availability status change associated with a result of the subset of the first set of results in a dataset; and Pederson teaches “identify and track vehicles 70 and/or individuals 56 within a security zone 50 and compare the observed data in real time to previously stored data.” (Para. [0102]) and “The accumulation and storage of the information of the type identified above will be stored within particular continuously updated and evolving files to create a database for future reference” (Para. [0035]). Athsani teaches “So as to aid in a decision…the decision support material may be tailored to the particular type of place. For instance, if the place is a restaurant, the decision support material may include a menu, prices of food, dress code requirements, reservations availability etc. If the place is a golf course, the decision support material may include the type of clubs or balls to use, whether to rent a cart, which caddy to pick, which member might be available to play with the user, ratings, reviews, menu, political affiliation of the owner, eco-friendly, etc. Information may be also tailored to the user.” (Para. 
[0033]) and “The user can utilize the UAR to continuously pass the camera over additional objects and scenes so that the meta data presented in the display of the mobile device is continuously updated.” (Para. [0004]). Therefore, Pederson in combination with Athsani teaches detecting changes in availability of data by continuously updating and evolving files in a database (e.g., add, edit, remove) and continuously updating the display of the results to reflect the detected change in the continuously updating and evolving files in the database (removing results that are no longer relevant or available). Updating the graphical user interface by removing an icon of the plurality of icons based on the detecting of the availability status change. Pederson teaches “identify and track vehicles 70 and/or individuals 56 within a security zone 50 and compare the observed data in real time to previously stored data.” (Para. [0102]) and “The accumulation and storage of the information of the type identified above will be stored within particular continuously updated and evolving files to create a database for future reference” (Para. [0035]). Athsani teaches “So as to aid in a decision…the decision support material may be tailored to the particular type of place. For instance, if the place is a restaurant, the decision support material may include a menu, prices of food, dress code requirements, reservations availability etc. If the place is a golf course, the decision support material may include the type of clubs or balls to use, whether to rent a cart, which caddy to pick, which member might be available to play with the user, ratings, reviews, menu, political affiliation of the owner, eco-friendly, etc. Information may be also tailored to the user.” (Para. [0033]) and “The user can utilize the UAR to continuously pass the camera over additional objects and scenes so that the meta data presented in the display of the mobile device is continuously updated.” (Para. [0004]). Therefore, Pederson in combination with Athsani teaches detecting changes in availability of data by continuously updating and evolving files in a database (e.g., add, edit, remove) and continuously updating the display of the results to reflect the detected change in the continuously updating and evolving files in the database (removing results that are no longer relevant or available). Regarding Claim 19: Turkelson, Athsani, and Pederson further teach: wherein preselected first icons are displayed in a different color in the graphical user interface. Turkelson teaches “obtain information regarding the type of nail, sub-type of nail, color, shape, size, weight, material composition, location of that nail within the facility, a cost for purchasing the nail, or any other information related to the nail, or any combination thereof” (Para. [0098]). Regarding Claim 21: All of the limitations herein are similar to some or all of the limitations as recited in Claim 3. Regarding Claim 22: Turkelson, Athsani, and Pederson further teach: wherein the predetermined range applies to a set of top N preliminary classification results ranked by confidence score, wherein N is an integer greater than one. 
Regarding Claim 19: Turkelson, Athsani, and Pederson further teach: wherein preselected first icons are displayed in a different color in the graphical user interface. Turkelson teaches “obtain information regarding the type of nail, sub-type of nail, color, shape, size, weight, material composition, location of that nail within the facility, a cost for purchasing the nail, or any other information related to the nail, or any combination thereof” (Para. [0098]).

Regarding Claim 21: All of the limitations herein are similar to some or all of the limitations as recited in Claim 3.

Regarding Claim 22: Turkelson, Athsani, and Pederson further teach: wherein the predetermined range applies to a set of top N preliminary classification results ranked by confidence score, wherein N is an integer greater than one. Pederson teaches “the computer may display a predetermined number of likely but differing matches, for example different makes or models of vehicles, in order of rank according to probability of the match, thus allowing an operator or security person to select the actual vehicle and corresponding prerecorded undercarriage image 34 for use in comparing to the observed undercarriage image 36.” (Para. [0065]).
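The claim 22 reading, in which the predetermined range is applied only to the top N results ranked by confidence score (with N greater than one), can likewise be sketched; preselect_top_n, the bounds, and the sample scores are all hypothetical.

```python
# Hypothetical illustration of a predetermined range applied to the
# top-N preliminary results ranked by confidence score.
def preselect_top_n(results, n=3, lo=0.70, hi=0.95):
    """results: list of (label, confidence) pairs; returns the in-range subset of the top n."""
    top_n = sorted(results, key=lambda r: r[1], reverse=True)[:n]
    return [(label, c) for label, c in top_n if lo <= c <= hi]

results = [("sedan", 0.91), ("coupe", 0.97), ("truck", 0.62), ("SUV", 0.78), ("van", 0.74)]
print(preselect_top_n(results))  # [('sedan', 0.91), ('SUV', 0.78)]; 'coupe' ranks first but is out of range
```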
Response to Amendment

Applicant’s Amendments, filed on 11/13/2025, are acknowledged and accepted.

Response to Arguments

On page 9 of the Remarks filed on 11/13/2025, Applicant states that “the cited references do not teach ‘updating the classification model by modifying an architecture of the classification model with the set of model values and retraining the classification model after the modifying of the architecture with the set of model values to generate an updated classification model.’” Applicant’s argument is moot because modifying an architecture of a classification model does not appear to be supported by the specification, as is further addressed in the 112(a) rejection above.

On page 9 of the Remarks filed on 11/13/2025, Applicant states that “with reference to the amended claim 1, the cited references do not disclose (1) ‘concurrently generating a first model result and a second model result using the classification model based on a second image’ and (2) ‘substituting the first model result based on the classification model exception without substituting the second model result’ (emphasis added)”. Upon further time for search and consideration, Applicant’s statement related to the amended claims does not appear to overcome the previously cited prior art. The amended limitations being argued are further addressed in the rejection above.

On pages 9-10 of the Remarks filed on 11/13/2025, Applicant states that “the cited references do not disclose the amended claim 18. For example, the cited references do not disclose (1) ‘detecting, during operation of the graphical user interface, an availability status change associated with a result of the subset of the first set of results in a dataset’ and (2) ‘updating the graphical user interface by removing an icon of the plurality of icons based on the detecting of the availability status change.’” Upon further time for search and consideration, Applicant’s statement related to the amended claim 18 does not appear to overcome the previously cited prior art. The amended limitations being argued are further addressed in the rejection above.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure.

Jin et al. (U.S. Pre-Grant Publication No. 2014/0172831) teaches an intuitive information search method and device based on a displayed image, and a computer-readable recording medium thereof. The method includes recognizing a first input indicating a selection related to a plurality of objects included in the image, recognizing a second input indicating a search relationship between the selected plurality of objects, searching for information based on the first input and the second input, and outputting the found information through the device.

Chang (U.S. Pre-Grant Publication No. 2014/0244392) teaches a graphical recognition inventory management and marketing system. An image comprising a plurality of product elements is captured, and graphical recognition is utilized to detect, isolate, and identify each element by comparing characteristics of the element with element data stored in a database. When a match is found, the detected and isolated element data is stored with the matched database entry, and additional element data obtained during the graphical recognition process is added to the stored element data to expand the database entry. The current location of a user device is detected, and product element data is displayed along with a map indicating the physical location of the product element and a route to it from the user’s current location. A user can also capture an image and have product data and a map to the product displayed.

Chun et al. (U.S. Pre-Grant Publication No. 2016/0170614) teaches displaying a first object on a display of an electronic device, the first object being associated with at least one of a first content or a first function; determining a service related to the first object based on at least one of the first content or the first function; and providing the service.

Bhardwaj et al. (U.S. Pre-Grant Publication No. 2014/0279265) teaches receiving from a user a sketch that corresponds to a search item, where the sketch is partially generated by the user. An analysis module, executable by a processor of a machine, extracts an item attribute from the sketch, where the item attribute corresponds to a physical attribute of the search item. The analysis module identifies inventory items similar to the search item based on the extracted item attribute and a search scope, and a user interface module causes presentation of the inventory items to the user.

WIPO Publication WO 2019148923 A1 teaches searching for images with an image, an electronic device, and a storage medium. The method comprises: obtaining an image to be detected (110); detecting, by means of a preset algorithm, a plurality of target objects in the image to be detected and determining coordinate information of the areas where the target objects are respectively located (120); extracting, according to the coordinate information, the pixel points of the areas where the target objects are located and constituting a plurality of target object images respectively corresponding to the target objects (130); displaying the plurality of target object images at preset positions (140); determining, from the displayed target object images, a target object image to be searched (150); and searching a preset database to determine an image matching the target object image to be searched (160). By extracting each target object image and displaying it independently, a more accurate search result can be obtained without interference from the other target objects in the image.
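As a rough illustration of the six-step flow recited in WO 2019148923 A1 (the numerals 110 to 160 above), the sketch below crops each detected object’s region by its coordinates and uses a selected crop as the search query. The detection step is omitted, and the crop and search_by_object helpers, along with the equality-based matcher, are assumptions for illustration, not the publication’s actual algorithm.

```python
# Hypothetical illustration of crop-then-search over detected object
# regions; images are toy nested lists rather than real pixel buffers.
from typing import List, Tuple

Box = Tuple[int, int, int, int]  # (x, y, width, height) from a detector (step 120)

def crop(image: List[List[int]], box: Box) -> List[List[int]]:
    """Extract the pixel region for one detected object (step 130)."""
    x, y, w, h = box
    return [row[x:x + w] for row in image[y:y + h]]

def search_by_object(image, boxes: List[Box], selected: int, database):
    """Crop every detected object, then search with the user-selected crop."""
    crops = [crop(image, b) for b in boxes]           # steps 130-140
    query = crops[selected]                           # step 150
    return [img for img in database if img == query]  # step 160 (toy exact matcher)

image = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
database = [[[5, 6], [8, 9]]]
print(search_by_object(image, [(1, 1, 2, 2)], selected=0, database=database))
```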
Kundu (U.S. Pre-Grant Publication No. 2012/0054060) teaches enabling selection of a user-selectable silhouette image of an item (e.g., a handbag) at a client machine such as a personal computer. In response to the selection, a set of user-selectable silhouette images representing aspects of the item other than size is displayed at the client machine. The client machine may also render a grayed-out image to indicate that no item listing has been found with the aspect (e.g., style) represented by the grayed-out image.

Chinese Patent Application Publication CN110188229 teaches a picture search method, a mobile terminal, and a computer-readable storage medium. The method includes: displaying a first target picture, wherein the first target picture includes at least one object; receiving a first selection operation of a target object; in response to the first selection operation, searching a preset picture set for a second target picture containing the target object; and displaying the second target picture. By acquiring the target object in the first target picture and finding and displaying a second target picture that includes the target object, the method helps the user quickly find a desired picture containing the target object among a large number of pictures, which is convenient for the user and saves time.

Applicant’s amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ROBERT F MAY, whose telephone number is (571) 272-3195. The examiner can normally be reached Monday-Friday, 9:30 am to 6:00 pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Boris Gorney, can be reached at 571-270-5626. The fax number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ROBERT F MAY/
Examiner, Art Unit 2154
1/23/2026

/BORIS GORNEY/
Supervisory Patent Examiner, Art Unit 2154

Prosecution Timeline

Oct 03, 2023
Application Filed
Jun 15, 2024
Non-Final Rejection — §103, §112, §DP
Sep 04, 2024
Interview Requested
Sep 19, 2024
Applicant Interview (Telephonic)
Sep 19, 2024
Examiner Interview Summary
Sep 23, 2024
Response Filed
Sep 27, 2024
Final Rejection — §103, §112, §DP
Nov 04, 2024
Interview Requested
Dec 06, 2024
Applicant Interview (Telephonic)
Dec 10, 2024
Examiner Interview Summary
Dec 14, 2024
Examiner Interview Summary
Dec 30, 2024
Request for Continued Examination
Jan 07, 2025
Response after Non-Final Action
Mar 24, 2025
Response Filed
Aug 09, 2025
Non-Final Rejection — §103, §112, §DP
Oct 23, 2025
Interview Requested
Oct 27, 2025
Interview Requested
Nov 03, 2025
Applicant Interview (Telephonic)
Nov 03, 2025
Examiner Interview Summary
Nov 13, 2025
Response Filed
Jan 23, 2026
Final Rejection — §103, §112, §DP
Mar 16, 2026
Interview Requested
Mar 23, 2026
Applicant Interview (Telephonic)
Mar 23, 2026
Examiner Interview Summary
Mar 27, 2026
Response after Non-Final Action

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586145
METHOD AND APPARATUS FOR EDITING VIDEO IN ELECTRONIC DEVICE
2y 5m to grant • Granted Mar 24, 2026
Patent 12468740
CATEGORY RECOMMENDATION WITH IMPLICIT ITEM FEEDBACK
2y 5m to grant • Granted Nov 11, 2025
Patent 12367197
Pipelining a binary search algorithm of a sorted table
2y 5m to grant • Granted Jul 22, 2025
Patent 12360955
Data Compression and Decompression Facilitated By Machine Learning
2y 5m to grant • Granted Jul 15, 2025
Patent 12347550
IMAGING DISCOVERY UTILITY FOR AUGMENTING CLINICAL IMAGE MANAGEMENT
2y 5m to grant • Granted Jul 01, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.
Powered by AI — typically takes 5-10 seconds

Prosecution Projections

5-6
Expected OA Rounds
76%
Grant Probability
99%
With Interview (+29.7%)
3y 3m
Median Time to Grant
High
PTA Risk
Based on 286 resolved cases by this examiner. Grant probability derived from career allow rate.
