Prosecution Insights
Last updated: April 19, 2026
Application No. 18/078,360

SYSTEM, METHOD, AND COMPUTER PROGRAM FOR RETRAINING A PRE-TRAINED OBJECT CLASSIFIER

Status: Non-Final OA (§103)
Filed: Dec 09, 2022
Examiner: CHOI, TIMOTHY WING HO
Art Unit: 2671
Tech Center: 2600 — Communications
Assignee: Axis AB
OA Round: 3 (Non-Final)

Grant Probability: 60% (Moderate)
OA Rounds: 3-4
To Grant: 3y 2m
With Interview: 95%

Examiner Intelligence

Career Allow Rate: 60% (199 granted / 331 resolved; -1.9% vs TC avg)
Interview Lift: +35.1% (resolved cases with vs. without interview)
Avg Prosecution: 3y 2m (21 currently pending)
Total Applications: 352 (across all art units)

Statute-Specific Performance

§101: 10.6% (-29.4% vs TC avg)
§103: 56.5% (+16.5% vs TC avg)
§102: 8.1% (-31.9% vs TC avg)
§112: 15.9% (-24.1% vs TC avg)

Tech Center average figures are estimates. Based on career data from 331 resolved cases.

Office Action (§103)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 22 December 2025 has been entered.

Response to Amendment

Applicant's response to the last Office action, filed 22 December 2025, has been entered and made of record. The amendments to the claims are acknowledged, are supported by the original disclosure, and add no new matter. The amendments to the claims overcome the rejections of the previous Office action under 35 U.S.C. § 101 (directed to a judicial exception), and those rejections have been withdrawn.

Independent claims 1, 14, and 15 are amended to recite the additional elements of “annotating all instances of the tracked object in the stream of image frames as belonging to the single object class with high confidence, yielding annotated instances of the tracked object, wherein at least one other instance of the tracked object in the stream of image frames has a level of confidence that is lower than the threshold confidence value for the single object class is annotated as belonging to the single object class with high confidence; and retraining, by the at least one processor, the pre-trained object classifier with said at least one other instance of the tracked object in the stream of image frames that has the level of confidence that is lower than the threshold confidence value for the single object class”. These additional claim elements cannot be practically performed in the human mind and apply or use the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception. See MPEP 2106.04(d) and MPEP 2106.05(e). The amendments to independent claims 1, 14, and 15 have necessitated an updated ground of rejection over the applied prior art. Please see below for the updated interpretations and rejections.

Response to Arguments

Applicant's arguments filed 22 December 2025 have been fully considered but are not persuasive. In response to Applicant's arguments on pp. 22-23 that the combined teachings of Lee and Wang, notably Wang, fail to teach or suggest the amended independent claim limitations, the Examiner respectfully disagrees. The Examiner notes that the claims are treated under their broadest reasonable interpretation consistent with the specification. See MPEP 2111.

Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993). Furthermore, the test for obviousness is what the combined teachings of the references would have suggested to those of ordinary skill in the art. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981).

Wang further teaches that an initial trajectory model for multiple targets is created from a set of received image detections; that tracklets or detections are automatically linked into trajectories; that detections with the same label in adjacent frames are linked to form reliable tracklets; and that the trajectory models are updated using the reliable tracklets (see Wang [0026]-[0028]). For a set of detections and target ID labels, an optimal assignment for the identity of the targets based on the detection set is searched (see Wang [0029]). An iterative algorithm alternately optimizes the trajectory models for all targets and maximizes the conditional probability of a pairwise Markov Random Field (MRF) model, and a loopy belief propagation (LBP) algorithm is used to maximize the MRF conditional probability and generate a set of confident and separated tracklets by setting a threshold on the belief of a node to be assigned a target ID label: if the belief of the node is greater than the threshold, the node is assigned the target ID label, and nodes with the same label in adjacent frames are linked to form a tracklet, which is a relatively reliable segment of the final target trajectory, suggesting that the segment does not have a belief higher than the threshold to be assigned to other labels (see Wang [0031]-[0034]).

Initial detections, based on a whole video segmented into non-overlapping short windows, are followed by grouping into reliable tracklets. Initial training samples are generated only inside each individual sliding window, and an online metric learning is performed for each sliding window; short tracklets in adjacent windows can be associated to generate an extended training sample set for the initial appearance model. As tracklets are linked into longer trajectories, more samples are collected to update training toward more discriminative target appearances, and, using the expanded training set, a new appearance model can be further obtained, allowing a more effective metric function to be re-learned in an iterative fashion; the new metric can then be used to link all the target tracklets window by window to form longer trajectories (see Wang [0036]). Furthermore, because the combined teachings of Lee and Wang suggest the iterative collection of more samples to update the training and the detection and classification of more discriminative object appearances, and thereby implicitly teach that the more discriminative object appearances had an initial level of confidence lower than a threshold confidence level for classifying the object, the combined teachings of Lee and Wang suggest the broadest reasonable interpretation of at least one other instance of the tracked object in the stream of image frames having a level of confidence that is lower than the threshold confidence value for the single object class. See MPEP 2144.01. Thus, the teachings of Lee, in view of the further teachings of Wang, provide for the broadest reasonable interpretation of the amended independent claim subject matter.
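For readers less familiar with the cited technique, the thresholded belief assignment and adjacent-frame linking attributed to Wang above can be sketched in a few lines. This is an illustrative simplification only, not Wang's actual implementation; the function name, data layout, and threshold value are all hypothetical:

```python
# Simplified sketch: assign a target ID to a detection only when its belief
# exceeds a threshold, then link same-ID detections in adjacent frames into
# tracklets (relatively reliable segments of the final trajectory).

def form_tracklets(frames, threshold=0.8):
    """frames: list (one entry per frame) of lists of (belief, target_id)
    detections. Returns a list of (target_id, [(frame_idx, det_idx), ...])
    segments covering consecutive frames."""
    hits_by_id = {}  # target_id -> ordered list of (frame_idx, det_idx)
    for f, detections in enumerate(frames):
        for d, (belief, target_id) in enumerate(detections):
            if belief > threshold:  # confident assignments only
                hits_by_id.setdefault(target_id, []).append((f, d))
    # Split each ID's hit list wherever the frame index is not adjacent,
    # so each segment spans strictly consecutive frames.
    segments = []
    for target_id, hits in hits_by_id.items():
        run = [hits[0]]
        for prev, cur in zip(hits, hits[1:]):
            if cur[0] == prev[0] + 1:
                run.append(cur)
            else:
                segments.append((target_id, run))
                run = [cur]
        segments.append((target_id, run))
    return segments
```

Detections whose belief never exceeds the threshold (here, target 'B') are simply left unassigned, which mirrors the Office Action's reading that low-belief nodes are not assigned a target ID label.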
Claim Objections

Claims 14 and 15 are objected to because of the following informalities: amended claims 14 and 15 recite in the body of the respective claims, “classify, by the at least one processor using the deep learning mode”, where a typographical error is assumed to exist, and “deep learning model” is assumed to be intended. Appropriate correction is required.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.

Claims 1-9 and 11-15 are rejected under 35 U.S.C. 103 as being unpatentable over Lee (US 2020/0012894), herein Lee, in view of Wang et al. (US 2018/0114072), herein Wang.

Regarding claim 1, Lee discloses a method for retraining a pre-trained object classifier, the method being performed by a system comprising at least one processor configured to execute a deep learning model (see Lee Fig. 6 and [0061]-[0062], where computer readable memory can store instructions that are executable by the processor to perform the disclosed teachings; see Lee [0068]-[0070], where a convolutional neural network can be used to perform classification of image content into corresponding object classes), the method comprising: obtaining, by the at least one processor, a stream of image frames of a scene, wherein each of the image frames depicts an instance of a tracked object (see Lee [0042]-[0044], where a set of images of a scene is obtained by sensor(s), such as a video camera, and the set of images includes objects); and classifying, by the at least one processor configured to execute a deep learning model (see Lee Fig. 6 and [0061]-[0062], where computer readable memory can store instructions that are executable by the processor to perform the disclosed teachings; see Lee [0068]-[0070], where a convolutional neural network can be used to perform classification of image content into corresponding object classes), each instance of the tracked object to belong to an object class of a plurality of object classes with a respective level of confidence (see Lee [0057], where a classifier classifies multiple objects in an input image from the set of images with a classification metric indicative of the uncertainty of each classified object belonging to one or different classes; and see Lee [0067]-[0069], where the content of each bounding box is classified into a corresponding object class, the probabilities of the different classes are estimated, and the index of the most probable object class is associated with the bounding box, the class probability being the confidence score of the bounding box indicative of the uncertainty of each classified object belonging to one or different classes).

Lee does not explicitly disclose verifying, by the at least one processor, that, for at least one of the instances of the tracked object, the level of confidence for a single object class of the plurality of object classes is higher than a predetermined threshold confidence value, and for said at least one instance, the respective level of confidence for each other object class of the plurality of object classes is not higher than the threshold confidence value, to ensure that said at least one of the instances of the tracked object is classified with high confidence to the single object class; annotating, by the at least one processor, all instances of the tracked object in the stream of image frames as belonging to the single object class with high confidence, yielding annotated instances of the tracked object, wherein at least one other instance of the tracked object in the stream of image frames has a level of confidence that is lower than the threshold confidence value for the single object class is annotated as belonging to the single object class with high confidence; and retraining, by the at least one processor, the pre-trained object classifier with said at least one other instance of the tracked object in the stream of image frames that has the level of confidence that is lower than the threshold confidence value for the single object class.
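Purely for orientation, the verify/annotate/retrain flow recited in the limitation above can be read as the following sketch. This is an illustrative paraphrase of the claim language under stated assumptions, not the applicant's implementation; the function, its inputs (per-instance class-confidence dicts), and the threshold handling are invented for illustration:

```python
def annotate_and_select(instances, threshold):
    """instances: one dict per image frame mapping object class -> confidence
    for the tracked object. Returns (single_class, annotated, retrain_set)."""
    # Verify: at least one instance has exactly one class above the
    # threshold, with every other class not higher than the threshold.
    single_class = None
    for conf in instances:
        above = [c for c, p in conf.items() if p > threshold]
        if len(above) == 1:
            single_class = above[0]
            break
    if single_class is None:
        return None, [], []
    # Annotate ALL instances of the tracked object as the single class,
    # including instances whose own confidence was below the threshold.
    annotated = [(conf, single_class) for conf in instances]
    # Retrain only with the low-confidence instances, which are the ones
    # carrying new information for the pre-trained classifier.
    retrain_set = [(conf, single_class) for conf in instances
                   if conf.get(single_class, 0.0) < threshold]
    return single_class, annotated, retrain_set
```

The key point of the amended limitation, reflected in the last step, is that the instances selected for retraining are precisely those whose confidence for the single object class fell below the threshold.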
Wang teaches, in a related and pertinent vision-based target tracking system (see Wang Abstract), that an initial trajectory model for multiple targets is created from a set of received image detections; tracklets or detections are automatically linked into trajectories; detections with the same label in adjacent frames are linked to form reliable tracklets; and the trajectory models are updated using the reliable tracklets (see Wang [0026]-[0028]). For a set of detections and target ID labels, an optimal assignment for the identity of the targets based on the detection set is searched (see Wang [0029]). An iterative algorithm alternately optimizes the trajectory models for all targets and maximizes the conditional probability of a pairwise Markov Random Field (MRF) model, and a loopy belief propagation (LBP) algorithm is used to maximize the MRF conditional probability and generate a set of confident and separated tracklets by setting a threshold on the belief of a node to be assigned a target ID label: if the belief of the node is greater than the threshold, the node is assigned the target ID label, and nodes with the same label in adjacent frames are linked to form a tracklet, which is a relatively reliable segment of the final target trajectory, suggesting that the segment does not have a belief higher than the threshold to be assigned to other labels (see Wang [0031]-[0034]). Initial detections, based on a whole video segmented into non-overlapping short windows, are followed by grouping into reliable tracklets; initial training samples are generated only inside each individual sliding window, and an online metric learning is performed for each sliding window; short tracklets in adjacent windows can be associated to generate an extended training sample set for the initial appearance model. As tracklets are linked into longer trajectories, more samples are collected to update training toward more discriminative target appearances, and, using the expanded training set, a new appearance model can be further obtained, allowing a more effective metric function to be re-learned in an iterative fashion; the new metric can then be used to link all the target tracklets window by window to form longer trajectories (see Wang [0036]).

At the time of filing, one of ordinary skill in the art would have found it obvious to apply the teachings of Wang to the teachings of Lee such that objects are further tracked across the set of image detections to form reliable tracklets and trajectory models, and such that the tracking can be optimized by an iterative algorithm in which nodes with a belief greater than a threshold for a target ID label are assigned the target label and nodes in adjacent frames with the same label are linked to form tracklets, which are iteratively used to collect more samples to update training of more discriminative target appearances and form longer trajectories. This modification is rationalized as an application of a known technique to a known method ready for improvement to yield predictable results. In this instance, Lee discloses a base method for active learning, where the objects in a set of images are detected and classified with a neural network with a confidence, and the images with the highest scores are selected for labelling and added into the labelled training set to retrain the network based on the new training dataset. Wang teaches a known technique for tracking multiple targets from a set of received image detections to form tracklets which are linked into trajectories, where nodes with a belief greater than a threshold for a target ID label are assigned the target label and nodes in adjacent frames with the same label are linked to form tracklets, which are iteratively used to collect more samples to update training of more discriminative target appearances and form longer trajectories.

One of ordinary skill in the art would have recognized that applying Wang's technique would allow the method of Lee to further track the detected objects across the detections in the set of images to iteratively form reliable tracklets and trajectory models, which are iteratively used to collect more samples to update the training of the detection and classification for more discriminative appearances of the objects, predictably leading to an improved active learning method which further tracks the trajectories of detected objects for additional object features for classification. While the combined teachings of Lee and Wang do not explicitly disclose that at least one other instance of the tracked object in the stream of image frames has a level of confidence that is lower than the threshold confidence value for the single object class, the combined prior art's suggested teaching of the iterative collection of more samples to update the training and the detection and classification of more discriminative object appearances provides an implicit teaching that the more discriminative object appearances had an initial level of confidence that is lower than a threshold confidence level for classifying the object. See MPEP 2144.01.

Regarding claim 2, please see the above rejection of claim 1. Lee and Wang disclose the method according to claim 1, wherein at least some of the instances of the tracked object are classified to also belong to a further object class with a further level of confidence, and wherein the method further comprises: verifying, by the at least one processor (see Lee Fig. 6 and [0061]-[0062], where computer readable memory can store instructions that are executable by the processor to perform the disclosed teachings), that the further level of confidence is lower than the threshold confidence value for the at least some of the instances of the tracked object (see Lee [0075], where only tracked objects that are classified with a confidence below a threshold are considered).

Regarding claim 3, please see the above rejection of claim 1. Lee and Wang disclose the method according to claim 1, wherein the method further comprises: verifying, by the at least one processor (see Lee Fig. 6 and [0061]-[0062], where computer readable memory can store instructions that are executable by the processor to perform the disclosed teachings), that the object class of the instances of the tracked object does not change within the stream of image frames (see Wang [0040], where one object cannot belong to two tracklets, and two overlapping tracklets are treated as different persons; suggesting that the tracked object does not change object classes).

Regarding claim 4, please see the above rejection of claim 1. Lee and Wang disclose the method according to claim 1, wherein the tracked object moves along a path in the stream of image frames, and wherein the path is tracked when the tracked object is tracked (see Wang [0026]-[0028], where an initial trajectory model for multiple targets is created from a set of received image detections, tracklets or detections are automatically linked into trajectories, detections with the same label in adjacent frames are linked to form reliable tracklets, and the trajectory models are updated using reliable tracklets).

Regarding claim 5, please see the above rejection of claim 4. Lee and Wang disclose the method according to claim 4, wherein the path is tracked at a level of accuracy, and wherein the method further comprises: verifying, by the at least one processor (see Lee Fig. 6 and [0061]-[0062], where computer readable memory can store instructions that are executable by the processor to perform the disclosed teachings), that the level of accuracy is higher than a threshold accuracy value (see Wang [0033], where nodes are assigned a label when a belief threshold is exceeded, and nodes with the same label in adjacent frames are linked to form tracklets, providing a reliable segment of the final trajectory).

Regarding claim 6, please see the above rejection of claim 4. Lee and Wang disclose the method according to claim 4, wherein the method further comprises: verifying, by the at least one processor (see Lee Fig. 6 and [0061]-[0062], where computer readable memory can store instructions that are executable by the processor to perform the disclosed teachings), that the path has neither split into at least two paths nor merged from at least two paths within the stream of image frames (see Wang [0040], where one object cannot belong to two tracklets, and two overlapping tracklets are treated as different persons; suggesting that the trajectory of the tracked object does not split or merge).

Regarding claim 7, please see the above rejection of claim 1. Lee and Wang disclose the method according to claim 1, wherein the tracked object has a size in the image frames (see Lee [0078], where the geometry of the bounding boxes around detected objects is considered), and wherein the method further comprises: verifying, by the at least one processor (see Lee Fig. 6 and [0061]-[0062], where computer readable memory can store instructions that are executable by the processor to perform the disclosed teachings), that the size of the tracked object does not change more than a threshold size value within the stream of image frames (see Lee [0078], where the size of the bounding box is considered in evaluating the diversity metric).

Regarding claim 8, please see the above rejection of claim 7.
Lee and Wang disclose the method according to claim 7, wherein the size of the tracked object is adjusted by a distance-dependent compensation factor determined as a function of the distance between the tracked object and a camera device having captured the stream of image frames of the scene when verifying that the size of the tracked object does not change more than the threshold size value within the stream of image frames (see Lee [0078]-[0080], where the geometry and location of the bounding box, which considers the camera mounting setting of the image capture by the camera, are considered in determining the diversity and classification metrics).

Regarding claim 9, please see the above rejection of claim 1. Lee and Wang disclose the method according to claim 1, wherein the pre-trained object classifier is retrained only with the annotated instances of the tracked object for which the level of confidence was not higher than the threshold confidence value for the single object class (see Wang [0036], where more samples are iteratively collected to update training of more discriminative target appearances; the combined prior art's suggested teaching of the iterative collection of more samples to update the training and the detection and classification of more discriminative object appearances provides an implicit teaching that the more discriminative object appearances had an initial level of confidence that is lower than a threshold confidence level for classifying the object).

Regarding claim 11, please see the above rejection of claim 1. Lee and Wang disclose the method according to claim 1, wherein the method further comprises: providing, by the at least one processor (see Lee Fig. 6 and [0061]-[0062], where computer readable memory can store instructions that are executable by the processor to perform the disclosed teachings), the annotated instances of the tracked object to one or more of a database and a further device (see Lee [0047], where the annotated images are added to the initial labelled training dataset and the trainer retrains the network by fitting the new training dataset of images).

Regarding claim 12, please see the above rejection of claim 1. Lee and Wang disclose the method according to claim 1, wherein the stream of image frames originates from image frames having been captured by at least two camera devices (see Lee [0043], where the sensor(s) may be a video camera or camera-like device; and see Lee [0066], where sensors can obtain the set of images of the scene; suggesting that more than one sensor/camera may be used to obtain the set of images of the scene).

Regarding claim 13, please see the above rejection of claim 1. Lee and Wang disclose the method according to claim 1, wherein the classifying is performed at a first entity and the retraining is performed at a second entity physically separated from the first entity (see Lee [0047], where the trained neural network and the updated neural network are used to perform the classifying and the trainer is used to perform the retraining of the network).

Regarding claim 14, it recites a system comprising at least one processor configured to execute a deep learning model, the at least one processor configured to cause the system to perform the method of claim 1. Lee and Wang teach a system performing the method of claim 1 (see Lee Fig. 6 and [0061]-[0062], where computer readable memory can store instructions that are executable by the processor to perform the disclosed teachings; see Lee [0068]-[0070], where a convolutional neural network can be used to perform classification of image content into corresponding object classes). Please see above for detailed claim analysis.
Please see the above rejection of claim 1, as the rationale to combine the teachings of Lee and Wang is similar, mutatis mutandis.

Regarding claim 15, it recites a non-transitory computer-readable storage medium having stored thereon a computer program for performing the method of claim 1. Lee and Wang teach a non-transitory computer-readable storage medium having stored thereon a computer program comprising computer code that, when run on at least one processor configured to execute a deep learning model of a system, causes the at least one processor to perform the method of claim 1 (see Lee Fig. 6 and [0061]-[0062], where computer readable memory can store instructions that are executable by the processor to perform the disclosed teachings; see Lee [0068]-[0070], where a convolutional neural network can be used to perform classification of image content into corresponding object classes). Please see above for detailed claim analysis. Please see the above rejection of claim 1, as the rationale to combine the teachings of Lee and Wang is similar, mutatis mutandis.

Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Lee and Wang as applied to claim 1 above, and further in view of Ribeiro et al. (“Deep Bayesian Self-Training”), herein Ribeiro.

Regarding claim 10, please see the above rejection of claim 1. Lee and Wang do not explicitly disclose the method according to claim 1, wherein each of the annotated instances of the tracked object is assigned a respective weighting value according to which the annotated instances of the tracked object are weighted when the pre-trained object classifier is retrained, and wherein the weighting value of the annotated instances of the tracked object for which the level of confidence is higher than the threshold confidence value for the single object class is lower than the weighting value of the annotated instances of the tracked object for which the level of confidence is not higher than the threshold confidence value for the single object class.

Ribeiro teaches, in a related and pertinent deep Bayesian self-training method for automatic data annotation (see Ribeiro Abstract), a sample-wise weighting scheme during training that weights each training sample according to the predictive uncertainty over its pseudo label, where the model assigns more weight to uncertain pseudo-labelled samples as self-training progresses and forces exploration by adding more uncertain and potentially informative samples to the training set (see Ribeiro sect. 3.5, Inverse uncertainty weighting). At the time of filing, one of ordinary skill in the art would have found it obvious to apply the teachings of Ribeiro to the teachings of Lee and Wang such that the annotated images added into the new training dataset are similarly weighted inversely according to their respective classification confidence, to force the retraining towards exploration by adding more uncertain and potentially informative samples to the training set. This modification is rationalized as an application of a known technique to a known method ready for improvement to yield predictable results.
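The sample-wise weighting just described, read as the Office Action reads it (more loss weight for more uncertain pseudo-labelled samples), can be illustrated with a short sketch. This is a simplified illustration under that assumption, not Ribeiro's actual scheme; the function names, the normalization, and the `eps` guard are invented:

```python
def sample_weights(uncertainties, eps=1e-6):
    """One plausible reading of the weighting scheme: each pseudo-labelled
    sample's loss weight grows with its predictive uncertainty, so more
    uncertain (potentially more informative) samples contribute more.
    Weights are normalized to sum to 1; eps avoids all-zero weights."""
    raw = [u + eps for u in uncertainties]
    total = sum(raw)
    return [w / total for w in raw]

def weighted_loss(losses, uncertainties):
    """Per-sample losses combined with the uncertainty-derived weights."""
    weights = sample_weights(uncertainties)
    return sum(w * l for w, l in zip(weights, losses))
```

This direction of weighting also matches the claim 10 limitation that above-threshold (high-confidence) instances receive a lower weighting value than below-threshold instances.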
In this instance, Lee and Wang disclose a base method for active learning, where the objects in a set of images are detected and classified with a neural network with a confidence, the images with the highest scores are selected for labelling and added into the labelled training set to retrain the network based on the new training dataset, and objects are further tracked across the set of image detections to form reliable tracklets and trajectory models, where the tracking can be optimized by an iterative algorithm in which nodes with a belief greater than a threshold for a target ID label are assigned the target label and nodes in adjacent frames with the same label are linked to form tracklets, which are iteratively used to collect more samples to update training of more discriminative target appearances and form longer trajectories. Ribeiro teaches a known technique of using a sample-wise weighting scheme during training that weights each training sample according to the predictive uncertainty over its pseudo label, where the model assigns more weight to uncertain pseudo-labelled samples as self-training progresses and forces exploration by adding more uncertain and potentially informative samples to the training set. One of ordinary skill in the art would have recognized that applying Ribeiro's technique would allow the method of Lee and Wang to force the retraining towards exploration, by inversely weighting the annotated images added into the new training dataset according to their respective classification confidence and thereby adding more uncertain and potentially informative samples to the training set, predictably leading to an improved method for active learning that forces the retraining to use potentially informative samples.

Claims 16-18 are rejected under 35 U.S.C. 103 as being unpatentable over Lee and Wang as applied to claims 1, 14, and 15 above, and further in view of Cen et al. (“Deep feature augmentation for occluded image classification”), herein Cen.

Regarding claim 16, please see the above rejection of claim 1. Lee and Wang do not explicitly disclose the method of claim 1, wherein said at least one other instance depicts the tracked object under occluded or low-visibility conditions. Cen teaches, in a related and pertinent method to improve the classification accuracy of occluded images by fine-tuning pre-trained models with a set of augmented deep feature vectors (DFV) (see Cen Abstract), a deep feature augmentation approach in which, in a difference vector (DV) data flow, a set of clean and occluded image pairs is fed into a base CNN to extract DVs; in a DFV workflow, a set of training images is fed into a base CNN to extract DFVs; and the DVs are randomly added to the DFVs to yield pseudo-DFVs, with the original DFVs and pseudo-DFVs sent to the softmax layer through a pass-through probability switch to ensure the CNN can be trained for classification of both clean images and occluded images. The deep feature augmentation approach is applied to fine-tune the pre-trained CNNs (see Cen Fig. 4, sect. 3.2 Deep feature augmentation, and sect. 3.3 Implementation consideration). At the time of filing, one of ordinary skill in the art would have found it obvious to apply the teachings of Cen to the teachings of Lee and Wang such that clean and occluded image pairs of objects are further labeled and used to retrain the network to improve the classification accuracy of occluded objects. This modification is rationalized as an application of a known technique to a known method ready for improvement to yield predictable results.
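The augmentation attributed to Cen above (difference vectors from clean/occluded image pairs randomly added to deep feature vectors to yield pseudo-DFVs) can be sketched schematically. This is a minimal illustration of the general idea, not Cen's implementation; the function name, list-based vectors, and sampling are assumptions:

```python
import random

def pseudo_dfvs(dfvs, clean_feats, occluded_feats, n_pseudo):
    """dfvs: deep feature vectors extracted from training images.
    clean_feats / occluded_feats: features of paired clean/occluded images.
    Returns n_pseudo pseudo-DFVs: a randomly chosen difference vector
    (occluded minus clean) added to a randomly chosen training DFV,
    simulating occlusion directly in feature space."""
    # Difference vectors capture how occlusion perturbs the features.
    dvs = [[o - c for o, c in zip(occ, cln)]
           for occ, cln in zip(occluded_feats, clean_feats)]
    pseudo = []
    for _ in range(n_pseudo):
        base = random.choice(dfvs)
        dv = random.choice(dvs)
        pseudo.append([b + d for b, d in zip(base, dv)])
    return pseudo
```

Training on both the original DFVs and these pseudo-DFVs is what, per the cited passage, lets the classifier handle clean and occluded inputs alike.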
In this instance, Lee and Wang disclose a base method for active learning in which the objects in a set of images are detected and classified by a neural network with a confidence score, the images with the highest scores are selected for labelling and added to the labelled training set, and the network is retrained on the new training dataset. Objects are further tracked across the set of images: detections iteratively form reliable tracklets and trajectory models, where nodes in adjacent frames with the same label are linked to form tracklets, which are iteratively used to collect more samples to update training of more discriminative target appearances and to form longer trajectories. Wang further teaches that providing a framework for collecting samples online to learn the appearance model during tracking, and using an iterative process to obtain more training samples that are less sensitive to the variation of targets' visual appearance, allows for better handling of inter-object occlusions and interactions (see Wang [0035]). Cen teaches a known technique of further using a set of clean and occluded image pairs to generate pseudo deep feature vectors to further train and fine-tune pre-trained CNNs and ensure the CNN can be trained for classification of both clean and occluded images. One of ordinary skill in the art would have recognized that applying Cen's technique would allow the method of Lee and Wang to further use clean and occluded image pairs of labeled objects to retrain the network, predictably leading to an improved object classification network with improved classification accuracy for occluded objects. Regarding claim 17, see the above rejection of claim 14. It is a system claim reciting similar subject matter to claim 16; please see claim 16 above for the detailed claim analysis, as the limitations of claim 17 are similarly rejected. Regarding claim 18, see the above rejection of claim 15.
It is a non-transitory computer-readable storage medium claim reciting similar subject matter to claim 16. Please see claim 16 above for the detailed claim analysis, as the limitations of claim 18 are similarly rejected.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to TIMOTHY WING HO CHOI whose telephone number is (571) 270-3814. The examiner can normally be reached 9:00 AM to 5:00 PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, VINCENT RUDOLPH, can be reached at (571) 272-8243. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/TIMOTHY CHOI/
Examiner, Art Unit 2671

/VINCENT RUDOLPH/
Supervisory Patent Examiner, Art Unit 2671

Prosecution Timeline

Dec 09, 2022
Application Filed
Apr 05, 2025
Non-Final Rejection — §103
Jul 08, 2025
Response Filed
Oct 18, 2025
Final Rejection — §103
Dec 22, 2025
Request for Continued Examination
Jan 13, 2026
Response after Non-Final Action
Jan 24, 2026
Non-Final Rejection — §103
Apr 13, 2026
Examiner Interview Summary
Apr 13, 2026
Applicant Interview (Telephonic)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12497051
APPARATUSES, SYSTEMS, AND METHODS FOR DETERMINING VEHICLE OPERATOR DISTRACTIONS AT PARTICULAR GEOGRAPHIC LOCATIONS
2y 5m to grant Granted Dec 16, 2025
Patent 12488569
UNPAIRED IMAGE-TO-IMAGE TRANSLATION USING A GENERATIVE ADVERSARIAL NETWORK (GAN)
2y 5m to grant Granted Dec 02, 2025
Patent 12475992
SYSTEM AND METHOD FOR NAVIGATING A TOMOSYNTHESIS STACK INCLUDING AUTOMATIC FOCUSING
2y 5m to grant Granted Nov 18, 2025
Patent 12469300
SYSTEMS, DEVICES, AND METHODS FOR VEHICLE CAMERA CALIBRATION
2y 5m to grant Granted Nov 11, 2025
Patent 12469190
X-RAY TOMOGRAPHIC RECONSTRUCTION METHOD AND ASSOCIATED DEVICE
2y 5m to grant Granted Nov 11, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

3-4
Expected OA Rounds
60%
Grant Probability
95%
With Interview (+35.1%)
3y 2m
Median Time to Grant
High
PTA Risk
Based on 331 resolved cases by this examiner. Grant probability derived from career allow rate.