Prosecution Insights
Last updated: April 19, 2026
Application No. 18/262,874

METHODS AND APPARATUS FOR INCREMENTAL LEARNING USING STORED FEATURES

Status: Final Rejection (§103)
Filed: Jul 25, 2023
Examiner: BROUGHTON, KATHLEEN M
Art Unit: 2661
Tech Center: 2600 — Communications
Assignee: Eli Lilly And Company
OA Round: 2 (Final)
Grant Probability: 83% (Favorable)
OA Rounds: 3-4
To Grant: 2y 7m
With Interview: 92%

Examiner Intelligence

Career Allow Rate: 83% (219 granted / 263 resolved; +21.3% vs TC avg; above average)
Interview Lift: +8.3% (moderate), comparing resolved cases with and without an interview
Avg Prosecution: 2y 7m (typical timeline); 34 applications currently pending
Total Applications: 297 across all art units (career history)

Statute-Specific Performance

§101: 10.9% (-29.1% vs TC avg)
§103: 51.2% (+11.2% vs TC avg)
§102: 24.1% (-15.9% vs TC avg)
§112: 11.4% (-28.6% vs TC avg)
Tech Center averages are estimates; based on career data from 263 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

Response to Amendment

Receipt is acknowledged of claim amendments with associated arguments/remarks, received December 17, 2025. Claims 1-12 and 14-23 are pending, of which claims 1-7, 9, 12, 14-19, and 22 were amended. Claim 13 was cancelled.

Response to Arguments

Applicant's arguments, see Remarks, pp. 10-13, filed 12/17/2025, with respect to the rejection of claims 1-12 and 14-23 under 35 U.S.C. § 101 have been fully considered and, in light of the associated amendment and remarks, are persuasive. In particular, the applicant cites Example 39 of the 2019 Revised Patent Subject Matter Eligibility Guidance and articulates a parallel claim construction that constitutes eligible subject matter. The rejection has therefore been withdrawn.

Applicant's arguments, see pp. 14-16, filed 12/17/2025, with respect to the rejections of claims 1-12 and 14-23 under 35 U.S.C. § 103 have been fully considered and, in light of the associated amendments to the independent claims that changed the scope of the claims, are persuasive. The rejection has therefore been withdrawn. However, upon further consideration, a new ground of rejection is made under Brandes et al. (US 2020/0364520) in view of Turkelson et al. (US 2020/0193552).

Claim Rejections - 35 U.S.C. § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 2, 12, and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Brandes et al. (US 2020/0364520, cited in the Non-Final Rejection of 10/01/2025 as pertinent art; see Conclusion section) in view of Turkelson et al. (US 2020/0193552, cited in the Non-Final Rejection of 10/01/2025).
Regarding Claim 1, Brandes et al. teach a computerized method for incrementally training a classifier using stored sets of representative features (system 300 with methodology steps for training and retraining classifier 302; Fig 3 and ¶ [0054]), the method comprising:

accessing a first set of training images for a first class (a training data set with an image and associated meta-data for a first classification is accessed and fed into the classifier, 301, 302; Fig 3 and ¶ [0055]);

processing the first set of training images (the classifier 302 determines the features of the data, which are output as a prediction class; a classifier prediction is based on comparing features, also described in ¶ [0073]-[0074]; Fig 3 and ¶ [0054]-[0056]);

determining, using a selection technique, a set of representative features from the set of features for the first class (the prediction class data is processed by the evaluator engine 304 to determine whether the data is valid 306 or a rare case, based on a knowledge graph 314 analysis, and the valid data is fed out 307 and may become a component of training data 308; Fig 3 and ¶ [0056]-[0057]);

training the classifier using the set of representative features for the first class (the classifier 304 may undergo training when a predefined amount of additional valid 306 data is added to the training data set 308; Fig 3 and ¶ [0057]-[0058], [0066]);

adding the set of representative features for the first class to an exemplar set of representative features for a plurality of classes (the valid 306 training data is added to the training data set 308; Fig 3 and ¶ [0057]);

accessing a second set of training images for a second class (the rare case image data are forwarded to the rare case extractor 310 and may be used to enlarge the rare training data 308, by including similar external data 312, in a different class of the training data set 308; Fig 3 and ¶ [0058], [0061]-[0063]); and

re-training the classifier using the second set of training images and at least part of the exemplar set of representative features (the classifier 304 may undergo retraining when a predefined amount of additional rare training data is added to the training data set 308; Fig 3 and ¶ [0066]).

Brandes et al. teach that features are calculated in the images and used for distinguishing features between multiple images using the similarity engine 316, but do not explicitly teach a feature extraction technique. Turkelson et al. is analogous art pertinent to the technological problem addressed in the current application and teaches a feature extraction technique (the learning model encodes objects as vectors to detect object identifiers, i.e., features extracted using feature extraction subsystem 114, step 204; Fig 1, 2 and ¶ [0046], [0060]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the current application to combine the teachings of Brandes et al. with those of Turkelson et al., including a feature extraction technique. By extracting features of image data using feature vectors, the object data extraction used for matching and classification is enriched during runtime, allowing efficient high-volume data analysis, as recognized by Turkelson et al. (¶ [0019]-[0021]).
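For orientation, the claim 1 method that the examiner maps onto Brandes is essentially exemplar-based incremental learning. Below is a minimal, hypothetical sketch of that workflow in Python; the flattening feature extractor, the take-first-k selection placeholder, and the nearest-class-mean classifier are illustrative assumptions, not the applicant's or Brandes's actual implementation.

```python
import numpy as np

def extract_features(images):
    # Stand-in for the claimed feature extraction technique (per the Turkelson
    # mapping): flatten each image to a vector. A real system would use a CNN.
    return np.array([img.ravel() for img in images], dtype=float)

def select_representatives(features, k=5):
    # Placeholder selection technique; claim 2's mean-based variant is
    # sketched separately below.
    return features[:k]

class IncrementalClassifier:
    """Nearest-class-mean classifier over stored representative features."""

    def __init__(self):
        self.exemplar_set = {}   # class label -> stored representative features
        self.class_means = {}    # class label -> mean feature vector

    def train(self, label, representatives):
        # Train on the representatives and add them to the exemplar set.
        self.exemplar_set[label] = representatives
        self.class_means[label] = representatives.mean(axis=0)

    def retrain(self, label, new_images):
        # Re-train using the new class's images plus at least part of the
        # stored exemplar set (old classes are refreshed from their exemplars).
        self.train(label, select_representatives(extract_features(new_images)))
        for old, reps in self.exemplar_set.items():
            self.class_means[old] = reps.mean(axis=0)

    def predict(self, image):
        f = extract_features([image])[0]
        return min(self.class_means,
                   key=lambda c: np.linalg.norm(f - self.class_means[c]))

rng = np.random.default_rng(0)
clf = IncrementalClassifier()
clf.train(0, select_representatives(extract_features(rng.random((20, 8, 8)))))
clf.retrain(1, rng.random((20, 8, 8)) + 1.0)   # a second class arrives later
print(clf.predict(rng.random((8, 8))))         # -> 0 (nearer the first class)
```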
Regarding Claim 2, Brandes et al. in view of Turkelson et al. teach the method of claim 1 (as described above), wherein determining the set of representative features using the selection technique (Turkelson et al.: the extracted features are normalized, and those features are used to train a classifier, which can be based on an algorithm that determines features of a region representing a classification, step 204; Fig 1, 2 and ¶ [0046]-[0048], [0060]-[0061]) comprises: determining, based on the set of features, a mean of the features for the first class (Turkelson et al.: the feature extraction subsystem 114 uses a mean pixel value process based on the color channels to determine and classify the feature; ¶ [0045]-[0046], [0061]); and determining, based on the mean, the set of representative features, wherein the set of representative features is a subset of the set of features (Turkelson et al.: a subset of visual features may be determined based on the mean pixel value for the multichannel color image; ¶ [0050], [0061]).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the current application to combine the teachings of Brandes et al. with those of Turkelson et al. wherein determining the set of representative features using the selection technique comprises: determining, based on the set of features, a mean of the features for the first class; and determining, based on the mean, the set of representative features, wherein the set of representative features is a subset of the set of features. By using a mean pixel value for a multichannel color image, the parameters of the object, such as edges, may be easily detected, thereby improving object recognition accuracy and allowing for metric learning by the neural network, as recognized by Turkelson et al. (¶ [0061]).

Regarding Claim 12, Brandes et al. teach a non-transitory computer-readable media (computing system 500 with memory 504; Fig 5 and ¶ [0077]-[0078]) comprising instructions (memory 504 stores program modules 516 with instructions; Fig 5 and ¶ [0079]) for incrementally training a classifier using stored sets of representative features (program modules 516 are used for the complete system 300; Fig 3, 5 and ¶ [0054], [0079]), wherein the instructions, when executed by one or more processors on a computing device (processor 502 executes instructions stored on memory 504; ¶ [0077]), are operable to cause the one or more processors to perform steps identical to claim 1 (as described above).

Regarding Claim 14, Brandes et al. in view of Turkelson et al. teach the non-transitory computer-readable media of claim 12 (as described above) with further steps claimed identical to claim 2 (as described above).

Claims 3-6 and 15-18 are rejected under 35 U.S.C. 103 as being unpatentable over Brandes et al. (US 2020/0364520, cited in the Non-Final Rejection of 10/01/2025 as pertinent art; see Conclusion section) in view of Turkelson et al. (US 2020/0193552, cited in the Non-Final Rejection of 10/01/2025) and Byeon et al. (Simultaneously Removing Noise and Selecting Relevant Features for High Dimensional Noisy Data, cited in the Non-Final Rejection of 10/01/2025).
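Circling back to claim 2 before the claim 3 analysis: the mean-based selection limitation can be sketched in a few lines. Selecting the k features nearest the class mean is an illustrative reading only, since the claim requires just that the representative subset be determined "based on the mean".

```python
import numpy as np

def mean_based_selection(features, k):
    """Claim 2 sketch: compute the mean of the class's features, then pick a
    subset of the features based on that mean (here, the k nearest to it)."""
    mean = features.mean(axis=0)                      # mean of the feature set
    dists = np.linalg.norm(features - mean, axis=1)   # each feature's distance to the mean
    return features[np.argsort(dists)[:k]]            # representative subset

features = np.random.default_rng(1).random((100, 16))
print(mean_based_selection(features, k=10).shape)     # (10, 16): a subset
```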
Regarding Claim 3, Brandes et al. in view of Turkelson et al. teach the method of claim 1 (as described above), including retraining the classifier using the second set of training images and at least part of the exemplar set of representative features (Brandes et al.: the classifier 304 may undergo retraining when a predefined amount of additional rare training data is added to the training data set 308; Fig 3 and ¶ [0066]). Brandes et al. in view of Turkelson et al. do not teach generating a first modified representative feature for the first class, comprising: selecting a first representative feature in the set of representative features for the first class; determining a noise component for the first representative feature; and generating, based on the first representative feature and the noise component, the first modified representative feature.

Byeon et al. is analogous art pertinent to the technological problem addressed in this application and teaches generating a first modified representative feature for the first class (a model from the Noise Detection algorithm is used to identify candidates that contain too much noise; 2.2 PS-ND: identifying actual noises), comprising: selecting a first representative feature in the set of representative features for the first class (a genetic algorithm (GA) is used to first search for the population and the feature using the noise detection (ND) in the NDFS algorithm; Fig 1 and 2.1 NDFS GA-ND: identifying candidates for noises); determining a noise component for the first representative feature (the noise detection (ND) is determined within the secondary classifier (prototype selection, PS-ND), representing the set of candidates with noisy instances that cannot be classified correctly by the primary classifiers; 2.2 NDFS PS-ND: identifying actual noises); and generating, based on the first representative feature and the noise component, the first modified representative feature (the noise instances of the PS-ND dataset are removed from the GA-ND set to create a noise-free dataset; 2.2 NDFS PS-ND: identifying actual noises).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the current application to combine the teachings of Brandes et al. in view of Turkelson et al. with Byeon et al., including generating a first modified representative feature for the class, comprising: selecting a first representative feature in the set of representative features for the class; determining a noise component for the first representative feature; and generating, based on the first representative feature and the noise component, the first modified representative feature. By identifying noise instances in a dataset and removing them, the remaining classifier data is used to build a more accurate model, as recognized by Byeon et al. (2.2 PS-ND: identifying actual noises).
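Claim 3's "first modified representative feature" can be illustrated as a stored feature combined with a determined noise component. The Gaussian noise model below is purely an assumption for illustration; Byeon's NDFS, by contrast, detects and removes noisy instances rather than synthesizing them.

```python
import numpy as np

rng = np.random.default_rng(2)

def first_modified_feature(representatives, index=0, scale=0.1):
    """Claim 3 sketch: select a representative feature, determine a noise
    component for it, and generate the modified feature from the two."""
    first = representatives[index]                  # selected representative feature
    noise = rng.normal(0.0, scale * first.std(), first.shape)  # noise component
    return first + noise                            # first modified representative feature

reps = rng.random((10, 16))
print(np.allclose(first_modified_feature(reps), reps[0]))  # False: perturbed
```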
Regarding Claim 4, Brandes et al. in view of Turkelson et al. and Byeon et al. teach the method of claim 3 (as described above), further comprising generating a second modified representative feature (Byeon et al.: identifying features of the data set after the noise is filtered from the initial large dataset; 2.2 PS-ND: identifying actual noises, 2.3 GA-FS: identifying relevant features), comprising: selecting a second representative feature from the set of representative features for the first class (Byeon et al.: the GA-FS identifies relevant features from the noise-removed classifier data with multiple selected features, thereby including a second feature; 2.3 GA-FS: identifying relevant features); and determining the second modified representative feature based on a difference between values of the first representative feature and the second representative feature (Byeon et al.: the GA-FS uses only the instances selected by the GA-ND; 2.3 GA-FS: identifying relevant features).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the current application to combine the teachings of Brandes et al. in view of Turkelson et al. with Byeon et al., including generating a second modified representative feature, comprising: selecting a second representative feature from the set of representative features for the first class; and determining the second modified representative feature based on a difference between values of the first representative feature and the second representative feature. By identifying differences between compared image data in a dataset and removing such instances, the remaining classifier data is used to build a more accurate model, as recognized by Byeon et al. (2.2 PS-ND: identifying actual noises).

Regarding Claim 5, Brandes et al. in view of Turkelson et al. and Byeon et al. teach the method of claim 4 (as described above), wherein the method further comprises: determining, using the second set of training images, a second set of features for the second class (Byeon et al.: the instances in the training set, after being processed by the PS-ND and returned to the GA-ND, are distributed to the GA-FS, and a second set of features (from the number of features) is determined; 2.3 GA-FS: identifying relevant features); and retraining the classifier comprises training the classifier using: the second set of features; and at least one of the first representative feature, the first modified representative feature, and the second modified representative feature (Byeon et al.: instances with the highest classification accuracy are selected as training data to train a classifier, including training and testing datasets; 3.2 Performance measure).

Regarding Claim 6, Brandes et al. in view of Turkelson et al. and Byeon et al. teach the method of claim 5 (as described above), further comprising: determining a second set of representative features from the second set of features for the second class (Byeon et al.: the secondary classifier's representative features are determined based on highest classification accuracy to build the model; 2.2 PS-ND: identifying actual noises ¶ 4-6); and adding the second set of representative features for the second class to the exemplar set (Byeon et al.: the data sets selected by the secondary classifier are stored as prototype data; 2.2 PS-ND: identifying actual noises).
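The claim 4 limitation above (a second modified feature determined from the difference between two representative features) reads naturally as feature interpolation or extrapolation. A minimal sketch, with the mixing weight alpha as an assumed free parameter:

```python
import numpy as np

def second_modified_feature(first, second, alpha=0.5):
    """Claim 4 sketch: derive a new feature based on the difference between
    the values of the first and second representative features."""
    diff = second - first           # element-wise difference between the two features
    return first + alpha * diff     # move the first feature along that difference

reps = np.random.default_rng(3).random((10, 16))
print(second_modified_feature(reps[0], reps[1]).shape)   # (16,)
```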
Regarding Claim 15, Brandes et al. in view of Turkelson et al. teach the non-transitory computer-readable media of claim 12 (as described above) with further steps claimed identical to claim 3 (as described above). Regarding Claim 16, Brandes et al. in view of Turkelson et al. and Byeon et al. teach the non-transitory computer-readable media of claim 15 (as described above) with further steps claimed identical to claim 4 (as described above). Regarding Claim 17, Brandes et al. in view of Turkelson et al. and Byeon et al. teach the non-transitory computer-readable media of claim 16 (as described above) with further steps claimed identical to claim 5 (as described above). Regarding Claim 18, Brandes et al. in view of Turkelson et al. and Byeon et al. teach the non-transitory computer-readable media of claim 17 (as described above) with further steps claimed identical to claim 6 (as described above).

Claims 7-10 and 19-22 are rejected under 35 U.S.C. 103 as being unpatentable over Brandes et al. (US 2020/0364520, cited in the Non-Final Rejection of 10/01/2025 as pertinent art; see Conclusion section) in view of Turkelson et al. (US 2020/0193552, cited in the Non-Final Rejection of 10/01/2025) and Song (Personalized Image Classification by Semantic Embedding and Active Learning, cited in the Non-Final Rejection of 10/01/2025).

Regarding Claim 7, Brandes et al. in view of Turkelson et al. teach the method of claim 1 (as described above), wherein the second set of images comprises a new image (Turkelson et al.: a second, new (updated) image training set with a new image is obtained, step 206; Fig 1 and ¶ [0054]-[0055], [0062]). Brandes et al. in view of Turkelson et al. do not teach further comprising executing a testing phase, comprising: receiving a new image; determining a new set of features for the new image; executing, using the new set of features, the classifier to generate a first set of predictions for the new image; executing, using the exemplar set of representative features, a machine learning model to generate a second set of predictions for the new image; and determining a predicted class for the new image based on the first set of predictions and the second set of predictions.

Song is analogous art pertinent to the technological problem addressed in this application and teaches executing a testing phase (the image classifier performs a testing (evaluation) phase; Fig 1 and 8.1 Experimental Setup, 8.2 Evaluation of Annotation Selection), comprising: receiving a new image (the image classifier performs a testing (evaluation) phase with a testing (evaluation) image set based on the classification categories; Fig 1 and 8.1 Experimental Setup, 8.2 Evaluation of Annotation Selection); determining a new set of features for the new image (the evaluation (testing) image set is analyzed for a set of features based on a first classification granularity; Fig 3 and 4. Feature Learning ¶ 2); executing, using the new set of features, the classifier to generate a first set of predictions for the new image (features analyzed based on the first classification granularity result in a first classification with discrete label c; Fig 3 and 4. Feature Learning ¶ 2-4); executing, using the exemplar set of representative features, a machine learning model to generate a second set of predictions for the new image (features are analyzed for the input image to generate a second classification based on a different granularity using pre-trained model f_g, generating continuous label v_c; Fig 3 and 4. Feature Learning ¶ 2-4); and determining a predicted class for the new image based on the first set of predictions and the second set of predictions (the continuous label v_c is generated based on the pre-trained model and the category label c; Fig 3 and 4. Feature Learning ¶ 2-4).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the current application to combine the teachings of Brandes et al. in view of Turkelson et al. with Song, including executing a testing phase comprising: receiving a new image; determining a new set of features for the new image; executing, using the new set of features, the classifier to generate a first set of predictions for the new image; executing, using the exemplar set of representative features, a machine learning model to generate a second set of predictions for the new image; and determining a predicted class for the new image based on the first set of predictions and the second set of predictions. By evaluating a classification model with a specific evaluation image set, classes are analyzed in depth to create fine-grained classes and superclasses, resulting in detailed and accurate feature analysis and classification by the model, as recognized by Song (1. Introduction ¶ 4, 8.1 Experimental Setup).

Regarding Claim 8, Brandes et al. in view of Turkelson et al. and Song teach the method of claim 7 (as described above), wherein determining the predicted class for the new image comprises: generating, based on the first set of predictions and the second set of predictions, a final set of predictions (Song: labeled predictions undergo a verification selection (final prediction) step; Fig 1, 2, 5, 6 and 5.1 Efficiency Model ¶ 4-6); and determining the predicted class based on the final set of predictions (Song: the predicted label that undergoes a verification selection process is approved as the correct classification and entered in the structured collection; Fig 1, 2, 5, 6 and 5.3 Verification Selection).

Regarding Claim 9, Brandes et al. in view of Turkelson et al. and Song teach the method of claim 7 (as described above), wherein executing the machine learning model using the exemplar set of representative features (Song: features are analyzed for the input image to generate a second classification based on a different granularity using pre-trained model f_g, generating continuous label v_c; Fig 3 and 4. Feature Learning ¶ 2-4) comprises: accessing a plurality of sets of stored representative features in the exemplar set, wherein each set of stored representative features is associated with an associated class and an associated training step in which the classifier was re-trained (Song: features of images are stored in association with classes, and the classification granularity (an associated step to determine the associated class) relates the semantic similarity between the label concepts of the image; Fig 3 and 4. Feature Learning ¶ 2); and executing the machine learning model using the plurality of sets of stored representative features to generate the second set of predictions for the new image (Song: the GloVe model can perform the continuous vector analysis, allowing for d-dimensional semantic embedding to perform the classification prediction based on the chosen classification granularity; Fig 3 and 4. Feature Learning ¶ 2-4).
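Claims 7-9 describe a testing phase that fuses two prediction sources: the trained classifier and a second machine learning model run over the exemplar set, whose entries are keyed by class and by the training step at which the classifier was re-trained. The sketch below is an illustrative reading only; the linear classifier, the negative-distance scoring, and the softmax normalization are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

def softmax(x):
    e = np.exp(x - x.max())   # subtract max for numerical stability
    return e / e.sum()

def classifier_predictions(feature, weights):
    # First set of predictions: a linear classifier scores each class.
    return softmax(weights @ feature)

def exemplar_predictions(feature, exemplar_set):
    # Second set of predictions: score each (class, training step) entry by
    # how close the new image's features fall to its stored representatives.
    keys = sorted(exemplar_set)
    scores = np.array([-min(np.linalg.norm(feature - r) for r in exemplar_set[k])
                       for k in keys])
    return keys, softmax(scores)

feature = rng.random(16)                   # new set of features for the new image
weights = rng.random((3, 16))              # 3-class linear classifier (assumed)
exemplar_set = {(c, step): rng.random((5, 16))        # representatives keyed by
                for c in range(3) for step in range(2)}   # (class, training step)

p1 = classifier_predictions(feature, weights)            # first set of predictions
keys, p2 = exemplar_predictions(feature, exemplar_set)   # second set of predictions
# Determine the predicted class from both sets: pool the second set's mass per
# class, then combine with the first set (simple addition is an assumption).
per_class = np.array([sum(p for k, p in zip(keys, p2) if k[0] == c)
                      for c in range(3)])
print(int(np.argmax(p1 + per_class)))      # predicted class for the new image
```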
Regarding Claim 10, Brandes et al. in view of Turkelson et al. and Song teach the method of claim 9 (as described above), wherein: generating the first set of predictions comprises generating, for each class of the plurality of sets of stored representative features, a first value that is indicative of a prediction of whether the second set of features belongs to the class (Song: features are generated with a first taxonomy (first value) based on a first objective function in a first network with a cross-entropy loss indicating prediction of a class based on the first class granularity; Fig 3-5 and 4. Feature Learning, Training Data Generation); and generating the second set of predictions comprises generating, for each step of the plurality of sets of stored representative features, a second value that is indicative of a prediction of whether the second set of features belongs to the step (Song: features are generated with a second taxonomy (second value) based on a first objective function in a first network with cross-entropy and log-ratio losses indicating prediction of a class based on the second class granularity; Fig 3-5 and 4. Feature Learning, Training Data Generation).

Regarding Claim 19, Brandes et al. in view of Turkelson et al. teach the non-transitory computer-readable media of claim 12 (as described above) with further steps claimed identical to claim 7 (as described above). Regarding Claim 20, Brandes et al. in view of Turkelson et al. and Song teach the non-transitory computer-readable media of claim 15 (as described above) with further steps claimed identical to claim 8 (as described above). Regarding Claim 21, Brandes et al. in view of Turkelson et al. and Song teach the non-transitory computer-readable media of claim 16 (as described above) with further steps claimed identical to claim 9 (as described above). Regarding Claim 22, Brandes et al. in view of Turkelson et al. and Song teach the non-transitory computer-readable media of claim 17 (as described above) with further steps claimed identical to claim 10 (as described above).

Claims 11 and 23 are rejected under 35 U.S.C. 103 as being unpatentable over Brandes et al. (US 2020/0364520, cited in the Non-Final Rejection of 10/01/2025 as pertinent art; see Conclusion section) in view of Turkelson et al. (US 2020/0193552, cited in the Non-Final Rejection of 10/01/2025), Song (Personalized Image Classification by Semantic Embedding and Active Learning, cited in the Non-Final Rejection of 10/01/2025), and Dai et al. (Large Discriminative Structured Set Prediction Modeling with Max-Margin Markov Network for Lossless Image Coding, cited in the Non-Final Rejection of 10/01/2025).

Regarding Claim 11, Brandes et al. in view of Turkelson et al. and Song teach the method of claim 8 (as described above). Brandes et al. in view of Turkelson et al. and Song do not teach wherein generating the final set of predictions comprises: determining a weighting factor based on (a) a maximum prediction of the second set of predictions and a minimum prediction of the second set of predictions and (b) a normalization constant; adjusting the second set of predictions based on the weighting factor; and adding the first set of predictions to the adjusted second set of predictions.
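The claim 11 fusion rule just recited is concrete enough to sketch before turning to the Dai mapping. The exact functional form of the weighting factor is not dictated by the claim; dividing a normalization constant by the second set's max-min spread is an assumption made here for illustration.

```python
import numpy as np

def final_predictions(p1, p2, norm_const=1.0):
    """Claim 11 sketch: weight the second prediction set using (a) its maximum
    and minimum predictions and (b) a normalization constant, then add the
    first prediction set to the adjusted second set."""
    spread = p2.max() - p2.min()              # (a) max and min of the second set
    weight = norm_const / (spread + 1e-12)    # (b) normalization constant; epsilon avoids /0
    adjusted = weight * p2                    # adjust the second set by the factor
    return p1 + adjusted                      # add the first set to the adjusted second set

p1 = np.array([0.7, 0.2, 0.1])                # first set of predictions (classifier)
p2 = np.array([0.1, 0.5, 0.4])                # second set of predictions (exemplar model)
print(int(np.argmax(final_predictions(p1, p2))))   # predicted class: 1
```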
Dai et al. is analogous art pertinent to the technological problem addressed in this application and teaches wherein generating the final set of predictions comprises: determining a weighting factor based on (a) a maximum prediction of the second set of predictions and a minimum prediction of the second set of predictions and (b) a normalization constant (a structured set prediction model incorporates a weighting normal vector for optimal prediction of a pixel based on a minimum and maximum model formula with a joint loss function; Fig 3 and II.C. Framework Training Based Prediction Model); adjusting the second set of predictions based on the weighting factor (the weighting vector can be adjusted iteratively based on the learning rate and loss function; Fig 3 and II.C. Framework Training Based Prediction Model); and adding the first set of predictions to the adjusted second set of predictions (the adjustment of the class prediction is based on the loss function, which is added to the weighting vector to generate a fine adjustment to the pixel prediction; Fig 3 and II.C. Framework Training Based Prediction Model).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the current application to combine the teachings of Brandes et al. in view of Turkelson et al. and Song with Dai et al., including wherein generating the final set of predictions comprises: determining a weighting factor based on (a) a maximum prediction of the second set of predictions and a minimum prediction of the second set of predictions and (b) a normalization constant; adjusting the second set of predictions based on the weighting factor; and adding the first set of predictions to the adjusted second set of predictions. By using a min-max approach to make maximum-margin estimations, the prediction is performed over all possible estimations, thereby allowing the prediction error to be optimized in the prediction of structure components in image and object detection, as recognized by Dai et al. (Abstract, I. Introduction ¶ 3, 5-6).

Regarding Claim 23, Brandes et al. in view of Turkelson et al. and Song teach the non-transitory computer-readable media of claim 20 (as described above) with further steps claimed identical to claim 11 (as described above).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Gong et al. (US 2022/0318621, cited in the Non-Final Rejection of 10/01/2025 as pertinent art; see Conclusion section) teach a machine learning model that determines relationship matches of objects based on a ranking methodology for identifying, matching, and labeling target objects. Kask (US 2013/0064441, cited in the Non-Final Rejection of 10/01/2025 as pertinent art; see Conclusion section) teaches a system and method for automated selection and classification of objects, including determining a signal-to-noise ratio for a feature to establish relative feature relationships and more accurately identify and classify the features.

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KATHLEEN M BROUGHTON, whose telephone number is (571) 270-7380. The examiner can normally be reached Monday-Friday, 8:00-5:00.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, John Villecco, can be reached at (571) 272-7319. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/KATHLEEN M BROUGHTON/
Primary Examiner, Art Unit 2661

Prosecution Timeline

Jul 25, 2023
Application Filed
Sep 28, 2025
Non-Final Rejection — §103
Dec 17, 2025
Response Filed
Feb 14, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602915: FEATURE FUSION FOR NEAR FIELD AND FAR FIELD IMAGES FOR VEHICLE APPLICATIONS (granted Apr 14, 2026; 2y 5m to grant)
Patent 12597233: SYSTEM AND METHOD FOR TRAINING A MACHINE LEARNING MODEL (granted Apr 07, 2026; 2y 5m to grant)
Patent 12586203: IMAGE CUTTING METHOD AND APPARATUS, COMPUTER DEVICE, AND STORAGE MEDIUM (granted Mar 24, 2026; 2y 5m to grant)
Patent 12567227: METHOD AND SYSTEM FOR UNSUPERVISED DEEP REPRESENTATION LEARNING BASED ON IMAGE TRANSLATION (granted Mar 03, 2026; 2y 5m to grant)
Patent 12565240: METHOD AND SYSTEM FOR GRAPH NEURAL NETWORK BASED PEDESTRIAN ACTION PREDICTION IN AUTONOMOUS DRIVING SYSTEMS (granted Mar 03, 2026; 2y 5m to grant)
Based on this examiner's 5 most recent grants; study what changed in each case to get past this examiner.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 83%
With Interview: 92% (+8.3%)
Median Time to Grant: 2y 7m
PTA Risk: Moderate
Based on 263 resolved cases by this examiner. Grant probability derived from career allow rate.
