Prosecution Insights
Last updated: April 19, 2026
Application No. 18/508,219

SYSTEMS AND METHODS TO ANALYZE FAILURE MODES OF MACHINE LEARNING COMPUTER VISION MODELS USING ERROR FEATURIZATION

Non-Final OA (§102, §103)
Filed
Nov 13, 2023
Examiner
MEMON, OWAIS IQBAL
Art Unit
2663
Tech Center
2600 — Communications
Assignee
Akridata, Inc.
OA Round
1 (Non-Final)
74%
Grant Probability
Favorable
1-2
OA Rounds
3y 2m
To Grant
97%
With Interview

Examiner Intelligence

Grants 74% — above average
74%
Career Allow Rate
75 granted / 101 resolved
+12.3% vs TC avg
Strong +22% interview lift
+22.4%
Interview Lift (resolved cases with interview)
Typical timeline
3y 2m
Avg Prosecution
27 currently pending
Career history
128
Total Applications
across all art units
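The headline figures in this panel follow from simple arithmetic on the career data. A minimal sketch (the 97% interview figure is taken as reported; the dashboard's +22.4% lift presumably compares interviewed against non-interviewed subsets, whose denominators are not shown, so the implied lift below differs slightly):

```python
# Career allow rate: 75 granted out of 101 resolved cases.
granted, resolved = 75, 101
career_allow_rate = granted / resolved
print(f"Career allow rate: {career_allow_rate:.0%}")  # → Career allow rate: 74%

# The dashboard reports 97% grant probability when an interview is held;
# the lift implied against the overall career rate:
with_interview = 0.97
print(f"Implied lift: {with_interview - career_allow_rate:+.1%}")  # → Implied lift: +22.7%
```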

Statute-Specific Performance

§101: 4.4% (−35.6% vs TC avg)
§103: 51.8% (+11.8% vs TC avg)
§102: 30.6% (−9.4% vs TC avg)
§112: 12.6% (−27.4% vs TC avg)
Comparisons are against a Tech Center average estimate • Based on career data from 101 resolved cases

Office Action

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Drawings

The drawings were received on 11/13/2023. These drawings are accepted.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-2, 10, 47-53 and 56 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Dasgupta et al. (US11556746, hereinafter “Dasgupta”).

Claim 1. (Original) Dasgupta teaches a method for verification and analysis of artificial intelligence (AI) models (Abstract: “perform iterative model experiments to develop machine learning (ML) media models… allows users to visually inspect media samples that were used during the model experiment to determine corrective actions to improve model performance for later iterations of experiments.”), the method comprising:

selecting (col. 22, line 65: “user may specify…providing production input data to the model”) a test data set of a plurality of images (col. 18, line 52: “input data 505 (e.g. production images)”) with each datapoint annotated with a unique identification (col. 38, line 59: “each datum or sample (an image, for example) may be annotated with one or more labels.”);

receiving ground truth annotation associated with each image in the test data set (col. 26, line 25: “input data may be truth labeled” and col. 15, line 31: “media samples from the production environment may be truth labeled so that performances of both the production model and the model under development may be viewed and compared.”);

receiving a fitted AI model (col. 18, line 49: “model under development (MUD) 522” is understood to be the same as the claimed fitted AI model in light of instant specification [0112]) to be verified and analyzed (col. 18, line 49: “MDE may allow the user to perform a simulation of a model under development (MUD) 522, which allows the MUD to be tested against production input data 505 (e.g. production images)” and Abstract: “model development environment (MDE) that allows a team of users to perform iterative model experiments to develop machine learning (ML) media models.”);

running the fitted AI model on the test data set using an AI server to receive output data regarding each image in the test data set (col. 26, line 20: “compare the performance results of the two models after a period of simulating the model under development using production input data… to obtain the results shown on this interface,”);

for each image of the plurality of images in the test data set, featurizing the output data (col. 59, line 65: “the feature vector may be an output generated by an intermediate layer in model's neural network”), the ground truth annotation (col. 35, line 51: “annotation”), and the image (col. 27, line 31: “feature vectors extracted from the images).”) to generate an output feature vector (col. 35, line 52: “reduce all images in the image set to a feature vector… output the feature vector as an intermediate representation of an input image.”); and

reducing and clustering the plurality of output feature vectors together to generate a two-dimensional scatter plot and cluster information of a plurality of data points (col. 36, line 14: “scatter plot 1430 displays the feature vector 1432 of each image as a point in the plot, and also a cluster indicator 1440 for each determined cluster of images in the set.” and Fig. 14).

[Two greyscale figures (media_image1.png, media_image2.png) reproduced here in the original Office Action.]

Claim 2. (Original) Dasgupta teaches the method of claim 1, further comprising: supporting an interactive user interface (UI) to explore, browse (col. 35, line 41: “the user interface 1400 may also include a view button 1412 that allows users to visually inspect the images or other media samples to be loaded.” is understood to be the same as the claimed interface to explore and browse in light of instant specification [0163]), and analyze one or more of the data points in the two-dimensional scatter plot (col. 35, line 64: “the user interface may include a button 1436 that allows users to configure which features should be used to make the scatter plot, or how the scatter plot should be displayed.” and col. 35, line 60: “the scatter plot 1430 may be implemented as a user control that allows users to view feature vectors in two-dimensional space”; Fig. 14 shows the scatter plot is a two-dimensional scatter plot).

Claim 10. (Original) Dasgupta teaches the method of claim 1, wherein: the AI model is an image classification model (col. 14, line 5: “(e.g., every pixel in an image) may be classified…neural networks”); the ground truth annotation comprises a single class (col. 21, line 9: “annotations may be used to assign each sample to a class.”) of a plurality of classes (col. 9, line 21: “tags that classify media samples into different desired classes.”); and the model output comprises prediction confidence scores for each of the plurality of classes (col. 52, line 39: “probability score for each class.” and col. 32, line 47: “predicted class probability”).

Claim 47.
(New) Dasgupta teaches the method of claim 10, wherein the featurization further comprises: a class-wise evaluation of divergence between ground truth class-labels and model prediction confidences in the case of image classification (col. 72, line 66: “second saliency map indicates one or more other regions in the test sample that are salient to the ML model to classify the test sample to a different class from the prediction result.”).

Claim 48. (New) Dasgupta teaches the method of claim 47, wherein the featurization further comprises: a test dataset of images, with each data sample in the dataset assigned a ground truth class-label provided by expert annotators (col. 19, line 63: “The sample annotation interface 532 may be configured to allow a user to manually or programmatically annotate individual input samples (e.g. images).”); and model predictions for the AI model being evaluated obtained by running inferences on the test data to obtain prediction confidences (col. 51, line 44: “If a pixel change causes a large effect on the prediction result (e.g. the confidence level of the prediction), that pixel may be deemed “salient” for the prediction result. In some embodiments, the saliency level may be determined based on regions in the image, which may be determined via a semantic segmentation of the image, possibly produced by the model itself.”).

Claim 49. (New) Dasgupta teaches the method of claim 1, wherein: the AI model is an object detection model (col. 9, line 60: “neural network…object detection,”); the ground truth annotation (col. 39, line 36: “truth label value”) comprises zero (Fig. 17b shows zero, which is understood to be the same as the claimed zero in light of instant specification [0144]) or more object classes (Fig. 17b: “cow, horse giraffe”) and bounding boxes (col. 52, line 65: “all of the images or maps displayed may indicate a bounding box”); and the model output (col. 39, line 36: “prediction label value determined by the classifier 1380.”) comprises zero (Fig. 17b shows zero, which is understood to be the same as the claimed zero in light of instant specification [0144]) or more object classes (Fig. 17b: “cow, horse giraffe”) and bounding boxes (col. 52, line 65: “all of the images or maps displayed may indicate a bounding box”), along with a prediction confidence score associated with each predicted box (col. 52, line 39: “probability score for each class.” and col. 32, line 47: “predicted class probability”).

Claim 50. (New) Dasgupta teaches the method of claim 49, wherein the featurization further comprises: a class-wise and a region-wise evaluation of divergence between ground truth object bounding boxes and model prediction bounding boxes (col. 72, line 66: “second saliency map indicates one or more other regions in the test sample that are salient to the ML model to classify the test sample to a different class from the prediction result.”).

Claim 51. (New) Dasgupta teaches the method of claim 50, wherein the featurization further comprises: a test dataset of images, with each data sample in the test dataset of images assigned ground truth labels (col. 39, line 36: “truth label value”) comprising zero (Fig. 17b shows zero, which is understood to be the same as the claimed zero in light of instant specification [0144]) or more annotations, each containing an object class (Fig. 17b: “cow, horse giraffe”) and bounding box (col. 52, line 65: “all of the images or maps displayed may indicate a bounding box”); and model predictions for the AI model being evaluated obtained by running inferences on the test data (col. 39, line 36: “prediction label value determined by the classifier 1380.”) to obtain zero (Fig. 17b shows zero, which is understood to be the same as the claimed zero in light of instant specification [0144]) or more prediction annotations, each containing an object class (Fig. 17b: “cow, horse giraffe”), a bounding box (col. 52, line 65: “all of the images or maps displayed may indicate a bounding box”), and an associated prediction confidence (col. 52, line 39: “probability score for each class.” and col. 32, line 47: “predicted class probability”).

Claim 52. (New) Dasgupta teaches the method of claim 1, wherein the reducing and clustering (col. 33, line 49: “clustering technique”) comprises: selecting a clustering algorithm to group data points together from the group comprising a hierarchical density-based spatial clustering (HDBSCAN) algorithm, a k-segmentation algorithm, a hierarchical k-means algorithm, and a gaussian mixture model algorithm; selecting an embedding algorithm to plot and display the data points in two dimensions from the group comprising a uniform manifold approximation and projection for dimension reduction (UMAP) algorithm, a principal component analysis (PCA) algorithm (col. 6, line 16: “MDE implements data visualization techniques such as PCA (principal component analysis)”), and a locally linear embedding algorithm; and selecting an order of whether an embedding to plot and display data points occurs before a clustering of data points or the clustering of data points occurs before the embedding to plot and display data points (col. 36, line 23: “the user interface 1400 may include a refresh plot button 1450, which allow the scatter plot 1430 to be refreshed, after configuration changes are made to the feature extraction method, the plot features, or the clustering method.”).

Claim 53. (New) Dasgupta teaches the method of claim 2, wherein the supporting of the interactive user interface (UI) further comprises: selecting one cluster of a plurality of data points; and splitting the one cluster into two or more subclusters of a two or more plurality of data points (col. 44, line 34: “the graphical user interface may allow the user to change the way that the feature vectors are extracted or the way that the feature vectors are clusters”).

Claim 56. (New) Dasgupta teaches the method of claim 1, wherein: the reducing and clustering operates on a subset of features of the output feature vector associated with each image (col. 44, line 44: “the clustering allows groups of images in the images set with similar feature sets.”).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claim 54 is rejected under 35 U.S.C. 103 as being unpatentable over Dasgupta et al. (US11556746, hereinafter “Dasgupta”) in view of Wu et al. (US20230044233, hereinafter “Wu”).

Claim 54.
(New) Dasgupta teaches the method of claim 2, wherein the supporting of the interactive user interface (UI) further comprises the recited UI support; Dasgupta does not explicitly teach selecting one cluster of a plurality of data points, and merging the one cluster into a parent cluster of a plurality of data points.

Wu teaches selecting one cluster of a plurality of data points, and merging the one cluster into a parent cluster of a plurality of data points (“The user may then use the user interface… merge samples in the pushed cluster with a cluster that the user has already reviewed and is being used for enrollment, or merge samples in the pushed cluster with a cluster that the user has already reviewed… merge the fifteen new face images of user Z to the existing cluster of ten face images of user Z in the enrolled user data store 240 and form a combined cluster”).

It would have been obvious to persons of ordinary skill in the art before the effective filing date of the claimed invention to modify Dasgupta to have selecting and merging that cluster into a parent cluster as taught by Wu to arrive at the claimed invention discussed above. The motivation for the proposed modification would have been to (Wu [0109]) “reduce costs while providing access to significant levels of useful visual information.”

Allowable Subject Matter

Claims 3-9 and 55 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure: Yuan et al., NPL “A survey of visual analytics techniques for machine learning,” teaches a GUI showing clustering to verify the labeling of an AI system.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to OWAIS MEMON whose telephone number is (571) 272-2168.
The examiner can normally be reached M-F (7:00am - 4:00pm) CST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Gregory Morse, can be reached at (571) 272-3838. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/OWAIS IQBAL MEMON/
Examiner, Art Unit 2663

/GREGORY A MORSE/
Supervisory Patent Examiner, Art Unit 2698
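The pipeline recited in claims 1 and 52 (featurize the divergence between predictions and ground truth, reduce to two dimensions, cluster, plot) can be sketched in miniature. Everything below is hypothetical toy data: truncating to the first two feature components stands in for a real embedding such as UMAP or PCA, and a hand-rolled 2-means loop stands in for the claimed clustering options (HDBSCAN, hierarchical k-means, GMM):

```python
# Toy test set: (predicted class confidences, one-hot ground truth) per image.
samples = [
    ([0.9, 0.05, 0.05], [1, 0, 0]),   # confident, correct
    ([0.8, 0.15, 0.05], [1, 0, 0]),   # confident, correct
    ([0.2, 0.70, 0.10], [1, 0, 0]),   # wrong: predicted class 1
    ([0.1, 0.80, 0.10], [1, 0, 0]),   # wrong: predicted class 1
]

# Featurize: per-class divergence between prediction and ground truth (claim 47).
features = [[p - t for p, t in zip(pred, truth)] for pred, truth in samples]

# "Reduce" to 2-D: trivial truncation, a placeholder for a real embedding.
points = [f[:2] for f in features]

def kmeans2(points, iters=10):
    """Plain 2-means on 2-D points, with the first/last points as fixed seeds."""
    centers = [list(points[0]), list(points[-1])]
    for _ in range(iters):
        labels = [min((0, 1), key=lambda k: sum((a - b) ** 2
                  for a, b in zip(p, centers[k]))) for p in points]
        for k in (0, 1):
            members = [p for p, l in zip(points, labels) if l == k]
            if members:
                centers[k] = [sum(c) / len(members) for c in zip(*members)]
    return labels

labels = kmeans2(points)
print(labels)  # → [0, 0, 1, 1]
```

The two misclassified images separate cleanly into their own cluster, which is the failure-mode grouping the claimed scatter plot is meant to surface.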

Prosecution Timeline

Nov 13, 2023
Application Filed
Feb 20, 2024
Response after Non-Final Action
Nov 13, 2025
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597224
SYSTEM AND METHOD FOR FEATURE SUB-IMAGE DETECTION AND IDENTIFICATION IN A GIVEN IMAGE
2y 5m to grant Granted Apr 07, 2026
Patent 12591989
METHOD FOR DEPTH ESTIMATION AND HEAD-MOUNTED DISPLAY
2y 5m to grant Granted Mar 31, 2026
Patent 12592013
REAL SCENE IMAGE EDITING METHOD BASED ON HIERARCHICALLY CLASSIFIED TEXT GUIDANCE
2y 5m to grant Granted Mar 31, 2026
Patent 12586338
SYSTEM FOR UPDATING NEURAL NETWORK PARAMETERS BASED ON OBJECT DETECTION AREA OVERLAP SCORE
2y 5m to grant Granted Mar 24, 2026
Patent 12573069
SYSTEMS AND METHODS FOR GENERATING AND CODING MULTIPLE FOCAL PLANES FROM TEXTURE AND DEPTH
2y 5m to grant Granted Mar 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.
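Claims 49-51 of the present application turn on region-wise divergence between ground-truth and predicted bounding boxes, and intersection-over-union (IoU) is the standard overlap measure (compare the "object detection area overlap score" grant above). A minimal sketch with hypothetical boxes given as (x1, y1, x2, y2):

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))   # overlap width
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))   # overlap height
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

truth_box = (0, 0, 10, 10)
pred_box = (5, 0, 15, 10)                  # shifted right by half a box
print(iou(truth_box, pred_box))            # → 0.3333...
```

Low IoU on a matched class is exactly the kind of per-region error a featurization step can encode before clustering.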


Prosecution Projections

1-2
Expected OA Rounds
74%
Grant Probability
97%
With Interview (+22.4%)
3y 2m
Median Time to Grant
Low
PTA Risk
Based on 101 resolved cases by this examiner. Grant probability derived from career allow rate.
