Prosecution Insights
Last updated: April 19, 2026
Application No. 18/281,663

System, Method, and Computer Program Product to Compare Machine Learning Models

Status: Non-Final Office Action (§103)
Filed: Sep 12, 2023
Examiner: PONTIUS, JAMES M
Art Unit: 2488
Tech Center: 2400 (Computer Networks)
Assignee: VISA INTERNATIONAL SERVICE ASSOCIATION
OA Round: 1 (Non-Final)

Predicted Outcome
Grant Probability: 79% (Favorable)
Expected OA Rounds: 1-2
Estimated Time to Grant: 2y 11m
Grant Probability with Interview: 88%

Examiner Intelligence

Career Allow Rate: 79% (above average; 404 granted / 514 resolved; +20.6% vs TC avg)
Interview Lift: +9.8% (moderate, roughly +10%, among resolved cases with an interview)
Avg Prosecution: 2y 11m (17 applications currently pending)
Career History: 531 total applications across all art units

Statute-Specific Performance

§101: 9.1% (-30.9% vs TC avg)
§103: 32.7% (-7.3% vs TC avg)
§102: 24.6% (-15.4% vs TC avg)
§112: 25.9% (-14.1% vs TC avg)

Deltas are measured against the estimated Tech Center average; based on career data from 514 resolved cases.

Office Action (§103)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3, 9-11, and 17-18 are rejected under 35 U.S.C. 103 as being unpatentable over US 2020/0034665 A1 (Ghanta et al.) to DATAROBOT, INC. (hereinafter 'DataRobot') in view of US 2021/0042659 A1 (Chu et al.) to SAS INSTITUTE INC. (hereinafter 'SAS').
As to claim 1, DataRobot teaches a system for comparing machine learning models, the system comprising: at least one processor programmed or configured to: receive a dataset of data instances, wherein each data instance comprises a feature value for each feature of a plurality of features ("The ML management apparatus 104, in one embodiment, however, evaluates the suitability (predictive performance) of a machine learning model, machine learning algorithm, and/or the like in the absence of labels, and is agnostic of the type of problem and algorithm used, the particular language or framework used, and/or the like by extracting statistics from features in the training data set," para [0042]); determine a first subset of the outputs of the first machine learning model and a second subset of outputs of the second machine learning model ("the secondary training module 306 enhances the error data set by including additional data to supplement the prediction error data. For instance, the secondary training module 306 may include data for additional features such as features of the data set itself (e.g., the secondary training module 306 may select all or a subset of the available features of the error data set itself)," para [0087]); generate a plurality of true label matrices based on true labels of the first set of grouped outputs and the second set of grouped outputs, wherein a first true label matrix includes true positive outputs of the plurality of outputs of the first machine learning model that satisfy the first condition and true positive outputs of the plurality of outputs of the second machine learning model that satisfy the first condition, and wherein a second true label matrix includes false positive outputs of the plurality of outputs of the first machine learning model that satisfy the first condition and false positive outputs of the plurality of outputs of the second machine learning model that satisfy the first condition ("the secondary validation 
module 308 may analyze the second machine learning algorithm using a confusion matrix. As used herein, a confusion matrix (also known as an error matrix) is a specific table layout that allows visualization of the performance of an algorithm. In machine learning, a confusion matrix is a table with two rows and two columns that reports the number of false positives, false negatives, true positives, and true negatives," para [0088]); train a first classifier based on the first true label matrix ("A method for determining validity of machine learning algorithms for datasets, in one embodiment, includes training a first machine learning model for a first machine learning algorithm using a training data set. A method, in certain embodiments, includes validating the first machine learning model using a validation data set. Output of a validation of a first machine learning model may comprise an error data set. A method, in some embodiments, includes training a second machine learning model for a second machine learning algorithm using an error data set. 
A second machine learning algorithm may be configured to predict a suitability of a first machine learning model for analyzing an inference data set," para [0004]); train a second classifier based on the second true label matrix (para [0004]); and determine an accuracy of the first machine learning model and an accuracy of the second machine learning model based on the first classifier and the second classifier ("the ML management apparatus 104 provides an improvement for machine learning systems by training a first or primary machine learning model for a first/primary machine learning algorithm using a training data set, validating the first machine learning model using a validation data set, the output of which is an error data set that describes the accuracy of the first machine learning model on the validation data set, and training a second machine learning model for a second/auxiliary machine learning algorithm using the error data set. The second machine learning algorithm is then used to predict, verify, validate, check, monitor, and/or the like the efficacy, accuracy, reliability, and/or the like of the first or primary machine learning model that is used to analyze an inference data set," para [0037]). 
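The confusion matrix the examiner cites from DataRobot's para [0088] is the standard two-row, two-column tally of true/false positives and negatives. As a neutral illustration only (not code from any cited reference; names are hypothetical), a minimal Python sketch of that table:

```python
def confusion_matrix(y_true, y_pred):
    """2x2 table reporting true positives, false negatives,
    false positives, and true negatives for binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    # Row 1: actual positives; row 2: actual negatives.
    return [[tp, fn], [fp, tn]]
```

Each of the four cells corresponds to one of the groupings the cited paragraph enumerates.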
DataRobot fails to explicitly teach such a system further comprising: generate outputs of a first machine learning model and outputs of a second machine learning model based on the dataset of data instances; generate a disagreement matrix that includes a first set of grouped outputs of the first machine learning model and the second machine learning model and a second set of grouped outputs of the first machine learning model and the second machine learning model, wherein the first set of grouped outputs comprises a plurality of outputs of the first machine learning model that satisfies a first condition and a plurality of outputs of the second machine learning model that does not satisfy the first condition, and wherein the second set of grouped outputs comprises a plurality of outputs of the first machine learning model that does not satisfy the first condition and a plurality of outputs of the second machine learning model that satisfies the first condition.

However, SAS teaches such a system further comprising: generate outputs of a first machine learning model and outputs of a second machine learning model based on the dataset of data instances ("The instructions can cause the processing device to train the second machine-learning model using the training dataset. The instructions can cause the processing device to provide an input value from the training dataset to the first machine-learning model to determine a first output from the first machine-learning model. The instructions can cause the processing device to provide the input value from the training dataset to the second machine-learning model to determine a second output from the second machine-learning model. 
The instructions can cause the processing device to compare the first output from the first machine-learning model and the second output from the second machine-learning model to an output value in the training dataset to determine whether the first output or the second output is closer to the output value in the training dataset," para [0004]); generate a disagreement matrix that includes a first set of grouped outputs of the first machine learning model and the second machine learning model and a second set of grouped outputs of the first machine learning model and the second machine learning model, wherein the first set of grouped outputs comprises a plurality of outputs of the first machine learning model that satisfies a first condition and a plurality of outputs of the second machine learning model that does not satisfy the first condition, and wherein the second set of grouped outputs comprises a plurality of outputs of the first machine learning model that does not satisfy the first condition and a plurality of outputs of the second machine learning model that satisfies the first condition ("The instructions can cause the processing device to provide an input value from the training dataset to the first machine-learning model to determine a first output from the first machine learning model. The instructions can cause the processing device to provide the input value from the training dataset to the second machine-learning model to determine a second output from the second machine-learning model. 
The instructions can cause the processing device to compare the first output from the first machine-learning model and the second output from the second machine learning model to an output value in the training dataset to determine whether the first output or the second output is closer to the output value in the training dataset," para [0004]); "The system can assign a high performance score to the champion model if the KPIs satisfy one or more predetermined criteria, or a low performance score to the champion model if the champion model does not satisfy the one or more predetermined criteria," para [0175]).

Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to combine the machine learning model validation system of DataRobot with the support for generating outputs and a disagreement matrix of SAS, because such systems and methods allow for generating and comparing outputs to compare machine learning models (SAS: para [0004], [0175]), and the SAS teachings would enhance the operability of the DataRobot teachings. Furthermore, both DataRobot and SAS are directed to systems and methods for the analysis of machine learning models.

As to claim 2, the system of claim 1 is discussed above. Further, DataRobot teaches such a system wherein the first subset of the outputs of the first machine learning model and the second subset of outputs of the second machine learning model have a same number of values ("the training pipelines 204a-b may execute different training or learning algorithms on different or the same sets of training data," para [0070]).

As to claim 3, the system of claim 1 is discussed above. 
Further, DataRobot teaches such a system wherein when determining the accuracy of the first machine learning model and the accuracy of the second machine learning model, the at least one processor is programmed or configured to: determine the accuracy of the first machine learning model and the accuracy of the second machine learning model based on a model interpretation technique that is performed on the first classifier and the second classifier ("the ML management apparatus 104 provides an improvement for machine learning systems by training a first or primary machine learning model for a first/primary machine learning algorithm using a training data set, validating the first machine learning model using a validation data set, the output of which is an error data set that describes the accuracy of the first machine learning model on the validation data set, and training a second machine learning model for a second/auxiliary machine learning algorithm using the error data set. The second machine learning algorithm is then used to predict, verify, validate, check, monitor, and/or the like the efficacy, accuracy, reliability, and/or the like of the first or primary machine learning model that is used to analyze an inference data set," para [0037]).
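Read together, the claimed grouping (the disagreement matrix mapped to SAS, plus the true-positive and false-positive groupings mapped to DataRobot's error/confusion matrices) can be sketched as below. All function and field names are hypothetical, and for brevity the sketch tallies true/false positives over all instances rather than only over the grouped outputs as the claim recites:

```python
def compare_outputs(y_true, out_a, out_b, condition=lambda y: y == 1):
    """Group paired outputs of two binary models on the same dataset.

    Returns: a disagreement grouping (indices where exactly one model
    satisfies the condition), plus true-positive and false-positive
    index lists for each model. Illustrative structure only."""
    disagreement = {"a_only": [], "b_only": []}
    true_pos = {"a": [], "b": []}
    false_pos = {"a": [], "b": []}
    for i, (t, a, b) in enumerate(zip(y_true, out_a, out_b)):
        # Disagreement matrix: exactly one model satisfies the condition.
        if condition(a) and not condition(b):
            disagreement["a_only"].append(i)
        elif condition(b) and not condition(a):
            disagreement["b_only"].append(i)
        # True-label groupings: split condition-satisfying outputs
        # into true positives and false positives per model.
        if condition(a):
            (true_pos if t == 1 else false_pos)["a"].append(i)
        if condition(b):
            (true_pos if t == 1 else false_pos)["b"].append(i)
    return disagreement, true_pos, false_pos
```

The per-model true-positive and false-positive lists are the raw material the claims feed into the two classifiers.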
As to claims 9 and 17, DataRobot teaches a computer-implemented method and computer program product including instructions for causing a processor to perform the same, the method comprising: receiving, with at least one processor, a dataset of data instances, wherein each data instance comprises a feature value for each feature of a plurality of features (para [0042]); determining, with the at least one processor, a first subset of the outputs of the first machine learning model and a second subset of outputs of the second machine learning model (para [0087]); generating, with the at least one processor, a plurality of true label matrices based on true labels of the first set of grouped outputs and the second set of grouped outputs, wherein a first true label matrix includes true positive outputs of the plurality of outputs of the first machine learning model that satisfy the first condition and true positive outputs of the plurality of outputs of the second machine learning model that satisfy the first condition, and wherein a second true label matrix includes false positive outputs of the plurality of outputs of the first machine learning model that satisfy the first condition and false positive outputs of the plurality of outputs of the second machine learning model that satisfy the first condition (para [0088]); training, with the at least one processor, a first classifier based on the first true label matrix (para [0004]); training, with the at least one processor, a second classifier based on the second true label matrix (para [0004]); and determining, with the at least one processor, an accuracy of the first machine learning model and an accuracy of the second machine learning model based on the first classifier and the second classifier (para [0037]).
DataRobot fails to explicitly teach such a computer-implemented method further comprising: generating, with the at least one processor, outputs of a first machine learning model and outputs of a second machine learning model based on the dataset of data instances; generating, with the at least one processor, a disagreement matrix that includes a first set of grouped outputs of the first machine learning model and the second machine learning model and a second set of grouped outputs of the first machine learning model and the second machine learning model, wherein the first set of grouped outputs comprises a plurality of outputs of the first machine learning model that satisfies a first condition and a plurality of outputs of the second machine learning model that does not satisfy the first condition, and wherein the second set of grouped outputs comprises a plurality of outputs of the first machine learning model that does not satisfy the first condition and a plurality of outputs of the second machine learning model that satisfies the first condition.
However, SAS teaches such a computer-implemented method further comprising: generating, with the at least one processor, outputs of a first machine learning model and outputs of a second machine learning model based on the dataset of data instances (para [0004]); generating, with the at least one processor, a disagreement matrix that includes a first set of grouped outputs of the first machine learning model and the second machine learning model and a second set of grouped outputs of the first machine learning model and the second machine learning model, wherein the first set of grouped outputs comprises a plurality of outputs of the first machine learning model that satisfies a first condition and a plurality of outputs of the second machine learning model that does not satisfy the first condition, and wherein the second set of grouped outputs comprises a plurality of outputs of the first machine learning model that does not satisfy the first condition and a plurality of outputs of the second machine learning model that satisfies the first condition (para [0004], [0175]).

Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to combine the machine learning model validation system of DataRobot with the support for generating outputs and a disagreement matrix of SAS, because such systems and methods allow for generating and comparing outputs to compare machine learning models (SAS: para [0004], [0175]), and the SAS teachings would enhance the operability of the DataRobot teachings. Furthermore, both DataRobot and SAS are directed to systems and methods for the analysis of machine learning models.

As to claims 10 and 18, the computer-implemented method of claim 9 and computer program product of claim 17 are discussed above. 
Further, DataRobot teaches such a computer-implemented method wherein the first subset of the outputs of the first machine learning model and the second subset of outputs of the second machine learning model have a same number of values (para [0070]).

As to claim 11, the computer-implemented method of claim 9 is discussed above. Further, DataRobot teaches such a computer-implemented method wherein determining the accuracy of the first machine learning model and the accuracy of the second machine learning model includes: determining the accuracy of the first machine learning model and the accuracy of the second machine learning model based on a model interpretation technique that is performed on the first classifier and the second classifier (para [0037]).

Claims 4-8, 12-16, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over US 2020/0034665 A1 (Ghanta et al.) to DATAROBOT, INC. (hereinafter 'DataRobot') in view of US 2021/0042659 A1 (Chu et al.) to SAS INSTITUTE INC. (hereinafter 'SAS') and US 2019/0378210 A1 (Merrill et al.) to ZESTFINANCE, INC. (hereinafter 'ZestFinance').

As to claim 4, DataRobot and SAS teach the system of claim 3, but fail to explicitly teach such a system wherein the model interpretation technique is a model interpretation technique that involves Shapley additive explanations (SHAP) values. However, ZestFinance teaches such a system wherein the model interpretation technique is a model interpretation technique that involves Shapley additive explanations (SHAP) values ("the model evaluation and explanation system (e.g., 120 of FIGS. 1A and 18) uses a non-differentiable model decomposition module (e.g., 121) to decompose scores generated by a model by computing at least one SHAP (SHapley Additive exPlanation) value. 
In some embodiments, decomposing scores includes: for each feature of a test data point, generating a difference value, the difference value for the test data point relative to a corresponding reference data point, the difference value being the decomposition value for the feature. In some embodiments, generating a difference value for a feature includes: computing a SHAP value (as described herein) of the non-differentiable model for the test data point and computing a SHAP value of the non-differentiable model for the corresponding reference data point, and subtracting the SHAP value for the reference data point from the SHAP value for the test data point to produce the difference value for the feature," para [0030]).

Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to combine the machine learning analysis systems of DataRobot and SAS with the support for generating machine learning model SHAP values of ZestFinance, because such systems and methods allow for using generated SHAP values to compare model features and determine model accuracy (ZestFinance: para [0030]), and the ZestFinance teachings would enhance the operability of the DataRobot in view of SAS teachings. Furthermore, DataRobot, SAS, and ZestFinance are directed to systems and methods for the analysis of machine learning models.

As to claim 5, the system of claim 4 is discussed above. Further, ZestFinance teaches such a system wherein when determining the accuracy of the first machine learning model and the accuracy of the second machine learning model, the at least one processor is programmed or configured to: calculate a SHAP value for each feature value of each data instance of the dataset for the first classifier (para [0030]); and calculate a SHAP value for each feature value of each data instance of the dataset for the second classifier (para [0030]).

As to claim 6, the system of claim 5 is discussed above. 
Further, ZestFinance teaches such a system wherein when determining the accuracy of the first machine learning model and the accuracy of the second machine learning model, the at least one processor is programmed or configured to: generate a plot of the SHAP value for each feature value of each data instance of the dataset for the first classifier and the SHAP value for each feature value of each data instance of the dataset for the second classifier (para [0030]; "an identified feature is determined permissible based on leaving the feature out, retraining the model, and determining its impact on the approval rate for a protected class. In other embodiments the determination is based on an approval rate difference threshold or other tunable parameters. In some embodiments, the method 200 includes displaying partial dependence plots for identified variables, heat maps, and other visualizations on a display device of an operator device (e.g., 171)," para [0192]).

As to claim 7, the system of claim 5 is discussed above. Further, ZestFinance teaches such a system wherein when determining the accuracy of the first machine learning model and the accuracy of the second machine learning model, the at least one processor is programmed or configured to: generate a plot of a plurality of SHAP values for a plurality of feature values of a first feature of each data instance of the dataset for the first classifier and a plurality of SHAP values for a plurality of feature values of the first feature of each data instance of the dataset for the second classifier (para [0030], [0192]).

As to claim 8, the system of claim 5 is discussed above. 
Further, ZestFinance teaches such a system wherein when determining the accuracy of the first machine learning model and the accuracy of the second machine learning model, the at least one processor is programmed or configured to: calculate an accuracy metric value associated with an accuracy metric of a first feature for the first classifier, wherein the accuracy metric value associated with the accuracy metric of the first feature for the first classifier is based on a plurality of SHAP values for a plurality of feature values of the first feature of each data instance of the dataset for the first classifier ("In some embodiments, the model evaluation system 120 uses generated decompositions (as described herein) to generate lower-level interpretations of ensembled models, and provide the lower-level interpretations to the operator device 171. Traditionally, ensembling submodels reduces the inherent bias of a particular sub-model, e.g., mixed learner type, mixed hyper-parameters, mixed input data, by combining several sub-models that work in concert to generate a single output score. Generally, when data practitioners seek to contrast the behavior of the sub-models against the joint ensembled model, they are relegated to higher-level population statistics that often measure some accuracy metric that lacks a certain amount of fidelity, e.g., "ensembling two mediocre models resulted in a model that outperforms its substrates in nearly all circumstances." Conversely, decomposition offers lower-level detailed explanations of a given observation or a population of how the features (which are overlapping between any two or more sub-models) interact. These interactions provide value and may demonstrate weak and strong directional interactions on a feature-basis between the various sub-models. 
For example, given a population and an ensemble of two sub-models with consistent feature inputs, certain features may routinely create strong and positive influences for both sub-models (constructive interference), while other features may routinely create strong and positive influences in one sub-model but be counteracted by strong and negative influences by the other (destructive interference)," para [0136]; "the non-differentiable model decomposition module 121 performs decomposition by computing Shapley values (e.g., by using Equations 1 and 2 as disclosed herein) for each observation (test data point), as disclosed herein," para [0173]); and calculate an accuracy metric value associated with the accuracy metric of the first feature for the second classifier, wherein the accuracy metric value associated with the accuracy metric of the first feature for the second classifier is based on a plurality of SHAP values for a plurality of feature values of the first feature of each data instance of the dataset for the second classifier, wherein the accuracy metric comprises a metric associated with a measure of magnitude of a feature, a metric associated with a measure of consistency of a feature, a metric associated with a measure of contrast of a feature, or a metric associated with a measure of correlation of a feature (para [0136], [0173]).

As to claim 12, DataRobot and SAS teach the computer-implemented method of claim 11, but fail to explicitly teach such a computer-implemented method wherein the model interpretation technique is a model interpretation technique that involves Shapley additive explanations (SHAP) values. However, ZestFinance teaches such a computer-implemented method wherein the model interpretation technique is a model interpretation technique that involves Shapley additive explanations (SHAP) values (para [0030]). 
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to combine the machine learning analysis systems of DataRobot and SAS with the support for generating machine learning model SHAP values of ZestFinance, because such systems and methods allow for using generated SHAP values to compare model features and determine model accuracy (ZestFinance: para [0030]), and the ZestFinance teachings would enhance the operability of the DataRobot in view of SAS teachings. Furthermore, DataRobot, SAS, and ZestFinance are directed to systems and methods for the analysis of machine learning models.

As to claim 13, the computer-implemented method of claim 12 is discussed above. Further, ZestFinance teaches such a computer-implemented method wherein determining the accuracy of the first machine learning model and the accuracy of the second machine learning model includes: calculating a SHAP value for each feature value of each data instance of the dataset for the first classifier (para [0030]); and calculating a SHAP value for each feature value of each data instance of the dataset for the second classifier (para [0030]).

As to claim 14, the computer-implemented method of claim 13 is discussed above. Further, ZestFinance teaches such a computer-implemented method wherein determining the accuracy of the first machine learning model and the accuracy of the second machine learning model includes: generating a plot of the SHAP value for each feature value of each data instance of the dataset for the first classifier and the SHAP value for each feature value of each data instance of the dataset for the second classifier (para [0030], [0192]).

As to claim 15, the computer-implemented method of claim 13 is discussed above. 
Further, ZestFinance teaches such a computer-implemented method wherein determining the accuracy of the first machine learning model and the accuracy of the second machine learning model includes: generating a plot of a plurality of SHAP values for a plurality of feature values of a first feature of each data instance of the dataset for the first classifier and a plurality of SHAP values for a plurality of feature values of the first feature of each data instance of the dataset for the second classifier (para [0030], [0192]).

As to claim 16, the computer-implemented method of claim 13 is discussed above. Further, ZestFinance teaches such a computer-implemented method wherein determining the accuracy of the first machine learning model and the accuracy of the second machine learning model includes: calculating an accuracy metric value associated with an accuracy metric of a first feature for the first classifier, wherein the accuracy metric value associated with the accuracy metric of the first feature for the first classifier is based on a plurality of SHAP values for a plurality of feature values of the first feature of each data instance of the dataset for the first classifier (para [0136], [0173]); and calculating an accuracy metric value associated with the accuracy metric of the first feature for the second classifier, wherein the accuracy metric value associated with the accuracy metric of the first feature for the second classifier is based on a plurality of SHAP values for a plurality of feature values of the first feature of each data instance of the dataset for the second classifier, wherein the accuracy metric comprises a metric associated with a measure of magnitude of a feature, a metric associated with a measure of consistency of a feature, a metric associated with a measure of contrast of a feature, or a metric associated with a measure of correlation of a feature (para [0136], [0173]).
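The SHAP values running through claims 4-16 are Shapley attributions of a model's output to its input features. The sketch below computes them exactly for a toy model via the classic Shapley formula, together with a mean-absolute-SHAP aggregation as one possible reading of the claimed "measure of magnitude of a feature". This is a from-scratch illustration under stated assumptions, not ZestFinance's decomposition module or the shap library; all names are hypothetical:

```python
from itertools import combinations
from math import factorial

def shapley_values(predict, x, baseline):
    """Exact Shapley attributions for one instance x. Features absent
    from a coalition are replaced by their baseline value. Exponential
    in the number of features, so only suitable for toy models."""
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in combinations(others, k):
                # Shapley weight for a coalition of size k.
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                with_i = [x[j] if (j in S or j == i) else baseline[j]
                          for j in range(n)]
                without_i = [x[j] if j in S else baseline[j]
                             for j in range(n)]
                phi[i] += w * (predict(with_i) - predict(without_i))
    return phi

def magnitude_metric(shap_rows, feature_idx):
    """Mean absolute SHAP value of one feature across instances --
    one plausible 'measure of magnitude' (an assumption; the claims
    leave the exact aggregation open)."""
    vals = [abs(row[feature_idx]) for row in shap_rows]
    return sum(vals) / len(vals)
```

For a linear model the attributions reduce to coefficient times (feature minus baseline), which makes the sketch easy to sanity-check by hand.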
As to claim 19, DataRobot and SAS teach the computer program product of claim 17, but fail to explicitly teach such a computer program product wherein determining the accuracy of the first machine learning model and the accuracy of the second machine learning model includes: determining the accuracy of the first machine learning model and the accuracy of the second machine learning model based on a model interpretation technique that is performed on the first classifier and the second classifier, wherein the model interpretation technique is a model interpretation technique that involves Shapley additive explanations (SHAP) values, and wherein determining the accuracy of the first machine learning model and the accuracy of the second machine learning model includes: calculating a SHAP value for each feature value of each data instance of the dataset for the first classifier; and calculating a SHAP value for each feature value of each data instance of the dataset for the second classifier.

However, ZestFinance teaches such a computer program product wherein determining the accuracy of the first machine learning model and the accuracy of the second machine learning model includes: determining the accuracy of the first machine learning model and the accuracy of the second machine learning model based on a model interpretation technique that is performed on the first classifier and the second classifier, wherein the model interpretation technique is a model interpretation technique that involves Shapley additive explanations (SHAP) values, and wherein determining the accuracy of the first machine learning model and the accuracy of the second machine learning model includes: calculating a SHAP value for each feature value of each data instance of the dataset for the first classifier (para [0030]); and calculating a SHAP value for each feature value of each data instance of the dataset for the second classifier (para [0030]). 
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to combine the machine learning analysis systems of DataRobot and SAS with the support for generating machine learning model SHAP values of ZestFinance, because such systems and methods allow for using generated SHAP values to compare model features and determine model accuracy (ZestFinance: para [0030]), and the ZestFinance teachings would enhance the operability of the DataRobot in view of SAS combination. Furthermore, DataRobot, SAS, and ZestFinance are directed to systems and methods for the analysis of machine learning models. As to claim 20, the computer program product of claim 19 is discussed above. Further, ZestFinance teaches such a computer program product wherein determining the accuracy of the first machine learning model and the accuracy of the second machine learning model includes: generating a plot of the SHAP value for each feature value of each data instance of the dataset for the first classifier and the SHAP value for each feature value of each data instance of the dataset for the second classifier (para [0030], [0192]).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JAMES M PONTIUS, whose telephone number is (571)270-7687. The examiner can normally be reached M-Th 8-4. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Sath V Perungavoor, can be reached at (571)272-7455. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JAMES M PONTIUS/
Primary Examiner, Art Unit 2488

Prosecution Timeline

Sep 12, 2023: Application Filed
Mar 23, 2026: Non-Final Rejection, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602934: VEHICULAR DRIVING ASSIST SYSTEM WITH TRAFFIC LIGHT RECOGNITION (2y 5m to grant; granted Apr 14, 2026)
Patent 12587726: ELECTRIC SHAVER WITH IMAGING CAPABILITY (2y 5m to grant; granted Mar 24, 2026)
Patent 12583389: SYSTEM FOR PROVIDING THREE-DIMENSIONAL IMAGE OF VEHICLE AND VEHICLE INCLUDING THE SAME (2y 5m to grant; granted Mar 24, 2026)
Patent 12583400: SYSTEM AND METHOD FOR OPERATING A VEHICLE ACCESS POINT (2y 5m to grant; granted Mar 24, 2026)
Patent 12587616: IMAGE CAPTURING SYSTEM AND VEHICLE (2y 5m to grant; granted Mar 24, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 79%
With Interview: 88% (+9.8%)
Median Time to Grant: 2y 11m
PTA Risk: Low
Based on 514 resolved cases by this examiner. Grant probability derived from career allow rate.
