Prosecution Insights
Last updated: April 18, 2026
Application No. 17/978,511

MODEL LEARNING SYSTEM AND MODEL LEARNING DEVICE

Status: Final Rejection (§103)
Filed: Nov 01, 2022
Examiner: SPRATT, BEAU D
Art Unit: 2143
Tech Center: 2100 — Computer Architecture & Software
Assignee: Toyota Jidosha Kabushiki Kaisha
OA Round: 2 (Final)
Grant Probability: 79% (Favorable)
Projected OA Rounds: 3-4
Projected Time to Grant: 3y 1m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 79% (342 granted / 432 resolved), +24.2% vs Tech Center average
Interview Lift: +26.6% among resolved cases with an interview
Typical Timeline: 3y 1m average prosecution; 37 applications currently pending
Career History: 469 total applications across all art units

Statute-Specific Performance

§101: 12.2% (-27.8% vs TC avg)
§103: 63.7% (+23.7% vs TC avg)
§102: 11.9% (-28.1% vs TC avg)
§112: 5.4% (-34.6% vs TC avg)

Tech Center averages are estimates, based on career data from 432 resolved cases.

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statements submitted on 12/08/2025 and 04/02/2026 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Response to Amendment

The Amendment filed 02/19/2026 has been entered. Claims 6-8 are new. Claims 1-8 remain pending in this application.

Allowable Subject Matter

Claim 4 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1 and 5-8 are rejected under 35 U.S.C. 103 as being unpatentable over Yeganeh et al. (US 10789548 B1, hereinafter Yeganeh) in view of NAKANO et al. (US 20210365813 A1, hereinafter Nakano) and Fano (US 20170323216 A1).

As to independent claim 1, Yeganeh teaches a model learning system comprising: [system, Fig. 5 500, Col. 3 ln. 32-47]

a device in which a learning model is used; [system with devices with a learning model, Col. 3 ln. 48-58: "computing system 500 receives an original machine learning model"]

a data acquisition device configured to acquire data used for creating a training data set for the learning model; and a model learning device comprising: [Fig. 1 120 illustrates creating retraining data (training data) for the learning model (original), Col. 4 ln. 62-67: "computing system 500 may create retraining data by combining the received execution log and the observed user action."]

a processor; a database; and a non-transitory memory storing instructions that cause the processor to: [processors, code and storage devices, Col. 11 ln. 15-21; database, Col. 8 ln. 12-20]

automatically train the learning model using the training data set for the learning model created using the data, [trains a model to create a retrained model using retraining data, Fig. 1 125, Col. 5 ln. 37-50: "retrain the original machine learning model using the retraining data to generate a retrained machine learning model"; automatic, Col. 1 ln. 50-60: "updating or retraining a machine learning model automatically and learning an updated machine learning model automatically, such as in real time or near real time, or on a batch or periodic basis"]

in response to a model accuracy after automatically training the learning model decreases or does not increase compared to before training, exclude the learning model from a target to be automatically trained, [when model accuracy (performance) after automatic retraining is below a threshold or negative, the retrained model is excluded and the original is used, Fig. 2 230, Col. 6 ln. 64 - Col. 7 ln. 6: "If instead the performance improvement is not sufficiently large or is negative, the method 200 proceeds to block 230, where the computing system 500 generates subsequent predictions using the original machine learning model"]

Yeganeh does not specifically teach evaluate the model accuracy of the automatically trained learning model after training to generate a result, and execute the exclusion of the learning model from the target to be automatically trained based on the result, wherein the database is configured to store an index of accuracy that is configured to measure the model accuracy of the learning model.

However, Nakano teaches:

evaluate the model accuracy of the automatically trained learning model after training to generate a result, and [calculates and compares accuracy to a condition/sufficiency and gets a "no" result, ¶49-51, ¶37, ¶98: "it is determined that the retraining cannot be executed because the retraining accuracy is not sufficient in the retraining necessity determination"]

execute the exclusion of the learning model from the target to be automatically trained based on the result, [the result leads to not retraining, ¶37: "In the example of FIG. 2, since the 'reference value' is the accuracy a1 of the current in-operation model 101a and the accuracy a2 when the number of data in the current new collected data set is n2 is less than the accuracy a1, it is determined that the retraining is not executed."]

wherein the database is configured to store an index of accuracy that is configured to measure the model accuracy of the learning model, and [register for accuracy, Fig. 5 S16, ¶52: "accuracy of the machine learning model recorded in Step S16"]

Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the model retraining disclosed by Yeganeh by incorporating these limitations disclosed by Nakano, because both techniques address the same field of machine learning, and incorporating Nakano into Yeganeh helps reduce the deterioration of model accuracy and enables more timely retraining of models [Nakano ¶3, ¶67].

Yeganeh and Nakano do not specifically teach storing information on whether the learning model is subject to retraining. However, Fano teaches storing information on whether the learning model is subject to retraining. [library of rules that specify if training is needed, ¶23: "a library of retraining rules 224"; ¶43: "retraining rules specify that at least one of the one or more predictive models should be retrained"]

Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the model retraining disclosed by Yeganeh and Nakano by incorporating the storing of information on whether the learning model is subject to retraining disclosed by Fano, because all techniques address the same field of machine learning, and incorporating Fano into Yeganeh and Nakano enables models to be retrained at more appropriate times and saves resources while avoiding obsolete models [Fano ¶6].
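For orientation, the retraining-exclusion logic recited in claim 1 and mapped above can be sketched in a few lines. This is a hypothetical illustration only, not code from any cited reference; the names `ModelRecord` and `auto_train_cycle`, and the use of a single float as the index of accuracy, are assumptions made for the sketch.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ModelRecord:
    """One database row per learning model: the stored index of accuracy and a
    flag recording whether the model is subject to automatic retraining."""
    name: str
    accuracy_index: float
    auto_train_enabled: bool = True

def auto_train_cycle(record: ModelRecord,
                     train_and_evaluate: Callable[[ModelRecord], float]) -> ModelRecord:
    """One automatic training round: retrain the model, evaluate its accuracy,
    and exclude it from future automatic training if the accuracy decreased or
    did not increase compared to before training."""
    if not record.auto_train_enabled:
        return record  # previously excluded from the target to be automatically trained
    new_accuracy = train_and_evaluate(record)
    if new_accuracy <= record.accuracy_index:
        # accuracy decreased or did not increase: exclude from auto-training
        record.auto_train_enabled = False
    else:
        record.accuracy_index = new_accuracy  # keep the improved accuracy index
    return record
```

Under this sketch, a model whose post-training accuracy fails to improve keeps its prior accuracy index and is flagged out of subsequent automatic rounds, which also yields the behavior claim 8 adds (preventing automatic re-training from being repeated).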
As to independent claim 5, Yeganeh teaches a model learning device comprising: a processor; a database; and a non-transitory memory storing instructions that cause the processor to: [processors, code and storage devices, Col. 11 ln. 15-21; database, Col. 8 ln. 12-20]

automatically train a learning model using a training data set, [trains a model to create a retrained model using retraining data, Fig. 1 125, Col. 5 ln. 37-50: "retrain the original machine learning model using the retraining data to generate a retrained machine learning model"; automatic, Col. 1 ln. 50-60: "updating or retraining a machine learning model automatically and learning an updated machine learning model automatically, such as in real time or near real time, or on a batch or periodic basis"]

in response to a model accuracy after automatically training the learning model decreases or does not increase compared to before training, exclude the learning model from a target to be automatically trained, [when model accuracy (performance) after automatic retraining is below a threshold or negative, the retrained model is excluded and the original is used, Fig. 2 230, Col. 6 ln. 64 - Col. 7 ln. 6: "If instead the performance improvement is not sufficiently large or is negative, the method 200 proceeds to block 230, where the computing system 500 generates subsequent predictions using the original machine learning model"]

Yeganeh does not specifically teach evaluating the model accuracy of the automatically trained learning model after training to generate a result, and executing the exclusion of the learning model from the target to be automatically trained based on the result, wherein the database is configured to store an index of accuracy that is configured to measure the model accuracy of the learning model.

However, Nakano teaches:

evaluating the model accuracy of the automatically trained learning model after training to generate a result, and [calculates and compares accuracy to a condition/sufficiency and gets a "no" result, ¶49-51, ¶37, ¶98: "it is determined that the retraining cannot be executed because the retraining accuracy is not sufficient in the retraining necessity determination"]

executing the exclusion of the learning model from the target to be automatically trained based on the result, [the result leads to not retraining, ¶37: "In the example of FIG. 2, since the 'reference value' is the accuracy a1 of the current in-operation model 101a and the accuracy a2 when the number of data in the current new collected data set is n2 is less than the accuracy a1, it is determined that the retraining is not executed."]

wherein the database is configured to store an index of accuracy that is configured to measure the model accuracy of the learning model, and [register for accuracy, Fig. 5 S16, ¶52: "accuracy of the machine learning model recorded in Step S16"]

Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the model retraining disclosed by Yeganeh by incorporating these limitations disclosed by Nakano, because both techniques address the same field of machine learning, and incorporating Nakano into Yeganeh helps reduce the deterioration of model accuracy and enables more timely retraining of models [Nakano ¶3, ¶67].

Yeganeh and Nakano do not specifically teach storing information on whether the learning model is subject to retraining. However, Fano teaches storing information on whether the learning model is subject to retraining. [library of rules that specify if training is needed, ¶23: "a library of retraining rules 224"; ¶43: "retraining rules specify that at least one of the one or more predictive models should be retrained"]

Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the model retraining disclosed by Yeganeh and Nakano by incorporating the storing of information on whether the learning model is subject to retraining disclosed by Fano, because all techniques address the same field of machine learning, and incorporating Fano into Yeganeh and Nakano enables models to be retrained at more appropriate times and saves resources while avoiding obsolete models [Fano ¶6].

As to dependent claim 6, Yeganeh, Nakano and Fano teach the system of claim 1 as incorporated above; Yeganeh further teaches wherein the model learning device comprises a server. [Yeganeh, server, Col. 7 ln. 22-37]

As to dependent claim 7, Yeganeh, Nakano and Fano teach the system of claim 1 as incorporated above; Fano further teaches a vehicle that includes the device. [Fano, vehicle, ¶35]

As to dependent claim 8, Yeganeh, Nakano and Fano teach the system of claim 1 as incorporated above; Nakano further teaches wherein the processor is caused to prevent re-training of the learning model from being automatically repeated. [Nakano, the retraining determination unit, realized by a processor, determines that retraining is not executed, ¶37, ¶106: "processor 5300 executes a program in cooperation with the main storage device 5400 to realize the accuracy improvement prediction model generation unit 12, the retraining accuracy prediction unit 15, and the retraining determination unit 16."]

Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Yeganeh in view of Nakano and Fano, as applied in the rejection of claim 1 above, and further in view of Zhdanov et al. (US 11048979 B1, hereinafter Zhdanov).

As to dependent claim 2, Yeganeh, Nakano and Fano teach the system of claim 1 as incorporated above, but do not specifically teach wherein the processor is further caused to, in response to the learning model being excluded from the target to be automatically trained, transmit a model accuracy improvement request for the learning model to an external organization terminal belonging to an external organization.

However, Zhdanov teaches this limitation. [when a score is below a threshold, a request (pass) is sent to an annotating service (external terminal/organization), Fig. 3 120, 306, Col. 9 ln. 30-35, Col. 6 ln. 8-32: "If the label score is lower than the threshold then the data may be passed to additional annotators to be further annotated by annotating service 120"]

Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the retraining disclosed by Yeganeh, Nakano and Fano by incorporating the transmission of a model accuracy improvement request to an external organization terminal disclosed by Zhdanov, because all techniques address the same field of machine learning, and incorporating Zhdanov into Yeganeh, Nakano and Fano saves time and improves accuracy in training models for building better models [Zhdanov Col. 2 ln. 29-61].

Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Yeganeh in view of Nakano, Fano and Zhdanov, as applied in claim 2 above, and further in view of Shaikh et al. (US 11205138 B2, hereinafter Shaikh).

As to dependent claim 3, Yeganeh, Nakano, Fano and Zhdanov teach the system of claim 2 as incorporated above, but do not specifically teach wherein the model accuracy improvement request includes information on the learning model excluded from the target to be automatically trained, the training data set used for training the learning model, and the data acquisition device that acquired the data used for creating the training data set.

However, Shaikh teaches this limitation. [messages with model improvements and metadata (information on the model), Col. 3 ln. 40-67: "recommendations regarding model quality improvements or related models may be provided to a user via a message or an alert created and provided by a model quality program" ... "model metrics, training and testing datasets, feature sets, a usage history, reviews, ratings, feedback, ML algorithms that were used, model metadata such as names, description tags and categories, and deployment metadata such as samples classified and an outcome provided"]

Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the retraining disclosed by Yeganeh, Nakano, Fano and Zhdanov by incorporating the request contents disclosed by Shaikh, because all techniques address the same field of machine learning, and incorporating Shaikh into Yeganeh, Nakano, Fano and Zhdanov enhances training with suggestions on how to train, giving users better and more options for training with higher quality [Shaikh Col. 2 ln. 35-49].

Response to Arguments

Applicant's arguments filed 02/19/2026 with respect to the §112 and §101 rejections are persuasive; those rejections have been withdrawn.

In the remarks filed 02/19/2026, applicant argues that: (1) Yeganeh fails to teach "evaluating the model accuracy of the automatically trained learning model after training to generate a result, and executing the exclusion of the learning model from the target to be automatically trained based on the result, wherein the database is configured to store an index of accuracy that is configured to measure the model accuracy of the learning model, and information on whether the learning model is subject to retraining," as recited by amended claim 1.
As to point (1), Applicant's arguments with respect to claim 1 have been considered but are moot in view of the new ground of rejection set forth above over Yeganeh in view of Nakano and Fano.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Applicant is required under 37 C.F.R. § 1.111(c) to consider these references fully when responding to this action. Yang et al. (US 20230391357 A1) teaches retraining a model a number of times in a vehicle (see ¶35).

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to BEAU SPRATT, whose telephone number is (571) 272-9919. The examiner can normally be reached M-F 8:30-5 PST. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jennifer Welch, can be reached at 571-212-7212. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/BEAU D SPRATT/
Primary Examiner, Art Unit 2143

Prosecution Timeline

Nov 01, 2022: Application Filed
Nov 12, 2025: Non-Final Rejection, §103
Feb 19, 2026: Response Filed
Apr 06, 2026: Final Rejection, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12595715: Cementing Lab Data Validation based On Machine Learning (2y 5m to grant; granted Apr 07, 2026)
Patent 12596955: REWARD FEEDBACK FOR LEARNING CONTROL POLICIES USING NATURAL LANGUAGE AND VISION DATA (2y 5m to grant; granted Apr 07, 2026)
Patent 12596956: INFORMATION PROCESSING DEVICE AND INFORMATION PROCESSING METHOD FOR PRESENTING REACTION-ADAPTIVE EXPLANATION OF AUTOMATIC OPERATIONS (2y 5m to grant; granted Apr 07, 2026)
Patent 12561464: CATALYST 4 CONNECTIONS (2y 5m to grant; granted Feb 24, 2026)
Patent 12561606: TECHNIQUES FOR POLL INTENTION DETECTION AND POLL CREATION (2y 5m to grant; granted Feb 24, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 79%
With Interview: 99% (+26.6% lift)
Median Time to Grant: 3y 1m
PTA Risk: Moderate

Based on 432 resolved cases by this examiner. Grant probability is derived from the career allow rate.
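The figures above are mutually consistent under a simple reading: 342 grants out of 432 resolved cases rounds to the displayed 79%, and a multiplicative +26.6% interview lift, capped, reproduces the 99% with-interview number. The page does not state its actual formula, so the sketch below is an assumed reconstruction of that arithmetic, nothing more.

```python
# Hypothetical reconstruction of the dashboard arithmetic; the page's
# actual formula is not disclosed.
granted, resolved = 342, 432
career_allow_rate = granted / resolved                  # 0.7917 -> shown as 79%
interview_lift = 0.266                                  # the "+26.6%" relative lift
with_interview = min(career_allow_rate * (1 + interview_lift), 0.99)  # cap assumed

print(round(career_allow_rate * 100))   # 79
print(round(with_interview * 100))      # 99
```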
