Prosecution Insights
Last updated: April 19, 2026
Application No. 17/893,671

MODEL MANAGEMENT DEVICE AND MODEL MANAGEMENT METHOD

Final Rejection — §101, §103, §112
Filed: Aug 23, 2022
Examiner: TORGRIMSON, TYLER J
Art Unit: 2165
Tech Center: 2100 — Computer Architecture & Software
Assignee: Toyota Jidosha Kabushiki Kaisha
OA Round: 2 (Final)
Grant Probability: 73% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 0m
With Interview: 85%

Examiner Intelligence

Career Allow Rate: 73% — above average (291 granted / 400 resolved; +17.8% vs TC avg)
Interview Lift: +12.6% — moderate, measured across resolved cases with vs. without an interview
Avg Prosecution: 3y 0m typical timeline; 12 currently pending
Career History: 412 total applications across all art units
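The card's headline figures are simple ratios of the raw case counts shown above. A minimal sketch (function and variable names are illustrative, not from any real API; counts are taken from the card) reproduces them:

```python
# Reproduce the "Examiner Intelligence" headline figures from the raw
# counts shown on the card (291 granted of 400 resolved, +17.8% vs TC avg).
# Function and variable names here are illustrative, not from any real API.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allowance rate as a percentage."""
    return 100.0 * granted / resolved

career = allow_rate(291, 400)    # 72.75, displayed rounded as 73%
implied_tc_avg = career - 17.8   # baseline implied by the "+17.8%" delta

print(f"Career allow rate: {career:.0f}%")  # -> Career allow rate: 73%
print(f"Implied TC average: {implied_tc_avg:.2f}%")
```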

Statute-Specific Performance

§101: 20.2% (-19.8% vs TC avg)
§103: 35.9% (-4.1% vs TC avg)
§102: 14.6% (-25.4% vs TC avg)
§112: 22.5% (-17.5% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 400 resolved cases
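Each bar pairs the examiner's per-statute rate with its delta against the Tech Center average, so the implied TC baseline is just the rate minus the delta. A quick sketch (values copied from the chart above):

```python
# Recover the implied Tech Center baseline behind each statute's delta.
# Rates and deltas are copied from the chart; delta = rate - baseline.
statute_stats = {
    "§101": (20.2, -19.8),
    "§103": (35.9, -4.1),
    "§102": (14.6, -25.4),
    "§112": (22.5, -17.5),
}

for statute, (rate, delta) in statute_stats.items():
    baseline = rate - delta
    print(f"{statute}: examiner {rate}% vs TC baseline ~{baseline:.1f}%")
```

Every implied baseline works out to roughly 40%, which suggests the chart's "Tech Center average estimate" is a single pooled figure rather than a per-statute one.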

Office Action

§101 §103 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Introductory Remarks

In response to communications filed on 3 September 2025, claims 1, 3, 5, 6, 8-11, and 13 are amended per Applicant's request. Claims 2 and 7 are cancelled. Therefore, claims 1, 3-6, and 8-13 are presently pending in the application, of which claims 1, 6, and 13 are presented in independent form. No IDS has been received since the mailing of the last Office action.

The previously raised objections to the claims are withdrawn in view of the amendments to the claims. The previously raised 112 rejection of claims 1-12 is withdrawn in view of the amendments to the claims.

Examiner's Note

The rejections below group claims that may not be identical, but whose language and scope are so substantively similar as to lend themselves to grouping, in the interests of clarity and conciseness. Any citation to the instant specification herein is made to the PGPub version (if applicable). The examiner notes that no statement has been entered regarding the inventorship of individual claims as required under 37 CFR 1.56, and therefore assumes that all claims have the same inventorship or are directed to inventions that were commonly owned as of the effective filing date of the invention.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(d):

(d) REFERENCE IN DEPENDENT FORMS.—Subject to subsection (e), a claim in dependent form shall contain a reference to a claim previously set forth and then specify a further limitation of the subject matter claimed. A claim in dependent form shall be construed to incorporate by reference all the limitations of the claim to which it refers.

The following is a quotation of pre-AIA 35 U.S.C. 112, fourth paragraph:

Subject to the following paragraph [i.e., the fifth paragraph of pre-AIA 35 U.S.C. 112], a claim in dependent form shall contain a reference to a claim previously set forth and then specify a further limitation of the subject matter claimed. A claim in dependent form shall be construed to incorporate by reference all the limitations of the claim to which it refers.

Claim 3 is rejected under 35 U.S.C. 112(d) or pre-AIA 35 U.S.C. 112, 4th paragraph, as being of improper dependent form for failing to further limit the subject matter of the claim upon which it depends, or for failing to include all the limitations of the claim upon which it depends. Claim 1 requires that the processor stop transmitting the information to remaining target areas, after having listed only a first target area and a second target area. Claim 3 purports to have the processor transfer results to "at least one target area other than the second target area" upon the same condition that claim 1 uses to trigger ceasing transmission. As claim 3 depends from claim 1, this is an improper broadening of the express statement of claim 1 that transmission is ceased. Applicant may cancel the claim, amend the claim to place it in proper dependent form, rewrite the claim in independent form, or present a sufficient showing that the dependent claim complies with the statutory requirements.

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claim 12 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. The term "data related to human health" in claim 12 is a relative term which renders the claim indefinite. The term "related to" is not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. Opinions will vary as to what is related to human health, and as there is no standard provided, the claim is indefinite to one of ordinary skill in the art.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1, 3-6, and 8-13 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claims recite certain methods of organizing human activity comprising steps of determining the accuracy of a model, transmitting data about the model when accuracy is high enough, training the model, and not transmitting data about the model anywhere else when the model does not improve as applied to other data relative to a baseline. Further, the dependent claims offer additional abstract idea limitations providing the utilization of the best available learning model for the given parameters. In this analysis, only those claim limitations stipulated as additional elements are considered to be limitations distinct from the abstract idea itself. With respect to the independent claims, claim 1 is representative.
The claim describes the fundamental idea of learning, and closely matches the scientific method. First, develop a hypothesis (a model); test it and analyze the data (determine accuracy of the model); report conclusions (transmit); continue observation and questioning (train the model on new data); and, should the hypothesis fail (accuracy lower than a baseline), cease its use (stop transmitting to further areas). Claim 1 reflects this in the abstract limitations that follow: "a model management device … to determine an accuracy of a first [] model; transmit when a first [] model having an accuracy of a predetermined value or more is generated in a first target area, information about the first [] model to at least one target area different from the first target area; and train the first [] model, wherein: the at least one target area includes a second target area, and in a case where the [device] receives a result that the accuracy of the first [] model is not improved with respect to an existing [] model in the second target area when data acquired in the second target area is used, the processor stops transmitting the information about the first [] model to remaining target areas." These limitations also parallel those at issue in the holding of TLI Communications LLC v. AV Automotive, LLC, 823 F.3d 607, 613, 118 USPQ2d 1744, 1748 (Fed. Cir. 2016).

The additional elements in the claim are: "a processor" and that the model is a "machine learning model".
When determining whether a claim simply recites a judicial exception with the words "apply it" (or an equivalent), such as mere instructions to implement an abstract idea on a computer, examiners may consider: (1) whether the claim recites only the idea of a solution or outcome, i.e., the claim fails to recite details of how a solution to a problem is accomplished; (2) whether the claim invokes computers or other machinery merely as a tool to perform an existing process; and (3) the particularity or generality of the application of the judicial exception. See MPEP 2106.05(f).

With respect to the "machine learning model" additional element, there are no details about a particular machine learning model or about how the machine learning model operates to train for a specific task, only that it is being trained to do so. The machine learning models are used to generally apply the abstract idea without placing any limitation on how the models operate. The independent claims omit any details as to how the models solve a technical problem, and instead recite only the idea of a solution or outcome. Also, the claims invoke generic machine learning models merely as a tool for making the recited mathematical calculation rather than purporting to improve the technology or a computer. See MPEP 2106.05(f).

The judicial exception is not integrated into a practical application because the additional elements amount to nothing more than implementation of the abstract idea in a computer environment and/or merely use a computer as a tool to perform the concept. See MPEP 2106.04(d)(I) and 2106.05(f). This corresponds to the decision of the Federal Circuit in Recentive Analytics, Inc. v. Fox Corp., No. 23-2437 (Fed. Cir. 2025). The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception.
As discussed above, the additional elements amount to nothing more than mere instructions to apply the exception using generic computer components. These cannot provide an inventive concept, and thus the claims are patent-ineligible. The other independent claims provide no further additional elements; therefore, they also do not integrate the abstract idea into a practical application nor amount to significantly more than the abstract idea. Dependent claims 3-5 and 8-12 add nothing more than additional abstract idea limitations, which again do nothing to integrate the abstract idea into a practical application or amount to significantly more than the abstract idea. Dependent claim 6 adds the additional element of "relearn the first machine learning model using data acquired in the second target area". This does not amount to a practical application of the abstract idea nor significantly more than the abstract idea, as demonstrated by Recentive Analytics, Inc. v. Fox Corp. et al., No. 2023-2437, slip op. at 10 (Fed. Cir. Apr. 18, 2025).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1, 4-6, 8-11, and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Ladkat et al. (U.S. Patent No. 11,853,391 B1) (hereinafter Ladkat) in view of Benkert (U.S. PGPub No. 2021/0118133 A1) (hereinafter Benkert).
As per claim 1, Ladkat teaches a model management device comprising a processor configured to: determine an accuracy of a first machine learning model (column 5, lines 17-20); transmit, when the first machine learning model having the accuracy of a predetermined value or more is generated in a first target area (column 5, lines 17-20), information about the first machine learning model to at least one target area different from the first target area (column 5, lines 23-33); and train the first machine learning model (column 5, lines 34-49), wherein the at least one target area includes a second target area (column 6, lines 4-8 and 19-21 – processors exchange information).

Ladkat does not appear to explicitly disclose: in a case where the processor receives a result that the accuracy of the first machine learning model is not improved with respect to an existing machine learning model in the second target area when data acquired in the second target area is used, the processor stops transmitting the information about the first machine learning model to remaining target areas. However, Benkert does teach that if accuracy is below a threshold, then the data for the model is discarded (Benkert at 0058). It would have been obvious to one of ordinary skill in the art to incorporate the teachings of Benkert into the invention of Ladkat in order to have the processor stop transmitting the information about the first machine learning model to remaining target areas in a case where the processor receives a result that the accuracy of the first machine learning model is not improved with respect to an existing machine learning model in the second target area when data acquired in the second target area is used. This would have been clearly advantageous as it would prevent the system from wasting time testing data that does not meet the required accuracy thresholds. The combination is hereinafter referred to as LB.
As per claim 4, LB teaches the model management device according to claim 1, wherein the first machine learning model is generated using a different kind of machine learning model from an existing machine learning model in the first target area (Ladkat at column 4, lines 37-46).

As per claim 5, LB teaches the model management device according to claim 1, wherein the processor is further configured to select a machine learning model to be used in the first target area (Ladkat at column 4, lines 37-46), wherein the first machine learning model is generated using a different kind of machine learning model from the existing machine learning model in the first target area (Benkert at 0058), and when data acquired in each target area is used and the accuracy of the first machine learning model is improved with respect to the existing machine learning model in each target area in a predetermined number or more of target areas other than the first target area, the processor changes the machine learning model to be used in the first target area to the first machine learning model (Benkert at 0058).

As per claim 6, see remarks regarding claim 1.

As per claim 8, LB teaches the model management device according to claim 6, wherein the processor is further configured to: select a machine learning model to be used in the second target area (Ladkat at col. 5, lines 25-49 – a selection must be made in order for it to be distributed among the processors); and calculate the accuracy of the first machine learning model using data acquired in the second target area (Ladkat at col. 5, lines 17-20), wherein when the accuracy of the first machine learning model is higher than an accuracy of an existing machine learning model in the second target area, the processor changes the machine learning model to be used in the second target area to the first machine learning model (Benkert at 0058).
As per claim 9, LB teaches the model management device according to claim 6, wherein the processor is further configured to: select a machine learning model to be used in the second target area (Ladkat at col. 5, lines 25-49 – a selection must be made in order for it to be distributed among the processors); and calculate the accuracy of the first machine learning model using data acquired in the second target area (Ladkat at col. 5, lines 17-20), wherein in a case where the accuracy of the first machine learning model is higher than an accuracy of an existing machine learning model in the second target area, and the accuracy of the first machine learning model is improved with respect to an existing machine learning model in each target area when data acquired in each target area is used in a predetermined number or more of target areas other than the second target area, the processor changes the machine learning model to be used in the second target area to the first machine learning model (Benkert at 0058).

As per claim 10, LB teaches the model management device according to claim 8, wherein when the accuracy of the first machine learning model is equal to or less than the accuracy of the existing machine learning model in the second target area, the processor transmits a result to the first target area (Benkert at 0058 – notice of the discarding of the lower-accuracy models would need to be transmitted within the distributed system of Ladkat).

As per claim 11, LB does not appear to explicitly disclose the model management device according to claim 9, wherein the predetermined number differs depending on an output parameter of a machine learning model to be changed. However, this is merely a matter of design choice. One of ordinary skill in the art would readily recognize that more or fewer target areas can be utilized (see generally Ladkat at Fig. 5 and corresponding description) in order to yield the desired results in accordance with the resources available, and that this would need to be in the form of a parameter to be changed within the system. Therefore, this claim would have been obvious as a matter of mere design choice. See MPEP 2144.04.

As per claim 13, see remarks regarding claim 1.

Response to Arguments

Applicant's arguments filed 3 September 2025, with respect to the rejections under 35 USC 101, have been fully considered but are not persuasive. The applicant argues that the features are integrated into a practical application seemingly because they improve the accuracy of a machine learning model. This argument is unpersuasive because, as shown in Recentive, merely training a learning model does not amount to a practical application.

Applicant's arguments, see page 8 et seq., filed 3 September 2025, with respect to the rejections of claims 1, 4, 6, and 13 under 35 USC 102 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground of rejection is made in view of Benkert.

Conclusion

Applicant's amendment necessitated the new grounds of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action.
In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to TYLER J TORGRIMSON, whose telephone number is (571) 270-5550. The examiner can normally be reached Monday - Friday, 9 am - 5:30 pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Aleksander Kerzhner, can be reached at (571) 270-1760. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/TYLER J TORGRIMSON/
Primary Examiner, Art Unit 2165
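Both the §101 characterization and the §103 combination walk through the same claim-1 behavior: score a model in a first target area, distribute information about it once accuracy clears a predetermined threshold, and stop distributing to remaining areas when a second area reports no improvement over its existing model. That behavior reduces to short control flow. The sketch below is hypothetical; all names and values are invented for illustration and are not taken from the application or the cited references.

```python
# Hypothetical sketch of the claim-1 flow as paraphrased in the Office
# action: distribute a model only while receiving areas report improvement.
# All names and values are illustrative, not from the application itself.

def distribute_model(accuracy: float, threshold: float,
                     area_reports: list[tuple[str, bool]]) -> list[str]:
    """Return the target areas that receive information about the model.

    area_reports: (area, improved_over_existing_model) pairs in the order
    the areas would be contacted after the first target area.
    """
    if accuracy < threshold:
        return []          # model never reached the predetermined value
    sent = []
    for area, improved in area_reports:
        sent.append(area)  # information about the model is transmitted
        if not improved:   # an area reports no improvement:
            break          # stop transmitting to remaining target areas
    return sent

# Area C reports no improvement, so transmission to D never happens.
reports = [("B", True), ("C", False), ("D", True)]
print(distribute_model(0.92, 0.90, reports))  # -> ['B', 'C']
```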

Prosecution Timeline

Aug 23, 2022
Application Filed
Jun 09, 2025
Non-Final Rejection — §101, §103, §112
Aug 19, 2025
Applicant Interview (Telephonic)
Aug 27, 2025
Examiner Interview Summary
Sep 03, 2025
Response Filed
Dec 03, 2025
Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12566824
METHOD, ELECTRONIC DEVICE, AND COMPUTER PROGRAM PRODUCT FOR DATA PROCESSING
Granted Mar 03, 2026 (2y 5m to grant)
Patent 12566803
SYSTEM AND METHOD FOR IMPLEMENTING A GUIDED COLLABORATION PLATFORM FOR SPECIFIC DOMAINS
Granted Mar 03, 2026 (2y 5m to grant)
Patent 12561287
DUPLICATE FILE MANAGEMENT FOR CONTENT MANAGEMENT SYSTEMS AND FOR MIGRATION TO SUCH SYSTEMS
Granted Feb 24, 2026 (2y 5m to grant)
Patent 12561380
COMPUTER-IMPLEMENTED SYSTEM AND METHOD FOR ANALYZING CLUSTERS OF CODED DOCUMENTS
Granted Feb 24, 2026 (2y 5m to grant)
Patent 12530624
PROVISIONING RESOURCE-EFFICIENT ARTIFICIAL INTELLIGENCE MODELS
Granted Jan 20, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 73%
With Interview: 85% (+12.6%)
Median Time to Grant: 3y 0m
PTA Risk: Moderate
Based on 400 resolved cases by this examiner. Grant probability derived from career allow rate.
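The "With Interview" figure is consistent with simply adding the examiner's interview lift to the unrounded career allow rate (291/400 = 72.75%). A sketch of that assumed derivation follows; the tool's actual projection model is not disclosed, so this is an inference from the displayed numbers:

```python
# Assumed derivation of the "With Interview: 85%" projection: base grant
# probability (unrounded career allow rate) plus the interview lift,
# capped at 100%. The tool's real methodology is not disclosed here.

def with_interview(base_pct: float, lift_pct: float) -> float:
    """Interview-adjusted grant probability, capped at 100%."""
    return min(base_pct + lift_pct, 100.0)

base = 100.0 * 291 / 400                     # 72.75, shown rounded as 73%
print(f"{with_interview(base, 12.6):.0f}%")  # -> 85%
```

Starting from the rounded 73% would instead give 85.6%, so the displayed 85% suggests the unrounded rate is used internally.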
