Prosecution Insights
Last updated: April 19, 2026
Application No. 18/246,205

INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, COMPUTER PROGRAM, AND LEARNING SYSTEM

Final Rejection: §101, §103, §112
Filed: Mar 22, 2023
Examiner: CHANNAVAJJALA, SRIRAMA T
Art Unit: 2154
Tech Center: 2100 — Computer Architecture & Software
Assignee: Sony Group Corporation
OA Round: 2 (Final)

Grant Probability: 75% (Favorable)
OA Rounds: 3-4
To Grant: 3y 5m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 75% (518 granted / 690 resolved; +20.1% vs TC avg, above average)
Interview Lift: +32.6% (strong; across resolved cases with interview)
Typical Timeline: 3y 5m avg prosecution; 24 applications currently pending
Career History: 714 total applications across all art units
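As a sanity check, the headline figures above can be reproduced from the raw counts. This is a minimal sketch; it assumes the +20.1% delta is measured in percentage points against the Tech Center average, and takes the interview lift as given since the with/without-interview counts are not broken out here.

```python
# Reproduce the examiner's headline allowance metrics from the raw counts
# shown above (518 granted out of 690 resolved career cases).
granted, resolved = 518, 690

allow_rate = 100 * granted / resolved          # career allow rate, percent
tc_avg = allow_rate - 20.1                     # implied Tech Center average

print(f"Career allow rate: {allow_rate:.1f}%")  # 75.1%
print(f"Implied TC average: {tc_avg:.1f}%")     # 55.0%
```

The 75% shown in the dashboard is simply this ratio rounded to the nearest whole percent.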

Statute-Specific Performance

§101: 19.6% (-20.4% vs TC avg)
§103: 37.0% (-3.0% vs TC avg)
§102: 14.8% (-25.2% vs TC avg)
§112: 9.7% (-30.3% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 690 resolved cases
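The per-statute deltas are internally consistent: assuming each delta is in percentage points against a common Tech Center baseline, adding the delta back to each rate recovers the same baseline for every statute. A quick cross-check:

```python
# Per-statute rates and their deltas vs the Tech Center average, as shown
# above (percentage points). The underlying metric is the examiner's
# statute-specific performance figure from the chart.
stats = {
    "101": (19.6, -20.4),
    "103": (37.0, -3.0),
    "102": (14.8, -25.2),
    "112": (9.7, -30.3),
}

# Recover the implied TC baseline for each statute: rate - delta.
baselines = {s: round(rate - delta, 1) for s, (rate, delta) in stats.items()}
print(baselines)  # every statute implies the same 40.0% TC baseline
```

That all four rows imply the identical 40.0% baseline suggests the chart's "black line" is a single TC-wide estimate rather than a per-statute figure.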

Office Action

§101 §103 §112
Notice of Pre-AIA or AIA Status

The present application, 18/246,205, filed on 3/22/2023 (on or after March 16, 2013), is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

This application is a 371 of PCT/JP2021/030191, filed on 08/18/2021.

DETAILED ACTION

Response to Amendment

Claims 1-3, 5-9, and 11-15 are pending in this application. Examiner acknowledges applicant's amendment filed on 2/19/2026.

Drawings

The drawings filed on 3/22/2023 are acceptable for examination purposes.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 3/22/2023 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner (PTO-1449 mailed on 12/2/2025).

Priority

Acknowledgment is made of applicant's claim for JAPAN foreign priority under 35 U.S.C. 119(a)-(d). The certified copy of JAPAN Application No. 2020-172583, filed on 10/13/2020, has been filed.

Response to Arguments

Applicant's arguments filed 2/19/2026 with respect to claims 1-3, 5-9, and 11-15 have been fully considered, but they are not persuasive; for the examiner's response, see the discussion below. In view of the amendment to claims 1-2, 5-6, 7-10, and 12 and the cancellation of claims 4 and 10, the rejection under 35 U.S.C. 112(f) set forth in the previous office action is hereby withdrawn; however, the examiner hereby presents a 112(b) rejection based on the amendment to the claims filed on 2/19/2026.
35 U.S.C. 101:

a) At pages 13-18, regarding claim 1, applicant argues: Accordingly, unlike reciting mental concepts such as "observation", "evaluation", "judgment", and "opinion", amended independent claim 1 recites the above-recited technical features that are specific to storage of the plurality of pieces of specification information in association with the plurality of training methods and selection of the optimum training method that is within the range of the specification associated with the edge device. Further, the Applicant respectfully submits that the selection of the optimum training method cannot be performed by the human mind, nor can it be performed by pen and paper, because this step is not merely a choice, but a technical analysis of whether a training method is within the specification (for example, a hardware specification such as memory capacity, operation performance, or power) of the edge device. Furthermore, the Applicant respectfully submits that the selection of the optimum training method enables a model to be trained at the edge device in an efficient manner ...

For example, amended independent claim 1 recites the features of "select an optimum training method for the task information ... store a plurality of pieces of specification information associated with a plurality of training methods, the plurality of pieces of specification information is for implementation of the plurality of training methods, the plurality of pieces of specification information is stored in association with the plurality of training methods, respectively ... select, among the plurality of training methods, the optimum training method within a range of a specification associated with the edge device, and in the edge device, the specification is available for training of the model." Further, the Applicant's Specification, for example, describes "in order to realize high-performance or high-accuracy model training, training data corresponding to a task is indispensable ...
[a]n object of the present disclosure is to provide an information processing apparatus, an information processing method, a computer program, and a learning system that perform processing for efficiently training a model that performs a specific task ... there is a problem that the specification required for model training processing differs for each training method. Therefore, there is a possibility that a situation occurs in which the optimum training method selected on the basis of the task ..." For example, amended independent claim 1 recites "select an optimum training method for the task information ... select, among the plurality of training methods, the optimum training method within a range of a specification associated with the edge device, and in the edge device, the specification is available for training of the model," features that enable the model to be trained at the edge device in an efficient manner without exceeding the specification of the edge device ...

Examiner's response:

As to argument (a), Examiner submits that the pending claims (as amended 2/19/2026) should pass the test set forth in the 2019 Revised Patent Subject Matter Eligibility Guidance, published on January 7, 2019 (84 Fed. Reg. 50) and updated October 2019, referred to herein as the 2019 PEG. Applicant's arguments focus on Prong Two of Step 2A; the pending claims are evaluated below using this section of the test set forth in the 2019 PEG. As explained in the 2019 PEG, the evaluation of Prong Two of Step 2A requires the use of the considerations (e.g., improving technology, effecting a particular treatment or prophylaxis, implementing with a particular machine, etc.) identified by the Supreme Court and the Federal Circuit, to ensure that the claim as a whole "integrates [the] judicial exception into a practical application [that] will apply, rely on, or use the judicial exception in a manner that imposes a meaningful limit on the judicial exception, such that the claim is more than a drafting effort designed to monopolize the judicial exception". These considerations are set forth in the 2019 PEG, MPEP 2106.05(a) through (c), and MPEP 2106.05(e) through (h).

Note: a specific way of achieving a result is not a stand-alone consideration in Step 2A Prong Two. However, the specificity of the claim limitations is relevant to the evaluation of several considerations, including the use of a particular machine, a particular transformation, and whether the limitations are mere instructions to apply an exception. If the claim integrates the judicial exception into a practical application based upon evaluation of these considerations, the additional limitations impose a meaningful limit on the judicial exception, and the claim is eligible at Step 2A. For example, if the additional limitations as amended 2/19/2026 ("the plurality of pieces of specification information is for implementation of the plurality of training methods ...") reflect an improvement in the functioning of a computer, or an improvement to another technology or technical field, the claim integrates the judicial exception into a practical application and thus imposes a meaningful limit on the judicial exception.
No further analysis is required.

The pending claim 1 (as amended 2/19/2026) limitations "the plurality of pieces of specification information is for implementation of the plurality of training methods, the plurality of pieces of specification information is stored in association with the plurality of training methods, respectively, the plurality of training methods includes the training method, the circuitry is further configured to select, among the plurality of training methods, the optimum training method within a range of a specification associated with the edge device" do not provide an "improvement to another technology or technical field". For example, the "plurality of training methods" and the "optimum training method within a range ..." are neither an improvement nor a solution to a technical problem. Furthermore, the present claims (as amended 2/19/2026) fail to include the components or steps of the invention that provide the improvement and/or solution to the technical problem described in the specification. The claim 1 limitations are recited as being performed by a computer, and the recited computer is recited at a high level of generality, i.e., as a generic computer performing generic computer functions. The limitation "training methods ... plurality of pieces of specification ..." does not provide any details about how the selection is made, and the plain meaning of "plurality of training methods" encompasses mental observations or evaluations, e.g., a computer programmer's mental identification of "training method(s)". The step of being configured to select, among the plurality of training methods, recites nothing about satisfying a specific condition to obtain the "optimum training method", and the claim does not include any additional details that explain how a specific training method may be selected as the optimum training method.

As discussed here, the broadest reasonable interpretation of the above steps is that those steps fall within the mental process grouping of abstract ideas because they cover concepts performed in the human mind, including observation, evaluation, judgment, and opinion. See MPEP 2106.04(a)(2). The MPEP states that this consideration is only evaluated in Step 2B, particularly the improvement consideration (see MPEP 2106.05(a)). As to "the plurality of pieces of specification information is for implementation of the plurality of training methods, ... the circuitry is further configured to select, among the plurality of training methods, the optimum training method within a range of a specification associated with the edge device, and in the edge device, the specification is available for training of the model", the additional elements were identified as general-purpose machines which merely implement the abstract idea within a computing environment. The rules detailing the evaluation for general-purpose machines are set forth in MPEP 2106.05(b) and do not require an evaluation of well-understood, routine, or conventional activity.

Examiner applies the above arguments to claims 5-7 and 11-13, which include similar features; claims 2-3, 8-9, and 14-15 depend from claims 1 and 7. As such, the pending claims fail Prong Two of Step 2A and Step 2B of the 2019 PEG. Therefore, examiner maintains the rejection under 35 U.S.C. § 101.

b) Applicant's arguments (pages 18-25) with respect to claims 1-3, 5-9, and 11-15, filed on 2/19/2026, have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

35 U.S.C. 112(b):

The following is a quotation of 35 U.S.C.
112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1, 5-7, 11-13, and 15 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claims 1, 5-6 (as amended) recite: "output the optimum training method to the edge device, wherein the memory is further configured to store a plurality of pieces of specification information associated with a plurality of training methods, the plurality of pieces of specification information is for implementation of the plurality of training methods, the plurality of pieces of specification information is stored in association with the plurality of training methods, respectively".

Claim 13 (as amended) recites: "a plurality of pieces of specification information associated with a plurality of training methods, the plurality of pieces of specification information is for implementation of the plurality of training methods, the plurality of pieces of specification information is stored in association with the plurality of training methods, respectively".

Claims 7, 11-12 (as amended) recite: "calculate a specification associated with the information processing apparatus for the training of the model; acquire an optimum training method for the task information from an external apparatus, wherein the acquired optimum training method is implementable within a range of the calculated specification; and train the model by using the acquired optimum training method".

Claim 14 recites: "wherein the specification associated with the edge device includes at least one of". Claim 15 recites: "wherein each piece of specification information of the plurality of pieces of specification information includes at least one of:".

It is unclear what is meant by "specification information", "calculate a specification", and "specification associated". For compact prosecution, the examiner has assumed and treated the specification information / available specification as training dataset(s) (Sharma: fig 5) in this office action.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-3, 5-9, and 11-15 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The judicial exception is not integrated into a practical application, and the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. The eligibility analysis in support of these findings is provided below, in accordance with the 2019 Revised Patent Subject Matter Eligibility Guidance, Federal Register (84 FR 50), January 7, 2019, hereinafter the 2019 PEG.

Step 1. In accordance with Step 1 of the eligibility inquiry (as explained in MPEP 2106), it is noted that claims 1, 5-6, 7, and 11-12 are directed to one of the eligible categories of subject matter and therefore satisfy Step 1.
Claims 1, 5-6 (Currently Amended) recite: "An information processing apparatus, comprising: a memory configured to store a correspondence relationship between a training method for a model and task information of the model; and circuitry configured to: select an optimum training method for the task information, wherein the task information is input from an edge device; and output the optimum training method to the edge device, wherein the memory is further configured to store a plurality of pieces of specification information associated with a plurality of training methods, the plurality of pieces of specification information is for implementation of the plurality of training methods, the plurality of pieces of specification information is stored in association with the plurality of training methods, respectively, the plurality of training methods includes the training method, the circuitry is further configured to select, among the plurality of training methods, the optimum training method within a range of a specification associated with the edge device, and in the edge device, the specification is available for training of the model". As drafted, this is a process that, under its broadest reasonable interpretation, covers performance of the limitations in the mind but for the recitation of generic computer components. For example, in the context of this claim, these limitations encompass the user thinking of collection and/or data gathering under the plain meaning of these terms (training, task information, optimum) using an algorithm that computes training data using a training model. Further, the claim does not provide any detail about how the "optimum training" is achieved; the plain meaning of "optimum" encompasses mental observations or evaluations, e.g., a computer programmer's mental identification of task information in the training data, and the claim does not include any additional details that explain the "optimum" training.

The step of outputting data merely requires a generic output using the trained model, and the claim does not impose any limit on how the data is output or require any particular components that are used to output the data. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the "Mental Processes" grouping of abstract ideas set forth in the 2019 PEG. Accordingly, the claim recites an abstract idea.

With respect to Step 2A Prong Two of the 2019 PEG, the judicial exception is not integrated into a practical application. The additional elements are directed to method steps; however, these elements fail to integrate the abstract idea into a practical application because they fail to provide an improvement to the functioning of a computer or to any other technology or technical field, fail to apply the exception with a particular machine, fail to apply the judicial exception (the particular data structure of training model, task information, selection, optimum training) to effect a transformation of a particular article to a different state or thing, and fail to apply or use the abstract idea in a meaningful way beyond generally linking the use of the judicial exception to a particular technological environment.
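For concreteness, the selection logic that claim 1 recites (specification information stored in association with each training method; selection, among the methods, of an optimum one whose requirements fall within the edge device's available specification) could be sketched as follows. All names, numbers, and the scoring rule here are hypothetical illustrations; the claim itself does not specify an algorithm.

```python
from dataclasses import dataclass

@dataclass
class Spec:
    """Hardware specification: the resource axes named in claim 14."""
    memory_mb: int
    ops_gflops: float
    power_w: float

# Hypothetical catalog: specification information stored in association
# with each training method (claim 1's "plurality of pieces of
# specification information").
METHODS = {
    "full_finetune":   Spec(memory_mb=8000, ops_gflops=50.0, power_w=20.0),
    "lora_adapter":    Spec(memory_mb=1200, ops_gflops=6.0,  power_w=5.0),
    "feature_extract": Spec(memory_mb=400,  ops_gflops=1.5,  power_w=2.0),
}

# Hypothetical quality score per method, used only to rank candidates.
SCORE = {"full_finetune": 0.95, "lora_adapter": 0.90, "feature_extract": 0.80}

def fits(required: Spec, available: Spec) -> bool:
    """A method is implementable if every required resource is within
    the edge device's available specification."""
    return (required.memory_mb <= available.memory_mb
            and required.ops_gflops <= available.ops_gflops
            and required.power_w <= available.power_w)

def select_optimum(edge_spec: Spec) -> str:
    """Select, among the training methods, the best-scoring one whose
    requirements fall within the edge device's specification range."""
    candidates = [m for m, req in METHODS.items() if fits(req, edge_spec)]
    return max(candidates, key=SCORE.__getitem__)

# An edge device too small to host full fine-tuning:
print(select_optimum(Spec(memory_mb=2000, ops_gflops=8.0, power_w=6.0)))
# -> lora_adapter
```

The point of contention in the §101 analysis is precisely whether a filter-then-rank step of this kind is a technical constraint-satisfaction operation or a mental evaluation.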
Furthermore, although these elements have been fully considered, they are directed to the use of generic computing elements (paragraphs 0014-0020, 0029-0031, 0040, 0047-0049, 0052-0055, 0063-0069, and 0080-0091 of the instant specification make it clear that the disclosed functionality is implemented on well-known computing systems and general-purpose computing devices) to perform the abstract idea, which is not sufficient to amount to a practical application (as noted in the 2019 PEG) and amounts to simply saying "apply it" using a general-purpose computer, which merely serves to tie the abstract idea to a particular technological environment (a computer-based operating environment) by using the computer as a tool to perform the abstract idea. Since the analysis of Step 2A Prong One and Prong Two results in the conclusion that the claims are directed to an abstract idea, additional analysis under Step 2B of the eligibility inquiry must be conducted in order to determine whether any claim element or combination of elements amounts to significantly more than the judicial exception.

Claim 13 further elaborates on "a plurality of pieces of specification information ...", and claims 7, 11-12 further elaborate on "calculate a specification associated with the information processing apparatus for the training of the model", which have been determined to be extra-solution activity that does not impose any meaningful limits on practicing the abstract idea. See MPEP 2106.05(b)(I). Even in combination, the additional details recited in these claims do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Furthermore, as to "calculate a specification", other than reciting "by a processor" (general-purpose computing), nothing in the claim element precludes the step from practically being performed in the mind. Consistent with the specification (paragraphs 0016-0017, 0041, 0056-0057, 0067, and 0073-0074), one can mentally calculate a specification; in the context of this claim, the limitation encompasses the user manually supplying parameter values and covers performance of the limitation in the mind but for the recitation of generic computer components, so it falls within the "mental processes" grouping.

Step 2B. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. The additional method limitations are directed to a generic computer, at a very high level of generality and without imposing meaningful limitations on the scope of the claims. In addition, paragraphs 0014-0020, 0029-0031, 0040, 0047-0049, 0052-0055, 0063-0069, and 0080-0091 of the instant specification describe generic, off-the-shelf computer-based elements for implementing the claimed invention, which does not amount to significantly more than the abstract idea and is not enough to transform an abstract idea into eligible subject matter. Such generic, high-level, and nominal involvement of a computer or computer-based elements for carrying out the invention merely serves to tie the abstract idea to a particular technological environment, which is not enough to render the claims patent-eligible, as noted at pg. 74624 of Federal Register/Vol. 79, No. 241, citing Alice, which in turn cites Mayo. See, e.g., Alice Corp. Pty. Ltd. v. CLS Bank Int'l, 134 S. Ct. 2347, 2359-60, 110 USPQ2d 1976, 1984 (2014); see also OIP Techs. v. Amazon.com, 788 F.3d 1359, 1364, 115 USPQ2d 1090, 1093-94 (Fed. Cir. 2015) ("Just as Diehr could not save the claims in Alice, which were directed to 'implement[ing] the abstract idea of intermediated settlement on a generic computer', it cannot save OIP's claims directed to implementing the abstract idea of price optimization on a generic computer.") (citations omitted); see also Affinity Labs of Texas LLC v. DirecTV LLC, 838 F.3d 1253, 1257-1258 (Fed. Cir. 2016) (mere recitation of a GUI does not make a claim patent-eligible); Intellectual Ventures I LLC v. Capital One Bank, 792 F.3d 1363, 1370 (Fed. Cir. 2015) ("the interactive interface limitation is a generic computer element").

The additional elements are broadly applied to the abstract idea at a high level of generality ("similar to how the recitation of the computer in the claims in Alice amounted to mere instructions to apply the abstract idea of intermediated settlement on a generic computer," as explained in MPEP § 2106.05(f)), and they operate in a well-understood, routine, and conventional manner. MPEP § 2106.05(d)(II) recognizes the following computer functions as well-understood, routine, and conventional when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity: receiving or transmitting data over a network, e.g., using the Internet to gather data (Symantec; TLI Communications LLC v. AV Auto. LLC; OIP Techs., Inc. v. Amazon.com, Inc.; buySAFE, Inc. v. Google, Inc.); performing repetitive calculations (Flook; Bancorp Services v. Sun Life); electronic recordkeeping (Alice Corp.; Ultramercial); storing and retrieving information in memory (Versata Dev. Group, Inc. v. SAP Am., Inc.); electronically scanning or extracting data from a physical document (Content Extraction and Transmission, LLC v. Wells Fargo Bank); and a web browser's back and forward button functionality (Internet Patents Corp. v. Active Network, Inc.).
Courts have held computer-implemented processes not to be significantly more than an abstract idea (and thus ineligible) where the claim as a whole amounts to nothing more than generic computer functions merely used to implement an abstract idea, such as an idea that could be done by a human analog (i.e., by hand or by merely thinking).

Claim 2 (Currently Amended) further elaborates: "The information processing apparatus according to claim 1, wherein the circuitry is further configured to select the optimum training method for the task information, based on a similarity of a feature vector representing the task information", which has been determined to be extra-solution activity that does not impose any meaningful limits on practicing the abstract idea. See MPEP 2106.05(b)(I). Even in combination, the additional details recited in this claim do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.

Claim 3 (Original) further elaborates: "The information processing apparatus according to claim 2, wherein the feature vector is calculated from a training data set of a relevant model by using meta-learning", which has been determined to be extra-solution activity that does not impose any meaningful limits on practicing the abstract idea. See MPEP 2106.05(b)(I). Even in combination, the additional details recited in this claim do not integrate the abstract idea into a practical application. Furthermore, as to "calculate", other than reciting "by a processor" (general-purpose computing), nothing in the claim element precludes the step from practically being performed in the mind. Consistent with the specification (paragraphs 0016-0017, 0041, 0056-0057, 0067, and 0073-0074), one can mentally calculate; in the context of this claim, the limitation encompasses the user manually supplying parameter values and covers performance of the limitation in the mind but for the recitation of generic computer components, so it falls within the "mental processes" grouping.

Claim 4. (Canceled)

Claim 8 (Currently Amended) further elaborates: "The information processing apparatus according to claim 7, wherein the circuitry is further configured to perform inference by using the trained model", which has been determined to be extra-solution activity that does not impose any meaningful limits on practicing the abstract idea. See MPEP 2106.05(b)(I). Even in combination, the additional details recited in this claim do not integrate the abstract idea into a practical application.

Claim 9 (Currently Amended) further elaborates: "The information processing apparatus according to claim 7, wherein the circuitry is further configured to: calculate a feature vector representing a second data set, wherein the second data set is extracted as the task information by using meta-learning; and acquire an optimum training method based on task information having a feature vector similar to the calculated feature vector", which has been determined to be extra-solution activity that does not impose any meaningful limits on practicing the abstract idea. See MPEP 2106.05(b)(I).
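The feature-vector similarity selection recited in claims 2 and 9 amounts to a nearest-neighbor lookup over stored task vectors. A minimal sketch, with purely hypothetical vectors and method names (neither the claims nor the cited art specify the similarity measure; cosine similarity is assumed here for illustration):

```python
import math

# Hypothetical stored tasks: each maps a feature vector (e.g., produced
# by meta-learning over that task's training dataset, per claim 3) to the
# training method previously found optimum for it.
KNOWN_TASKS = {
    "image_classification": ([0.9, 0.1, 0.2], "lora_adapter"),
    "keyword_spotting":     ([0.1, 0.8, 0.3], "feature_extract"),
}

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def acquire_by_similarity(new_task_vec):
    """Return the training method of the stored task whose feature vector
    is most similar to the new task's vector (claims 2 and 9)."""
    vec, method = max(KNOWN_TASKS.values(),
                      key=lambda vm: cosine(vm[0], new_task_vec))
    return method

# A new task whose vector is closest to image_classification:
print(acquire_by_similarity([0.85, 0.15, 0.25]))  # -> lora_adapter
```

Whether such a lookup is "extra-solution activity" or a meaningful limit is exactly what the examiner and applicant dispute above.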
Even in combination, the additional details recited in these claims do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Furthermore, as to "calculate", other than reciting "by a processor" (general-purpose computing), nothing in the claim element precludes the step from practically being performed in the mind. Consistent with the specification (paragraphs 0016-0017, 0041, 0056-0057, 0067, and 0073-0074), one can mentally calculate; in the context of this claim, the limitation encompasses the user manually supplying parameter values and covers performance of the limitation in the mind but for the recitation of generic computer components, so it falls within the "mental processes" grouping. See MPEP 2106.05(d)(II), listing operations including "receiving or transmitting data", "storing and retrieving data in memory", and "performing repetitive calculations" as well-understood, routine, and conventional (WURC).

Claim 10. (Canceled)

Claim 14 (New) further elaborates: "The information processing apparatus according to claim 1, wherein the specification associated with the edge device includes at least one of: a memory capacity associated with the edge device, an operation performance associated with the edge device, an operation time associated with the edge device, or a power associated with the edge device", which has been determined to be extra-solution activity that does not impose any meaningful limits on practicing the abstract idea. See MPEP 2106.05(b)(I). Even in combination, the additional details recited in this claim do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.

Claim 15 (New) further elaborates: "The information processing apparatus according to claim 1, wherein each piece of specification information of the plurality of pieces of specification information includes at least one of: a memory capacity for implementation of a corresponding training method of the plurality of training methods, an operation performance for the implementation of the corresponding training method of the plurality of training methods, an operation time for the implementation of the corresponding training method of the plurality of training methods, or a power for the implementation of the corresponding training method of the plurality of training methods", which has been determined to be extra-solution activity that does not impose any meaningful limits on practicing the abstract idea. See MPEP 2106.05(b)(I). Even in combination, the additional details recited in this claim do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-3, 5-9, and 11-15 are rejected under 35 U.S.C. 103 as being unpatentable over Sharma et al. (hereafter Sharma), US Pub. No. 2021/0042645 (based on a provisional application filed Aug. 2019), in view of Zhou et al., US Pub. No.
2021/0089944, filed Mar. 2020.

As to claims 1, 5, and 6 (Currently Amended): Sharma teaches a system including “An information processing apparatus, comprising:” (Sharma: fig 1); “a memory configured to store a correspondence relationship between a training method for a model and task information of the model” (Sharma: fig 1; fig 13, 0211); and “circuitry configured to:” (Sharma: fig 13, 0211-0212); “select a training method for the task information, wherein the task information is input from an edge device” (Sharma: Abstract, fig 1, 0055-0056 – Sharma teaches training models implemented on training data sets, including training datasets using a plurality of edge devices); and “output the training method to the edge device” (Sharma: 0056, 0064); “wherein the memory is further configured to store a plurality of pieces of specification information associated with a plurality of training methods” (Sharma: fig 1-2, 0075 – Sharma teaches training acquisition parameters from various sources, and the plurality of pieces of specification information corresponds to different types of model data, each of which proposes training acquisition parameters); “the plurality of pieces of specification information is for implementation of the plurality of training methods” (Sharma: fig 2, fig 3-4, 0106-0111 – Sharma teaches overall workflow models defining cloud learning modules where multiple machine learning models are associated with training data sets and a plurality of databases); “the plurality of pieces of specification information is stored in association with the plurality of training methods” (Sharma: fig 3-4, 0111 – Sharma teaches training data sets with respect to multiple model owners and a model aggregator stored in the database, elements 318 and 418 of the workflow specification information, respectively); “the plurality of training methods includes the training method” (Sharma: 0123-0124 – Sharma teaches multiple data models operating on respective training data, with training models stored in model repository element 536); “the circuitry is further configured to select, among the plurality of training methods, the training method within a specification associated with the edge device” (Sharma: 0053, 0056-0057, 0064); and “in the edge device, the specification is available for training of the model” (Sharma: 0056-0057, 0060 – Sharma teaches training datasets of respective machine learning models stored on a plurality of edge devices, accompanied by model metadata).

It is, however, noted that Sharma does not teach “select an optimum training method” or “optimum training method within a range of a specification associated with the edge device,” although Sharma teaches selecting, training models, a model aggregator and repository (fig 1, fig 4), and a model optimizer (fig 7, element 726). On the other hand, Zhou discloses “select an optimum training method” (Zhou: Abstract, fig 7, element 770, 0130 – Zhou teaches selecting an optimum machine learning model from two or more machine learning models, as detailed in fig 7 and fig 1C). Zhou also discloses “optimum training method within a range of a specification associated with the edge device” (Zhou: Abstract, 0029, 0039, fig 7, 0135-0136 – Zhou teaches machine learning models learning from evaluation of respective training data sets and processing one or more forecasts to generate a respective evaluation score with respect to a selected range, as detailed in 0029 and 0039).

It would have been obvious to a person of ordinary skill in the art at the time of filing the claimed invention to incorporate the selection of optimum machine learning models from a list of machine learning models of Zhou into the matching of models with training datasets based on evaluation of training parameters of Sharma, because both Sharma and Zhou teach multiple machine learning models configured to generate training data sets (Sharma: Abstract, fig 2, fig 5; Zhou: Abstract, fig 1B-1C) and both are from the same field of endeavor. Because both Sharma and Zhou teach multiple machine learning models generating training data sets, it would have been obvious to one skilled in the art to substitute and/or modify one method for the other, particularly forecasting training data from various training models and selecting the optimum machine learning model to provide accurate prediction in evaluation of data based on various parameters, including time parameters (Zhou: 0004-0005), thereby improving the quality and reliability of the machine learning model(s).

As to claim 2, the combination of Sharma and Zhou discloses “wherein the circuitry is further configured to select the optimum training method for the task information, based on a similarity of a feature vector representing the task information” (Zhou: 0024, Abstract, fig 7, element 770, 0130).

As to claims 3 and 9, the combination of Sharma and Zhou discloses “training data set of a relevant model by using meta-learning” (Sharma: 0053, fig 10, 1002). Zhou discloses “wherein the feature vector is calculated from a training data set of a relevant model by using meta-learning” (Zhou: 0024, 0076-0077 – Zhou teaches using a vector machine classifier technique in training machine learning models in various categories, including calculating and comparing respective training data sets from relevant machine learning models).

Claim 4. (Canceled)

As to claims 7, 11, and 12
(Currently Amended): Sharma teaches a system including “an information processing apparatus, comprising:” (Sharma: fig 1); “circuitry configured to:” (Sharma: fig 13, 0211-0212); “collect a first data set for training of a model” (Sharma: fig 5 – Sharma teaches model repository element 536 storing multiple machine learning models); “extract task information of the model based on the collected first data set” (Sharma: Abstract, fig 1-2, 0058 – Sharma teaches training models implemented on training data sets from multiple model sources, acquiring respective task parameters); “calculate a specification associated with the information processing apparatus for the training of the model” (Sharma: fig 5-6, 0124, 0129-0134 – Sharma teaches security settings, log records, and respective metrics maintained as statistical information, particularly with respect to the training data sets used to train the selected model on training data in the database as shown in fig 5; further, Sharma teaches an aggregator configuring not only the data path for the training model but also used in calculating aggregation to improve the training model, as detailed in fig 6); “acquire a training method for the task information from an external apparatus” (Sharma: Abstract, fig 1-2, 0058 – Sharma teaches training models implemented on training data sets from multiple model sources, acquiring respective task parameters); “wherein the acquired training method is implementable within a range of the calculated specification” (Sharma: fig 5-6, 0124, 0129-0134 – Sharma teaches security settings, log records, and respective metrics maintained as statistical information, particularly with respect to the training data sets used to train the selected model on training data in the database as shown in fig 5; further, Sharma teaches an aggregator configuring not only the data path for the training model but also used in calculating aggregation to improve the training model, as detailed in fig 6); and “train the model by using the acquired training method” (Sharma: fig 6-7 – Sharma teaches multiple training models acquiring respective training data sets in model building, including model optimizer fig 7, element 726).

It is, however, noted that Sharma does not teach “train the model by using the acquired optimum training method” or “acquire an optimum training method for the task information from an external apparatus, [wherein the] acquired optimum training method is implementable within a range of the calculated specification,” although Sharma teaches selecting, training models, a model aggregator and repository (fig 1, fig 4), and a model optimizer (fig 7, element 726). On the other hand, Zhou discloses “acquire an optimum training method for the task information from an external apparatus, [wherein the] acquired optimum training method is implementable within a range of the calculated specification” (Zhou: Abstract, 0029, 0039, fig 7, 0135-0136 – Zhou teaches machine learning models learning from evaluation of respective training data sets and processing one or more forecasts to generate a respective evaluation score with respect to a selected range, as detailed in 0029 and 0039), and “train the model by using the acquired optimum training method” (Zhou: Abstract, fig 7, element 770).

It would have been obvious to a person of ordinary skill in the art at the time of filing the claimed invention to incorporate the selection of optimum machine learning models from a list of machine learning models of Zhou into the matching of models with training datasets based on evaluation of training parameters of Sharma, because both Sharma and Zhou teach multiple machine learning models configured to generate training data sets (Sharma: Abstract, fig 2, fig 5; Zhou: Abstract, fig 1B-1C) and both are from the same field of endeavor. Because both Sharma and Zhou teach multiple machine learning models generating training data sets, it would have been obvious to one skilled in the art to substitute and/or modify one method for the other, particularly forecasting training data from various training models and selecting the optimum machine learning model to provide accurate prediction in evaluation of data based on various parameters, including time parameters (Zhou: 0004-0005), thereby improving the quality and reliability of the machine learning model(s).

As to claim 8, the combination of Sharma and Zhou discloses “wherein the circuitry is further configured to perform inference by using the trained model” (Sharma: fig 2, fig 5).

Claim 10. (Canceled)

Claim 13. (Currently Amended): Sharma teaches a system including “A learning system, comprising:” (Sharma: fig 2-3, 0053, 0062, 0064); “a first apparatus including first circuitry configured to collect a data set and train a model” (Sharma: fig 2 – Sharma teaches training acquisition data from multiple sources); and “a second apparatus including second circuitry configured to output a training method for the model to the first apparatus” (Sharma: fig 2 – Sharma teaches training acquisition data from multiple sources); “wherein the first circuitry is further configured to extract task information of the model based on the collected data set” (Sharma: Abstract, fig 1, fig 5, 0055-0056, 0122-0123 – Sharma teaches training models implemented on training data sets, including training datasets); and “the second circuitry is further configured to” (Sharma: fig 2 – Sharma teaches training acquisition data from multiple sources): “select a training method for the task information of the first apparatus by using a database, wherein the database stores” (Sharma: 0123-0124 – Sharma teaches multiple data models operating on respective training data, with training models stored in
model repository element 536); “a correspondence relationship between the training method for a model and the task information of the model” (Sharma: fig 5 – Sharma teaches training datasets stored in database element 552, accessed by model element 514 to train model element 524, which is sent to the model repository where multiple models are able to select the respective task model); and “a plurality of pieces of specification information associated with a plurality of training methods” (Sharma: fig 1-2, 0075 – Sharma teaches training acquisition parameters from various sources, and the plurality of pieces of specification information corresponds to different types of model data, each of which proposes training acquisition parameters); “the plurality of pieces of specification information is for implementation of the plurality of training methods” (Sharma: fig 1-2, 0075); “the plurality of pieces of specification information is stored in association with the plurality of training methods, respectively” (Sharma: fig 1-2, fig 5 – Sharma teaches multiple training datasets in relationship with multiple models); “the plurality of training methods includes the training method, the training method is selected among the plurality of training methods” (Sharma: fig 1-2, 0075); “in the first apparatus, the specification is available for the training of the model” (Sharma: 0056-0057, 0060 – Sharma teaches training datasets of respective machine learning models stored on a plurality of edge devices, accompanied by model metadata); and “output the training method to the first apparatus” (Sharma: fig 6-7 – Sharma teaches an improved version of the model with respect to validating (fig 6) and outputting the respective training method associated with the model, as detailed in fig 7).

It is, however, noted that Sharma does not teach “select an optimum training method” or “the selected optimum training method is within a range of a specification associated with the first apparatus,” although Sharma teaches selecting, training models, a model aggregator and repository (fig 1, fig 4), and a model optimizer (fig 7, element 726). On the other hand, Zhou discloses “select an optimum training method” (Zhou: Abstract, fig 7, element 770, 0130 – Zhou teaches selecting an optimum machine learning model from two or more machine learning models, as detailed in fig 7 and fig 1C). Zhou also discloses “the selected optimum training method is within a range of a specification associated with the first apparatus” (Zhou: Abstract, 0029, 0039, fig 7, 0135-0136 – Zhou teaches machine learning models learning from evaluation of respective training data sets and processing one or more forecasts to generate a respective evaluation score with respect to a selected range, as detailed in 0029 and 0039).

It would have been obvious to a person of ordinary skill in the art at the time of filing the claimed invention to incorporate the selection of optimum machine learning models from a list of machine learning models of Zhou into the matching of models with training datasets based on evaluation of training parameters of Sharma, because both Sharma and Zhou teach multiple machine learning models configured to generate training data sets (Sharma: Abstract, fig 2, fig 5; Zhou: Abstract, fig 1B-1C) and both are from the same field of endeavor. Because both Sharma and Zhou teach multiple machine learning models generating training data sets, it would have been obvious to one skilled in the art to substitute and/or modify one method for the other, particularly forecasting training data from various training models and selecting the optimum machine learning model to provide accurate prediction in evaluation of data based on various parameters, including time parameters (Zhou: 0004-0005), thereby improving the quality and reliability of the machine learning model(s).

As to claim 14, the combination of Sharma and Zhou discloses “a memory capacity associated with the edge device” (Sharma: fig 13, 0056, 0064), “an operation performance associated with the edge device” (Sharma: fig 1-2, 0056, 0064), “an operation time associated with the edge device” (Sharma: fig 1-2), or “a power associated with the edge device” (Sharma: 0056, 0064 – edge devices, including user endpoints and servers, are configured with respective power to the edge devices).

As to claim 15, the combination of Sharma and Zhou discloses “a memory capacity for implementation of a corresponding training method of the plurality of training methods” (Sharma: fig 2-5, fig 13, 0056, 0064), “an operation performance for the implementation of the corresponding training method of the plurality of training methods” (Sharma: 0072, 0122, fig 2-5), “an operation time for the implementation of the corresponding training method of the plurality of training methods” (Sharma: fig 1-2, 0075), or “a power for the implementation of the corresponding training method of the plurality of training methods” (Sharma: 0056, 0064, fig 2, fig 5).

Conclusion

The prior art made of record:
a. US Pub. No. 2021/0042645
b. US Pub. No. 2021/0089944

Examiner's Note: The examiner has cited particular columns and line numbers in the references applied to the claims above for the convenience of the applicant.
Although the specified citations are representative of the teachings of the art and are applied to specific limitations within the individual claims, other passages and figures may apply as well. In preparing responses, the applicant is respectfully requested to fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the examiner. See MPEP 2141.02 [R-5] VI, PRIOR ART MUST BE CONSIDERED IN ITS ENTIRETY, INCLUDING DISCLOSURES THAT TEACH AWAY FROM THE CLAIMS: a prior art reference must be considered in its entirety, i.e., as a whole, including portions that would lead away from the claimed invention. W.L. Gore & Associates, Inc. v. Garlock, Inc., 721 F.2d 1540, 220 USPQ 303 (Fed. Cir. 1983), cert. denied, 469 U.S. 851 (1984); In re Fulton, 391 F.3d 1195, 1201, 73 USPQ2d 1141, 1146 (Fed. Cir. 2004). See also MPEP § 2123.

In the case of amending the claimed invention, the applicant is respectfully requested to indicate the portion(s) of the specification which dictate(s) the structure relied on for proper interpretation, and also to verify and ascertain the metes and bounds of the claimed invention. The prior art made of record, listed on form PTO-892, and not relied upon, if any, is considered pertinent to applicant's disclosure.

Authorization for Internet Communications

The examiner encourages the applicant to submit an authorization to communicate with the examiner via the Internet by making the following statement (from MPEP 502.03): “Recognizing that Internet communications are not secure, I hereby authorize the USPTO to communicate with the undersigned and practitioners in accordance with 37 CFR 1.33 and 37 CFR 1.34 concerning any subject matter of this application by video conferencing, instant messaging, or electronic mail.
I understand that a copy of these communications will be made of record in the application file.” Please note that the above statement can only be submitted via Central Fax (not the examiner's fax), regular postal mail, or EFS-Web using form PTO/SB/439.

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Srirama Channavajjala, whose telephone number is 571-272-4108. The examiner can normally be reached Monday-Friday from 8:00 AM to 5:30 PM Eastern Time. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Boris Gorney, can be reached at (571) 270-5626. The fax phone number for the organization where the application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR.
Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).

/Srirama Channavajjala/
Primary Examiner, Art Unit 2154

Prosecution Timeline

Mar 22, 2023
Application Filed
Nov 28, 2025
Non-Final Rejection — §101, §103, §112
Feb 19, 2026
Response Filed
Mar 30, 2026
Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12601600
Interpreting and Resolving Map-Related Queries using a Language Model
2y 5m to grant Granted Apr 14, 2026
Patent 12596957
BIAS DETECTION AND REDUCTION IN MACHINE-LEARNING TECHNIQUES
2y 5m to grant Granted Apr 07, 2026
Patent 12596701
DATABASE SYSTEM FOR TRIGGERING EVENT NOTIFICATIONS BASED ON UPDATES TO DATABASE RECORDS
2y 5m to grant Granted Apr 07, 2026
Patent 12591547
SYNCHRONIZING CONFIGURATION OF PARTNER OBJECTS ACROSS DISTRIBUTED STORAGE SYSTEMS USING TRANSFORMATIONS
2y 5m to grant Granted Mar 31, 2026
Patent 12579480
SYSTEMS AND METHODS TO GENERATE DATA MESSAGES INDICATING A PROBABILITY OF EXECUTION FOR DATA TRANSACTION OBJECTS USING MACHINE LEARNING
2y 5m to grant Granted Mar 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
75%
Grant Probability
99%
With Interview (+32.6%)
3y 5m
Median Time to Grant
Moderate
PTA Risk
Based on 690 resolved cases by this examiner. Grant probability derived from career allow rate.
