Prosecution Insights
Last updated: April 19, 2026
Application No. 18/709,919

INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND STORAGE MEDIUM

Final Rejection (§101, §103, §112)

Filed: May 14, 2024
Examiner: GUILIANO, CHARLES A
Art Unit: 3623
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: NEC Corporation
OA Round: 2 (Final)

Grant Probability: 36% (At Risk)
OA Rounds: 3-4
To Grant: 3y 7m
With Interview: 74%

Examiner Intelligence

Career Allow Rate: 36% (grants only 36% of cases; 122 granted / 336 resolved; -15.7% vs TC avg)
Interview Lift: +37.6% (strong lift; resolved cases with interview)
Avg Prosecution: 3y 7m (typical timeline); 34 currently pending
Total Applications: 370 (career history, across all art units)

Statute-Specific Performance

§101: 33.3% (-6.7% vs TC avg)
§103: 33.9% (-6.1% vs TC avg)
§102: 13.6% (-26.4% vs TC avg)
§112: 16.7% (-23.3% vs TC avg)

Tech Center averages are estimates. Based on career data from 336 resolved cases.

Office Action

§101 §103 §112
DETAILED ACTION

Status of the Application

The following is a Final Office Action. In response to Examiner's communication of July 25, 2025, Applicant, on November 25, 2025, amended claims 1, 7, 8, & 10, canceled claims 2, 9, & 13, and added claim 14. Claim 11 was previously canceled. Claims 1, 3-8, 10, 12, & 14 are now pending in this application and have been rejected below.

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Response to Amendment

Applicant's amendments are sufficient to overcome the 35 USC 112, second paragraph, rejections set forth in the previous action. Therefore, these rejections are withdrawn. Applicant's amendments are not sufficient to overcome the 35 USC 101 rejections set forth in the previous action. Therefore, these rejections are updated as necessitated by Applicant's amendments and maintained below. Applicant's amendments are not sufficient to overcome the 35 USC 103 rejections set forth in the previous action. Therefore, these rejections are updated as necessitated by Applicant's amendments and maintained below.

Response to Arguments - 35 USC § 101

Applicant's arguments with respect to the 35 USC 101 rejections have been fully considered, but they are not persuasive.
Applicant argues the claims are not directed to abstract planning or mental processes because: embodiments are directed to a concrete physical system of a water distribution network consisting of water purification plants, water supply plants, pipes, and pumps; the data handled are time-series sensor measurements of power consumption, water level, pressure, and demand; the process is a technical optimization that simultaneously determines pump ON/OFF (binary operation pattern) and pipe flow rate while satisfying hydraulic and operational constraints such as flow conservation at each node, pressure range at each site, upper and lower limits of storage volume, non-use of construction zones, and a certain percentage of supply volume exceeding demand at discrete time intervals; and the process controls pump operation in accordance with the generated operation plan. Therefore, Applicant argues, the claims cannot be performed with paper and pencil or in the mind.

Examiner respectfully disagrees. Pursuant to the 2019 Revised Patent Subject Matter Eligibility Guidance, in order to determine whether a claim is directed to an abstract idea, under Step 2A, we first (1) determine whether the claims recite limitations, individually or in combination, that fall within the enumerated subject matter groupings of abstract ideas (mathematical concepts, certain methods of organizing human activity, or mental processes), and (2) determine whether any additional elements beyond the recited abstract idea, individually and as an ordered combination, integrate the judicial exception into a practical application. 84 Fed. Reg. 52, 54-55.
Next, if a claim (1) recites an abstract idea and (2) does not integrate that exception into a practical application, in order to determine whether the claim recites an "inventive concept," under Step 2B, we then determine whether any of the additional elements beyond the recited abstract idea, individually and in combination, are significantly more than the abstract idea itself. 84 Fed. Reg. 56.

Here, pursuant to prong 1 of Step 2A, claim 1, and similarly claims 3-8, 12, & 14, recites "an acquisition process comprising acquiring target data regarding a target water distribution plan; a generation process comprising generating an operation plan regarding the target water distribution plan by solving an optimization problem that uses: (i) a cost function determined by inverse reinforcement learning which uses reference data regarding a reference water distribution plan; and (ii) the target data acquired in the acquisition process; and … in accordance with the generated operation plan, wherein the cost function includes cost terms including variables corresponding to respective items included in the reference data, and wherein in the generation process, the at least one processor generates the operation plan regarding the target water distribution plan by solving the optimization problem which uses the cost function, in which the target data acquired in the acquisition process is regarded as a fixed variable, and in which a variable that is among the variables included in the cost terms included in the cost function and that is different from the fixed variable, is regarded as a manipulated variable." Claims 1, 3-8, 10, 12, & 14, in view of the claim limitations, recite the abstract idea of a process for generating a water distribution plan by acquiring target data and reference data for a water distribution plan and generating a plan by solving an optimization problem using the reference data, the target data, and a cost function comprising fixed and manipulated
variables corresponding to respective items included in the reference data. A claim recites mental processes when the claim recites concepts performed in the human mind (including an observation, evaluation, judgment, or opinion), wherein if the claim, under its broadest reasonable interpretation, covers the claim being practically performed in the mind but for the recitation of generic computer components, then the claim is in the mental process category. 84 Fed. Reg. 52 n.14.

Here, as a whole, in view of the claim limitations, but for the computer components and systems performing the claimed functions, and despite Applicant's assertions to the contrary, the broadest reasonable interpretation of the recited process for generating a water distribution plan by acquiring target data and reference data for a water distribution plan and generating a plan by solving an optimization problem using the reference data, the target data, and a cost function comprising fixed and manipulated variables corresponding to respective items included in the reference data could all, including the limitations referred to by Applicant, be reasonably interpreted as a human making observations of information regarding target data and reference data for a water distribution plan, and a human using judgment and performing an evaluation based on the observed information to generate a plan by optimizing a problem using a cost function and the reference and target data manually and/or with a pen and paper. Therefore, the claims, including the limitations referred to by Applicant, recite mental processes.
In addition, a claim recites certain methods of organizing human activity when the claim recites fundamental economic principles or practices (including hedging, insurance, mitigating risk), commercial or legal interactions (including agreements in the form of contracts, legal obligations, advertising, marketing or sales activities or behaviors, business relations), managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions). 84 Fed. Reg. at 52. Here, each of these above limitations together, including the limitations referred to by Applicant, recite a process to generate a plan that optimizes costs and benefits of a water distribution business, which manages business interactions, fundamental economic practice, and sales activity of water distribution businesses. Therefore, the claims, including the limitations referred to by Applicant, recite certain methods of organizing human activity. Accordingly, since the claims recite a certain method of organizing human activity and mental processes, the claims recite an abstract idea under the first prong of Step 2A. 
Applicant argues the alleged abstract calculations are not simply performed by computers but are integrated into practical applications because: embodiments perform optimization to satisfy specific physical and safety constraints such as threshold range of storage volume, excess demand ratio, flow conservation at each node, pressure range at each site, and exclusion of construction zones; as a result, a pump operation pattern P_k,l(t) (0/1) and a pipe flow rate q_i(t) at each time interval are generated; this schedule is output in a form executable by SCADA and is a sequence of facility control commands intended to drive actual pumps; in other words, the technical flow of measurement, evaluation function construction, constrained optimization, and control output is inseparably linked to the operation of physical infrastructure, rather than mere data analysis and result display.

Examiner respectfully disagrees. It is noted that the features upon which Applicant relies (i.e., satisfying specific physical and safety constraints such as threshold range of storage volume, excess demand ratio, flow conservation at each node, pressure range at each site, and exclusion of construction zones; generating, as a result, a pump operation pattern P_k,l(t) (0/1) and a pipe flow rate q_i(t) at each time interval; and outputting this schedule in a form executable by SCADA as a sequence of facility control commands intended to drive actual pumps) are not recited in the rejected claim(s). Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993). The claims merely generate a generic optimization plan, and the plan is not necessarily in a form executable by SCADA or actual pumps.
In view of the specification (see the 35 USC 112(a) rejection set forth below), the claimed invention does not actually control actual pumps, but rather, at best, the claims generate a plan with variables controlled based on an operation rule, such as a pump threshold, and this data is merely indicative of a history of decisions made by a person who prepared a reference operation plan in the past. Spec. [0047]. Merely generating a generic plan that simply includes variables, such as a threshold for a pump, that were decided by a person in the past is not the same as the invention itself actually driving or controlling the actual pumps. Furthermore, satisfying specific physical and safety constraints such as threshold range of storage volume, excess demand ratio, flow conservation at each node, pressure range at each site, and exclusion of construction zones, with the result that a pump operation pattern P_k,l(t) (0/1) and a pipe flow rate q_i(t) at each time interval are generated, are abstract mental processes that can all be performed mentally and are a certain method of organizing human activity for the reasons set forth above.

The only additional elements beyond the recited abstract idea actually recited in the claims are the recitations of "[a]n information processing apparatus comprising at least one processor, the at least one processor being configured to execute …," "inverse reinforcement learning," and "a control process comprising controlling pump operation" in claim 1, and similarly claim 10. Individually and when viewed as an ordered combination, and pursuant to the broadest reasonable interpretation, each of the additional elements is a computing element recited at a high level of generality implementing the abstract idea on a computer (i.e., apply it), and thus, the elements are no more than applying the abstract idea with generic computer components, which is not sufficient to integrate an abstract idea into a practical application.
Furthermore, these additional elements merely generally link the abstract idea to a field of use or technical environment, namely a generic apparatus to generically implement generic inverse reinforcement learning and generically control a generic pump, which is also not sufficient to integrate an abstract idea into a practical application.

Applicant argues that even if abstract elements are included, claim-specific additional elements and their ordered combinations constitute "significantly more" because: weights a_i are identified by inverse reinforcement learning from the behavior history of reference data (pump operation, valves, personnel), reduced to a weighted linear cost of c = Σ a_i · f_i(x_i), minimized under hydraulic and operational constraints; the simultaneously optimized binary pump operation patterns and pipe flow rates are output as time-series commands executable by SCADA; this integration goes beyond the mere application of general-purpose computer calculations and work planning, and technically improves the balance between supply safety, energy efficiency, and maintenance by quantifying the intent of experts as weights; and since there is no evidence that such integration (IRL weights + optimization with hydraulic constraints + binary schedules + SCADA linkage) has been in common use in the industry, the claims should be recognized as more than "mere application."

Examiner respectfully disagrees. As above, the features upon which Applicant relies (i.e., weights a_i identified by inverse reinforcement learning from the behavior history of reference data (pump operation, valves, personnel), reduced to a weighted linear cost of c = Σ a_i · f_i(x_i), minimized under hydraulic and operational constraints, with simultaneously optimized binary pump operation patterns and pipe flow rates output as time-series commands executable by SCADA) are not recited in the rejected claim(s).
Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993). The claims merely generate a plan by solving an optimization problem that uses a cost function determined by inverse reinforcement learning which uses reference data regarding a reference water distribution plan and target data, and the plan is not necessarily in a form executable by SCADA or actual pumps. In view of the specification (see the 35 USC 112(a) rejection set forth below), the claimed invention does not actually control actual pumps, but rather, at best, the claims generate a plan with variables controlled based on an operation rule, such as a pump threshold, and this data is merely indicative of a history of decisions made by a person who prepared a reference operation plan in the past. Spec. [0047]. Merely generating a generic plan that simply includes variables, such as a threshold for a pump, that were decided by a person in the past is not the same as the invention itself actually driving or controlling the actual pumps. Furthermore, identifying weights from the behavior history of reference data (pump operation, valves, personnel), reducing them to a weighted linear cost of c = Σ a_i · f_i(x_i), minimizing under hydraulic and operational constraints, and simultaneously optimizing binary pump operation patterns and pipe flow rates are abstract mathematical concepts comprising mathematical relationships, calculations, and equations; abstract mental processes that can all be performed mentally; and a certain method of organizing human activity, for the reasons set forth above.
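As a readability aid, the weighted linear cost and constrained minimization described in Applicant's argument can be sketched in standard notation; the symbols and the split into fixed and manipulated variables below are illustrative assumptions for this sketch, not quotations from the claims or the specification:

```latex
% Illustrative sketch (assumed notation), not a quotation from the record.
% Cost as a weighted sum of feature terms, with weights a_i determined by
% inverse reinforcement learning from reference (expert) data:
c(x) = \sum_i a_i \, f_i(x_i)
% Plan generation as minimization over the manipulated variables x_m,
% with the acquired target data x_f held fixed, subject to the hydraulic
% and operational constraint set C:
\min_{x_m} \; c(x_f, x_m) \quad \text{subject to} \quad (x_f, x_m) \in C
```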
As noted above, the only additional elements beyond the recited abstract idea actually recited in the claims are the recitations of "[a]n information processing apparatus comprising at least one processor, the at least one processor being configured to execute …," "inverse reinforcement learning," and "a control process comprising controlling pump operation" in claim 1, and similarly claim 10. Individually and when viewed as an ordered combination, and pursuant to the broadest reasonable interpretation, each of the additional elements is a computing element recited at a high level of generality implementing the abstract idea on a computer (i.e., apply it), and thus, the elements are no more than applying the abstract idea with generic computer components, which is not sufficient to amount to significantly more than an abstract idea. Furthermore, these additional elements merely generally link the abstract idea to a field of use or technical environment, namely a generic apparatus to generically implement generic inverse reinforcement learning and generically control a generic pump, which is also not sufficient to amount to significantly more than an abstract idea.

Additionally, these recitations, as an ordered combination, simply append the abstract idea to recitations of generic computer structure performing generic computer functions that are well-understood, routine, and conventional in the field, as evinced by Applicant's Specification at [0088]-[0090] (describing that a part or all of the functions of each of the information processing apparatuses may be realized by a computer including at least one processor, memory, and a program in the memory for causing the computer to operate as each of the information processing apparatuses, wherein the processor can be a CPU), which describes the invention's components at such a high level of generality that the specification does not provide support for these components to be anything beyond well-understood, routine, and conventional.
Furthermore, as an ordered combination, these elements amount to generic computer components performing repetitive calculations and receiving or transmitting data over a network, which, as held by the courts, are well-understood, routine, and conventional. See MPEP 2106.05(d); July 2015 Update, p. 7. Looking at these limitations as an ordered combination adds nothing additional that is sufficient to amount to significantly more than the recited abstract idea because they simply provide instructions to use a generic arrangement of generic computer components and recitations of generic computer structure that perform well-understood, routine, and conventional computer functions that are used to "apply" the recited abstract idea.

Response to Arguments - 35 USC § 103

Applicant's arguments with respect to the 35 USC 103 rejections have been fully considered, but they are not persuasive. Applicant argues that Higa does not teach or suggest: "... wherein the cost function includes cost terms including variables corresponding to respective items included in the reference data, and wherein in the generation process, the at least one processor generates the operation plan regarding the target water distribution plan by solving the optimization problem which uses the cost function, in which the target data acquired in the acquisition process is regarded as a fixed variable, and in which a variable that is among the variables included in the cost terms included in the cost function and that is different from the fixed variable, is regarded as a manipulated variable," as recited in claim 1, and similarly claim 10, because, unlike amended claim 1, Higa discloses that, by accumulating expert data regarding water supply infrastructure W1 and performing sequential reward learning, an adapted first model can be generated, and by correcting the first model by the correction model, a second model may be adapted to the system such as water supply infrastructure W2 to W5 (see, e.g., Higa [0139]); however, Higa does not teach or suggest the above-identified features.

Examiner respectfully disagrees. Firstly, Higa discloses "wherein the cost function includes cost terms including variables corresponding to respective items included in the reference data" in paragraphs [0038], [0044]-[0046], [0048], wherein the expert data set 110 is behavior data such as a combination of the action 103 in the skilled agent 102 and the state 104 in the system A 100, and at that time, the sequential reward learning unit 310 performs sequential reward learning of the model A 342 in the system A 100 using the expert dataset 110, which includes a process of updating the design of the reward function based on the imitation learning and the designed reward function to learn a policy function that outputs the action 103 to be performed by the agent 102 according to the state 104 of the target environment 101; in paragraph [0050], which discusses that the state is s and the action is a in the reward function r(s, a); and in [0100]-[0102], [0104], [0107], [0109], fig. 8, wherein, in the adaptation method according to the third embodiment, the sequential reward learning unit 310 adapts the model A 342 to system A 100 by sequential reward learning using the expert data set 110 (S21), the model correction unit 320a extracts the reward function r(s, a) as an evaluation reference formula from the model A 342 (S22a), which can be expressed as Equation 16 [reproduced as an image in the original action], and corrects the parameter of the evaluation reference expression using correction model 343a to generate model B 345 (S22b), e.g., the correction model 343 is generated based on conditions B 344, e.g., the model correction unit 320a adds the correction parameter delta to the reward function, and thereafter, the adaptation unit 330a operates the system B 200 using the model B 345 (S23).
Here, the state s and the action a of the reward function extracted as an evaluation function in Equation 16 are variables corresponding to the state and action data of the expert data. Further, Higa discloses "wherein in the generation process, the at least one processor generates the operation plan regarding the target water distribution plan by solving the optimization problem which uses the cost function" in paragraphs [0045]-[0047], [0050]: the sequential reward learning unit 310 of the present embodiment performs learning of the reward function through sequential reward learning of the policy function using the expert dataset 110, and when the policy function is learned to be ideal, the policy function outputs the optimal action a to be performed by the agent in accordance with the state s of the target environment. Here, the process solves the optimization problem which uses the cost function by outputting the optimal action a to be performed by the agent in accordance with the state s using the reward function discussed above.
Moreover, Higa discloses that the optimization uses "the cost function in which the target data acquired in the acquisition process is regarded as a fixed variable, and in which a variable that is among the variables included in the cost terms included in the cost function and that is different from the fixed variable, is regarded as a manipulated variable" in paragraphs [0138]-[0139]: when the water service infrastructure is captured as a system, the state is represented by a variable describing dynamics of a network that cannot be explicitly operated by an operator, such as the voltage, water level, pressure, and water amount of each base; the action to be performed by the agent needs to supply water, and therefore, the behavior is represented by a variable that can be controlled on the basis of an operation rule such as opening/closing of the valve, drawing of water, and a threshold of the pump; and where the city water infrastructure W1 is a city water infrastructure of a city water station in a certain area, the operation by the skilled staff in the water service infrastructure W1 and the state of the environment at that time can be said to be expert data, the second model can be generated by correcting the first model by the correction model by the model correction unit 320a or the like, and here, the water service infrastructure W2 to W5 is a condition that is a region different from the city water infrastructure W1 or a future downsizing target. Here, Higa expressly discloses that the state s, which as disclosed above is a variable of the reward/cost function, is represented by a variable "that cannot be explicitly operated by an operator," and the action a, which as disclosed above is also a variable of the reward/cost function and also an output of the policy function, is a behavior represented by "a variable that can be controlled."
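The correspondence the Examiner draws between Higa's state/action variables and the claimed fixed/manipulated variables can likewise be sketched; the notation below is an illustrative assumption, not language from Higa or from the claims:

```latex
% Illustrative sketch (assumed notation) of the asserted mapping.
% Higa's reward function over state s and action a:
r(s, a)
% The state s (water level, pressure, etc.) maps to the claimed fixed
% variable; the action a (valve opening/closing, pump threshold) maps
% to the claimed manipulated variable. The policy outputs the optimal
% action with the observed state held fixed:
a^{*} = \arg\max_{a} \; r(s, a) \qquad \text{with } s \text{ fixed}
```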
Further, one of skill in the art would understand that Higa's disclosure that the policy function outputs the optimal action a to be performed by the agent in accordance with the state s of the target environment discloses that the action is manipulatable and the state remains fixed.

Claim Rejections - 35 USC § 112, First Paragraph

The following is a quotation of the first paragraph of 35 U.S.C. 112(a): (a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112: The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 1, 3-8, 12, & 14 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for pre-AIA the inventor(s), at the time the application was filed, had possession of the claimed invention.
Claim 1 recites "a control process comprising controlling pump operation in accordance with the generated operation plan." However, Applicant's specification does not expressly or inherently support a control process comprising controlling pump operation in accordance with the generated operation plan, as the claims require. In order to satisfy the written description requirement, each claim limitation must be expressly or inherently supported by the disclosure. MPEP 2163 (emphasis added). "The 'written description' requirement implements the principle that a patent must describe the technology that is sought to be patented; the requirement serves both to satisfy the inventor's obligation to disclose the technologic knowledge upon which the patent is based, and to demonstrate that the patentee was in possession of the invention that is claimed." Capon v. Eshhar, 76 USPQ2d 1078, 1084 (Fed. Cir. 2005). Further, the written description requirement promotes the progress of the useful arts by ensuring that patentees adequately describe their inventions in their patent specifications in exchange for the right to exclude others from practicing the invention for the duration of the patent's term. See MPEP 2163 (emphasis added). For claims directed toward computer-implemented functions, like the presently claimed invention, "[i]f the specification does not provide a disclosure of the computer and algorithm in sufficient detail to demonstrate to one of ordinary skill in the art that the inventor possessed the invention including how to program the disclosed computer to perform the claimed function, a rejection under 35 U.S.C. 112(a) or pre-AIA 35 U.S.C. 112, first paragraph, for lack of written description must be made." MPEP 2161.01 (emphasis added).
It is not enough that one skilled in the art could write a program to achieve the claimed function because the written description requirement requires that the specification explain how the inventor intends to achieve the claimed function. Examining Claims for Compliance with 35 USC 112(a) - PowerPoint of Computer Based Training, Slides 20 & 21 (emphasis added), available at http://www.uspto.gov/sites/default/files/documents/uspto_112a_part1_17aug2015.pptx. The ability of one skilled in the art to make and use the claimed invention does not satisfy the written description requirement if details of how the function is to be performed are not disclosed. Id. at Slide 20.

With respect to the recitation of "a control process comprising controlling pump operation in accordance with the generated operation plan," nothing in the Specification expressly or inherently requires a control process comprising controlling pump operation in accordance with the generated operation plan, as the claims require. The control discussed in the Specification is not the invention controlling pump operation in accordance with the generated operation plan; instead, "reference data RD includes data represented by a variable(s) that is/are controlled on the basis of an operation rule, such as valve opening and closing, drawing in of water, and/or a pump threshold," and "[s]uch data can be said to be data indicative of a history of decision making by, for example, a skilled person who has prepared a reference operation plan." [0047]. Similarly, in another portion of the Specification, "the action data is represented by a variable(s) that is/are controlled on the basis of an operation rule, such as valve opening and closing, drawing in of water, and/or a pump threshold." [0020]. That is, the control discussed in the Specification is merely a control of variables indicative of a history of decision making by a person who prepared a plan in the past.
Simply controlling variables in a set of reference data or a plan for a pump is not the same as the invention actually controlling a pump. Further, the "control section" disclosed by the Specification does not control a pump; rather, the Specification merely discloses "various pieces of information under control by the control section 10A" and that the "control section 10A includes an acquisition section 11A, a generation section 12A, and a determination section 22A as illustrated in Fig. 5." [0036]-[0037]. Here, the specification does not disclose that the control section controls a pump, only that it controls "various pieces of information" and that it includes "an acquisition section[], a generation section[], and a determination section," which are not pumps. For the reasons set forth above, although the Specification discusses controlling mere variables in a plan and a control section that controls merely information and includes acquisition, generation, and determination sections, the Specification does not inherently nor expressly support a control process comprising controlling pump operation in accordance with the generated operation plan, as required by the claims. Claims 3-8, 12, & 14 depend on claim 1 and do not cure the aforementioned deficiencies, and thus, these claims are rejected for the reasons set forth above.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title. Claims 1, 3-8, 10, 12, & 14 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Claim 1, and similarly claims 3-8, 12, & 14, recites “an acquisition process comprising acquiring target data regarding a target water distribution plan; a generation process comprising generating an operation plan regarding the target water distribution plan by solving an optimization problem that uses: (i) a cost function determined by inverse reinforcement learning which uses reference data regarding a reference water distribution plan; and (ii) the target data acquired in the acquisition process; and … in accordance with the generated operation plan, wherein the cost function includes cost terms including variables corresponding to respective items included in the reference data, and wherein in the generation process, the at least one processor generates the operation plan regarding the target water distribution plan by solving the optimization problem which uses the cost function, in which the target data acquired in the acquisition process is regarded as a fixed variable, and in which a variable that is among the variables included in the cost terms included in the cost function and that is different from the fixed variable, is regarded as a manipulated variable.” Claims 1, 3-8, 10, 12, & 14, in view of the claim limitations, recite the abstract idea of a process for generating a water distribution plan by acquiring target data and reference data for a water distribution plan and generating a plan by solving an optimization problem using the reference data, the target data, and a cost function comprising fixed and manipulated variables corresponding to respective items included in the reference data.
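For a concrete picture of the generation process recited above, the following sketch shows one way an optimization over a fixed target-data variable and a manipulated variable could look. This is the editor's illustration only: the quadratic cost form, the variable names (`demand_fixed`, `pump_setting`), and all numbers are assumptions, not taken from the application or the cited art.

```python
# Illustrative sketch only: a cost function with terms whose variables
# correspond to items in the reference data (supply error, pumping effort).
# The target data (a demand figure) is held as a fixed variable; the pump
# setting is the manipulated variable. Names and cost form are hypothetical.

def cost(demand_fixed, pump_setting, weights):
    # Cost terms with variables corresponding to reference-data items.
    supply_error = (pump_setting - demand_fixed) ** 2  # meet the demand
    energy = pump_setting ** 2                         # pumping effort
    return weights["supply"] * supply_error + weights["energy"] * energy

def generate_operation_plan(demand_fixed, weights, lr=0.01, steps=5000):
    # Solve the optimization problem over the manipulated variable only;
    # the acquired target data stays fixed throughout.
    u = 0.0
    for _ in range(steps):
        grad = (2 * weights["supply"] * (u - demand_fixed)
                + 2 * weights["energy"] * u)
        u -= lr * grad
    return u

weights = {"supply": 1.0, "energy": 0.5}  # e.g., learned from reference data
plan = generate_operation_plan(10.0, weights)
```

For this assumed quadratic cost the minimizer is available in closed form (u* = d·w_s/(w_s + w_e)); the gradient-descent loop stands in for whatever solver an actual embodiment would use.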
As a whole, in view of the claim limitations, but for the computer components and systems performing the claimed functions, the broadest reasonable interpretation of the recited process for generating a water distribution plan by acquiring target data and reference data for a water distribution plan and generating a plan by solving an optimization problem using the reference data, the target data, and a cost function comprising fixed and manipulated variables corresponding to respective items included in the reference data could reasonably be interpreted as a human making observations of information regarding target data and reference data for a water distribution plan, and a human using judgment and performing an evaluation based on the observed information to generate a plan by optimizing a problem using a cost function and the reference and target data manually and/or with a pen and paper; therefore, the claims recite mental processes. In addition, each of the above limitations manages business interactions, fundamental economic practices, and sales activities of water distribution businesses generating a water distribution plan while optimizing costs and benefits; thus, the claims recite certain methods of organizing human activity.
Further, with respect to the dependent claims, aside from the additional elements beyond the recited abstract idea addressed below under the second prong of Step 2A and under Step 2B, the limitations of dependent claims 3-8, 12, & 14 recite similar further abstract limitations to those discussed above that narrow the abstract idea recited in the independent claims because, aside from the computer components and systems performing the claimed functions, the limitations of these claims recite mental processes that can be practically performed mentally by observing, evaluating, and judging information mentally and/or with a pen and paper, and recite a certain method of organizing human activity that manages business interactions. Accordingly, since the claims recite a certain method of organizing human activity and mental processes, the claims recite an abstract idea under the first prong of Step 2A. This judicial exception is not integrated into a practical application under the second prong of Step 2A. In particular, the claims recite the additional elements beyond the recited abstract idea of “[a]n information processing apparatus comprising at least one processor, the at least one processor being configured to execute …,” “inverse reinforcement learning,” and “a control process comprising controlling pump operation” in claim 1 and “[a]n information processing method comprising …” and “inverse reinforcement learning” in claim 10; however, individually and when viewed as an ordered combination, and pursuant to the broadest reasonable interpretation, each of the additional elements is a computing element recited at a high level of generality implementing the abstract idea on a computer (i.e., apply it), and thus, amounts to no more than applying the abstract idea with generic computer components.
Further, these additional elements merely generally link the abstract idea to a field of use or technical environment, namely a generic apparatus to generically implement generic inverse reinforcement learning and generically control a generic pump. Moreover, aside from the aforementioned additional elements, the remaining elements of dependent claims 3-8, 12, & 14 do not integrate the abstract idea into a practical application because these claims merely recite further limitations that provide no more than simply narrowing the recited abstract idea. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception under Step 2B. As noted above, the aforementioned additional elements beyond the recited abstract idea, as an ordered combination, are no more than mere instructions to implement the idea using generic computer components (i.e., apply it), and further, generally link the abstract idea to a field of use, which is not sufficient to amount to significantly more than an abstract idea; therefore, the additional elements are not sufficient to amount to significantly more than an abstract idea. Additionally, these recitations, as an ordered combination, simply append the abstract idea to recitations of generic computer structure performing generic computer functions that are well-understood, routine, and conventional in the field as evinced by Applicant’s Specification at [0088]-[0090] (describing that a part or all of the functions of each of the information processing apparatuses may be realized by a computer including at least one processor, memory, and a program in the memory for causing the computer to operate as each of the information processing apparatuses, wherein the processor can be a CPU), which describes the invention's components at such a high level of generality that the Specification does not provide support for these components to be anything beyond well-understood, routine, and conventional.
Furthermore, as an ordered combination, these elements amount to generic computer components performing repetitive calculations and receiving or transmitting data over a network, which, as held by the courts, are well-understood, routine, and conventional. See MPEP 2106.05(d); July 2015 Update, p. 7. Moreover, aside from the aforementioned additional elements, the remaining elements of dependent claims 3-8, 12, & 14 do not transform the recited abstract idea into a patent-eligible invention because these claims merely recite further limitations that provide no more than simply narrowing the recited abstract idea. Looking at these limitations as an ordered combination adds nothing additional that is sufficient to amount to significantly more than the recited abstract idea because they simply provide instructions to use a generic arrangement of generic computer components and recitations of generic computer structure that perform well-understood, routine, and conventional computer functions that are used to “apply” the recited abstract idea. Thus, the elements of the claims, considered both individually and as an ordered combination, are not sufficient to ensure that the claims as a whole amount to significantly more than the abstract idea itself. Since there are no limitations in these claims that transform the exception into a patent-eligible application such that these claims amount to significantly more than the exception itself, claims 1, 3-8, 10, 12, & 14 are rejected under 35 U.S.C. 101 as being directed to non-statutory subject matter. Claim Rejections - 35 USC § 103 The following is a quotation of 35 U.S.C.
103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claims 1, 3-8, 10, 12, & 14 are rejected under 35 U.S.C. 103 as being unpatentable over Higa, et al. (WO 2020065808 A1), hereinafter Higa, in view of Wee, et al. (US 20190196419 A1), hereinafter Wee. Regarding claim 1, Higa discloses an information processing apparatus comprising at least one processor, the at least one processor being configured to execute ([0029]-[0032], [0078]): an acquisition process comprising acquiring target data ([0038], [0045], wherein the expert data set 110 is behavior data such as a combination of the action 103 in the skilled agent 102 in the system A 100 and the state 104 at that time, and the sequential reward learning unit 310 performs sequential reward learning of the model A 342 in the system A 100 using the expert dataset 110, [0011], [0021], [0024]-[0025], in a model adaptation method, a first model adapted to a first system operated based on a first condition (the environment of the first system and the agent) is created using expert data, and the second model is adapted to a second system operated based on a second condition that is different in at least one of a specific environment or a specific agent included in the first condition (i.e.
Examiner interprets the target data and the reference data to include the disclosed expert data and first conditions including the agent and state of a first system and the different second conditions of the second system including the different environment and/or agent)) regarding a target water distribution plan ([0135], [0138]-[0139], an application example of the embodiment above is a water infrastructure, wherein the target environment is represented as a set of the state of the water infrastructure (e.g. the water distribution network, the capacity of the pump, the state of the water distribution pipe, etc.) and the agent corresponds to an operator who performs the action based on the decision making and an external system, and where the city water infrastructure W1 is a city water infrastructure of a city water station in a certain area, the operation by the skilled staff in the water service infrastructure W1 and the state of the environment at that time can be said to be expert data, the second model can be generated by correcting the first model by the correction model by the model correction unit 320a or the like, and here, the water service infrastructure W2 to W5 is a condition that is a region different from the city water infrastructure W1 or a future downsizing target); a generation process comprising generating an operation plan regarding the target water distribution plan by solving an optimization problem that uses: (i) a cost function determined by … reinforcement learning ([0045]-[0046], [0048], the sequential reward learning unit 310 performs sequential reward learning of the model A 342 in the system A 100 using the expert dataset 110, wherein the "sequential reward learning" is a method including a process of updating the design of the reward function based on the imitation learning and the designed reward function, first the sequential reward learning unit 310 generates a policy function by sequential reward learning, wherein the policy function
is a function outputting the action 103 to be performed by the agent 102 according to the state 104 of the target environment 101, when the policy function is learned to be ideal, the policy function outputs the optimal action to be performed by the agent in accordance with the state of the target environment, and the sequential reward learning unit 310 of the present embodiment performs learning of the reward function through sequential reward learning of the policy function, [0100]-[0102], [0107], [0109], fig. 8, in the adaptation method according to the third embodiment, the sequential reward learning unit 310 adapts the model A 342 to system A 100 by sequential reward learning using the expert data set 110 (S 21), the model correction unit 320a extracts an evaluation reference formula from the model A 342 (S 22a) and corrects the parameter of the evaluation reference expression using correction model 343a to generate model B 345 (S 22b), e.g., the correction model 343a is generated based on conditions B 344, e.g., the model correction unit 320a adds the correction parameter delta to the reward function, and thereafter, the adaptation unit 330a operates the system B 200 using the model B 345 (S 23)) which uses reference data regarding a reference water distribution plan; and (ii) the target data acquired in the acquisition process (Examiner interprets the target data and the reference data to include the disclosed expert data and first conditions including the agent and state of a first system and the different second conditions of the second system including the different environment and/or agent, disclosed in, [0045]-[0046], [0048], the sequential reward learning unit 310 performs sequential reward learning of the model A 342 in the system A 100 using the expert dataset 110, [0011], [0021], [0024]-[0025], in a model adaptation method, a first model adapted to a first system operated based on a first condition (the environment of the first system and the agent)
is created using expert data, and the second model is adapted to a second system operated based on a second condition that is different in at least one of a specific environment or a specific agent included in the first condition, [0139], where the city water infrastructure W1 is a city water infrastructure of a city water station in a certain area, the operation by the skilled staff in the water service infrastructure W1 and the state of the environment at that time can be said to be expert data, the second model can be generated by correcting the first model by the correction model by the model correction unit 320a or the like, and here, the water service infrastructure W2 to W5 is a condition that is a region different from the city water infrastructure W1 or a future downsizing target, and the adaptation unit 330a or the like can achieve highly accurate control in various areas or conditions); and a control process comprising controlling pump operation ([0138], when the water supply infrastructure is considered as a system, the action that the agent should take is to supply water to the demand area on the water distribution network without excess or shortage, and the behavior is represented by variables that can be controlled, such as threshold values for pumps) in accordance with the generated operation plan ([0046]-[0047], when the policy function is learned to be ideal, the policy function outputs the optimal action to be performed by the agent in accordance with the state of the target environment, wherein the policy function mimics the behavior data within which the state vector s and the action a are associated with each other), wherein the cost function includes cost terms including variables corresponding to respective items included in the reference data ([0038], [0044]-[0046], [0048], wherein the expert data set 110 is behavior data such as a combination of the action 103 in the skilled agent 102 in the system A 100 and the state 104 at that
time, the sequential reward learning unit 310 performs sequential reward learning of the model A 342 in the system A 100 using the expert dataset 110, which includes a process of updating the design of the reward function based on the imitation learning and the designed reward function to learn a policy function, which outputs the action 103 to be performed by the agent 102 according to the state 104 of the target environment 101, and paragraph [0050], which discusses that the state is s and the action is a in the reward function r(s, a), [0100]-[0102], [0104], [0107], [0109], fig. 8, in the adaptation method according to the third embodiment, the sequential reward learning unit 310 adapts the model A 342 to system A 100 by sequential reward learning using the expert data set 110 (S 21), the model correction unit 320a extracts the reward function r(s, a) as an evaluation reference formula from the model A 342 (S 22a), which can be expressed as equation 16 (reproduced as an image in the original Office Action), and corrects the parameter of the evaluation reference expression using correction model 343a to generate model B 345 (S 22b), e.g., the correction model 343a is generated based on conditions B 344, e.g., the model correction unit 320a adds the correction parameter delta to the reward function, and thereafter, the adaptation unit 330a operates the system B 200 using the model B 345 (S 23), [0011], [0021], [0024]-[0025], in a model adaptation method, a first model adapted to a first system operated based on a first condition (the environment of the first system and the agent) is created using expert data, and the second model is adapted to a second system operated based on a second condition that is different in at least one of a specific environment or a specific agent included in the first condition), and wherein in the generation process, the at least one processor generates the operation plan regarding the target water distribution plan by solving the
optimization problem which uses the cost function ([0045]-[0047], the sequential reward learning unit 310 performs sequential reward learning of the model A 342 in the system A 100 using the expert dataset 110, when the policy function is learned to be ideal, the policy function outputs the optimal action to be performed by the agent in accordance with the state of the target environment, and the sequential reward learning unit 310 of the present embodiment performs learning of the reward function through sequential reward learning of the policy function), in which the target data acquired in the acquisition process is regarded as a fixed variable, and in which a variable that is among the variables included in the cost terms included in the cost function and that is different from the fixed variable, is regarded as a manipulated variable ([0138]-[0139], when the water service infrastructure is captured as a system, the state is represented by a variable describing dynamics of a network that cannot be explicitly operated by an operator, such as the voltage, water level, pressure, and water amount of each base, and the behavior is represented by a variable that can be controlled on the basis of an operation rule such as opening/closing of the valve, drawing of water, and a threshold of the pump, and where the city water infrastructure W1 is a city water infrastructure of a city water station in a certain area, the operation by the skilled staff in the water service infrastructure W1 and the state of the environment at that time can be said to be expert data, the second model can be generated by correcting the first model by the correction model by the model correction unit 320a or the like, and here, the water service infrastructure W2 to W5 is a condition that is a region different from the city water infrastructure W1 or a future downsizing target).
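The cost-function learning discussed in the mapping above, i.e., deriving a reward or cost function from expert behavior data, can be pictured with a toy sketch. This is the editor's illustration only, assuming a quadratic cost w_s·(u − d)² + w_e·u², under which an optimal expert chooses u = d·w_s/(w_s + w_e); the names, data, and cost form are hypothetical and come from neither Higa, Wee, nor the application.

```python
# Toy inverse-reinforcement-learning-style sketch (hypothetical): recover the
# weights of an assumed quadratic cost w_s*(u - d)**2 + w_e*u**2 from expert
# behavior data, i.e. (demand d, chosen pump setting u) pairs. Under that
# cost, an optimal expert picks u = d * w_s / (w_s + w_e), so the ratio
# k = w_s / (w_s + w_e) is identifiable from the data by least squares.

def fit_cost_weights(expert_pairs):
    # Least-squares slope of u against d over the expert (d, u) pairs.
    num = sum(d * u for d, u in expert_pairs)
    den = sum(d * d for d, u in expert_pairs)
    k = num / den
    # Cost functions are invariant to scale, so normalize w_s + w_e = 1.
    return {"supply": k, "energy": 1.0 - k}

# Expert data implicitly generated with w_s = 1, w_e = 0.5 (k = 2/3),
# i.e. the skilled operator always chose u = (2/3) * d.
expert_pairs = [(6.0, 4.0), (9.0, 6.0), (12.0, 8.0)]
weights = fit_cost_weights(expert_pairs)
```

Real inverse reinforcement learning operates over sequential states and policies rather than a one-shot closed form; the sketch only conveys the direction of inference, from observed expert decisions back to a cost function.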
While Higa discloses all of the above, including a generation process comprising generating an operation plan regarding the target water distribution plan by solving an optimization problem that uses: (i) a cost function determined by … reinforcement learning which uses reference data regarding a reference water distribution plan; and (ii) the target data acquired in the acquisition process (as above), and suggests that inverse reinforcement learning is one kind of imitation learning that can be used to design a reward function ([0018]), Higa does not expressly require that the reinforcement learning in this embodiment is necessarily inverse reinforcement learning, which, however, is taught by Wee. Wee teaches a cost function determined by inverse reinforcement learning ([0033], [0035]-[0037], the expert model is a machine-learning model constructed from expert data, wherein the expert model can be constructed using machine learning techniques such as inverse reinforcement learning, and the transformer constructs metrics or error measures, or a cost function most appropriate from the predicted control actions from the expert model). Higa and Wee are in analogous fields of invention because both address the problem of generating a model for operating a system using expert data. At the time the invention was effectively filed, it would have been obvious to one of ordinary skill in the art to include in the system of Higa the ability to determine a cost function by inverse reinforcement learning as taught by Wee since the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the combination would produce the predictable results of determining a cost function by inverse reinforcement learning, as claimed.
Further, it would have been obvious to one of ordinary skill in the art to have modified Higa with the aforementioned teachings of Wee in order to produce the added benefit of optimizing operation of the control systems using expert decision making and control. [0001]-[0002]. Regarding claim 3, the combined teachings of Higa and Wee teach the information processing apparatus according to claim 1 (as above). Further, Higa discloses wherein the target data ([0038], the expert data set 110 is behavior data such as a combination of the action 103 in the skilled agent 102 in the system A 100 and the state 104 at that time, [0011], [0021], [0024]-[0025], in a model adaptation method, a first model adapted to a first system operated based on a first condition (the environment of the first system and the agent) is created using expert data, and the second model is adapted to a second system operated based on a second condition that is different in at least one of a specific environment or a specific agent included in the first condition, [0139], the operation by the skilled staff in the water service infrastructure W1 and the state of the environment at that time can be said to be expert data, and the second model can be generated by correcting the first model by the correction model by the model correction unit 320a or the like, and here, the water service infrastructure W2 to W5 is a condition that is a region different from the city water infrastructure W1 or a future downsizing target) includes information indicative of a state of a target waterworks infrastructure ([0138], when the water service infrastructure is captured as a system, the target environment is represented as a set of the state of the water infrastructure (e.g. the water distribution network, the capacity of the pump, the state of the water distribution pipe, etc.)). Regarding claim 4, the combined teachings of Higa and Wee teach the information processing apparatus according to claim 3 (as above).
Further, Higa discloses wherein the target data ([0038], the expert data set 110 is behavior data such as a combination of the action 103 in the skilled agent 102 in the system A 100 and the state 104 at that time) includes information pertaining to at least one selected from the group consisting of a pump, a water distribution network, a water pipeline, and a demand point in the target waterworks infrastructure ([0138], when the water service infrastructure is captured as a system, the behavior is represented by a variable that can be controlled on the basis of an operation rule such as opening/closing of the valve, drawing of water, and a threshold of the pump). Regarding claim 5, the combined teachings of Higa and Wee teach the information processing apparatus according to claim 3 (as above). Further, Higa discloses wherein the operation plan generated in the generation process ([0038], the behavior is a combination of the action and the state, [0046], the "policy function" outputs the action 103 to be performed by the agent 102 according to the state 104 of the target environment 101, and when the policy function is learned to be ideal, the policy function outputs the optimal action to be performed by the agent in accordance with the state of the target environment, [0049], a policy, which is a rule for selecting the action a by the agent, is represented by pi, and the probability of selecting the action a in the state s is expressed as pi(s, a) based on the policy) includes information pertaining to an operation pattern of a pump in the target waterworks infrastructure ([0136], e.g., in order to make a facility maintenance plan for improving the efficiency of business management of a water infrastructure, it is conceivable to perform down-sizing so as to reduce the amount of water by replacing the pump of a facility that supplies water excessively, [0138], the agent corresponds to an operator who performs the action, and therefore, the behavior is represented
by a variable that can be controlled on the basis of an operation rule such as opening/closing of the valve, drawing of water, and a threshold of the pump). Regarding claim 6, the combined teachings of Higa and Wee teach the information processing apparatus according to claim 3 (as above). Further, Higa discloses wherein the operation plan generated in the generation process ([0038], the behavior is a combination of the action and the state, [0046], the "policy function" outputs the action 103 to be performed by the agent 102 according to the state 104 of the target environment 101, and when the policy function is learned to be ideal, the policy function outputs the optimal action to be performed by the agent in accordance with the state of the target environment, [0049], a policy, which is a rule for selecting the action a by the agent, is represented by pi, and the probability of selecting the action a in the state s is expressed as pi(s, a) based on the policy) includes information pertaining to personnel involved in the target waterworks infrastructure ([0072], it can be said that the policy function outputs an action to be performed by the specific agent in a state represented by the state vector by inputting the output value of the reward function when the state vector is input, [0138], when the target environment is represented as a set of the state of the water infrastructure, the agent corresponds to an operator who performs the action based on the decision making and an external system, and the action to be performed by the agent is to supply water to the demand area on the water distribution network without excess or deficiency, [0139], in a diagram where a city water model of a city water infrastructure in a region is applied to another water supply station, W1 is a city water infrastructure of a city water station in a certain area, and then the operation by the skilled staff in the water service infrastructure W1 and the state of the environment at
that time can be said to be expert data). Regarding claim 7, the combined teachings of Higa and Wee teach the information processing apparatus according to claim 1 (as above). Further, while Higa discloses wherein, in the acquisition process, the at least one processor acquires the reference data, and wherein the at least one processor is further configured to carry out a determination process comprising determining the cost function by … reinforcement learning that refers to the reference data ([0045]-[0046], [0048], the sequential reward learning unit 310 performs sequential reward learning of the model A 342 in the system A 100 using the expert dataset 110, the "sequential reward learning" is a method including a process of updating the design of the reward function based on the imitation learning and the designed reward function to learn a policy function, [0011], [0021], [0024]-[0025], in a model adaptation method, a first model adapted to a first system operated based on a first condition (the environment of the first system and the agent) is created using expert data, and the second model is adapted to a second system operated based on a second condition that is different in at least one of a specific environment or a specific agent included in the first condition, [0100]-[0102], [0107], [0109], fig. 8, in the adaptation method according to the third embodiment, the sequential reward learning unit 310 adapts the model A 342 to system A 100 by sequential reward learning using the expert data set 110 (S 21), the model correction unit 320a extracts an evaluation reference formula from the model A 342 (S 22a) and corrects the parameter of the evaluation reference expression using correction model 343a to generate model B 345 (S 22b), e.g., the correction model 343a is generated based on conditions B 344, e.g., the model correction unit 320a adds the correction parameter delta to the reward function, and thereafter, the adaptation unit 330a operates the system B 200 using the model B 345 (S 23), [0139], where the city water infrastructure W1 is a city water infrastructure of a city water station in a certain area, the operation by the skilled staff in the water service infrastructure W1 and the state of the environment at that time can be said to be expert data, the second model can be generated by correcting the first model by the correction model by the model correction unit 320a or the like, and here, the water service infrastructure W2 to W5 is a condition that is a region different from the city water infrastructure W1 or a future downsizing target, and the adaptation unit 330a or the like can achieve highly accurate control in various areas or conditions), and suggests that inverse reinforcement learning is one kind of imitation learning that can be used to design a reward function ([0018]), Higa does not expressly require that the reinforcement learning in this embodiment is necessarily inverse reinforcement learning, which, however, is taught by Wee.
Wee teaches wherein, in the acquisition process, the at least one processor acquires the reference data, and wherein the at least one processor is further configured to carry out a determination process comprising determining the cost function by inverse reinforcement learning that refers to the reference data ([0033], [0035]-[0037], the expert model is a machine-learning model constructed from expert data, wherein the expert model can be constructed using machine learning techniques such as inverse reinforcement learning, and the transformer constructs metrics or error measures, or a cost function most appropriate from the predicted control actions from the expert model). Higa and Wee are in analogous fields of invention because both address the problem of generating a model for operating a system using expert data. At the time the invention was effectively filed, it would have been obvious to one of ordinary skill in the art to include in the system of Higa the ability to determine a cost function by inverse reinforcement learning as taught by Wee since the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the combination would produce the predictable results of determining a cost function by inverse reinforcement learning, as claimed. Further, it would have been obvious to one of ordinary skill in the art to have modified Higa with the aforementioned teachings of Wee in order to produce the added benefit of optimizing operation of the control systems using expert decision making and control. [0001]-[0002]. Regarding claim 8, the combined teachings of Higa and Wee teach the information processing apparatus according to claim 7 (as above).
Further, Higa discloses wherein the reference data ([0045]-[0046], [0048], the sequential reward learning unit 310 performs sequential reward learning of the model A 342 in the system A 100 using the expert data set 110, the "sequential reward learning" is a method including a process of updating the design of the reward function based on the imitation and the designed reward function, wherein the imitation learning is a process of simulating an action of an expert to learn a policy function, [0011], [0021], [0024]-[0025], in a model adaptation method, a first model, created using expert data, is adapted to a first system operated based on a first condition including the environment of the first system and the agent, and the second model is adapted to a second system operated based on a second condition that is different in at least one of a specific environment or a specific agent included in the first condition) comprises: information pertaining to at least one selected from the group consisting of a pump, a water distribution network, a water pipeline, and a demand point in a reference waterworks infrastructure; and information pertaining to at least one selected from the group consisting of an operation pattern of the pump and personnel in the reference waterworks infrastructure ([0136], [0138]-[0139], e.g., in order to make a facility maintenance plan for improving the efficiency of business management of a water infrastructure, it is conceivable to perform down-sizing so as to reduce the amount of water by replacing the pump of a facility that supplies water excessively, the target environment is represented as a set of the state of the water infrastructure (e.g.
the water distribution network, the capacity of the pump, the state of the water distribution pipe, etc.) and an operator who performs the action, and therefore, the behavior is represented by a variable that can be controlled on the basis of an operation rule such as opening/closing of the valve, drawing of water, and a threshold of the pump, and e.g., where the city water infrastructure W1 is a city water infrastructure of a city water station in a certain area, the second model can be generated by correcting the first model with the correction model by the model correction unit 320a or the like, the water service infrastructure W2 to W5 is a condition that is a region different from the city water infrastructure W1 or a future downsizing target, and the adaptation unit 330a or the like can achieve highly accurate control in various areas or conditions). Regarding claim 10, this claim is substantially similar to claim 1 and is, therefore, rejected on the same basis as claim 1. While claim 10 is directed toward a method, Higa discloses a method, as claimed ([0011], [0013]). Regarding claim 12, the combined teachings of Higa and Wee teach the information processing apparatus according to claim 1 (as above). Further, Higa discloses a non-transitory computer-readable storage medium storing therein a program for causing a computer to function as ([0012], [0141]-[0142]) the information processing apparatus according to claim 1 (as above regarding claim 1), the program causing the computer to carry out the acquisition process and the generation process ([0012], [0141]-[0142]). Regarding claim 14, the combined teachings of Higa and Wee teach the information processing apparatus according to claim 1 (as above).
Further, Higa discloses wherein the operation plan is generated to support decision making ([0046]-[0047], when the policy function is learned to be ideal, the policy function outputs the optimal action to be performed by the agent in accordance with the state of the target environment, wherein the policy function mimics the behavior data within which the state vector s and the action a are associated with each other) in water distribution management ([0138], when the water service infrastructure is captured as a system, the action that the agent should take is to supply water to the demand area on the water distribution network without excess or shortage, and the behavior is represented by a variable that can be controlled on the basis of an operation rule such as opening/closing of the valve, drawing of water, and a threshold of the pump). Conclusion Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHARLES A GUILIANO whose telephone number is (571)272-9859. 
The examiner can normally be reached Mon-Fri 10:00 am - 6:00 pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Rutao Wu can be reached at 571-272-6045. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. CHARLES GUILIANO Primary Examiner Art Unit 3623 /CHARLES GUILIANO/Primary Examiner, Art Unit 3623

Prosecution Timeline

May 14, 2024
Application Filed
Jul 23, 2025
Non-Final Rejection — §101, §103, §112
Oct 28, 2025
Applicant Interview (Telephonic)
Nov 02, 2025
Examiner Interview Summary
Nov 25, 2025
Response Filed
Feb 10, 2026
Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591507
MODEL LIFECYCLE MANAGEMENT
2y 5m to grant Granted Mar 31, 2026
Patent 12561704
System for Managing Remote Presentations
2y 5m to grant Granted Feb 24, 2026
Patent 12536481
METHODS AND SYSTEMS FOR HOLISTIC MEDICAL STUDENT AND MEDICAL RESIDENCY MATCHING
2y 5m to grant Granted Jan 27, 2026
Patent 12504971
Enterprise Application Integration Leveraging Non-Fungible Token
2y 5m to grant Granted Dec 23, 2025
Patent 12493846
CURTAILING A CARBON FOOTPRINT TO ACHIEVE CARBON REDUCTION GOALS
2y 5m to grant Granted Dec 09, 2025
Based on the 5 most recent grants.

Prosecution Projections

3-4
Expected OA Rounds
36%
Grant Probability
74%
With Interview (+37.6%)
3y 7m
Median Time to Grant
Moderate
PTA Risk
Based on 336 resolved cases by this examiner. Grant probability derived from career allow rate.