DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This action is in response to communications filed on 11/10/2025. Claims 1-20 are pending and have been examined.
Priority
Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “storage device…configured to store…” in claims 1-14.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Response to Arguments
Previous objections to the claims have been withdrawn in view of amendments.
Previous rejections under 35 U.S.C. 112 have been withdrawn in view of amendments.
Previous rejections under 35 U.S.C. 101 have been withdrawn in view of amendments.
Applicant’s arguments with respect to claims have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. See Nakajima et al. (US 20200109063 A1) below.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-3, 7-11, and 14-17 are rejected under 35 U.S.C. 103 as being unpatentable over Ariyoshi et al. (US 20150112900 A1) in view of Ganti et al. (US 20200005191 A1) and Nakajima et al. (US 20200109063 A1).
As per independent claim 1, Ariyoshi teaches a device for selecting an optimal model, the device comprising:
a processor (e.g. in paragraph 166, “CPU”);
a memory connected to the processor and storing instructions (e.g. in paragraphs 166 and 168, “stored in a computer-readable recording medium in the form of a program… memories”); and
a storage device connected to the processor and configured to store model data (e.g. in paragraphs 60, 166 and 168, “storage unit 31 stores various kinds of data, such as… model… a storage device, such as a hard disk”), wherein the processor is configured to execute instructions to:
store a first model in a first model storage place (e.g. in paragraph 60, “storage unit 31 stores various kinds of data, such as…a previous prediction model”) and an existing optimal model in an optimal model storage place (e.g. in paragraphs 50 and 60, “storage unit 31 stores various kinds of data, such as…a prediction model used for demand prediction”, e.g. “the latest prediction model… with the smallest error”);
generate a variable model using training data (e.g. in paragraphs 49-50 and 61, “no-cluster prediction model…cluster-specific prediction model…[and/or] average use prediction model… learning data used for the generation of a prediction model, and further divides the learning data into training data used for the model learning of the latest prediction model”);
prepare evaluation data, and use the evaluation data to select a champion model from among a plurality of evaluation target models including the first model, the existing optimal model, and the variable model by evaluating the plurality of evaluation target models (e.g. in paragraphs 63 and 143, “evaluation unit 35 calculates prediction results based on various prediction models using the test data for evaluation… selects a prediction model with the smallest error… writes the selected latest prediction model in the storage unit”); and
predict, using the selected champion model and real-time data, information (e.g. in paragraph 55, “prediction device calculates a prediction result…based on the selected prediction model using the prediction data (time-series data X.sub.n-2, X.sub.n-1, and X.sub.n), and outputs the calculated prediction result”),
but does not specifically teach controlling chemical dosage in a water treatment plant, wherein the first model includes a seed model and wherein the information includes a state of treated water, and derive a control value that minimizes the chemical dosage required for maintaining the state of the treated water within a predetermined normal range; and cause a control device, based on the control value, to control the chemical dosage in a water treatment process in the water treatment plant.
However, Ganti teaches a first model including a seed model (e.g. in paragraphs 8 and 57-58, “using as seed model may include less central processing unit (CPU) resources required to train a new machine learning model… using the seed model may include less storage capacity required, and less network/bus bandwidth required due to a smaller amount of training data required to generate the new machine learning model… a catalog of pre-trained machine learning models to select a trained machine learning model to be used as a seed model for customizing and re-training based on a new input dataset… a candidate seed model is stored along with its corresponding deep hash code generator (H) and semantic label (S)”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Ariyoshi to include the teachings of Ganti because one of ordinary skill in the art would have recognized the benefit of allowing new relevant models to be generated with fewer resources,
but does not specifically teach controlling chemical dosage in a water treatment plant, wherein the information includes a state of treated water, and derive a control value that minimizes the chemical dosage required for maintaining the state of the treated water within a predetermined normal range; and cause a control device, based on the control value, to control the chemical dosage in a water treatment process in the water treatment plant.
However, Nakajima teaches controlling chemical dosage in a water treatment plant (e.g. in abstract and paragraphs 14, 97 and 120-121, “a water treatment system including a water system, a plurality of chemical tanks that retain chemicals having different components, a plurality of chemical feed pumps that supply the chemicals retained respectively in the plurality of chemical tanks to the water system, and the chemical feed control device… the chemical feed control device 110 determines the feed amount of each of a plurality of chemicals having different components, so that a minimum feed amount of each of the chemicals corresponding to each of the disruptive factors can be determined”), predicting, using a model and real-time data, information including a state of treated water (e.g. in paragraphs 161 and 164, “water quality index prediction unit [i.e. model]… predicts the water quality index values… during the specific period starting from the present time”, i.e. real-time), deriving a control value that minimizes the chemical dosage required for maintaining the state of the treated water within a predetermined normal range (e.g. in paragraphs 97 and 120-121, “the chemical feed control device 110 determines the feed amount of each of a plurality of chemicals having different components, so that a minimum feed amount of each of the chemicals corresponding to each of the disruptive factors can be determined… determines whether or not a difference between the water quality index value (actual index value) obtained in Step S31 and the water quality index value (target index value) related to the target water quality is equal to or larger than a specific threshold [i.e. threshold defines predetermined normal range]… When the difference between the actual index value and the target index value is equal to or larger than the threshold (Step S35: YES), the updating unit 1107 corrects the feed amount of the chemical determined by the determination unit 1105”, i.e. maintains the predetermined normal range for water quality); and causing a control device, based on the control value, to control the chemical dosage in a water treatment process in the water treatment plant (e.g. in paragraphs 14, 97 and 120-121, “a water treatment system including a water system, a plurality of chemical tanks that retain chemicals having different components, a plurality of chemical feed pumps that supply the chemicals retained respectively in the plurality of chemical tanks to the water system, and the chemical feed control device… the chemical feed control device 110 determines the feed amount of each of a plurality of chemicals having different components, so that a minimum feed amount of each of the chemicals corresponding to each of the disruptive factors can be determined… corrects the feed amount of the chemical determined by the determination unit 1105 in Step S32, based on the difference between the actual index value and the target index value”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of the combination to include the teachings of Nakajima because one of ordinary skill in the art would have recognized the benefit of optimizing predictions/models for the well-known water treatment application.
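The threshold-gated feed-amount correction attributed to Nakajima above can be illustrated with a short sketch. This is a hypothetical illustration only: the function name, the proportional correction rule, and the gain value are the editor's assumptions and are not drawn from Nakajima's actual algorithm.

```python
def corrected_feed(feed_amount, actual_index, target_index, threshold, gain=0.5):
    """Return the (possibly corrected) chemical feed amount.

    The feed is adjusted in proportion to the water-quality index error,
    but only when the error magnitude reaches the threshold, i.e. when the
    treated-water state has left the predetermined normal range.
    """
    error = actual_index - target_index
    if abs(error) >= threshold:
        # Outside the normal range: correct the feed, never below zero.
        return max(0.0, feed_amount + gain * error)
    return feed_amount  # within the normal range: no correction

# Within the normal range (|0.1| < 0.5): the feed amount is unchanged.
assert corrected_feed(10.0, actual_index=5.1, target_index=5.0, threshold=0.5) == 10.0
# Outside the normal range (|1.0| >= 0.5): the feed amount is corrected.
assert corrected_feed(10.0, actual_index=6.0, target_index=5.0, threshold=0.5) == 10.5
```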
As per claim 2, the rejection of claim 1 is incorporated and the combination further teaches receive the training data created from raw data received within a predetermined period of time from a time point of evaluation, detect, from the received training data, input data and output data related to the input data, set the output data as an expected value, and set the input data and the expected value as the evaluation data (e.g. Ariyoshi, in paragraphs 21, 69, and 93-94, “time-series data... an observation value… sets the time-series data X.sub.n-7, X.sub.n-6, X.sub.n-5, X.sub.n-4, and X.sub.n-3 (from the newest data of the learning data) of 5 days as a sum of the prediction use period of 3 days and the prediction target period of 2 days, among the learning data,…as test data for evaluation… first feature amount, which includes feature amounts x.sub.1 to x.sub.m extracted for training data for prediction D.sub.0 to D.sub.dn used for the learning of the approximation model, and the power demand acquired from the corresponding training data for correct answer acquisition”).
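The evaluation-data preparation recited in claim 2 can be sketched as follows: keep only raw records received within a predetermined window before the evaluation time, and set each record's output as the expected value. The field names, window length, and data layout are illustrative assumptions, not the applicant's or Ariyoshi's actual format.

```python
from datetime import datetime, timedelta

def build_evaluation_data(raw_records, evaluation_time, window_days=5):
    """Select records within the window and pair inputs with expected outputs."""
    cutoff = evaluation_time - timedelta(days=window_days)
    return [(r["input"], r["output"])  # the output becomes the expected value
            for r in raw_records
            if r["timestamp"] >= cutoff]

now = datetime(2025, 11, 10)
records = [
    {"timestamp": now - timedelta(days=1), "input": 3.0, "output": 6.0},
    {"timestamp": now - timedelta(days=10), "input": 2.0, "output": 4.0},  # outside the window
]
# Only the recent record survives as (input, expected) evaluation data.
assert build_evaluation_data(records, now) == [(3.0, 6.0)]
```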
As per claim 3, the rejection of claim 2 is incorporated and the combination further teaches input the input data to each of the plurality of evaluation target models, and in response to performing operation on the input data by each of the plurality of evaluation target models to calculate a prediction value, calculate a difference between the expected value and the prediction value of each of the plurality of evaluation target models as an error of each of the plurality of evaluation target models, and select the evaluation target model having the smallest error among the plurality of evaluation target models as the champion model (e.g. Ariyoshi, in paragraphs 21, 63-64, 69, and 74, “a deviation amount detection unit that detects a record deviation amount that is a difference between an observation value and the predicted value calculated by the prediction unit… evaluation unit 35 calculates prediction results based on various prediction models using the test data for evaluation acquired by the acquisition unit 32, collates the calculated prediction results with test data for correct verification, and selects a prediction model with the smallest error”).
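The smallest-error selection mapped in claim 3 amounts to scoring each candidate model on the evaluation data and keeping the one with the lowest error. The following minimal sketch uses hypothetical names and a mean-absolute-error metric chosen for illustration; it is not Ariyoshi's implementation.

```python
def mean_abs_error(expected, predicted):
    """Average absolute difference between expected and predicted values."""
    return sum(abs(e - p) for e, p in zip(expected, predicted)) / len(expected)

def select_champion(models, eval_inputs, expected):
    """Return the evaluation target model with the smallest error."""
    def error_of(model):
        predictions = [model(x) for x in eval_inputs]
        return mean_abs_error(expected, predictions)
    return min(models, key=error_of)

# Toy usage: three candidate "models" represented as simple functions.
first_model = lambda x: x + 1.0
existing_optimal = lambda x: x * 2.0
variable_model = lambda x: x + 0.1

inputs = [1.0, 2.0, 3.0]
expected = [1.0, 2.0, 3.0]  # identity is the true relationship here
champion = select_champion(
    [first_model, existing_optimal, variable_model], inputs, expected)
assert champion is variable_model  # smallest error (0.1 on every point)
```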
As per claim 7, the rejection of claim 1 is incorporated and the combination further teaches in response to selecting the seed model as the champion model, maintain a state in which the seed model selected as the champion model is stored in the seed model storage place (e.g. Ariyoshi, in paragraphs 63 and 143, “evaluation unit 35 calculates prediction results based on various prediction models using the test data for evaluation… selects a prediction model with the smallest error”; Ganti, in paragraphs 57-58, “a catalog of pre-trained machine learning models … a candidate seed model is stored”).
As per claim 8, the rejection of claim 1 is incorporated and the combination further teaches generate the variable model through training with the training data created from raw data collected within a predetermined period of time from a time point of generation (e.g. Ariyoshi, in paragraphs 9 and 47, “prediction model generation unit that generates a prediction model to calculate time-series data, which is an observation value predicted based on given time-series data, using the training data… sets the time-series data X.sub.1, X.sub.2, . . . , X.sub.n-3 of the remaining 362 days as learning data used for the learning of a prediction model”), the variable model being based on design information of the seed model (e.g. Ganti, in paragraphs 8 and 57-58, “using the seed model…to generate the new machine learning model… a catalog of pre-trained machine learning models to select a trained machine learning model to be used as a seed model for customizing and re-training based on a new input dataset”).
Claims 9-11 and 14 correspond to claims 1-3 and 8, and are rejected under the same reasons set forth, and the combination further teaches in response to generation of a variable model, use evaluation data (e.g. Ariyoshi, in paragraphs 49-50, 61, 63, and 143, “no-cluster prediction model…cluster-specific prediction model…[and/or] average use prediction model… learning data used for the generation of a prediction model, and further divides the learning data into training data used for the model learning of the latest prediction model… evaluation unit 35 calculates prediction results based on [the] various prediction models using the test data for evaluation… selects a prediction model with the smallest error”) and store the champion model in the optimal model storage place (e.g. Ariyoshi, in paragraphs 63 and 143, “evaluation unit 35 calculates prediction results based on various prediction models using the test data for evaluation… selects a prediction model with the smallest error… writes the selected latest prediction model in the storage unit”).
Claims 15-17 are the method claims corresponding to device claims 1-3 and are rejected under the same reasons set forth and the combination further teaches
maintaining a state in which a seed model is stored in a seed model storage place (e.g. Ariyoshi, in paragraph 60, “storage unit 31 stores various kinds of data, such as…a previous prediction model”; Ganti, in paragraphs 57-58, “a catalog of pre-trained machine learning models… a candidate seed model is stored”) and an existing optimal model is stored in an optimal model storage place (e.g. Ariyoshi, in paragraphs 50 and 60, “storage unit 31 stores various kinds of data, such as…a prediction model used for demand prediction”, e.g. “the latest prediction model… with the smallest error”).
Claims 4-6, 12-13, and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Ariyoshi et al. (US 20150112900 A1) in view of Ganti et al. (US 20200005191 A1) and Nakajima et al. (US 20200109063 A1), and further in view of Iwamoto et al. (US 20030014382 A1).
As per claim 4, the rejection of claim 1 is incorporated and the combination further teaches in response to selecting the variable model as the champion model, store the variable model selected as the champion model as the optimal model in the optimal model storage place (e.g. Ariyoshi, in paragraph 143, “evaluation unit 35 updates the storage content of the storage unit 31 based on the selection… When a no-cluster prediction model is selected, the evaluation unit 35 writes the no-cluster prediction model in the storage unit … On the other hand, when a cluster-specific prediction model is selected, the evaluation unit 35 writes the cluster-specific prediction model in the storage unit”), but does not specifically teach in a First-In-First-Out manner. However, Iwamoto teaches storing in a First-In-First-Out manner (e.g. in paragraphs 22-23, “When a residual of storage capacity becomes insufficient, data is deleted in chronological order, that is, the oldest data is deleted first. Therefore, it is possible to reserve an area for storing new data”, i.e. FIFO). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of the combination to include the teachings of Iwamoto because one of ordinary skill in the art would have recognized the benefit of reserving an area for storing new data.
As per claim 5, the rejection of claim 1 is incorporated and the combination further teaches in response to selecting the variable model as the champion model, store the variable model selected as the champion model as the optimal model in the optimal model storage place (e.g. Ariyoshi, in paragraphs 63 and 143, “selects a prediction model with the smallest error… evaluation unit 35 updates the storage content of the storage unit 31 based on the selection… When a no-cluster prediction model is selected, the evaluation unit 35 writes the no-cluster prediction model in the storage unit … On the other hand, when a cluster-specific prediction model is selected, the evaluation unit 35 writes the cluster-specific prediction model in the storage unit”), but does not specifically teach decide whether there is an insufficiency of a storage space of the optimal model storage place, and, in response to deciding the insufficiency of the storage space of the optimal model storage place, delete the existing optimal model in chronological order of storage according to a First-In-First-Out (FIFO) manner and store in the FIFO manner. However, Iwamoto teaches deciding whether there is an insufficiency of a storage space of a storage place and, in response to deciding the insufficiency of the storage space of the storage place, deleting existing data in chronological order of storage according to a First-In-First-Out (FIFO) manner and storing data in the FIFO manner (e.g. in paragraphs 22-23, “When a residual of storage capacity becomes insufficient, data is deleted in chronological order, that is, the oldest data is deleted first. Therefore, it is possible to reserve an area for storing new data”, i.e. FIFO).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of the combination to include the teachings of Iwamoto because one of ordinary skill in the art would have recognized the benefit of reserving an area for storing new data.
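The First-In-First-Out storage behavior attributed to Iwamoto in claims 4-6 can be illustrated with a short sketch: when the optimal-model storage place runs out of space, the oldest stored model is deleted first to make room for the newly selected champion. The class name, capacity value, and model labels below are illustrative assumptions.

```python
from collections import deque

class OptimalModelStore:
    """Bounded storage place whose oldest entry is deleted first (FIFO)."""

    def __init__(self, capacity):
        self._models = deque()  # oldest at the left, newest at the right
        self._capacity = capacity

    def store(self, model):
        """Store a model, deleting the oldest entry first if space is insufficient."""
        if len(self._models) >= self._capacity:
            self._models.popleft()  # delete in chronological order of storage
        self._models.append(model)

    def contents(self):
        return list(self._models)

store = OptimalModelStore(capacity=2)
store.store("model_v1")
store.store("model_v2")
store.store("model_v3")  # capacity exceeded: the oldest entry, model_v1, is deleted
assert store.contents() == ["model_v2", "model_v3"]
```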
As per claim 6, the rejection of claim 1 is incorporated and the combination further teaches in response to selecting the existing optimal model as the champion model, extract the existing optimal model selected as the champion model from the optimal model storage place and store the existing optimal model again in the optimal model storage place (e.g. Ariyoshi, in paragraph 63, “When the selected prediction model is the latest prediction model, the evaluation unit 35 writes the selected latest prediction model in the storage unit”), but does not specifically teach in a First-In-First-Out manner. However, Iwamoto teaches storing in a First-In-First-Out manner (e.g. in paragraphs 22-23, “When a residual of storage capacity becomes insufficient, data is deleted in chronological order, that is, the oldest data is deleted first. Therefore, it is possible to reserve an area for storing new data”, i.e. FIFO). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of the combination to include the teachings of Iwamoto because one of ordinary skill in the art would have recognized the benefit of reserving an area for storing new data.
Claims 12-13 correspond to claims 5-6, and are rejected under the same reasons set forth.
Claims 18-20 are the method claims corresponding to device claims 4-6, and are rejected under the same reasons set forth.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
For example,
Chu (US 8214308 B2) teaches “the competing models' performance (with respect to predicting both data sources 1A and 1B) can be compared with champion model 1's prediction performance using the performance decay indexes. Based upon the comparison, if a competing model outperforms the champion model, then a corrective action can include replacing at 270 the champion model 222 with the competing model” (e.g. in column 4 lines 59-66).
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to WILLIAM WONG whose telephone number is (571)270-1399. The examiner can normally be reached Monday-Friday 9am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, TAMARA KYLE can be reached at (571)272-4241. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/W.W/Examiner, Art Unit 2144 02/27/2026
/TAMARA T KYLE/Supervisory Patent Examiner, Art Unit 2144