DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-20 have been reviewed and are under consideration in this Office action.
Notice to Applicant
The following is a Final Office action. Applicant amended the claims on 12/09/2025. Claims 1-20 are pending in this application and are rejected below.
Response to Amendment
Applicant’s amendments have been received and acknowledged.
Response to Arguments - 35 USC § 101
Applicant’s arguments with respect to the 35 USC 101 rejections have been fully considered, but they are not persuasive.
Applicant contends that the claims do not recite an abstract idea.
Examiner respectfully disagrees. The claims recite the abstract idea of identifying a target node, determining a sequence of operations for the nodes, determining a parameter of a first function, determining a sequence of operations that minimizes downtime, and outputting the sequence. The additional elements are separated out and addressed in Step 2A Prong Two and Step 2B.
Applicant contends that the claims improve the technical field by executing machine learning in two layers and further points to the specification to provide evidence of the alleged improvements.
Examiner respectfully disagrees. The use of each layer of machine learning is recited at a high level of generality, wherein the second layer model is based on the output of the first model. The additional elements performing the steps would be no more than mere instructions to apply the exception using a generic computer component (see MPEP 2106.05(f)) and/or amount to no more than generally linking the use of the judicial exception to a particular technological environment or field of use (see MPEP 2106.05(h)).
The 101 rejection is updated and maintained below.
Response to Arguments - 35 USC § 103
Applicant’s arguments with respect to the 35 USC 103 rejections have been fully considered, but they are moot in view of the new grounds of rejection under 35 U.S.C. 103 set forth below.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.
Step One - First, pursuant to Step 1 of the January 2019 Guidance, 84 Fed. Reg. 53, the claims are directed to statutory categories.
Step 2A, Prong One – The claims are found to recite limitations that set forth the abstract idea(s); namely, the independent claims recite a series of steps setting forth the abstract idea identified below.
Regarding the independent claims (additional elements bolded):
Regarding Claim(s) 1, 9, and 17: A system, comprising: a processor; and a non-transitory memory storing instructions that, when executed, cause the processor to:/ A computer-implemented method, comprising:/ A non-transitory computer readable medium having instructions stored thereon, wherein the instructions, when executed by at least one processor, cause at least one device to perform operations comprising:
receive a request identifying a target production node;
determine a sequence of operation for one or more sub-nodes of the target production node based at least partially by:
executing a first layer machine learning model to determine at least one first parameter of a first objective function that maximizes an adherence percentage with respect to a total production of the target production node,
executing a second layer machine learning model to determine the sequence of operation utilizing a second objective function, which minimizes a total downtime for the one or more sub-nodes of the target production node based on the at least one first parameter determined by the first layer machine learning model; and
output a production data structure including the optimized sequence of operation of the one or more sub-nodes of the target production node.
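For clarity of the record only, the two-layer arrangement recited above may be sketched as follows. This is a hypothetical illustration: the sub-node names, the changeover data, the brute-force search, and the stand-in "models" are Examiner's assumptions and are not drawn from the claims or the cited art.

```python
# Hypothetical sketch of the claimed two-layer arrangement: a first layer
# selects an objective-function parameter that maximizes adherence, and a
# second layer sequences sub-nodes to minimize total downtime given that
# parameter. All names and data here are illustrative assumptions.
from itertools import permutations

def first_layer(candidate_params, adherence):
    """Stand-in for the first layer model: pick the parameter value that
    maximizes the adherence percentage (the first objective function)."""
    return max(candidate_params, key=adherence)

def second_layer(sub_nodes, downtime, param):
    """Stand-in for the second layer model: pick the sub-node ordering that
    minimizes total downtime (the second objective function) given param."""
    return min(permutations(sub_nodes), key=lambda seq: downtime(seq, param))

# Hypothetical target production node with three sub-nodes and changeover times.
sub_nodes = ["A", "B", "C"]
changeover = {("A", "B"): 2, ("B", "A"): 5, ("A", "C"): 1, ("C", "A"): 4,
              ("B", "C"): 3, ("C", "B"): 1}

adherence = lambda p: 100 - abs(p - 0.7) * 50           # peaks at p = 0.7
downtime = lambda seq, p: sum(changeover[a, b] * p       # param scales changeover
                              for a, b in zip(seq, seq[1:]))

param = first_layer([0.5, 0.7, 0.9], adherence)          # first-layer output...
sequence = second_layer(sub_nodes, downtime, param)      # ...fed to second layer
production_data_structure = {"param": param, "sequence": list(sequence)}
print(production_data_structure)
```

The point of the sketch is only the data flow: the parameter determined by the first objective function is an input to the second objective function that produces the output sequence.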
As drafted, this is, under its broadest reasonable interpretation, within the abstract idea grouping of “Mental processes—concepts performed in the human mind” (observation, evaluation, judgment, opinion), as the claims are directed towards receiving a request, maximizing adherence, minimizing a total downtime, optimizing a sequence of operation of nodes, and outputting an optimized sequence, all of which are concepts capable of being performed in the human mind (i.e., via pen and paper).
Further, the claims are directed towards the abstract idea grouping of “Certain methods of organizing human activity,” i.e., commercial or legal interactions (including agreements in the form of contracts; legal obligations; advertising, marketing or sales activities or behaviors; business relations) and/or managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions), as the claims are directed towards optimizing sequences for production, including resource quantity, timeframes, and/or sequence of operations (see Specification, [04-06]).
Step 2A, Prong Two - This judicial exception is not integrated into a practical application. The independent claims recite at least the following additional elements: a system comprising a non-transitory memory and a processor communicatively coupled to the non-transitory memory; a non-transitory computer readable medium having instructions stored thereon that, when executed by at least one processor, cause at least one device to perform operations; a first layer machine learning model; a second layer machine learning model; and a data structure. These additional elements performing the steps would be no more than mere instructions to apply the exception using a generic computer component (see MPEP 2106.05(f)) and/or amount to no more than generally linking the use of the judicial exception to a particular technological environment or field of use (see MPEP 2106.05(h)).
Step 2B - The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements merely “apply” the abstract idea on a computer (see MPEP 2106.05(f), Mere Instructions to Apply an Exception: “Thus, for example, claims that amount to nothing more than an instruction to apply the abstract idea using a generic computer do not render an abstract idea eligible.” Alice Corp., 134 S. Ct. at 2358) and/or amount to no more than generally linking the use of the judicial exception to a particular technological environment or field of use (see MPEP 2106.05(h)).
Regarding Claim(s) 2-6, 10-16, and 18-20, the claims further narrow the abstract idea or recite additional elements previously addressed in the independent claims.
Regarding Claim(s) 7-8, 15-16, and 20, the claims further recite the additional element(s) of training the first layer machine learning model and the second layer machine learning model, and processing the training dataset based on normalization and augmentation. These element(s) would be no more than mere instructions to apply the exception using a generic computer component (see MPEP 2106.05(f)) and/or amount to no more than generally linking the use of the judicial exception to a particular technological environment or field of use (see MPEP 2106.05(h)), as addressed in Steps 2A Prong Two and 2B.
Accordingly, the claims fail to recite any improvements to another technology or technical field, improvements to the functioning of the computer itself, use of a particular machine, effecting a transformation or reduction of a particular article to a different state or thing, adding unconventional steps that confine the claim to a particular useful application, and/or meaningful limitations beyond generally linking the use of an abstract idea to a particular environment. See 84 Fed. Reg. 55. Viewed individually or as a whole, these additional claim element(s) do not provide meaningful limitation(s) to transform the abstract idea into a patent-eligible application of the abstract idea such that the claim(s) amounts to significantly more than the abstract idea itself.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim(s) 1-3, 9-11, and 17-18 is/are rejected under 35 U.S.C. 103 as being unpatentable over Le et al. (US 11429912 B1) in view of Wang et al. (US 20250371414 A1).
Regarding Claim(s) 1, 9, and 17, Le teaches: A system, comprising: a processor; and a non-transitory memory storing instructions that, when executed, cause the processor to:/ A computer-implemented method, comprising:/ A non-transitory computer readable medium having instructions stored thereon, wherein the instructions, when executed by at least one processor, cause at least one device to perform operations comprising: (Le, [c. 21, l. 32-45]; Computer system 700 further includes non-volatile memory such as read only memory (ROM) 708 or other static storage device coupled to I/O subsystem 702 for storing information and instructions for processor 704. The ROM 708 may include various forms of programmable ROM … for storing information and instructions. Storage 710 is an example of a non-transitory computer-readable medium that may be used to store instructions and data which when executed by the processor 704 cause performing computer-implemented methods to execute the techniques herein).
receive a request identifying a target production node; (Le, [c. 15, l. 1-4]; At step 530 the computing system may be programmed to use the solution from phase one of optimization as an input to determine a neighborhood optimization plan for the selected neighborhood and Le, [c. 14, l. 54-64]; A first neighborhood may be defined by grouping modules, equipment, workstations, or assets that can produce the same raw materials and finished goods into a neighborhood (e.g., infrastructure data). A second neighborhood may be defined by grouping raw materials and finished goods which are part of a larger product family (e.g., share a common one or more raw materials comprising the one or more finished goods) into a neighborhood. A third neighborhood may be defined by grouping consecutive periods or sub-periods into a neighborhood).
determine a sequence of operation for one or more sub-nodes of the target production node based at least partially by: … determine at least one first parameter of a first objective function that maximizes an adherence percentage with respect to a total production of the target production node, (Le, [c. 3, l. 45-54]; one or more production plans comprising a plurality of binary variables including at least the changeover cost values and the inventory cost values; based on the one or more production plans, calculating one or more optimized production plans for the time period by, for each of the optimized production plans: determining a sub-period optimization plan for a first sub-period of the time period by adjusting one or more of the plurality of binary variables over the first subperiod; generating, using the sub-period optimization plan, a neighborhood optimization plan by adjusting one or more of the plurality of binary variables for a predefined neighborhood and Le, [c. 6, l. 21-26]; Embodiments disclosed herein provide real-time solutions to produce optimized production plans for a site based on provided internal and external factors, for example and not by way of limitation, market demand, production costs, inventory holdings, and available internal resources (e.g., manpower, equipment, etc.)). Examiner interprets the optimized plan that meets the lowest ending inventory as an adherence parameter.
…to determine the sequence of operation utilizing a second objective function, which minimizes a total downtime for the one or more sub-nodes of the target production node based on the at least one first parameter determined by the first layer…; and (Le, [c. 11, l. 27-40]; The particular costs associated with changeover may differ depending on the sequence of finished goods produced on the particular equipment, workstation, module, etc. In particular embodiments the computing system may utilize one or more changeover matrices as part of the input data to generate an optimal sequence of finished goods for production on a particular module in order to minimize the total time required for changeover (e.g., setup and cleanup) during a particular period. In particular embodiments the computing system may utilize the changeover matrix to determine an average value of the changeover times and costs on a per product per workstation basis prior to optimization and Le, [c. 15, l. 51-61]; As another example, the computing system may be programed to add a constraint to the network optimization model to guarantee that the total revenue in the subsequent iterations must be greater or equal to the revenue of the previous iteration. The objective of these optimization criteria is to minimize the total change over and inventory cost relative to the first run and Le, [c. 13, l. 32-36]; At this first phase, the computing system may find a sub-period optimal production plan by solving problems over a series of smaller and overlapping optimization sub-periods or windows (e.g., a rolling horizon) and Le, [c. 14, l. 39-44]; At this second phase of optimization, the computing system may improve upon the sub-period optimization plan by searching for better solutions over the whole model horizon by defining one or more neighborhoods).
Examiner notes that Le teaches an iterative approach wherein the changeover data is adjusted in the steps to determine production plans, which first optimizes plans for demand and inventory and then further minimizes changeover time (i.e., downtime). Examiner further notes that Wang, below, more explicitly teaches using a determined parameter to optimize a second parameter, as seen in the citation below.
output a production data structure including the optimized sequence of operation of the one or more sub-nodes of the target production node. (Le, [c. 7, l. 48-55]; Result analysis instructions 130 are programmed to receive digital data input from one or more input devices 16 and to format output data for transmission to output device 18 for rendering at the output device. For example, presentation layer instructions 135 may be programmed to generate dynamic HTML for presentation as web pages at output device 18 as part of delivering execution of instructions 100, 600 as SaaS and Le, [c. 8, l. 17-21]; the computing system is programmed to generate an output at step 135 comprising one or more optimized production plans that describe an optimal way to produce a quantity of finished goods at a particular site or group of sites).
While Le does teach determining parameters that maximize adherence percentage and further minimizing downtime, Le does not appear to teach a machine learning model that utilizes multiple layers. However, Le in view of the analogous art of Wang (i.e. machine learning optimization) does teach the entirety of the limitation: executing a first layer machine learning model to determine and executing a second layer machine learning model to determine… (Wang, [139]; As shown in FIG. 6, the first layer layer1 of the cloud sub model processes the cloud training feature to obtain an output O1 of the first layer layer1, and the second layer layer2 of the cloud sub model processes the output O1 of the first layer to obtain an output O2 of the second layer layer2 and Wang, [141]; the parameters of the machine learning model are updated by using a parameter optimizer to complete one round of training. As shown in FIG. 6, in the backward propagation of the cloud sub model, first, a parameter gradient GL2 of the second layer layer2 is calculated based on the cloud output gradient GO of the terminal sub model and the output O2 of the second layer layer2).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the disclosed invention to have combined the teachings of Le including determining parameters that maximize adherence percentage and further minimizing downtime with the teachings of Wang in order to generate a model that is not as computationally intense on a single terminal (Wang, [39]; The terminal sub model is relatively simple and is composed of several uppermost neural network layers of the original machine learning model, thereby being suitable for a terminal with a small computing power and avoiding an increase in the computing power burden on the terminal. Different terminal sub models may be used for different terminals, that is, the terminal sub models on the terminals may use different structures as required).
Regarding Claim(s) 2 and 10, Le/Wang teaches The system of claim 1, wherein the objective function is configured to optimize a plurality of parameters including a final resource type, a final resource quantity, a production timeframe, and a sub-node selection. (Le, [c. 8, l. 11-21]; Using this information, at step 125 the computing system is programmed to generate one or more optimized production plans. At step 130, the computing system is programmed to analyze the results of the one or more optimized production plans based on predetermined user criteria. Using these rankings, the computing system is programmed to generate an output at step 135 comprising one or more optimized production plans that describe an optimal way to produce a quantity of finished goods at a particular site or group of sites and Le, [c. 11, l. 40-52]; In particular embodiments the computing system may be programed to generate one or more production plans based on the input data and infrastructure data that when implemented at the site produce the particular quantity of the one or more finished goods over a desired time period. The production plan may further detail one or more production processes, each having a plurality of production process steps, which need to be implemented at the site. In particular embodiments the one or more production plans may comprise a plurality of binary variables, including at least binary variables that define changeover cost values at the site, and inventory cost values of a particular production plan and Le, [c. 14, l. 52-55]; At step 510, the computing system may utilize input data to determine one or more relevant neighborhoods for optimization at the second phase. In particular embodiments three relevant neighborhoods may be optimized).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the disclosed invention to have combined the teachings of Wang including optimizing layers/levels (i.e., sub-nodes) with the teachings of Le including the optimization functions for different parameters in order to provide an efficient way to optimize a variety of parameters in a function. (Le, [c. 6, l. 6-20]; efficiently generate an optimal production plan, advanced processing equipment, large storage capacity, memory, and power may be required to quickly process and produce optimal production plans for a particular site. Additionally, consumers or site managers may require customizable and detailed constraints unique to the site (e.g., equipment considerations, cleanup costs, available manpower, or other variables specific to a site) to provide a feasible and accurate solution that is tailored to the individual consumer. This customized detail enables optimization of complex interconnectivity of multiple variables defining a particular site, for example a manufacturer that receives a vast array of raw materials and produces a plethora of different types of finished goods).
Regarding Claim(s) 3, 11, and 18, Le/Wang teaches: The system of claim 2, wherein the first layer machine learning model is configured to optimize the plurality of parameters and a second layer machine learning model is configured to receive optimized values for the plurality of parameters and generate the sequence of operation. (Le, [c. 13, l. 25-30]; In the optimization phase, the goal is to create a production plan that optimizes one or more variables, for example, maximizing customer revenue, minimizing total changeover and inventory cost, or any other variable that may be important to the consumer and Le, [c. 11, l. 27-40]; The particular costs associated with changeover may differ depending on the sequence of finished goods produced on the particular equipment, workstation, module, etc. In particular embodiments the computing system may utilize one or more changeover matrices as part of the input data to generate an optimal sequence of finished goods for production on a particular module in order to minimize the total time required for changeover (e.g., setup and cleanup) during a particular period. In particular embodiments the computing system may utilize the changeover matrix to determine an average value of the changeover times and costs on a per product per workstation basis prior to optimization). Examiner notes that Wang is relied upon to teach the multi-layered machine learning system and further teaches the use of the parameter output by the first layer.
Claim(s) 4-5, 12-13, and 19 is/are rejected under 35 U.S.C. 103 as being unpatentable over Le et al. (US 11429912 B1) in view of Wang et al. (US 20250371414 A1), and Zhang et al. (US 20240289828 A1).
Regarding Claim(s) 4 and 12, while Le/Wang teach optimizing parameters, neither appears to explicitly teach an OTIF parameter. However, Le/Wang in view of the analogous art of Zhang (i.e., business metrics) does teach: The system of claim 1, wherein the first objective function is configured to optimize an on-time in-full (OTIF) percentage. (Zhang, [06]; In one or more embodiments, determining the respective initial multipliers comprises applying a supply forecasting model to forecast a supply metric for the series of future time periods, applying a demand forecasting model to forecast a demand metric for the series of future time periods, and determining the supply and demand metric as a ratio of the supply metric and the demand metric).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the disclosed invention to have combined the teachings of Le teaching optimizing parameters with the teachings of Zhang including an OTIF parameter in order to determine if the expected values maintain budget neutrality (Zhang, [76]; the optimization module 308 may determine if the candidate surge pricing model sufficiently maintains budget neutrality (i.e., the operational metric falls within a predefined range). If the optimization criterion is met, the optimization module 308 outputs the final multipliers 310 that achieve the optimization criterion).
Regarding Claim(s) 5 and 13, while Le/Wang teach optimizing parameters, neither appears to explicitly teach an OTIF parameter. However, Le/Wang in view of the analogous art of Zhang (i.e., business metrics) does teach: The system of claim 4, wherein the OTIF percentage includes a ratio of predicted supply to predicted demand for one or more production timeframes. (Zhang, [06]; In one or more embodiments, determining the respective initial multipliers comprises applying a supply forecasting model to forecast a supply metric for the series of future time periods, applying a demand forecasting model to forecast a demand metric for the series of future time periods, and determining the supply and demand metric as a ratio of the supply metric and the demand metric).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the disclosed invention to have combined the teachings of Le/Wang teaching optimizing parameters with the teachings of Zhang including an OTIF parameter in order to determine if the expected values maintain budget neutrality (Zhang, [76]; the optimization module 308 may determine if the candidate surge pricing model sufficiently maintains budget neutrality (i.e., the operational metric falls within a predefined range). If the optimization criterion is met, the optimization module 308 outputs the final multipliers 310 that achieve the optimization criterion).
Regarding Claim(s) 19, while Le/Wang teach optimizing parameters, neither appears to explicitly teach an OTIF parameter. However, Le/Wang in view of the analogous art of Zhang (i.e., business metrics) does teach: The non-transitory computer readable medium of claim 17, wherein the first objective function is configured to optimize an on-time in-full (OTIF) percentage including a ratio of predicted supply to predicted demand for one or more production timeframes. (Zhang, [06]; In one or more embodiments, determining the respective initial multipliers comprises applying a supply forecasting model to forecast a supply metric for the series of future time periods, applying a demand forecasting model to forecast a demand metric for the series of future time periods, and determining the supply and demand metric as a ratio of the supply metric and the demand metric).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the disclosed invention to have combined the teachings of Le/Wang teaching optimizing parameters with the teachings of Zhang including an OTIF parameter in order to determine if the expected values maintain budget neutrality (Zhang, [76]; the optimization module 308 may determine if the candidate surge pricing model sufficiently maintains budget neutrality (i.e., the operational metric falls within a predefined range). If the optimization criterion is met, the optimization module 308 outputs the final multipliers 310 that achieve the optimization criterion).
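The OTIF characterization addressed above (a ratio of predicted supply to predicted demand for one or more production timeframes) may be illustrated by the following hypothetical sketch; the forecast figures and timeframe names are Examiner's assumptions, not values from the claims or the cited art.

```python
# Hypothetical OTIF percentage per production timeframe, computed as the
# ratio of predicted supply to predicted demand. All figures are illustrative.
predicted_supply = {"week1": 90, "week2": 120}
predicted_demand = {"week1": 100, "week2": 100}

otif_pct = {tf: 100.0 * predicted_supply[tf] / predicted_demand[tf]
            for tf in predicted_demand}
print(otif_pct)  # week1 under-supplied at 90.0%, week2 over-supplied at 120.0%
```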
Claim(s) 6-7, 14-15, and 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Le et al. (US 11429912 B1) in view of Wang et al. (US 20250371414 A1), and McEntire et al. (US 11704581 B1).
Regarding Claim(s) 6 and 14, while Le/Wang teach optimizing layers of nodes and optimizing a dataset (Le, [c. 6, l. 16-20, c. 13, l. 25-30]), neither appears to explicitly teach multiple subsets of data. However, Le/Wang in view of the analogous art of McEntire (i.e., data optimization) does teach the entirety of the limitation: The system of claim 1, wherein the first layer machine learning model is configured to optimize a first subset of parameters and the second layer machine learning model is configured to optimize a second subset of parameters. (McEntire, [c. 5, l. 3-11]; An agriculture management system and method is described that integrates a supervised machine learning architecture using one or more multi-dimensional input data-sets, i.e., covariates or feature sets, including past crop yield performance to define soil chemistry characteristics and farmland environment. Input data-sets (also referred to as “features”) are ingested, pre-processed, and run through at least one or more Machine Learning (ML) training model that is used to predict outcomes of yield performance based on one or more predicted response surfaces as described by the present embodiment and McEntire, [c. 6, l. 35-39]; The illustrative system and method apply at least one data-set of soil chemistry, spatial boundaries, previous planted crop-type, cover crops and previously recorded crop-yield data-sets as independent input variables to a machine learning (ML) training model and McEntire, [c. 8, l. 43-45]; The introduction of a hyper parameter tuning loop optimizes the coefficients used in the model to optimize the estimated soil characteristic predictions).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the disclosed invention to have combined the teachings of Le/Wang including optimizing layers of machine learning and optimizing a dataset with the teachings of McEntire including multiple dataset optimization in order to optimize a plurality of covariates or features (McEntire, [c. 5, l. 3-8]; An agriculture management system and method is described that integrates a supervised machine learning architecture using one or more multi-dimensional input data-sets, i.e., covariates or feature sets, including past crop yield performance to define soil chemistry characteristics and farmland environment).
Regarding Claim(s) 7 and 15, while Le teaches optimizing plans, Le does not appear to teach machine learning or training of the machine learning models. However, Le/Wang does teach: The system of claim 6, wherein the instructions, when executed, further cause the processor to train the first layer machine learning model and the second layer machine learning model based at least partially by: obtaining a training dataset including labeled data (Wang, [118]; the first terminal may store a training sample set for training the first terminal sub model, the training sample set includes a plurality of terminal training samples, and each terminal training sample includes a terminal training feature and a sample label).
While Le teaches updating parameters to improve the output (Le, [c. 3, l. 43-51]), Le does not appear to teach an iterative training process. However, Le in view of Wang/McEntire (cited for multiple datasets) does teach: adjusting the first subset of parameters and the second subset of parameters with respect to the target production node using the processed training dataset in an iterative training process. (Wang, [148-149]; The parameter optimizer may receive the combined parameter gradient and the parameter gradient of the cloud sub model, and then adjust a parameter of the cloud sub model based on the parameter gradient of the cloud sub model to update the parameter of the cloud sub model; and adjust parameters of the terminal sub model 10, the terminal sub model 20, and the terminal sub model 30 based on the combined parameter gradient to update the parameter of the terminal sub model 10, the parameter of the terminal sub model 20, and the parameter of the terminal sub model 30. In this way, one round of model training is completed… the parameter optimizer adjusts the parameters of the terminal sub models respectively based on the parameter gradients of the terminal sub models. For example, in this case, the parameter optimizer may adjust the parameter of the terminal sub model 10 based on the parameter gradient GP1 of the terminal sub model 10 to update the parameter of the terminal sub model 10; adjust the parameter of the terminal sub model 20 based on the parameter gradient GP2 of the terminal sub model 20 to update the parameter of the terminal sub model 20 and Wang, [101]; When the terminal sends the training progress query request to the server again to initiate training, terminal training samples that have completed training in a previous round may be filtered, that is, the terminal training sample 1 to the terminal training sample 8 may be filtered, and therefore the terminal sub model is trained based on the terminal training sample 9).
While Le/Wang teaches optimizing a data set, neither appears to teach: processing the training dataset based on normalization and augmentation. However, Le/Wang in view of McEntire does teach this limitation: (McEntire, [c. 12, l. 46-52]; In some instances, differences of chemical characteristics may be a result of different sampling depths at which the soil was sampled. To minimize the impact of such differences, the data sets may be preprocessed by one or more computing devices or manually manipulated to normalize the samples in some data sets and McEntire, [c. 59, l. 1-6]; In another embodiment, additional feature sets may be used to further augment the results. In still another embodiment, each RF model is built for a specific crop-type with the purpose of generating crop yield predictions based on the input variables supplied within that specific crop-type).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the disclosed invention to have combined the teachings of Le including optimizing plans with the teachings of Wang including identifying training data and retraining a model in order to identify relationships between input and output data and improve performance. (Wang, [117]; A structure of the terminal sub model run on the terminal is relatively small, so that the terminal sub model can be adapted to a terminal with a small computing power, and the federated machine learning can be applied to the terminal with the small computing power, thereby further expanding the application scope and application scenarios of the federated machine learning, and effectively helping a plurality of terminals to use data and perform machine learning modeling while meeting requirements of user privacy protection and data security. In addition, since a plurality of terminals are combined for federated training, accuracy and precision of the machine learning model obtained through training can be improved.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the disclosed invention to have combined the teachings of Le/Wang including optimizing a data set with the teachings of McEntire including augmenting and normalizing datasets in order to minimize impacts of sampling errors and include additional features in calculations. (McEntire, [c. 12, l. 46-52]; In some instances, differences of chemical characteristics may be a result of different sampling depths at which the soil was sampled. To minimize the impact of such differences, the data sets may be preprocessed by one or more computing devices or manually manipulated to normalize the samples in some data sets and McEntire, [c. 59, l. 1-6]; In another embodiment, additional feature sets may be used to further augment the results. In still another embodiment, each RF model is built for a specific crop-type with the purpose of generating crop yield predictions based on the input variables supplied within that specific crop-type).
Regarding Claim(s) 20, while Le/Wang teach optimizing layers of nodes and optimizing a dataset (Le, [c. 6, l. 16-20, c. 13, l. 25-30]), neither appears to explicitly teach multiple subsets of data. However, Le/Wang in view of the analogous art of McEntire (i.e., data optimization) does teach the entirety of the limitation: The system of claim 17, wherein the first layer machine learning model is configured to optimize a first subset of parameters and the second layer machine learning model is configured to optimize a second subset of parameters. (McEntire, [c. 5, l. 3-11]; An agriculture management system and method is described that integrates a supervised machine learning architecture using one or more multi-dimensional input data-sets, i.e., covariates or feature sets, including past crop yield performance to define soil chemistry characteristics and farmland environment. Input data-sets (also referred to as “features”) are ingested, pre-processed, and run through at least one or more Machine Learning (ML) training model that is used to predict outcomes of yield performance based on one or more predicted response surfaces as described by the present embodiment and McEntire, [c. 6, l. 35-39]; The illustrative system and method apply at least one data-set of soil chemistry, spatial boundaries, previous planted crop-type, cover crops and previously recorded crop-yield data-sets as independent input variables to a machine learning (ML) training model and McEntire, [c. 8, l. 43-45]; The introduction of a hyper parameter tuning loop optimizes the coefficients used in the model to optimize the estimated soil characteristic predictions).
While Le teaches optimizing plans, Le does not appear to teach machine learning or training a machine learning model. However, Le/Wang does teach: The system of claim 6, wherein the instructions, when executed, further cause the processor to train the first layer machine learning model and the second layer machine learning model based at least partially by: obtaining a training dataset including labeled data (Wang, [67]; The order size estimation model 412 can be trained using historical data that can be obtained from the customer purchase and browse history database 404 and the web feeds and external event database 408 and/or any other data source or database as described herein).
While Le/Wang teaches optimizing a data set, neither appears to teach: processing the training dataset based on normalization and augmentation. However, Le/Wang in view of McEntire does teach this limitation: (McEntire, [c. 12, l. 46-52]; In some instances, differences of chemical characteristics may be a result of different sampling depths at which the soil was sampled. To minimize the impact of such differences, the data sets may be preprocessed by one or more computing devices or manually manipulated to normalize the samples in some data sets and McEntire, [c. 59, l. 1-6]; In another embodiment, additional feature sets may be used to further augment the results. In still another embodiment, each RF model is built for a specific crop-type with the purpose of generating crop yield predictions based on the input variables supplied within that specific crop-type).
While Le teaches updating parameters to improve the output (Le, [c. 3, l. 43-51]), Le does not appear to teach an iterative training process. However, Le in view of Wang/McEntire (cited for multiple datasets) does teach: adjusting the first subset of parameters and the second subset of parameters with respect to the target production node using the processed training dataset in an iterative training process. (Wang, [71]; In addition, the order size estimation model 412 can be revised, retrained or updated in an attempt to improve its performance. The health monitor 418 can compare the performance of the customer order prediction system 400 by comparing the KPIs both before and after a change, retraining or update to the order size estimation model 412. Action can then be taken either to accept the change, retraining or update or to reject).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the disclosed invention to have combined the teachings of Le including optimizing plans with the teachings of Wang including identifying training data and retraining a model in order to identify relationships between input and output data and improve performance. (Wang, [65]; During the training of the order size estimation model 412, the feature data can be fed into one or more machine learning, artificial intelligence or other algorithms to identify the relationships between the features and the resulting effect and Wang, [71]; In addition, the order size estimation model 412 can be revised, retrained or updated in an attempt to improve its performance).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the disclosed invention to have combined the teachings of Le/Wang including optimizing a data set with the teachings of McEntire including augmenting and normalizing datasets in order to minimize impacts of sampling errors and include additional features in calculations. (McEntire, [c. 12, l. 46-52]; In some instances, differences of chemical characteristics may be a result of different sampling depths at which the soil was sampled. To minimize the impact of such differences, the data sets may be preprocessed by one or more computing devices or manually manipulated to normalize the samples in some data sets and McEntire, [c. 59, l. 1-6]; In another embodiment, additional feature sets may be used to further augment the results. In still another embodiment, each RF model is built for a specific crop-type with the purpose of generating crop yield predictions based on the input variables supplied within that specific crop-type).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the disclosed invention to have combined the teachings of Le/Wang including optimizing layers of machine learning and optimizing a dataset with the teachings of McEntire including multiple-dataset optimization in order to optimize a plurality of covariates or feature sets (McEntire, [c. 5, l. 3-8]; An agriculture management system and method is described that integrates a supervised machine learning architecture using one or more multi-dimensional input data-sets, i.e., covariates or feature sets, including past crop yield performance to define soil chemistry characteristics and farmland environment).
Examining Claims with Respect to Prior Art
Claims 8 and 16, though directed to non-statutory subject matter, are deemed to define over the currently known prior art under 35 USC 102 and 103. Based upon the claim limitations, Examiner finds that there is no currently known prior art that discloses the features relating to: “The system of claim 7, wherein adjusting the first subset of parameters and the second subset of parameters comprises: during each iteration step of the iterative training process: generating a revised first subset of parameters to increase the adherence percentage with respect to the total production of the target production node, generating a revised second subset of parameters to decrease the total downtime for the one or more sub-nodes of the target production node based on the revised first subset of parameters, determining whether the iterative training process is complete based on at least one of: a predetermined number of iterations, a maximum of the adherence percentage, or a minimum of the total downtime; and evaluating the first layer machine learning model and the second layer machine learning model based on one or more evaluation metrics.”
The reason for withdrawing the 35 USC 103 rejection of claims 8 and 16 in the instant application is that the prior art of record fails to teach the overall combination as claimed. Therefore, it would not have been obvious to one of ordinary skill in the art to modify the prior art to meet the combination above without impermissible hindsight, and one of ordinary skill would have had no reason to do so. Upon further searching, the examiner could not identify any prior art teaching these limitations. The prior art of record, alone or in combination, neither anticipates, reasonably teaches, nor renders obvious the Applicant’s claimed invention.
Known Prior Art (patent)
US 11429912 B1
Le et al.
US 20250371414 A1
Wang et al.
US 20240289828 A1
Zhang et al.
US 11704581 B1
McEntire et al.
US 20150317646 A1
Bhattacharya et al.
US 20250173672 A1
Rezaeian et al.
US 20110004506 A1
May et al.
US 20230096633 A1
Thayaparan et al.
US 20240242145 A1
Luo et al.
US 20240020556 A1
Zou et al.
Known Prior Art (NPL)
M. Leißau and C. Laroque, "Reverse Engineering the Future – An Automated Backward Simulation Approach to on-Time Production in the Semiconductor Industry," 2023 Winter Simulation Conference (WSC), San Antonio, TX, USA, 2023, pp. 2040-2051.
Known Prior Art (foreign)
WO 2022/005459 A1
Wang et al.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JEREMY L GUNN whose telephone number is (571)270-1728. The examiner can normally be reached Monday - Friday 6:30-4:30.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jerry O'Connor can be reached at (571) 272-6787. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JEREMY L GUNN/Examiner, Art Unit 3624