DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant's arguments filed 08/19/2025 have been fully considered but they are not persuasive.
Applicant argues the technical advantages are not achieved or suggested by the systems or methods disclosed in De Masi and Cella.
Examiner respectfully disagrees with Applicant’s argument. Only limitations that are explicitly recited in the claims are addressed by the prior art rejections; technical advantages that are not explicitly recited need not be taught or suggested by the prior art.
Applicant further argues De Masi fails to teach the root mean square error (RMSE) of the predictive model, applying a supervised machine learning technique that assigns separate neural networks to each of the four modules, and that De Masi has practical shortcomings.
Examiner respectfully disagrees with Applicant’s argument. The root mean square error (RMSE) of the predictive model is not claimed and is therefore not addressed by the prior art rejection. Additionally, Examiner explicitly recites that De Masi teaches four modules; see pages three and four of the Office action mailed 2/21/2025. Examiner notes that no details regarding the explicit functioning of the modules are claimed.
Applicant further argues Cella's system is not structured or configured to predict corrosion in pipelines in the manner claimed in the subject application. There is no disclosure in Cella of a data circuit or system configured to analyze corrosion-specific data within pipelines. Therefore, the intended use, technical purpose, and structural configuration of Cella differ fundamentally from those of the claimed subject matter. Cella's teachings do not render the claimed subject matter obvious, either alone or in combination with De Masi.
Examiner respectfully disagrees with Applicant’s argument. There is no recitation in the claims of predicting corrosion, therefore applicant’s argument is moot.
Applicant further argues Amer does not clearly disclose a dedicated concatenate layer structure such as that employed in the subject application. Furthermore, Amer is directed toward identifying corrosion under insulation (CUI) in stationary structures, which is a fundamentally different problem than the one addressed in the subject application, namely, the prediction of corrosion within pipelines, which often operate in dynamic and high-pressure environments.
Examiner respectfully disagrees with Applicant’s argument. Under broadest reasonable interpretation, the amalgamation created using concatenated variables taught by Amer reads on the claimed concatenated layers. In response to applicant's argument that Amer is nonanalogous art, it has been held that a prior art reference must either be in the field of the inventor's endeavor or, if not, then be reasonably pertinent to the particular problem with which the inventor was concerned, in order to be relied upon as a basis for rejection of the claimed invention. See In re Oetiker, 977 F.2d 1443, 24 USPQ2d 1443 (Fed. Cir. 1992). In this case, Amer is relied upon because it is solving the problem of manipulating data so that it can be effectively utilized by a neural network.
Applicant further argues Zheng does not depict any explicit hidden layers, nor does the text provide a clear description of the complete layer structure.
Applicant’s argument is persuasive. The rejection of claims 12, 32, 52, and 72 under 35 USC 103 is withdrawn.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1-10, 13-30, 33-50, 53-70, and 73-80 is/are rejected under 35 U.S.C. 103 as being unpatentable over De Masi et al. (De Masi, Giulia, et al. "A neural network predictive model of pipeline internal corrosion profile." Proceedings of the 1st International Conference on Systems Informatics, Modeling and Simulation. 2014, provided by applicant) in view of Cella et al. (US 20180284758 A1).
Regarding claim 1, De Masi teaches A method for predicting pipeline corrosion (Abstract, “metal loss and corrosion rate”) comprising steps of:
generating a predictive model (100) based on a neural network (Fig. 1) comprising:
obtaining a set of input data (page 20, “Therefore, input variables are of three types: Geometrical pipeline characteristics (elevation, inclination and concavity) Fluid dynamic multiphase variables (flow regime, pressure, gas flow, total flow, liquid velocity, gas velocity) Deterministic models (de Waard and NORSOK)”);
providing four modules (page 19, “Multiphase flow modelling is based on OLGA software”; “The pipeline has been characterized by its geometrical features: elevation, inclination and concavity.”; de Waard model; and NORSOK model) relevant to pipeline corrosion including a water condensation rate module (106) (page 19, “Water plays a crucial role for corrosion, enhancing corrosion rate depending on its hold-up and velocity, gas flow rate, pressure and temperature and pipeline inclination. In our specific case, water can be considered a phase separated from gas, at the bottom of pipe.”), a flow regime module (108) (page 19, ”This program provides information on temperature profile along the pipeline, pressure profile, velocity profiles of each phase, phase hold-ups and flow regimes, given boundary pressure, temperature values and flow composition.”; “Fluid regime is described by a discrete number as follows: 1: stratified flow 2: annular flow 3: slug flow 4: bubble flow”), a corrosion rate module (110) (de Waard model), and an operating data module (112) (page 19, “The pipeline has been characterized by its geometrical features: elevation, inclination and concavity.”; Equations 1 and 2, Fig. 2);
dividing the input data and feeding the divided input data to said four modules (see II. Methodology: A-C; page 20, “The FNN integrates all the above quantities as input values. Therefore, input variables are of three types: Geometrical pipeline characteristics (elevation, inclination and concavity) Fluid dynamic multiphase variables (flow regime, pressure, gas flow, total flow, liquid velocity, gas velocity) Deterministic models (de Waard and NORSOK)”);
concatenating said four modules to output a depth of metal loss rate (122) (page 20, “The FNN integrates all the above quantities as input values”; Figs. 6-7);
applying a supervised machine learning technique for training the predictive model (100) generated from step i) (De Masi: page 20, “Several training algorithms were tested; finally the Levenberg-Marquardt back propagation algorithm was selected as the one producing best prediction[15]. Two (or more) layer fitting networks can fit any finite input-output nonlinear relationship arbitrarily well, given enough hidden neurons”);
applying the predictive model (100) from step ii) to other set of the input data in order to predict a depth of metal loss rate (122) (page 21, “Three quantities are predicted by FNN: CR, metal loss and area of defects. For each variable, a FNN is implemented. CR value derives from the dataset of comparison between 2005 and 2012.”).
The referenced Levenberg-Marquardt back propagation algorithm is a supervised machine learning technique, as evidenced by Denizhan (Onur Denizhan, Comparison of different supervised learning algorithms for position analysis of the slider-crank mechanism, Alexandria Engineering Journal, Volume 92, 2024, Pages 39-49, ISSN 1110-0168, https://doi.org/10.1016/j.aej.2024.02.055.) (Abstract: “the application of following three different supervised learning algorithms to the position analysis of the slider-crank (R-RRT) mechanism is investigated using analytical solution datasets: the Levenberg-Marquardt Backpropagation (LM) algorithm, Bayesian Regularization (BR) algorithm, and Scaled Conjugate Gradient Descent (SCG) algorithm.”).
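For illustration only, supervised training by back propagation, of the kind performed by the Levenberg-Marquardt algorithm referenced above, may be sketched as follows (a minimal NumPy example on hypothetical data; plain gradient descent is substituted for the Levenberg-Marquardt update for brevity, and the variable names and dataset are examiner-supplied, not drawn from the references):

```python
# Minimal sketch of supervised learning with back propagation (hypothetical).
import numpy as np

rng = np.random.default_rng(0)

# Toy supervised dataset: inputs x paired with known target outputs y.
x = rng.uniform(-1.0, 1.0, (200, 3))       # e.g. pressure, velocity, temperature
y = (x ** 2).sum(axis=1, keepdims=True)    # known outcome the network must learn

# One hidden layer with tanh activation.
w1 = rng.normal(0.0, 0.5, (3, 16)); b1 = np.zeros(16)
w2 = rng.normal(0.0, 0.5, (16, 1)); b2 = np.zeros(1)

lr = 0.05
for _ in range(2000):
    h = np.tanh(x @ w1 + b1)               # forward pass
    pred = h @ w2 + b2
    err = pred - y                         # supervised error signal
    # Back propagation: gradients of mean squared error w.r.t. each weight.
    g_w2 = h.T @ err / len(x); g_b2 = err.mean(axis=0)
    g_h = err @ w2.T * (1.0 - h ** 2)
    g_w1 = x.T @ g_h / len(x); g_b1 = g_h.mean(axis=0)
    w1 -= lr * g_w1; b1 -= lr * g_b1
    w2 -= lr * g_w2; b2 -= lr * g_b2

mse = float(((np.tanh(x @ w1 + b1) @ w2 + b2 - y) ** 2).mean())
print(round(mse, 4))
```

The defining feature of supervised learning shown here is that the error is computed against known target outputs; Levenberg-Marquardt differs only in how the weight update is computed from that same error.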
De Masi does not teach the method, wherein, step ii), applying the supervised machine learning technique includes applying a first neural network (600) to the water condensation rate module (106), applying a second neural network (700) to the flow regime module (108), and applying a third neural network (800) to the corrosion rate module (110), in order to obtain initial weights of each module.
Cella teaches an analogous method of utilizing a neural network to analyze input data, wherein, step ii), applying the supervised machine learning technique ([0905] lines 1-10, “In embodiments, the foregoing neural network may be configured to connect with a DAQ instrument and other data collectors that may receive analog signals from one or more sensors. The foregoing neural networks may also be configured to interface with, connect to, or integrate with expert systems that can be local and/or available through one or more cloud networks. In embodiments, FIGS. 110 through 136 depict exemplary neural networks and FIG. 109 depicts a legend showing the various components of the neural networks depicted throughout FIGS. 110 to 136.”) includes applying a first neural network (600) to the water condensation rate module (106) ([1272] on page 204 left column, “A moisture sensing device can detect the liquid, condensation or H2O content of the target or its environment.”), applying a second neural network (700) to the flow regime module (108) ([1457] lines 4-7, “Vibration sensors, flow sensors, pressure sensors, temperature sensors, acoustic sensors, and the like may be utilized by the system to generate data regarding the operation of the fluid pumping system.”), and applying a third neural network (800) to the corrosion rate module (110) ([2140] lines 1-4, “In embodiments, the methods and systems disclosed herein may include, connect with or be integrated with sensors that may monitor interconnections for corrosion or other conditions”), in order to obtain initial weights of each module ([0211] lines 28-51, “Where sufficient understanding of the underlying structure or behavior of a system is not known, insufficient data is not available, or in other cases where preferred for various reasons, machine learning may also be undertaken in the absence of an underlying model; that is, input sources may be weighted, structured, or the like within a machine learning facility 
without regard to any a priori understanding of structure, and outcomes (such as those based on measures of success at accomplishing various desired objectives) can be serially fed to the machine learning system to allow it to learn how to achieve the targeted objectives. For example, the system may learn to recognize faults, to recognize patterns, to develop models or functions, to develop rules, to optimize performance, to minimize failure rates, to optimize profits, to optimize resource utilization, to optimize flow (such as flow of traffic), or to optimize many other parameters that may be relevant to successful outcomes (such as outcomes in a wide range of environments). Machine learning may use genetic programming techniques, such as promoting or demoting one or more input sources, structures, data types, objects, weights, nodes, links, or other factors based on feedback (such that successful elements emerge over a series of generations).”)
The separate neural networks are taught to be applied to separate modules ([0920] “In embodiments, methods and systems described herein that involve an expert system or self-organization capability may use a modular neural network, which may comprise a series of independent neural networks (such as ones of various types described herein) that are moderated by an intermediary. Each of the independent neural networks in the modular neural network may work with separate inputs, accomplishing subtasks that make up the task the modular network as whole is intended to perform. For example, a modular neural network may comprise a recurrent neural network for pattern recognition, such as to recognize what type of industrial machine is being sensed by one or more sensors that are provided as input channels to the modular network and an RBF neural network for optimizing the behavior of the machine once understood. The intermediary may accept inputs of each of the individual neural networks, process them, and create output for the modular neural network, such an appropriate control parameter, a prediction of state, or the like.”). The independent neural networks of the modular neural network are the first, second, and third neural networks. One of ordinary skill in the art would recognize that the modular network working with separate inputs (e.g. the data from each module) is the application of the first, second, and third neural networks to their respective modules. Cella recites several embodiments, the elements of which may be configured and combined in manners that would be obvious to one of ordinary skill in the art ([2186] “While the foregoing written description enables one skilled in the art to make and use what is considered presently to be the best mode thereof, those skilled in the art will understand and appreciate the existence of variations, combinations, and equivalents of the specific embodiment, method, and examples herein. 
The disclosure should therefore not be limited by the above described embodiment, method, and examples, but by all embodiments and methods within the scope and spirit of the disclosure.”)
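The modular arrangement described in paragraph [0920], independent subnetworks working with separate inputs and an intermediary combining their outputs, may be sketched as follows (hypothetical NumPy illustration; the module names, sizes, and untrained weights are examiner-supplied for illustration and do not appear in the references):

```python
# Sketch of a modular neural network: one independent subnetwork per input
# group, with an intermediary concatenating their outputs (hypothetical).
import numpy as np

rng = np.random.default_rng(1)

def subnet(n_in, n_hidden=8):
    """Random weights for one independent subnetwork (untrained, illustrative)."""
    return (rng.normal(0, 0.3, (n_in, n_hidden)),
            rng.normal(0, 0.3, (n_hidden, 2)))

def run_subnet(params, x):
    w1, w2 = params
    return np.tanh(x @ w1) @ w2          # each subnetwork emits 2 features

# Four independent subnetworks, one per module of input data.
modules = {name: subnet(n_in) for name, n_in in
           [("condensation", 4), ("flow_regime", 6),
            ("corrosion_rate", 5), ("operating", 3)]}

def predict(inputs):
    """Intermediary: concatenate all subnetwork outputs into one prediction."""
    feats = np.concatenate([run_subnet(modules[k], v) for k, v in inputs.items()])
    w_out = np.ones(len(feats)) / len(feats)  # stand-in for a trained output layer
    return float(feats @ w_out)

sample = {"condensation": rng.normal(size=4), "flow_regime": rng.normal(size=6),
          "corrosion_rate": rng.normal(size=5), "operating": rng.normal(size=3)}
print(predict(sample))
```

Each subnetwork sees only its own input group, and only the intermediary combines their outputs, which is the structural point the examiner relies on in paragraph [0920].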
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of De Masi to include the first, second, and third neural networks of Cella because it would yield predictable and advantageous results. The neural networks of Cella are taught to be used with, alongside, or in place of some functions of a single neural network (see paragraph [0921]), and thus would yield predictable results when integrated into the neural network model of De Masi. The application of separate neural networks to each of the modules, which represent different types of sensor information, would yield the advantageous result of having each neural network trained on one type of sensor information, thereby increasing the accuracy of the model.
Regarding claim 2, De Masi in view of Cella teaches The method according to Claim 1, wherein step i) the input data comprises an empirical data (102) (Cella: [0967] “In embodiments, a monitoring system for data collection in an industrial environment may comprise: a data collector communicatively coupled to a plurality of input channels; a data storage structured to store a collector route template, sensor specifications for sensors that correspond to the input channels, wherein the collector route template comprises a sensor collection routine; a data acquisition circuit structured to interpret a plurality of detection values, each of the plurality of detection values corresponding to at least one of the input channels; and a data analysis circuit structured to receive output data from the plurality of input channels and evaluate the received output data with respect to a rule, wherein the data collector is configured to modify the sensor collection routine based on the application of the rule to the received output data.”) and a pipeline variable (104) (De Masi: page 20, “The FNN integrates all the above quantities as input values. Therefore, input variables are of three types: Geometrical pipeline characteristics (elevation, inclination and concavity) Fluid dynamic multiphase variables (flow regime, pressure, gas flow, total flow, liquid velocity, gas velocity) Deterministic models (de Waard and NORSOK)”).
Regarding claim 3, De Masi in view of Cella teaches The method according to Claim 2, wherein the empirical data (102) is selected from distances, pipe diameters, export pressures, export temperatures, gas flow rates, water flow rates, condensate flow rates, amounts of CO2, amounts of H2S, pipeline corrosion allowance, pipeline design life, pipeline nominal thickness, concrete thickness, insulation thickness, or combinations thereof (Cella: [1048] lines 19-37, “The data collector 10804 may include the data collection circuit 10808. The ambient environment condition or local sensors include one or more of a noise sensor, a temperature sensor, a flow sensor, a pressure sensor, a chemical sensor, a vibration sensor, an acceleration sensor, an accelerometer, a Pressure sensor, a force sensor, a position sensor, a location sensor, a velocity sensor, a displacement sensor, a temperature sensor, a thermographic sensor, a heat flux sensor, a tachometer sensor, a motion sensor, a magnetic field sensor, an electrical field sensor, a galvanic sensor, a current sensor, a flow sensor, a gaseous flow sensor, a non-gaseous fluid flow sensor, a heat flow sensor, a particulate flow sensor, a level sensor, a proximity sensor, a toxic gas sensor, a chemical sensor, a CBRNE sensor, a pH sensor, a hygrometer, a moisture sensor, a densitometer, an imaging sensor, a camera, an SSR, a triax probe, an ultrasonic sensor, a touch sensor, a microphone, a capacitive sensor, a strain gauge, an EMF meter, and the like.”; [2148] lines 8-13, “By way of these examples, the one or more sensors may be configured to measure partial pressure or particle count when sensing internal and/or external emission such as diatomic hydrogen, carbon dioxide, carbon monoxide, and other combustion byproducts.”).
Regarding claim 4, De Masi in view of Cella teaches The method according to Claim 2, wherein the pipeline variable (104) is obtained from the empirical data (102) by means selected from theoretical equation, algorithm, software simulation, or machine learning (De Masi: Equations 1 and 2; Cella: position sensor, location sensor). One of ordinary skill in the art would recognize that the pipeline inclination and concavity are determined by the equations and positional data, such as from the position or location sensors of Cella.
Regarding claim 5, De Masi in view of Cella teaches The method according to Claim 4, wherein the pipeline variable (104) is selected from gas velocities, liquid densities, liquid velocities, liquid viscosities, pressures, superficial gas velocities, superficial liquid velocities, temperatures, or combinations thereof (De Masi: “Fluid dynamic multiphase variables (flow regime, pressure, gas flow, total flow, liquid velocity, gas velocity)”).
Regarding claim 6, De Masi in view of Cella teaches The method according to Claim 2, wherein step i) the empirical data (102) and the pipeline variable (104) including gas velocities, pressures, temperatures and pipe diameters (Cella: sensors of paragraph [1048], including a temperature sensor, a flow sensor, a pressure sensor, a gaseous flow sensor, a hygrometer, a moisture sensor) are fed to the water condensation rate module (106).
Regarding claim 7, De Masi in view of Cella teaches The method according to Claim 2, wherein step i) the empirical data (102) and the pipeline variable (104) including liquid densities, liquid viscosities, superficial gas velocities, superficial liquid velocities, temperatures and pipe diameters are fed to the flow regime module (108) (Cella: [1457] lines 4-7, “Vibration sensors, flow sensors, pressure sensors, temperature sensors, acoustic sensors, and the like may be utilized by the system to generate data regarding the operation of the fluid pumping system.”; De Masi: page 19 “The pipeline has been characterized by its geometrical features: elevation, inclination and concavity.”).
Regarding claim 8, De Masi in view of Cella teaches The method according to Claim 2, wherein step i) the empirical data (102) and the pipeline variable (104) including liquid velocities, liquid viscosities, pressures, CO2 pressures and temperatures are fed to the corrosion rate module (110) (De Masi: “Fluid dynamic multiphase variables (flow regime, pressure, gas flow, total flow, liquid velocity, gas velocity)”; Cella: sensors of paragraph [1048], including a flow sensor, a gaseous flow sensor, a non-gaseous fluid flow sensor, a pressure sensor, a temperature sensor; [2148] lines 8-13, “By way of these examples, the one or more sensors may be configured to measure partial pressure or particle count when sensing internal and/or external emission such as diatomic hydrogen, carbon dioxide, carbon monoxide, and other combustion byproducts.”).
Regarding claim 9, De Masi in view of Cella teaches The method according to Claim 2, wherein step i) the empirical data (102) is fed to the operating data module (112) (De Masi: Equations 1 and 2, Fig. 2).
Regarding claim 10, De Masi in view of Cella teaches The method according to Claim 1, wherein step i) the water condensation rate module (106), the flow regime module (108), the corrosion rate module (110) and the operating data module (112) comprise n hidden layers, where n is selected from an integer of 2 to 10 (Cella: [0927] lines 1-7, “In embodiments, methods and systems described herein that involve an expert system or self-organization capability may use a probabilistic neural network (“PNN”), which in embodiments may comprise a multi-layer (e.g., four-layer) feedforward neural network, where layers may include input layers, hidden layers, pattern/summation layers and an output layer.”). The neural network having multiple hidden layers would include having 2 to 10 hidden layers.
Even if De Masi in view of Cella does not explicitly teach 2 to 10 hidden layers, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to select the appropriate number of hidden layers for each module, since it has been held that where the general conditions of a claim are disclosed in the prior art, discovering the optimum or workable ranges involves only routine skill in the art. In re Aller, 105 USPQ 233.
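The routine nature of selecting a workable hidden-layer count may be illustrated by the following sketch, in which the number of hidden layers is an ordinary design parameter (hypothetical NumPy code; the widths and names are examiner-supplied and not taken from the references):

```python
# Sketch: a feedforward network whose hidden-layer count n (2 to 10) is a
# configurable design parameter (hypothetical illustration).
import numpy as np

def build_network(n_inputs, n_hidden_layers, width=8, seed=0):
    """Return a list of weight matrices for a network with the given depth."""
    if not 2 <= n_hidden_layers <= 10:
        raise ValueError("n must be an integer from 2 to 10")
    rng = np.random.default_rng(seed)
    sizes = [n_inputs] + [width] * n_hidden_layers + [1]
    return [rng.normal(0, 0.3, (a, b)) for a, b in zip(sizes[:-1], sizes[1:])]

def forward(layers, x):
    for w in layers[:-1]:
        x = np.tanh(x @ w)        # hidden layers
    return x @ layers[-1]         # linear output layer

net = build_network(n_inputs=5, n_hidden_layers=3)
y = forward(net, np.ones(5))
print(y.shape)  # prints (1,)
```

Changing `n_hidden_layers` within the claimed 2-to-10 range alters only the depth of the loop above, underscoring that the range is a routine optimization of a disclosed general condition.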
Regarding claim 13, De Masi in view of Cella teaches The method according to Claim 1, wherein step ii) the supervised machine learning technique is selected from a back propagation means, a gradient descent means or a logistic regression means (De Masi: page 20, “Several training algorithms were tested; finally the Levenberg-Marquardt back propagation algorithm was selected as the one producing best prediction”). The Levenberg-Marquardt back propagation algorithm is the back propagation means.
Regarding claim 14, De Masi in view of Cella teaches The method according to Claim 13, wherein the supervised machine learning technique is a back propagation means (De Masi: page 20, “Several training algorithms were tested; finally the Levenberg-Marquardt back propagation algorithm was selected as the one producing best prediction”). The Levenberg-Marquardt back propagation algorithm is the back propagation means.
Regarding claim 15, De Masi in view of Cella teaches The method according to Claim 1, wherein step ii) the first neural network (600) is an artificial neural network (ANN) (De Masi: Abstract: artificial neural network (ANN); Figs. 6 and 7; Cella: [0904] “References to a neural net throughout this disclosure should be understood to encompass a wide range of different types of neural networks, machine learning systems, artificial intelligence systems, and the like, such as […] artificial neural networks”).
Regarding claim 16, De Masi in view of Cella teaches The method according to Claim 15, wherein the first neural network (600) (Cella: modular neural network) comprises steps of:
obtaining the empirical data (102) (Cella: [0967] “In embodiments, a monitoring system for data collection in an industrial environment may comprise: a data collector communicatively coupled to a plurality of input channels; a data storage structured to store a collector route template, sensor specifications for sensors that correspond to the input channels, wherein the collector route template comprises a sensor collection routine; a data acquisition circuit structured to interpret a plurality of detection values, each of the plurality of detection values corresponding to at least one of the input channels; and a data analysis circuit structured to receive output data from the plurality of input channels and evaluate the received output data with respect to a rule, wherein the data collector is configured to modify the sensor collection routine based on the application of the rule to the received output data.”) and the pipeline variable (104) (De Masi: page 20, “The FNN integrates all the above quantities as input values. Therefore, input variables are of three types: Geometrical pipeline characteristics (elevation, inclination and concavity) Fluid dynamic multiphase variables (flow regime, pressure, gas flow, total flow, liquid velocity, gas velocity) Deterministic models (de Waard and NORSOK)”).
generating an output layer (Cella: [0927] lines 1-7, “In embodiments, methods and systems described herein that involve an expert system or self-organization capability may use a probabilistic neural network (“PNN”), which in embodiments may comprise a multi-layer (e.g., four-layer) feedforward neural network, where layers may include input layers, hidden layers, pattern/summation layers and an output layer.”) by calculating the empirical data (102) and the pipeline variable (104) with a physical model of water condensation rate (Cella: [0318] lines 1-5, “In embodiments, the platform 100 may include the local data collection system 102 deployed in the environment 104 using machine learning to enable derivation-based learning outcomes from computers without the need to program them.”);
training the first neural network (600) ([0910] lines 1-11, “In many embodiments, an expert system or neural network may be trained, such as by a human operator or supervisor, or based on a data set, model, or the like. Training may include presenting the neural network with one or more training data sets that represent values, such as sensor data, event data, parameter data, and other types of data (including the many types described throughout this disclosure), as well as one or more indicators of an outcome, such as an outcome of a process, an outcome of a calculation, an outcome of an event, an outcome of an activity, or the like.”) with one or more hidden layers (Cella: [0927] lines 1-7, “In embodiments, methods and systems described herein that involve an expert system or self-organization capability may use a probabilistic neural network (“PNN”), which in embodiments may comprise a multi-layer (e.g., four-layer) feedforward neural network, where layers may include input layers, hidden layers, pattern/summation layers and an output layer.”) and a back propagation means (Cella: [0924] lines 1-7, “In embodiments, methods and systems described herein that involve an expert system or self-organization capability may use a feedforward, back-propagation multi-layer perceptron (MLP) neural network, such as for handling one or more remote sensing applications, such as for taking inputs from sensors distributed throughout various industrial environments.”)
transforming weights of one or more hidden layers to be the initial weights of the water condensation rate module (106) (Cella: [1005] lines 15-23, “The expert system may (optionally using a neural net, machine learning system, deep learning system, or the like, which may occur under supervision by one or more supervisors (human or automated)) intelligently manage bands aligned with different goals and assign weights, parameter modifications, or recommendations based on a factor, such as a bias towards one goal or a compromise to allow better alignment with all goals being tracked, for example.”; [1674] “A further embodiment of any of the foregoing embodiments of the present disclosure may include situations wherein the expert system is a machine learning system that iteratively configures at least one of a set of inputs, a set of weights, and a set of functions based on feedback relating to the data channel.”). The configuring of weights is the transforming weights.
Regarding claim 17, De Masi in view of Cella teaches The method according to Claim 1, wherein step ii) the second neural network (600) is an artificial neural network (ANN) (De Masi: Abstract: artificial neural network (ANN); Figs. 6 and 7; Cella: [0904] “References to a neural net throughout this disclosure should be understood to encompass a wide range of different types of neural networks, machine learning systems, artificial intelligence systems, and the like, such as […] artificial neural networks”).
Regarding claim 18, De Masi in view of Cella teaches The method according to Claim 15, wherein the second neural network (700) (Cella: modular neural network) comprises steps of:
obtaining the empirical data (102) (Cella: [0967] “In embodiments, a monitoring system for data collection in an industrial environment may comprise: a data collector communicatively coupled to a plurality of input channels; a data storage structured to store a collector route template, sensor specifications for sensors that correspond to the input channels, wherein the collector route template comprises a sensor collection routine; a data acquisition circuit structured to interpret a plurality of detection values, each of the plurality of detection values corresponding to at least one of the input channels; and a data analysis circuit structured to receive output data from the plurality of input channels and evaluate the received output data with respect to a rule, wherein the data collector is configured to modify the sensor collection routine based on the application of the rule to the received output data.”) and the pipeline variable (104) (De Masi: page 20, “The FNN integrates all the above quantities as input values. Therefore, input variables are of three types: Geometrical pipeline characteristics (elevation, inclination and concavity) Fluid dynamic multiphase variables (flow regime, pressure, gas flow, total flow, liquid velocity, gas velocity) Deterministic models (de Waard and NORSOK)”).
generating an output layer (Cella: [0927] lines 1-7, “In embodiments, methods and systems described herein that involve an expert system or self-organization capability may use a probabilistic neural network (“PNN”), which in embodiments may comprise a multi-layer (e.g., four-layer) feedforward neural network, where layers may include input layers, hidden layers, pattern/summation layers and an output layer.”) by calculating the empirical data (102) and the pipeline variable (104) with a physical model of water condensation rate (Cella: [0318] lines 1-5, “In embodiments, the platform 100 may include the local data collection system 102 deployed in the environment 104 using machine learning to enable derivation-based learning outcomes from computers without the need to program them.”);
training the second neural network (700) ([0910] lines 1-11, “In many embodiments, an expert system or neural network may be trained, such as by a human operator or supervisor, or based on a data set, model, or the like. Training may include presenting the neural network with one or more training data sets that represent values, such as sensor data, event data, parameter data, and other types of data (including the many types described throughout this disclosure), as well as one or more indicators of an outcome, such as an outcome of a process, an outcome of a calculation, an outcome of an event, an outcome of an activity, or the like.”) with one or more hidden layers (Cella: [0927] lines 1-7, “In embodiments, methods and systems described herein that involve an expert system or self-organization capability may use a probabilistic neural network (“PNN”), which in embodiments may comprise a multi-layer (e.g., four-layer) feedforward neural network, where layers may include input layers, hidden layers, pattern/summation layers and an output layer.”) and a back propagation means (Cella: [0924] lines 1-7, “In embodiments, methods and systems described herein that involve an expert system or self-organization capability may use a feedforward, back-propagation multi-layer perceptron (MLP) neural network, such as for handling one or more remote sensing applications, such as for taking inputs from sensors distributed throughout various industrial environments.”)
transforming weights of one or more hidden layers to be the initial weights of the flow regime module (108) (Cella: [1005] lines 15-23, “The expert system may (optionally using a neural net, machine learning system, deep learning system, or the like, which may occur under supervision by one or more supervisors (human or automated)) intelligently manage bands aligned with different goals and assign weights, parameter modifications, or recommendations based on a factor, such as a bias towards one goal or a compromise to allow better alignment with all goals being tracked, for example.”; [1674] “A further embodiment of any of the foregoing embodiments of the present disclosure may include situations wherein the expert system is a machine learning system that iteratively configures at least one of a set of inputs, a set of weights, and a set of functions based on feedback relating to the data channel.”). The configuring of the weights reads on the claimed transforming of weights.
Regarding claim 19, De Masi in view of Cella teaches The method according to Claim 1, wherein step ii) the third neural network (800) is an artificial neural network (ANN) (De Masi: Abstract: artificial neural network (ANN); Figs. 6 and 7; Cella: [0904] “References to a neural net throughout this disclosure should be understood to encompass a wide range of different types of neural networks, machine learning systems, artificial intelligence systems, and the like, such as […] artificial neural networks”).
Regarding claim 20, De Masi in view of Cella teaches The method according to Claim 15, wherein the third neural network (800) (Cella: modular neural network) comprises steps of:
obtaining the empirical data (102) (Cella: [0967] “In embodiments, a monitoring system for data collection in an industrial environment may comprise: a data collector communicatively coupled to a plurality of input channels; a data storage structured to store a collector route template, sensor specifications for sensors that correspond to the input channels, wherein the collector route template comprises a sensor collection routine; a data acquisition circuit structured to interpret a plurality of detection values, each of the plurality of detection values corresponding to at least one of the input channels; and a data analysis circuit structured to receive output data from the plurality of input channels and evaluate the received output data with respect to a rule, wherein the data collector is configured to modify the sensor collection routine based on the application of the rule to the received output data.”) and the pipeline variable (104) (De Masi: page 20, “The FNN integrates all the above quantities as input values. Therefore, input variables are of three types: Geometrical pipeline characteristics (elevation, inclination and concavity) Fluid dynamic multiphase variables (flow regime, pressure, gas flow, total flow, liquid velocity, gas velocity) Deterministic models (de Waard and NORSOK)”);
generating an output layer (Cella: [0927] lines 1-7, “In embodiments, methods and systems described herein that involve an expert system or self-organization capability may use a probabilistic neural network (“PNN”), which in embodiments may comprise a multi-layer (e.g., four-layer) feedforward neural network, where layers may include input layers, hidden layers, pattern/summation layers and an output layer.”) by calculating the empirical data (102) and the pipeline variable (104) with a physical model of water condensation rate (Cella: [0318] lines 1-5, “In embodiments, the platform 100 may include the local data collection system 102 deployed in the environment 104 using machine learning to enable derivation-based learning outcomes from computers without the need to program them.”);
training the third neural network (800) (Cella: [0910] lines 1-11, “In many embodiments, an expert system or neural network may be trained, such as by a human operator or supervisor, or based on a data set, model, or the like. Training may include presenting the neural network with one or more training data sets that represent values, such as sensor data, event data, parameter data, and other types of data (including the many types described throughout this disclosure), as well as one or more indicators of an outcome, such as an outcome of a process, an outcome of a calculation, an outcome of an event, an outcome of an activity, or the like.”) with one or more hidden layers (Cella: [0927] lines 1-7, “In embodiments, methods and systems described herein that involve an expert system or self-organization capability may use a probabilistic neural network (“PNN”), which in embodiments may comprise a multi-layer (e.g., four-layer) feedforward neural network, where layers may include input layers, hidden layers, pattern/summation layers and an output layer.”) and a back propagation means (Cella: [0924] lines 1-7, “In embodiments, methods and systems described herein that involve an expert system or self-organization capability may use a feedforward, back-propagation multi-layer perceptron (MLP) neural network, such as for handling one or more remote sensing applications, such as for taking inputs from sensors distributed throughout various industrial environments.”);
transforming weights of one or more hidden layers to be the initial weights of the corrosion rate module (110) (Cella: [1005] lines 15-23, “The expert system may (optionally using a neural net, machine learning system, deep learning system, or the like, which may occur under supervision by one or more supervisors (human or automated)) intelligently manage bands aligned with different goals and assign weights, parameter modifications, or recommendations based on a factor, such as a bias towards one goal or a compromise to allow better alignment with all goals being tracked, for example.”; [1674] “A further embodiment of any of the foregoing embodiments of the present disclosure may include situations wherein the expert system is a machine learning system that iteratively configures at least one of a set of inputs, a set of weights, and a set of functions based on feedback relating to the data channel.”). The configuring of the weights reads on the claimed transforming of weights.
Regarding claim 21, De Masi teaches A machine-learning system configured to predict pipeline corrosion (Abstract) comprising one or more receiving sections configured to acquire (page 19, “The neural network model here proposed integrates geometrical characteristics of a pipeline (an application case is considered), corrosion deterministic models and simulations of multiphase flow velocity and transport, as schematized in Figure 1”) input data from one or more pipelines (page 21, “In the present study an application to a pipeline 20 km long in Mediterranean Sea is investigated”); to perform a prediction of pipeline corrosion comprising steps of:
generating a predictive model (100) based on a neural network (Fig. 1) comprising:
obtaining a set of input data (page 20, “Therefore, input variables are of three types: Geometrical pipeline characteristics (elevation, inclination and concavity) Fluid dynamic multiphase variables (flow regime, pressure, gas flow, total flow, liquid velocity, gas velocity) Deterministic models (de Waard and NORSOK)”);
providing four modules (page 19, “Multiphase flow modelling is based on OLGA software”; “The pipeline has been characterized by its geometrical features: elevation, inclination and concavity.”; de Waard model; and NORSOK model) relevant to pipeline corrosion including a water condensation rate module (106) (page 19, “Water plays a crucial role for corrosion, enhancing corrosion rate depending on its hold-up and velocity, gas flow rate, pressure and temperature and pipeline inclination. In our specific case, water can be considered a phase separated from gas, at the bottom of pipe.”), a flow regime module (108) (page 19, ”This program provides information on temperature profile along the pipeline, pressure profile, velocity profiles of each phase, phase hold-ups and flow regimes, given boundary pressure, temperature values and flow composition.”; “Fluid regime is described by a discrete number as follows: 1: stratified flow 2: annular flow 3: slug flow 4: bubble flow”), a corrosion rate module (110) (de Waard model), and an operating data module (112) (page 19, “The pipeline has been characterized by its geometrical features: elevation, inclination and concavity.”; Equations 1 and 2, Fig. 2);
dividing the input data and feeding the divided input data to said four modules (see II. Methodology: A-C; page 20, “The FNN integrates all the above quantities as input values. Therefore, input variables are of three types: Geometrical pipeline characteristics (elevation, inclination and concavity) Fluid dynamic multiphase variables (flow regime, pressure, gas flow, total flow, liquid velocity, gas velocity) Deterministic models (de Waard and NORSOK)”);
concatenating said four modules to output a depth of metal loss rate (122) (page 20, “The FNN integrates all the above quantities as input values”; Figs. 6-7);
applying a supervised machine learning technique for training the predictive model (100) generated from step i) (De Masi: page 20, “Several training algorithms were tested; finally the Levenberg-Marquardt back propagation algorithm was selected as the one producing best prediction [15]. Two (or more) layer fitting networks can fit any finite input-output nonlinear relationship arbitrarily well, given enough hidden neurons”);
applying the predictive model (100) from step ii) to other set of the input data in order to predict a depth of metal loss rate (122) (page 21, “Three quantities are predicted by FNN: CR, metal loss and area of defects. For each variable, a FNN is implemented. CR value derives from the dataset of comparison between 2005 and 2012.”).
The referenced Levenberg-Marquardt back propagation algorithm is a supervised machine learning technique, as evidenced by Denizhan (Onur Denizhan, Comparison of different supervised learning algorithms for position analysis of the slider-crank mechanism, Alexandria Engineering Journal, Volume 92, 2024, Pages 39-49, ISSN 1110-0168, https://doi.org/10.1016/j.aej.2024.02.055.) (Abstract: “the application of following three different supervised learning algorithms to the position analysis of the slider-crank (R-RRT) mechanism is investigated using analytical solution datasets: the Levenberg-Marquardt Backpropagation (LM) algorithm, Bayesian Regularization (BR) algorithm, and Scaled Conjugate Gradient Descent (SCG) algorithm.”).
De Masi does not teach the system comprising one or more data storing sections configured to store the input data; one or more computer processors;
wherein, step ii), applying the supervised machine learning technique includes applying a first neural network (600) to the water condensation rate module (106), applying a second neural network (700) to the flow regime module (108), and applying a third neural network (800) to the corrosion rate module (110), in order to obtain initial weights of each module.
Cella teaches an analogous system of utilizing a neural network to analyze input data, comprising one or more data storing sections configured to store the input data (Fig. 3, data storage capabilities (e.g., data pools 60, or distributed ledger 62)); one or more computer processors ([0922], “One or more hardware nodes may be configured to stream output data resulting from the activity of the neural net. Hardware nodes, which may comprise one or more chips, microprocessors, integrated circuits, programmable logic controllers, application-specific integrated circuits, field-programmable gate arrays, or the like, may be provided to optimize the speed, input/output efficiency, energy efficiency, signal to noise ratio, or other parameter of some part of a neural net of any of the types described herein.”); and
wherein, step ii), applying the supervised machine learning technique ([0905] lines 1-10, “In embodiments, the foregoing neural network may be configured to connect with a DAQ instrument and other data collectors that may receive analog signals from one or more sensors. The foregoing neural networks may also be configured to interface with, connect to, or integrate with expert systems that can be local and/or available through one or more cloud networks. In embodiments, FIGS. 110 through 136 depict exemplary neural networks and FIG. 109 depicts a legend showing the various components of the neural networks depicted throughout FIGS. 110 to 136.”) includes applying a first neural network (600) to the water condensation rate module (106) ([1272] on page 204 left column, “A moisture sensing device can detect the liquid, condensation or H2O content of the target or its environment.”), applying a second neural network (700) to the flow regime module (108) ([1457] lines 4-7, “Vibration sensors, flow sensors, pressure sensors, temperature sensors, acoustic sensors, and the like may be utilized by the system to generate data regarding the operation of the fluid pumping system.”), and applying a third neural network (800) to the corrosion rate module (110) ([2140] lines 1-4, “In embodiments, the methods and systems disclosed herein may include, connect with or be integrated with sensors that may monitor interconnections for corrosion or other conditions”), in order to obtain initial weights of each module ([0211] lines 28-51, “Where sufficient understanding of the underlying structure or behavior of a system is not known, insufficient data is not available, or in other cases where preferred for various reasons, machine learning may also be undertaken in the absence of an underlying model; that is, input sources may be weighted, structured, or the like within a machine learning facility without regard to any a priori understanding of structure, and outcomes (such as those 
based on measures of success at accomplishing various desired objectives) can be serially fed to the machine learning system to allow it to learn how to achieve the targeted objectives. For example, the system may learn to recognize faults, to recognize patterns, to develop models or functions, to develop rules, to optimize performance, to minimize failure rates, to optimize profits, to optimize resource utilization, to optimize flow (such as flow of traffic), or to optimize many other parameters that may be relevant to successful outcomes (such as outcomes in a wide range of environments). Machine learning may use genetic programming techniques, such as promoting or demoting one or more input sources, structures, data types, objects, weights, nodes, links, or other factors based on feedback (such that successful elements emerge over a series of generations).”)
The separate neural networks are taught to be applied to separate modules ([0920] “In embodiments, methods and systems described herein that involve an expert system or self-organization capability may use a modular neural network, which may comprise a series of independent neural networks (such as ones of various types described herein) that are moderated by an intermediary. Each of the independent neural networks in the modular neural network may work with separate inputs, accomplishing subtasks that make up the task the modular network as whole is intended to perform. For example, a modular neural network may comprise a recurrent neural network for pattern recognition, such as to recognize what type of industrial machine is being sensed by one or more sensors that are provided as input channels to the modular network and an RBF neural network for optimizing the behavior of the machine once understood. The intermediary may accept inputs of each of the individual neural networks, process them, and create output for the modular neural network, such an appropriate control parameter, a prediction of state, or the like.”). The independent neural networks of the modular neural network are the first, second, and third neural networks. One of ordinary skill in the art would recognize that the modular network working with separate inputs (e.g. the data from each module) is the application of the first, second, and third neural networks to their respective modules. Cella recites several embodiments, the elements of which may be configured and combined in manners that would be obvious to one of ordinary skill in the art ([2186] “While the foregoing written description enables one skilled in the art to make and use what is considered presently to be the best mode thereof, those skilled in the art will understand and appreciate the existence of variations, combinations, and equivalents of the specific embodiment, method, and examples herein.
The disclosure should therefore not be limited by the above described embodiment, method, and examples, but by all embodiments and methods within the scope and spirit of the disclosure.”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of De Masi to include the first, second, and third neural networks of Cella because it would yield predictable and advantageous results. The neural networks of Cella are taught to be used with, alongside, or in place of some functions of a single neural network (see paragraph [0921]), and thus would yield predictable results when integrated into the neural network model of De Masi. The application of separate neural networks to each of the modules, which represent different types of sensor information, would yield advantageous results of having each neural network trained on a type of sensor information, thereby increasing the accuracy of the model.
Regarding claim 22, De Masi in view of Cella teaches The machine-learning system according to Claim 21, wherein step i) the input data comprises an empirical data (102) (Cella: [0967] “In embodiments, a monitoring system for data collection in an industrial environment may comprise: a data collector communicatively coupled to a plurality of input channels; a data storage structured to store a collector route template, sensor specifications for sensors that correspond to the input channels, wherein the collector route template comprises a sensor collection routine; a data acquisition circuit structured to interpret a plurality of detection values, each of the plurality of detection values corresponding to at least one of the input channels; and a data analysis circuit structured to receive output data from the plurality of input channels and evaluate the received output data with respect to a rule, wherein the data collector is configured to modify the sensor collection routine based on the application of the rule to the received output data.”) and a pipeline variable (104) (De Masi: page 20, “The FNN integrates all the above quantities as input values. Therefore, input variables are of three types: Geometrical pipeline characteristics (elevation, inclination and concavity) Fluid dynamic multiphase variables (flow regime, pressure, gas flow, total flow, liquid velocity, gas velocity) Deterministic models (de Waard and NORSOK)”).
Regarding claim 23, De Masi in view of Cella teaches The machine-learning system according to Claim 22, wherein the empirical data (102) is selected from distances, pipe diameters, export pressures, export temperatures, gas flow rates, water flow rates, condensate flow rates, amounts of CO2, amounts of H2S, pipeline corrosion allowance, pipeline design life, pipeline nominal thickness, concrete thickness, insulation thickness, or combinations thereof (Cella: [1048] lines 19-37, “The data collector 10804 may include the data collection circuit 10808. The ambient environment condition or local sensors include one or more of a noise sensor, a temperature sensor, a flow sensor, a pressure sensor, a chemical sensor, a vibration sensor, an acceleration sensor, an accelerometer, a Pressure sensor, a force sensor, a position sensor, a location sensor, a velocity sensor, a displacement sensor, a temperature sensor, a thermographic sensor, a heat flux sensor, a tachometer sensor, a motion sensor, a magnetic field sensor, an electrical field sensor, a galvanic sensor, a current sensor, a flow sensor, a gaseous flow sensor, a non-gaseous fluid flow sensor, a heat flow sensor, a particulate flow sensor, a level sensor, a proximity sensor, a toxic gas sensor, a chemical sensor, a CBRNE sensor, a pH sensor, a hygrometer, a moisture sensor, a densitometer, an imaging sensor, a camera, an SSR, a triax probe, an ultrasonic sensor, a touch sensor, a microphone, a capacitive sensor, a strain gauge, an EMF meter, and the like.”; [2148] lines 8-13, “By way of these examples, the one or more sensors may be configured to measure partial pressure or particle count when sensing internal and/or external emission such as diatomic hydrogen, carbon dioxide, carbon monoxide, and other combustion byproducts.”).
Regarding claim 24, De Masi in view of Cella teaches The machine-learning system according to Claim 22, wherein the pipeline variable (104) is obtained from the empirical data (102) by means selected from theoretical equation, algorithm, software simulation, or machine learning (De Masi: Equations 1 and 2; Cella: position sensor, location sensor). One of ordinary skill in the art would recognize that the pipeline inclination and concavity are determined by the equations and positional data, such as from the position or location sensors of Cella.
Regarding claim 25, De Masi in view of Cella teaches The machine-learning system according to Claim 24, wherein the pipeline variable (104) is selected from gas velocities, liquid densities, liquid velocities, liquid viscosities, pressures, superficial gas velocities, superficial liquid velocities, temperatures, or combinations thereof (De Masi: “Fluid dynamic multiphase variables (flow regime, pressure, gas flow, total flow, liquid velocity, gas velocity)”).
Regarding claim 26, De Masi in view of Cella teaches The machine-learning system according to Claim 22, wherein step i) the empirical data (102) and the pipeline variable (104) including gas velocities, pressures, temperatures and pipe diameters (Cella: sensors of paragraph [1048], including a temperature sensor, a flow sensor, a pressure sensor, a gaseous flow sensor, a hygrometer, a moisture sensor) are fed to the water condensation rate module (106).
Regarding claim 27, De Masi in view of Cella teaches The machine-learning system according to Claim 22, wherein step i) the empirical data (102) and the pipeline variable (104) including liquid densities, liquid viscosities, superficial gas velocities, superficial liquid velocities, temperatures and pipe diameters are fed to the flow regime module (108) (Cella: [1457] lines 4-7, “Vibration sensors, flow sensors, pressure sensors, temperature sensors, acoustic sensors, and the like may be utilized by the system to generate data regarding the operation of the fluid pumping system.”; De Masi: page 19 “The pipeline has been characterized by its geometrical features: elevation, inclination and concavity.”).
Regarding claim 28, De Masi in view of Cella teaches The machine-learning system according to Claim 22, wherein step i) the empirical data (102) and the pipeline variable (104) including liquid velocities, liquid viscosities, pressures, CO2 pressures and temperatures are fed to the corrosion rate module (110) (De Masi: “Fluid dynamic multiphase variables (flow regime, pressure, gas flow, total flow, liquid velocity, gas velocity)”; Cella: sensors of paragraph [1048], including flow sensor, a gaseous flow sensor, a non-gaseous fluid flow sensor, pressure sensor, temperature sensor; [2148] lines 8-13, “By way of these examples, the one or more sensors may be configured to measure partial pressure or particle count when sensing internal and/or external emission such as diatomic hydrogen, carbon dioxide, carbon monoxide, and other combustion byproducts.”).
Regarding claim 29, De Masi in view of Cella teaches The machine-learning system according to Claim 22, wherein step i) the empirical data (102) is fed to the operating data module (112) (De Masi: Equations 1 and 2, Fig. 2).
Regarding claim 30, De Masi in view of Cella teaches The machine-learning system according to Claim 21, wherein step i) the water condensation rate module (106), the flow regime module (108), the corrosion rate module (110) and the operating data module (112) comprise n hidden layers, where n is selected from an integer of 2 to 10 (Cella: [0927] lines 1-7, “In embodiments, methods and systems described herein that involve an expert system or self-organization capability may use a probabilistic neural network (“PNN”), which in embodiments may comprise a multi-layer (e.g., four-layer) feedforward neural network, where layers may include input layers, hidden layers, pattern/summation layers and an output layer.”). A multi-layer neural network having multiple hidden layers encompasses the claimed range of 2 to 10 hidden layers.
Even if De Masi in view of Cella does not explicitly teach 2 to 10 hidden layers, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to select the appropriate number of hidden layers for each module, since it has been held that where the general conditions of a claim are disclosed in the prior art, discovering the optimum or workable ranges involves only routine skill in the art. In re Aller, 105 USPQ 233 (CCPA 1955).
Regarding claim 33, De Masi in view of Cella teaches The machine-learning system according to Claim 21, wherein step ii) the supervised machine learning technique is selected from a back propagation means, a gradient descent means or a logistic regression means (De Masi: page 20, “Several training algorithms were tested; finally the Levenberg-Marquardt back propagation algorithm was selected as the one producing best prediction”). The Levenberg-Marquardt back propagation algorithm is the back propagation means.
Regarding claim 34, De Masi in view of Cella teaches The machine-learning system according to Claim 33, wherein the supervised machine learning technique is a back propagation means (De Masi: page 20, “Several training algorithms were tested; finally the Levenberg-Marquardt back propagation algorithm was selected as the one producing best prediction”). The Levenberg-Marquardt back propagation algorithm is the back propagation means.
Regarding claim 35, De Masi in view of Cella teaches The machine-learning system according to Claim 21, wherein step ii) the first neural network (600) is an artificial neural network (ANN) (De Masi: Abstract: artificial neural network (ANN); Figs. 6 and 7; Cella: [0904] “References to a neural net throughout this disclosure should be understood to encompass a wide range of different types of neural networks, machine learning systems, artificial intelligence systems, and the like, such as […] artificial neural networks”).
Regarding claim 36, De Masi in view of Cella teaches The machine-learning system according to Claim 35, wherein the first neural network (600) (Cella: modular neural network) comprises steps of:
obtaining the empirical data (102) (Cella: [0967] “In embodiments, a monitoring system for data collection in an industrial environment may comprise: a data collector communicatively coupled to a plurality of input channels; a data storage structured to store a collector route template, sensor specifications for sensors that correspond to the input channels, wherein the collector route template comprises a sensor collection routine; a data acquisition circuit structured to interpret a plurality of detection values, each of the plurality of detection values corresponding to at least one of the input channels; and a data analysis circuit structured to receive output data from the plurality of input channels and evaluate the received output data with respect to a rule, wherein the data collector is configured to modify the sensor collection routine based on the application of the rule to the received output data.”) and the pipeline variable (104) (De Masi: page 20, “The FNN integrates all the above quantities as input values. Therefore, input variables are of three types: Geometrical pipeline characteristics (elevation, inclination and concavity) Fluid dynamic multiphase variables (flow regime, pressure, gas flow, total flow, liquid velocity, gas velocity) Deterministic models (de Waard and NORSOK)”);
generating an output layer (Cella: [0927] lines 1-7, “In embodiments, methods and systems described herein that involve an expert system or self-organization capability may use a probabilistic neural network (“PNN”), which in embodiments may comprise a multi-layer (e.g., four-layer) feedforward neural network, where layers may include input layers, hidden layers, pattern/summation layers and an output layer.”) by calculating the empirical data (102) and the pipeline variable (104) with a physical model of water condensation rate (Cella: [0318] lines 1-5, “In embodiments, the platform 100 may include the local data collection system 102 deployed in the environment 104 using machine learning to enable derivation-based learning outcomes from computers without the need to program them.”);
training the first neural network (600) (Cella: [0910] lines 1-11, “In many embodiments, an expert system or neural network may be trained, such as by a human operator or supervisor, or based on a data set, model, or the like. Training may include presenting the neural network with one or more training data sets that represent values, such as sensor data, event data, parameter data, and other types of data (including the many types described throughout this disclosure), as well as one or more indicators of an outcome, such as an outcome of a process, an outcome of a calculation, an outcome of an event, an outcome of an activity, or the like.”) with one or more hidden layers (Cella: [0927] lines 1-7, “In embodiments, methods and systems described herein that involve an expert system or self-organization capability may use a probabilistic neural network (“PNN”), which in embodiments may comprise a multi-layer (e.g., four-layer) feedforward neural network, where layers may include input layers, hidden layers, pattern/summation layers and an output layer.”) and a back propagation means (Cella: [0924] lines 1-7, “In embodiments, methods and systems described herein that involve an expert system or self-organization capability may use a feedforward, back-propagation multi-layer perceptron (MLP) neural network, such as for handling one or more remote sensing applications, such as for taking inputs from sensors distributed throughout various industrial environments.”);
transforming weights of one or more hidden layers to be the initial weights of the water condensation rate module (106) (Cella: [1005] lines 15-23, “The expert system may (optionally using a neural net, machine learning system, deep learning system, or the like, which may occur under supervision by one or more supervisors (human or automated)) intelligently manage bands aligned with different goals and assign weights, parameter modifications, or recommendations based on a factor, such as a bias towards one goal or a compromise to allow better alignment with all goals being tracked, for example.”; [1674] “A further embodiment of any of the foregoing embodiments of the present disclosure may include situations wherein the expert system is a machine learning system that iteratively configures at least one of a set of inputs, a set of weights, and a set of functions based on feedback relating to the data channel.”). The configuring of the weights reads on the claimed transforming of weights.
Regarding claim 37, De Masi in view of Cella teaches The machine-learning system according to Claim 21, wherein step ii) the second neural network (700) is an artificial neural network (ANN) (De Masi: Abstract: artificial neural network (ANN); Figs. 6 and 7; Cella: [0904] “References to a neural net throughout this disclosure should be understood to encompass a wide range of different types of neural networks, machine learning systems, artificial intelligence systems, and the like, such as […] artificial neural networks”).
Regarding claim 38, De Masi in view of Cella teaches The machine-learning system according to Claim 37, wherein the second neural network (700) (Cella: modular neural network) comprises steps of:
obtaining the empirical data (102) (Cella: [0967] “In embodiments, a monitoring system for data collection in an industrial environment may comprise: a data collector communicatively coupled to a plurality of input channels; a data storage structured to store a collector route template, sensor specifications for sensors that correspond to the input channels, wherein the collector route template comprises a sensor collection routine; a data acquisition circuit structured to interpret a plurality of detection values, each of the plurality of detection values corresponding to at least one of the input channels; and a data analysis circuit structured to receive output data from the plurality of input channels and evaluate the received output data with respect to a rule, wherein the data collector is configured to modify the sensor collection routine based on the application of the rule to the received output data.”) and the pipeline variable (104) (De Masi: page 20, “The FNN integrates all the above quantities as input values. Therefore, input variables are of three types: Geometrical pipeline characteristics (elevation, inclination and concavity) Fluid dynamic multiphase variables (flow regime, pressure, gas flow, total flow, liquid velocity, gas velocity) Deterministic models (de Waard and NORSOK)”);
generating an output layer (Cella: [0927] lines 1-7, “In embodiments, methods and systems described herein that involve an expert system or self-organization capability may use a probabilistic neural network (“PNN”), which in embodiments may comprise a multi-layer (e.g., four-layer) feedforward neural network, where layers may include input layers, hidden layers, pattern/summation layers and an output layer.”) by calculating the empirical data (102) and the pipeline variable (104) with a physical model of water condensation rate (Cella: [0318] lines 1-5, “In embodiments, the platform 100 may include the local data collection system 102 deployed in the environment 104 using machine learning to enable derivation-based learning outcomes from computers without the need to program them.”);
training the second neural network (700) (Cella: [0910] lines 1-11, “In many embodiments, an expert system or neural network may be trained, such as by a human operator or supervisor, or based on a data set, model, or the like. Training may include presenting the neural network with one or more training data sets that represent values, such as sensor data, event data, parameter data, and other types of data (including the many types described throughout this disclosure), as well as one or more indicators of an outcome, such as an outcome of a process, an outcome of a calculation, an outcome of an event, an outcome of an activity, or the like.”) with one or more hidden layers (Cella: [0927] lines 1-7, “In embodiments, methods and systems described herein that involve an expert system or self-organization capability may use a probabilistic neural network (“PNN”), which in embodiments may comprise a multi-layer (e.g., four-layer) feedforward neural network, where layers may include input layers, hidden layers, pattern/summation layers and an output layer.”) and a back propagation means (Cella: [0924] lines 1-7, “In embodiments, methods and systems described herein that involve an expert system or self-organization capability may use a feedforward, back-propagation multi-layer perceptron (MLP) neural network, such as for handling one or more remote sensing applications, such as for taking inputs from sensors distributed throughout various industrial environments.”);
transforming weights of one or more hidden layers to be the initial weights of the flow regime module (108) (Cella: [1005] lines 15-23, “The expert system may (optionally using a neural net, machine learning system, deep learning system, or the like, which may occur under supervision by one or more supervisors (human or automated)) intelligently manage bands aligned with different goals and assign weights, parameter modifications, or recommendations based on a factor, such as a bias towards one goal or a compromise to allow better alignment with all goals being tracked, for example.”; [1674] “A further embodiment of any of the foregoing embodiments of the present disclosure may include situations wherein the expert system is a machine learning system that iteratively configures at least one of a set of inputs, a set of weights, and a set of functions based on feedback relating to the data channel.”). The configuring of a set of weights corresponds to the claimed transforming of weights.
Regarding claim 39, De Masi in view of Cella teaches The machine-learning system according to Claim 21, wherein step ii) the third neural network (800) is an artificial neural network (ANN) (De Masi: Abstract: artificial neural network (ANN); Figs. 6 and 7; Cella: [0904] “References to a neural net throughout this disclosure should be understood to encompass a wide range of different types of neural networks, machine learning systems, artificial intelligence systems, and the like, such as […] artificial neural networks”).
Regarding claim 40, De Masi in view of Cella teaches The machine-learning system according to Claim 39, wherein the third neural network (800) (Cella: modular neural network) comprises steps of:
obtaining the empirical data (102) (Cella: [0967] “In embodiments, a monitoring system for data collection in an industrial environment may comprise: a data collector communicatively coupled to a plurality of input channels; a data storage structured to store a collector route template, sensor specifications for sensors that correspond to the input channels, wherein the collector route template comprises a sensor collection routine; a data acquisition circuit structured to interpret a plurality of detection values, each of the plurality of detection values corresponding to at least one of the input channels; and a data analysis circuit structured to receive output data from the plurality of input channels and evaluate the received output data with respect to a rule, wherein the data collector is configured to modify the sensor collection routine based on the application of the rule to the received output data.”) and the pipeline variable (104) (De Masi: page 20, “The FNN integrates all the above quantities as input values. Therefore, input variables are of three types: Geometrical pipeline characteristics (elevation, inclination and concavity) Fluid dynamic multiphase variables (flow regime, pressure, gas flow, total flow, liquid velocity, gas velocity) Deterministic models (de Waard and NORSOK)”);
generating an output layer (Cella: [0927] lines 1-7, “In embodiments, methods and systems described herein that involve an expert system or self-organization capability may use a probabilistic neural network (“PNN”), which in embodiments may comprise a multi-layer (e.g., four-layer) feedforward neural network, where layers may include input layers, hidden layers, pattern/summation layers and an output layer.”) by calculating the empirical data (102) and the pipeline variable (104) with a physical model of water condensation rate (Cella: [0318] lines 1-5, “In embodiments, the platform 100 may include the local data collection system 102 deployed in the environment 104 using machine learning to enable derivation-based learning outcomes from computers without the need to program them.”);
training the third neural network (800) (Cella: [0910] lines 1-11, “In many embodiments, an expert system or neural network may be trained, such as by a human operator or supervisor, or based on a data set, model, or the like. Training may include presenting the neural network with one or more training data sets that represent values, such as sensor data, event data, parameter data, and other types of data (including the many types described throughout this disclosure), as well as one or more indicators of an outcome, such as an outcome of a process, an outcome of a calculation, an outcome of an event, an outcome of an activity, or the like.”) with one or more hidden layers (Cella: [0927] lines 1-7, “In embodiments, methods and systems described herein that involve an expert system or self-organization capability may use a probabilistic neural network (“PNN”), which in embodiments may comprise a multi-layer (e.g., four-layer) feedforward neural network, where layers may include input layers, hidden layers, pattern/summation layers and an output layer.”) and a back propagation means (Cella: [0924] lines 1-7, “In embodiments, methods and systems described herein that involve an expert system or self-organization capability may use a feedforward, back-propagation multi-layer perceptron (MLP) neural network, such as for handling one or more remote sensing applications, such as for taking inputs from sensors distributed throughout various industrial environments.”);
transforming weights of one or more hidden layers to be the initial weights of the corrosion rate module (110) (Cella: [1005] lines 15-23, “The expert system may (optionally using a neural net, machine learning system, deep learning system, or the like, which may occur under supervision by one or more supervisors (human or automated)) intelligently manage bands aligned with different goals and assign weights, parameter modifications, or recommendations based on a factor, such as a bias towards one goal or a compromise to allow better alignment with all goals being tracked, for example.”; [1674] “A further embodiment of any of the foregoing embodiments of the present disclosure may include situations wherein the expert system is a machine learning system that iteratively configures at least one of a set of inputs, a set of weights, and a set of functions based on feedback relating to the data channel.”). The configuring of a set of weights corresponds to the claimed transforming of weights.
Regarding claim 41, De Masi teaches A non-transitory computer readable medium containing instruction configured for execution by one or more processors in order to cause the processors to:
generating a predictive model (100) based on a neural network (Fig. 1) comprising:
obtaining a set of input data (page 20, “Therefore, input variables are of three types: Geometrical pipeline characteristics (elevation, inclination and concavity) Fluid dynamic multiphase variables (flow regime, pressure, gas flow, total flow, liquid velocity, gas velocity) Deterministic models (de Waard and NORSOK)”);
providing four modules (page 19, “Multiphase flow modelling is based on OLGA software”; “The pipeline has been characterized by its geometrical features: elevation, inclination and concavity.”; de Waard model; and NORSOK model) relevant to pipeline corrosion including a water condensation rate module (106) (page 19, “Water plays a crucial role for corrosion, enhancing corrosion rate depending on its hold-up and velocity, gas flow rate, pressure and temperature and pipeline inclination. In our specific case, water can be considered a phase separated from gas, at the bottom of pipe.”), a flow regime module (108) (page 19, ”This program provides information on temperature profile along the pipeline, pressure profile, velocity profiles of each phase, phase hold-ups and flow regimes, given boundary pressure, temperature values and flow composition.”; “Fluid regime is described by a discrete number as follows: 1: stratified flow 2: annular flow 3: slug flow 4: bubble flow”), a corrosion rate module (110) (de Waard model), and an operating data module (112) (page 19, “The pipeline has been characterized by its geometrical features: elevation, inclination and concavity.”; Equations 1 and 2, Fig. 2);
dividing the input data and feeding the divided input data to said four modules (see II. Methodology: A-C; page 20, “The FNN integrates all the above quantities as input values. Therefore, input variables are of three types: Geometrical pipeline characteristics (elevation, inclination and concavity) Fluid dynamic multiphase variables (flow regime, pressure, gas flow, total flow, liquid velocity, gas velocity) Deterministic models (de Waard and NORSOK)”);
concatenating said four modules to output a depth of metal loss rate (122) (page 20, “The FNN integrates all the above quantities as input values”; Figs. 6-7);
applying a supervised machine learning technique for training the predictive model (100) generated from step i) (De Masi: page 20, “Several training algorithms were tested; finally the Levenberg-Marquardt back propagation algorithm was selected as the one producing best prediction [15]. Two (or more) layer fitting networks can fit any finite input-output nonlinear relationship arbitrarily well, given enough hidden neurons”);
applying the predictive model (100) from step ii) to other set of the input data in order to predict a depth of metal loss rate (122) (page 21, “Three quantities are predicted by FNN: CR, metal loss and area of defects. For each variable, a FNN is implemented. CR value derives from the dataset of comparison between 2005 and 2012.”).
The referenced Levenberg-Marquardt back propagation algorithm is a supervised machine learning technique, as evidenced by Denizhan (Onur Denizhan, Comparison of different supervised learning algorithms for position analysis of the slider-crank mechanism, Alexandria Engineering Journal, Volume 92, 2024, Pages 39-49, ISSN 1110-0168, https://doi.org/10.1016/j.aej.2024.02.055.) (Abstract: “the application of following three different supervised learning algorithms to the position analysis of the slider-crank (R-RRT) mechanism is investigated using analytical solution datasets: the Levenberg-Marquardt Backpropagation (LM) algorithm, Bayesian Regularization (BR) algorithm, and Scaled Conjugate Gradient Descent (SCG) algorithm.”).
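The supervised character of such training — labeled input/outcome pairs driving weight updates by back propagation — may be sketched as follows. The sketch is illustrative only: it uses plain gradient descent rather than the damped Gauss-Newton update of Levenberg-Marquardt, and all names, values, and the toy network are hypothetical rather than drawn from De Masi or Denizhan:

```python
import math
import random

random.seed(1)

# A one-hidden-unit network y = w2 * tanh(w1 * x), trained on labeled pairs
# (x, t). The pairs play the role of input variables paired with measured
# outcomes; the targets here are synthetic for illustration.
data = [(x / 10.0, 1.3 * math.tanh(0.7 * (x / 10.0))) for x in range(-10, 11)]
w1, w2, lr = 0.1, 0.1, 0.5

def loss():
    """Mean squared error of the network over the labeled training set."""
    return sum((w2 * math.tanh(w1 * x) - t) ** 2 for x, t in data) / len(data)

before = loss()
for _ in range(200):
    g1 = g2 = 0.0
    for x, t in data:
        h = math.tanh(w1 * x)
        err = w2 * h - t                   # error against the known outcome
        g2 += 2 * err * h / len(data)      # dL/dw2
        g1 += 2 * err * w2 * (1 - h * h) * x / len(data)  # dL/dw1 (chain rule)
    w1 -= lr * g1                          # back-propagated weight updates
    w2 -= lr * g2
after = loss()
```

Because the weight updates are computed from errors against known target outcomes, the procedure is supervised regardless of whether the update rule is plain gradient descent or the Levenberg-Marquardt variant.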
De Masi does not teach the non-transitory computer readable medium containing instruction configured for execution by one or more processors, wherein, step ii), applying the supervised machine learning technique includes applying a first neural network (600) to the water condensation rate module (106), applying a second neural network (700) to the flow regime module (108), and applying a third neural network (800) to the corrosion rate module (110), in order to obtain initial weights of each module.
Cella teaches analogous instructions of utilizing a neural network to analyze input data, comprising a non-transitory computer readable medium containing instruction configured for execution by one or more processors ([0963] lines 1-15, “In embodiments, one or more non-transitory computer-readable media comprising computer executable instructions that, when executed, may cause at least one processor to perform actions comprising: providing a data collector communicatively coupled to a plurality of input channels; providing a data storage structured to store a plurality of collector route templates and sensor specifications for sensors that correspond to the input channels, wherein the plurality of collector route templates each comprise a different sensor collection routine; providing a data acquisition circuit structured to interpret a plurality of detection values, each of the plurality of detection values corresponding to at least one of the input channels; and providing a data analysis circuit structured to receive output data from the plurality of input channels”) wherein, step ii), applying the supervised machine learning technique ([0905] lines 1-10, “In embodiments, the foregoing neural network may be configured to connect with a DAQ instrument and other data collectors that may receive analog signals from one or more sensors. The foregoing neural networks may also be configured to interface with, connect to, or integrate with expert systems that can be local and/or available through one or more cloud networks. In embodiments, FIGS. 110 through 136 depict exemplary neural networks and FIG. 109 depicts a legend showing the various components of the neural networks depicted throughout FIGS. 
110 to 136.”) includes applying a first neural network (600) to the water condensation rate module (106) ([1272] on page 204 left column, “A moisture sensing device can detect the liquid, condensation or H2O content of the target or its environment.”), applying a second neural network (700) to the flow regime module (108) ([1457] lines 4-7, “Vibration sensors, flow sensors, pressure sensors, temperature sensors, acoustic sensors, and the like may be utilized by the system to generate data regarding the operation of the fluid pumping system.”), and applying a third neural network (800) to the corrosion rate module (110) ([2140] lines 1-4, “In embodiments, the methods and systems disclosed herein may include, connect with or be integrated with sensors that may monitor interconnections for corrosion or other conditions”), in order to obtain initial weights of each module ([0211] lines 28-51, “Where sufficient understanding of the underlying structure or behavior of a system is not known, insufficient data is not available, or in other cases where preferred for various reasons, machine learning may also be undertaken in the absence of an underlying model; that is, input sources may be weighted, structured, or the like within a machine learning facility without regard to any a priori understanding of structure, and outcomes (such as those based on measures of success at accomplishing various desired objectives) can be serially fed to the machine learning system to allow it to learn how to achieve the targeted objectives. For example, the system may learn to recognize faults, to recognize patterns, to develop models or functions, to develop rules, to optimize performance, to minimize failure rates, to optimize profits, to optimize resource utilization, to optimize flow (such as flow of traffic), or to optimize many other parameters that may be relevant to successful outcomes (such as outcomes in a wide range of environments). 
Machine learning may use genetic programming techniques, such as promoting or demoting one or more input sources, structures, data types, objects, weights, nodes, links, or other factors based on feedback (such that successful elements emerge over a series of generations).”)
The separate neural networks are taught to be applied to separate modules ([0920] “In embodiments, methods and systems described herein that involve an expert system or self-organization capability may use a modular neural network, which may comprise a series of independent neural networks (such as ones of various types described herein) that are moderated by an intermediary. Each of the independent neural networks in the modular neural network may work with separate inputs, accomplishing subtasks that make up the task the modular network as whole is intended to perform. For example, a modular neural network may comprise a recurrent neural network for pattern recognition, such as to recognize what type of industrial machine is being sensed by one or more sensors that are provided as input channels to the modular network and an RBF neural network for optimizing the behavior of the machine once understood. The intermediary may accept inputs of each of the individual neural networks, process them, and create output for the modular neural network, such an appropriate control parameter, a prediction of state, or the like.”). The independent neural networks of the modular neural network are the claimed first, second, and third neural networks. One of ordinary skill in the art would recognize that the modular network working with separate inputs (e.g., the data from each module) is the application of the first, second, and third neural networks to their respective modules. Cella recites several embodiments, the elements of which may be configured and combined in manners that would be obvious to one of ordinary skill in the art ([2186] “While the foregoing written description enables one skilled in the art to make and use what is considered presently to be the best mode thereof, those skilled in the art will understand and appreciate the existence of variations, combinations, and equivalents of the specific embodiment, method, and examples herein. 
The disclosure should therefore not be limited by the above described embodiment, method, and examples, but by all embodiments and methods within the scope and spirit of the disclosure.”)
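The modular arrangement described in [0920] — independent networks working with separate inputs, moderated by an intermediary that combines their outputs — may be sketched as follows. The sketch is illustrative only; the stub functions and numeric values are hypothetical and not drawn from Cella, and each "network" is reduced to a stub for brevity:

```python
# Hypothetical stubs standing in for independent trained networks, each
# working with its own module's inputs (per Cella [0920]).

def water_condensation_net(inputs):    # stands in for first network (600)
    return 0.2 * sum(inputs) / len(inputs)

def flow_regime_net(inputs):           # stands in for second network (700)
    return 0.5 * max(inputs)

def corrosion_rate_net(inputs):        # stands in for third network (800)
    return 0.3 * inputs[0]

def intermediary(module_inputs):
    """Accept each independent network's output and produce the modular
    network's overall output (e.g., a single combined estimate)."""
    outputs = [
        water_condensation_net(module_inputs["water"]),
        flow_regime_net(module_inputs["flow"]),
        corrosion_rate_net(module_inputs["corrosion"]),
    ]
    return sum(outputs)

# Module-specific sensor data (hypothetical values).
estimate = intermediary({
    "water": [0.1, 0.3],
    "flow": [1.0, 2.0],
    "corrosion": [0.4],
})
```

Each independent network sees only its own module's inputs; the intermediary alone combines the per-module outputs, which is the sense in which the separate networks are applied to separate modules.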
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the instructions of De Masi to include the first, second, and third neural networks of Cella because doing so would yield predictable and advantageous results. The neural networks of Cella are taught to be used with, alongside, or in place of some functions of a single neural network (see paragraph [0921]), and thus would yield predictable results when integrated into the neural network model of De Masi. Applying a separate neural network to each of the modules, which represent different types of sensor information, would yield the advantageous result of having each neural network trained on a single type of sensor information, thereby increasing the accuracy of the model.
Regarding claim 42, De Masi in view of Cella teaches The non-transitory computer readable medium according to Claim 41, wherein step i) the input data comprises an empirical data (102) (Cella: [0967] “In embodiments, a monitoring system for data collection in an industrial environment may comprise: a data collector communicatively coupled to a plurality of input channels; a data storage structured to store a collector route template, sensor specifications for sensors that correspond to the input channels, wherein the collector route template comprises a sensor collection routine; a data acquisition circuit structured to interpret a plurality of detection values, each of the plurality of detection values corresponding to at least one of the input channels; and a data analysis circuit structured to receive output data from the plurality of input channels and evaluate the received output data with respect to a rule, wherein the data collector is configured to modify the sensor collection routine based on the application of the rule to the received output data.”) and a pipeline variable (104) (De Masi: page 20, “The FNN integrates all the above quantities as input values. Therefore, input variables are of three types: Geometrical pipeline characteristics (elevation, inclination and concavity) Fluid dynamic multiphase variables (flow regime, pressure, gas flow, total flow, liquid velocity, gas velocity) Deterministic models (de Waard and NORSOK)”).
Regarding claim 43, De Masi in view of Cella teaches The non-transitory computer readable medium according to Claim 42, wherein the empirical data (102) is selected from distances, pipe diameters, export pressures, export temperatures, gas flow rates, water flow rates, condensate flow rates, amounts of CO2, amounts of H2S, pipeline corrosion allowance, pipeline design life, pipeline nominal thickness, concrete thickness, insulation thickness, or combinations thereof (Cella: [1048] lines 19-37, “The data collector 10804 may include the data collection circuit 10808. The ambient environment condition or local sensors include one or more of a noise sensor, a temperature sensor, a flow sensor, a pressure sensor, a chemical sensor, a vibration sensor, an acceleration sensor, an accelerometer, a Pressure sensor, a force sensor, a position sensor, a location sensor, a velocity sensor, a displacement sensor, a temperature sensor, a thermographic sensor, a heat flux sensor, a tachometer sensor, a motion sensor, a magnetic field sensor, an electrical field sensor, a galvanic sensor, a current sensor, a flow sensor, a gaseous flow sensor, a non-gaseous fluid flow sensor, a heat flow sensor, a particulate flow sensor, a level sensor, a proximity sensor, a toxic gas sensor, a chemical sensor, a CBRNE sensor, a pH sensor, a hygrometer, a moisture sensor, a densitometer, an imaging sensor, a camera, an SSR, a triax probe, an ultrasonic sensor, a touch sensor, a microphone, a capacitive sensor, a strain gauge, an EMF meter, and the like.”; [2148] lines 8-13, “By way of these examples, the one or more sensors may be configured to measure partial pressure or particle count when sensing internal and/or external emission such as diatomic hydrogen, carbon dioxide, carbon monoxide, and other combustion byproducts.”).
Regarding claim 44, De Masi in view of Cella teaches The non-transitory computer readable medium according to Claim 42, wherein the pipeline variable (104) is obtained from the empirical data (102) by means selected from theoretical equation, algorithm, software simulation, or machine learning (De Masi: Equations 1 and 2; Cella: position sensor, location sensor). One of ordinary skill in the art would recognize that the pipeline inclination and concavity are determined by the equations and positional data, such as from the position or location sensors of Cella.
Regarding claim 45, De Masi in view of Cella teaches The non-transitory computer readable medium according to Claim 44, wherein the pipeline variable (104) is selected from gas velocities, liquid densities, liquid velocities, liquid viscosities, pressures, superficial gas velocities, superficial liquid velocities, temperatures, or combinations thereof (De Masi: “Fluid dynamic multiphase variables (flow regime, pressure, gas flow, total flow, liquid velocity, gas velocity)”).
Regarding claim 46, De Masi in view of Cella teaches The non-transitory computer readable medium according to Claim 42, wherein step i) the empirical data (102) and the pipeline variable (104) including gas velocities, pressures, temperatures and pipe diameters (Cella: sensors of paragraph [1048], including a temperature sensor, a flow sensor, a pressure sensor, a gaseous flow sensor, a hygrometer, a moisture sensor) are fed to the water condensation rate module (106).
Regarding claim 47, De Masi in view of Cella teaches The non-transitory computer readable medium according to Claim 42, wherein step i) the empirical data (102) and the pipeline variable (104) including liquid densities, liquid viscosities, superficial gas velocities, superficial liquid velocities, temperatures and pipe diameters are fed to the flow regime module (108) (Cella: [1457] lines 4-7, “Vibration sensors, flow sensors, pressure sensors, temperature sensors, acoustic sensors, and the like may be utilized by the system to generate data regarding the operation of the fluid pumping system.”; De Masi: page 19 “The pipeline has been characterized by its geometrical features: elevation, inclination and concavity.”).
Regarding claim 48, De Masi in view of Cella teaches The non-transitory computer readable medium according to Claim 42, wherein step i) the empirical data (102) and the pipeline variable (104) including liquid velocities, liquid viscosities, pressures, CO2 pressures and temperatures are fed to the corrosion rate module (110) (De Masi: “Fluid dynamic multiphase variables (flow regime, pressure, gas flow, total flow, liquid velocity, gas velocity)”; Cella: sensors of paragraph [1048], including flow sensor, a gaseous flow sensor, a non-gaseous fluid flow sensor, pressure sensor, temperature sensor; [2148] lines 8-13, “By way of these examples, the one or more sensors may be configured to measure partial pressure or particle count when sensing internal and/or external emission such as diatomic hydrogen, carbon dioxide, carbon monoxide, and other combustion byproducts.”).
Regarding claim 49, De Masi in view of Cella teaches The non-transitory computer readable medium according to Claim 42, wherein step i) the empirical data (102) is fed to the operating data module (112) (De Masi: Equations 1 and 2, Fig. 2).
Regarding claim 50, De Masi in view of Cella teaches The non-transitory computer readable medium according to Claim 41, wherein step i) the water condensation rate module (106), the flow regime module (108), the corrosion rate module (110) and the operating data module (112) comprise n hidden layers, where n is selected from an integer of 2 to 10 (Cella: [0927] lines 1-7, “In embodiments, methods and systems described herein that involve an expert system or self-organization capability may use a probabilistic neural network (“PNN”), which in embodiments may comprise a multi-layer (e.g., four-layer) feedforward neural network, where layers may include input layers, hidden layers, pattern/summation layers and an output layer.”). A neural network having multiple hidden layers encompasses a configuration with 2 to 10 hidden layers.
Even if De Masi in view of Cella does not explicitly teach 2 to 10 hidden layers, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to select the appropriate number of hidden layers for each module, since it has been held that where the general conditions of a claim are disclosed in the prior art, discovering the optimum or workable ranges involves only routine skill in the art. In re Aller, 105 USPQ 233.
Regarding claim 53, De Masi in view of Cella teaches The non-transitory computer readable medium according to Claim 41, wherein step ii) the supervised machine learning technique is selected from a back propagation means, a gradient descent means or a logistic regression means (De Masi: page 20, “Several training algorithms were tested; finally the Levenberg-Marquardt back propagation algorithm was selected as the one producing best prediction”). The Levenberg-Marquardt back propagation algorithm is the back propagation means.
Regarding claim 54, De Masi in view of Cella teaches The non-transitory computer readable medium according to Claim 53, wherein the supervised machine learning technique is a back propagation means (De Masi: page 20, “Several training algorithms were tested; finally the Levenberg-Marquardt back propagation algorithm was selected as the one producing best prediction”). The Levenberg-Marquardt back propagation algorithm is the back propagation means.
Regarding claim 55, De Masi in view of Cella teaches The non-transitory computer readable medium according to Claim 41, wherein step ii) the first neural network (600) is an artificial neural network (ANN) (De Masi: Abstract: artificial neural network (ANN); Figs. 6 and 7; Cella: [0904] “References to a neural net throughout this disclosure should be understood to encompass a wide range of different types of neural networks, machine learning systems, artificial intelligence systems, and the like, such as […] artificial neural networks”).
Regarding claim 56, De Masi in view of Cella teaches The non-transitory computer readable medium according to Claim 55, wherein the first neural network (600) (Cella: modular neural network) comprises steps of:
obtaining the empirical data (102) (Cella: [0967] “In embodiments, a monitoring system for data collection in an industrial environment may comprise: a data collector communicatively coupled to a plurality of input channels; a data storage structured to store a collector route template, sensor specifications for sensors that correspond to the input channels, wherein the collector route template comprises a sensor collection routine; a data acquisition circuit structured to interpret a plurality of detection values, each of the plurality of detection values corresponding to at least one of the input channels; and a data analysis circuit structured to receive output data from the plurality of input channels and evaluate the received output data with respect to a rule, wherein the data collector is configured to modify the sensor collection routine based on the application of the rule to the received output data.”) and the pipeline variable (104) (De Masi: page 20, “The FNN integrates all the above quantities as input values. Therefore, input variables are of three types: Geometrical pipeline characteristics (elevation, inclination and concavity) Fluid dynamic multiphase variables (flow regime, pressure, gas flow, total flow, liquid velocity, gas velocity) Deterministic models (de Waard and NORSOK)”);
generating an output layer (Cella: [0927] lines 1-7, “In embodiments, methods and systems described herein that involve an expert system or self-organization capability may use a probabilistic neural network (“PNN”), which in embodiments may comprise a multi-layer (e.g., four-layer) feedforward neural network, where layers may include input layers, hidden layers, pattern/summation layers and an output layer.”) by calculating the empirical data (102) and the pipeline variable (104) with a physical model of water condensation rate (Cella: [0318] lines 1-5, “In embodiments, the platform 100 may include the local data collection system 102 deployed in the environment 104 using machine learning to enable derivation-based learning outcomes from computers without the need to program them.”);
training the first neural network (600) (Cella: [0910] lines 1-11, “In many embodiments, an expert system or neural network may be trained, such as by a human operator or supervisor, or based on a data set, model, or the like. Training may include presenting the neural network with one or more training data sets that represent values, such as sensor data, event data, parameter data, and other types of data (including the many types described throughout this disclosure), as well as one or more indicators of an outcome, such as an outcome of a process, an outcome of a calculation, an outcome of an event, an outcome of an activity, or the like.”) with one or more hidden layers (Cella: [0927] lines 1-7, “In embodiments, methods and systems described herein that involve an expert system or self-organization capability may use a probabilistic neural network (“PNN”), which in embodiments may comprise a multi-layer (e.g., four-layer) feedforward neural network, where layers may include input layers, hidden layers, pattern/summation layers and an output layer.”) and a back propagation means (Cella: [0924] lines 1-7, “In embodiments, methods and systems described herein that involve an expert system or self-organization capability may use a feedforward, back-propagation multi-layer perceptron (MLP) neural network, such as for handling one or more remote sensing applications, such as for taking inputs from sensors distributed throughout various industrial environments.”);
transforming weights of one or more hidden layers to be the initial weights of the water condensation rate module (106) (Cella: [1005] lines 15-23, “The expert system may (optionally using a neural net, machine learning system, deep learning system, or the like, which may occur under supervision by one or more supervisors (human or automated)) intelligently manage bands aligned with different goals and assign weights, parameter modifications, or recommendations based on a factor, such as a bias towards one goal or a compromise to allow better alignment with all goals being tracked, for example.”; [1674] “A further embodiment of any of the foregoing embodiments of the present disclosure may include situations wherein the expert system is a machine learning system that iteratively configures at least one of a set of inputs, a set of weights, and a set of functions based on feedback relating to the data channel.”). Cella's configuring of weights corresponds to the claimed transforming of weights.
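By way of illustration only (not drawn from De Masi or Cella; all names, sizes, and values are hypothetical), the claimed sequence of training a network with hidden layers via back propagation and then transforming the trained hidden-layer weights into the initial weights of a module can be sketched as:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_first_network(x, y, hidden=8, lr=0.05, epochs=500):
    """Train a one-hidden-layer network with plain back-propagation."""
    w1 = rng.normal(0, 0.5, (x.shape[1], hidden))  # input -> hidden weights
    w2 = rng.normal(0, 0.5, (hidden, 1))           # hidden -> output weights
    for _ in range(epochs):
        h = np.tanh(x @ w1)   # hidden-layer activations
        pred = h @ w2         # linear output layer
        err = pred - y
        # back-propagate the error through both layers
        grad_w2 = h.T @ err / len(x)
        grad_w1 = x.T @ ((err @ w2.T) * (1 - h**2)) / len(x)
        w2 -= lr * grad_w2
        w1 -= lr * grad_w1
    return w1, w2

# stand-ins for the empirical data (102) / pipeline variable (104) inputs
# and a physical-model target; both synthetic for illustration
x = rng.normal(size=(64, 3))
y = np.tanh(x @ np.array([[0.5], [-0.3], [0.8]]))

w1, w2 = train_first_network(x, y)
# "transforming weights": the trained hidden-layer weights serve as the
# initial weights of the (hypothetically named) module
module_initial_weights = w1.copy()
```

This reflects the general transfer-of-weights pattern the claim recites, under the stated assumptions; it is not asserted to be the implementation of either reference.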
Regarding claim 57, De Masi in view of Cella teaches The non-transitory computer readable medium according to Claim 41, wherein step ii) the second neural network (700) is an artificial neural network (ANN) (De Masi: Abstract: artificial neural network (ANN); Figs. 6 and 7; Cella: [0904] “References to a neural net throughout this disclosure should be understood to encompass a wide range of different types of neural networks, machine learning systems, artificial intelligence systems, and the like, such as […] artificial neural networks”).
Regarding claim 58, De Masi in view of Cella teaches The non-transitory computer readable medium according to Claim 57, wherein the second neural network (700) (Cella: modular neural network) comprises steps of:
obtaining the empirical data (102) (Cella: [0967] “In embodiments, a monitoring system for data collection in an industrial environment may comprise: a data collector communicatively coupled to a plurality of input channels; a data storage structured to store a collector route template, sensor specifications for sensors that correspond to the input channels, wherein the collector route template comprises a sensor collection routine; a data acquisition circuit structured to interpret a plurality of detection values, each of the plurality of detection values corresponding to at least one of the input channels; and a data analysis circuit structured to receive output data from the plurality of input channels and evaluate the received output data with respect to a rule, wherein the data collector is configured to modify the sensor collection routine based on the application of the rule to the received output data.”) and the pipeline variable (104) (De Masi: page 20, “The FNN integrates all the above quantities as input values. Therefore, input variables are of three types: Geometrical pipeline characteristics (elevation, inclination and concavity) Fluid dynamic multiphase variables (flow regime, pressure, gas flow, total flow, liquid velocity, gas velocity) Deterministic models (de Waard and NORSOK)”);
generating an output layer (Cella: [0927] lines 1-7, “In embodiments, methods and systems described herein that involve an expert system or self-organization capability may use a probabilistic neural network (“PNN”), which in embodiments may comprise a multi-layer (e.g., four-layer) feedforward neural network, where layers may include input layers, hidden layers, pattern/summation layers and an output layer.”) by calculating the empirical data (102) and the pipeline variable (104) with a physical model of water condensation rate (Cella: [0318] lines 1-5, “In embodiments, the platform 100 may include the local data collection system 102 deployed in the environment 104 using machine learning to enable derivation-based learning outcomes from computers without the need to program them.”);
training the second neural network (700) (Cella: [0910] lines 1-11, “In many embodiments, an expert system or neural network may be trained, such as by a human operator or supervisor, or based on a data set, model, or the like. Training may include presenting the neural network with one or more training data sets that represent values, such as sensor data, event data, parameter data, and other types of data (including the many types described throughout this disclosure), as well as one or more indicators of an outcome, such as an outcome of a process, an outcome of a calculation, an outcome of an event, an outcome of an activity, or the like.”) with one or more hidden layers (Cella: [0927] lines 1-7, “In embodiments, methods and systems described herein that involve an expert system or self-organization capability may use a probabilistic neural network (“PNN”), which in embodiments may comprise a multi-layer (e.g., four-layer) feedforward neural network, where layers may include input layers, hidden layers, pattern/summation layers and an output layer.”) and a back propagation means (Cella: [0924] lines 1-7, “In embodiments, methods and systems described herein that involve an expert system or self-organization capability may use a feedforward, back-propagation multi-layer perceptron (MLP) neural network, such as for handling one or more remote sensing applications, such as for taking inputs from sensors distributed throughout various industrial environments.”);
transforming weights of one or more hidden layers to be the initial weights of the flow regime module (108) (Cella: [1005] lines 15-23, “The expert system may (optionally using a neural net, machine learning system, deep learning system, or the like, which may occur under supervision by one or more supervisors (human or automated)) intelligently manage bands aligned with different goals and assign weights, parameter modifications, or recommendations based on a factor, such as a bias towards one goal or a compromise to allow better alignment with all goals being tracked, for example.”; [1674] “A further embodiment of any of the foregoing embodiments of the present disclosure may include situations wherein the expert system is a machine learning system that iteratively configures at least one of a set of inputs, a set of weights, and a set of functions based on feedback relating to the data channel.”). Cella's configuring of weights corresponds to the claimed transforming of weights.
Regarding claim 59, De Masi in view of Cella teaches The non-transitory computer readable medium according to Claim 41, wherein step ii) the third neural network (800) is an artificial neural network (ANN) (De Masi: Abstract: artificial neural network (ANN); Figs. 6 and 7; Cella: [0904] “References to a neural net throughout this disclosure should be understood to encompass a wide range of different types of neural networks, machine learning systems, artificial intelligence systems, and the like, such as […] artificial neural networks”).
Regarding claim 60, De Masi in view of Cella teaches The non-transitory computer readable medium according to Claim 59, wherein the third neural network (800) (Cella: modular neural network) comprises steps of:
obtaining the empirical data (102) (Cella: [0967] “In embodiments, a monitoring system for data collection in an industrial environment may comprise: a data collector communicatively coupled to a plurality of input channels; a data storage structured to store a collector route template, sensor specifications for sensors that correspond to the input channels, wherein the collector route template comprises a sensor collection routine; a data acquisition circuit structured to interpret a plurality of detection values, each of the plurality of detection values corresponding to at least one of the input channels; and a data analysis circuit structured to receive output data from the plurality of input channels and evaluate the received output data with respect to a rule, wherein the data collector is configured to modify the sensor collection routine based on the application of the rule to the received output data.”) and the pipeline variable (104) (De Masi: page 20, “The FNN integrates all the above quantities as input values. Therefore, input variables are of three types: Geometrical pipeline characteristics (elevation, inclination and concavity) Fluid dynamic multiphase variables (flow regime, pressure, gas flow, total flow, liquid velocity, gas velocity) Deterministic models (de Waard and NORSOK)”);
generating an output layer (Cella: [0927] lines 1-7, “In embodiments, methods and systems described herein that involve an expert system or self-organization capability may use a probabilistic neural network (“PNN”), which in embodiments may comprise a multi-layer (e.g., four-layer) feedforward neural network, where layers may include input layers, hidden layers, pattern/summation layers and an output layer.”) by calculating the empirical data (102) and the pipeline variable (104) with a physical model of water condensation rate (Cella: [0318] lines 1-5, “In embodiments, the platform 100 may include the local data collection system 102 deployed in the environment 104 using machine learning to enable derivation-based learning outcomes from computers without the need to program them.”);
training the third neural network (800) (Cella: [0910] lines 1-11, “In many embodiments, an expert system or neural network may be trained, such as by a human operator or supervisor, or based on a data set, model, or the like. Training may include presenting the neural network with one or more training data sets that represent values, such as sensor data, event data, parameter data, and other types of data (including the many types described throughout this disclosure), as well as one or more indicators of an outcome, such as an outcome of a process, an outcome of a calculation, an outcome of an event, an outcome of an activity, or the like.”) with one or more hidden layers (Cella: [0927] lines 1-7, “In embodiments, methods and systems described herein that involve an expert system or self-organization capability may use a probabilistic neural network (“PNN”), which in embodiments may comprise a multi-layer (e.g., four-layer) feedforward neural network, where layers may include input layers, hidden layers, pattern/summation layers and an output layer.”) and a back propagation means (Cella: [0924] lines 1-7, “In embodiments, methods and systems described herein that involve an expert system or self-organization capability may use a feedforward, back-propagation multi-layer perceptron (MLP) neural network, such as for handling one or more remote sensing applications, such as for taking inputs from sensors distributed throughout various industrial environments.”);
transforming weights of one or more hidden layers to be the initial weights of the water condensation rate module (106) (Cella: [1005] lines 15-23, “The expert system may (optionally using a neural net, machine learning system, deep learning system, or the like, which may occur under supervision by one or more supervisors (human or automated)) intelligently manage bands aligned with different goals and assign weights, parameter modifications, or recommendations based on a factor, such as a bias towards one goal or a compromise to allow better alignment with all goals being tracked, for example.”; [1674] “A further embodiment of any of the foregoing embodiments of the present disclosure may include situations wherein the expert system is a machine learning system that iteratively configures at least one of a set of inputs, a set of weights, and a set of functions based on feedback relating to the data channel.”). Cella's configuring of weights corresponds to the claimed transforming of weights.
Regarding claim 61, De Masi teaches A computer program comprising instructions for implementing a method for predicting pipeline corrosion (Abstract) comprising steps of:
generating a predictive model (100) based on a neural network (Fig. 1) comprising:
obtaining a set of input data (page 20, “Therefore, input variables are of three types: Geometrical pipeline characteristics (elevation, inclination and concavity) Fluid dynamic multiphase variables (flow regime, pressure, gas flow, total flow, liquid velocity, gas velocity) Deterministic models (de Waard and NORSOK)”);
providing four modules (page 19, “Multiphase flow modelling is based on OLGA software”; “The pipeline has been characterized by its geometrical features: elevation, inclination and concavity.”; de Waard model; and NORSOK model) relevant to pipeline corrosion including a water condensation rate module (106) (page 19, “Water plays a crucial role for corrosion, enhancing corrosion rate depending on its hold-up and velocity, gas flow rate, pressure and temperature and pipeline inclination. In our specific case, water can be considered a phase separated from gas, at the bottom of pipe.”), a flow regime module (108) (page 19, “This program provides information on temperature profile along the pipeline, pressure profile, velocity profiles of each phase, phase hold-ups and flow regimes, given boundary pressure, temperature values and flow composition.”; “Fluid regime is described by a discrete number as follows: 1: stratified flow 2: annular flow 3: slug flow 4: bubble flow”), a corrosion rate module (110) (de Waard model), and an operating data module (112) (page 19, “The pipeline has been characterized by its geometrical features: elevation, inclination and concavity.”; Equations 1 and 2, Fig. 2);
dividing the input data and feeding the divided input data to said four modules (see II. Methodology: A-C; page 20, “The FNN integrates all the above quantities as input values. Therefore, input variables are of three types: Geometrical pipeline characteristics (elevation, inclination and concavity) Fluid dynamic multiphase variables (flow regime, pressure, gas flow, total flow, liquid velocity, gas velocity) Deterministic models (de Waard and NORSOK)”);
concatenating said four modules to output a depth of metal loss rate (122) (page 20, “The FNN integrates all the above quantities as input values”; Figs. 6-7);
applying a supervised machine learning technique for training the predictive model (100) generated from step i) (page 20, “Several training algorithms were tested; finally the Levenberg-Marquardt back propagation algorithm was selected as the one producing best prediction[15]. Two (or more) layer fitting networks can fit any finite input-output nonlinear relationship arbitrarily well, given enough hidden neurons”);
applying the predictive model (100) from step ii) to other set of the input data in order to predict a depth of metal loss rate (122) (page 21, “Three quantities are predicted by FNN: CR, metal loss and area of defects. For each variable, a FNN is implemented. CR value derives from the dataset of comparison between 2005 and 2012.”).
The referenced Levenberg-Marquardt back propagation algorithm is a supervised machine learning technique, as evidenced by Denizhan (Onur Denizhan, Comparison of different supervised learning algorithms for position analysis of the slider-crank mechanism, Alexandria Engineering Journal, Volume 92, 2024, Pages 39-49, ISSN 1110-0168, https://doi.org/10.1016/j.aej.2024.02.055.) (Abstract: “the application of following three different supervised learning algorithms to the position analysis of the slider-crank (R-RRT) mechanism is investigated using analytical solution datasets: the Levenberg-Marquardt Backpropagation (LM) algorithm, Bayesian Regularization (BR) algorithm, and Scaled Conjugate Gradient Descent (SCG) algorithm.”).
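For illustration only (not drawn from De Masi, Cella, or Denizhan; the model, data, and damping value are hypothetical), the Levenberg-Marquardt update is supervised in that each step is driven by residuals against known target values, solving (JᵀJ + λI)Δw = Jᵀr:

```python
import numpy as np

def levenberg_marquardt_step(w, x, y, damping=1e-2):
    """One Levenberg-Marquardt update for a toy model f(x; w) = tanh(x @ w).

    Supervised: the update minimizes the residual against labeled targets y.
    """
    pred = np.tanh(x @ w)
    residual = pred - y                     # error vs. labeled data
    # Jacobian of the residuals w.r.t. the weights:
    # d tanh(x @ w)/dw = (1 - tanh^2(x @ w)) * x
    jac = (1 - pred**2)[:, None] * x
    # damped normal equations: (J^T J + damping * I) dw = J^T r
    a = jac.T @ jac + damping * np.eye(len(w))
    dw = np.linalg.solve(a, jac.T @ residual)
    return w - dw

rng = np.random.default_rng(1)
x = rng.normal(size=(50, 2))
true_w = np.array([0.7, -0.4])
y = np.tanh(x @ true_w)       # labeled training targets

w = np.zeros(2)               # initial weights
for _ in range(20):
    w = levenberg_marquardt_step(w, x, y)
```

The damping term interpolates between gradient descent (large λ) and Gauss-Newton (small λ), which is the usual rationale for the algorithm's robustness in network training.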
De Masi does not teach A computer program, wherein, step ii), applying the supervised machine learning technique includes applying a first neural network (600) to the water condensation rate module (106), applying a second neural network (700) to the flow regime module (108), and applying a third neural network (800) to the corrosion rate module (110), in order to obtain initial weights of each module.
Cella teaches analogous instructions of utilizing a neural network to analyze input data, comprising A computer program ([1807] lines 4-8, “The present disclosure may be implemented as a method on the machine, as a system or apparatus as part of or in relation to the machine, or as a computer program product embodied in a computer readable medium executing on one or more of the machines.”), wherein, step ii), applying the supervised machine learning technique ([0905] lines 1-10, “In embodiments, the foregoing neural network may be configured to connect with a DAQ instrument and other data collectors that may receive analog signals from one or more sensors. The foregoing neural networks may also be configured to interface with, connect to, or integrate with expert systems that can be local and/or available through one or more cloud networks. In embodiments, FIGS. 110 through 136 depict exemplary neural networks and FIG. 109 depicts a legend showing the various components of the neural networks depicted throughout FIGS. 
110 to 136.”) includes applying a first neural network (600) to the water condensation rate module (106) ([1272] on page 204 left column, “A moisture sensing device can detect the liquid, condensation or H2O content of the target or its environment.”), applying a second neural network (700) to the flow regime module (108) ([1457] lines 4-7, “Vibration sensors, flow sensors, pressure sensors, temperature sensors, acoustic sensors, and the like may be utilized by the system to generate data regarding the operation of the fluid pumping system.”), and applying a third neural network (800) to the corrosion rate module (110) ([2140] lines 1-4, “In embodiments, the methods and systems disclosed herein may include, connect with or be integrated with sensors that may monitor interconnections for corrosion or other conditions”), in order to obtain initial weights of each module ([0211] lines 28-51, “Where sufficient understanding of the underlying structure or behavior of a system is not known, insufficient data is not available, or in other cases where preferred for various reasons, machine learning may also be undertaken in the absence of an underlying model; that is, input sources may be weighted, structured, or the like within a machine learning facility without regard to any a priori understanding of structure, and outcomes (such as those based on measures of success at accomplishing various desired objectives) can be serially fed to the machine learning system to allow it to learn how to achieve the targeted objectives. For example, the system may learn to recognize faults, to recognize patterns, to develop models or functions, to develop rules, to optimize performance, to minimize failure rates, to optimize profits, to optimize resource utilization, to optimize flow (such as flow of traffic), or to optimize many other parameters that may be relevant to successful outcomes (such as outcomes in a wide range of environments). 
Machine learning may use genetic programming techniques, such as promoting or demoting one or more input sources, structures, data types, objects, weights, nodes, links, or other factors based on feedback (such that successful elements emerge over a series of generations).”)
The separate neural networks are taught to be applied to separate modules ([0920] “In embodiments, methods and systems described herein that involve an expert system or self-organization capability may use a modular neural network, which may comprise a series of independent neural networks (such as ones of various types described herein) that are moderated by an intermediary. Each of the independent neural networks in the modular neural network may work with separate inputs, accomplishing subtasks that make up the task the modular network as whole is intended to perform. For example, a modular neural network may comprise a recurrent neural network for pattern recognition, such as to recognize what type of industrial machine is being sensed by one or more sensors that are provided as input channels to the modular network and an RBF neural network for optimizing the behavior of the machine once understood. The intermediary may accept inputs of each of the individual neural networks, process them, and create output for the modular neural network, such an appropriate control parameter, a prediction of state, or the like.”). The independent neural networks of the modular neural network are the first, second, and third neural networks. One of ordinary skill in the art would recognize that the modular network working with separate inputs (e.g., the data from each module) is the application of the first, second, and third neural networks to their respective modules. Cella recites several embodiments, the elements of which may be configured and combined in manners that would be obvious to one of ordinary skill in the art ([2186] “While the foregoing written description enables one skilled in the art to make and use what is considered presently to be the best mode thereof, those skilled in the art will understand and appreciate the existence of variations, combinations, and equivalents of the specific embodiment, method, and examples herein. 
The disclosure should therefore not be limited by the above described embodiment, method, and examples, but by all embodiments and methods within the scope and spirit of the disclosure.”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the instructions of De Masi to include the first, second, and third neural networks of Cella because it would yield predictable and advantageous results. The neural networks of Cella are taught to be used with, alongside, or in place of some functions of a single neural network (see paragraph [0921]), and thus would yield predictable results when integrated into the neural network model of De Masi. The application of separate neural networks to each of the modules, which represent different types of sensor information, would yield advantageous results of having each neural network trained on one type of sensor information, thereby increasing the accuracy of the model.
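The modular arrangement Cella describes in paragraph [0920] (independent networks working on separate inputs, moderated by an intermediary) can be sketched for illustration only; the module names, input split, and sizes below are hypothetical and not taken from either reference:

```python
import numpy as np

rng = np.random.default_rng(2)

def make_module_network(n_inputs, hidden=4):
    """A small independent network for one module's slice of the input."""
    return {"w1": rng.normal(0, 0.5, (n_inputs, hidden)),
            "w2": rng.normal(0, 0.5, (hidden, 1))}

def module_forward(net, x):
    return np.tanh(x @ net["w1"]) @ net["w2"]

# one independent neural network per module, as in a modular neural network
nets = {
    "water_condensation_rate": make_module_network(4),  # first network
    "flow_regime": make_module_network(6),              # second network
    "corrosion_rate": make_module_network(5),           # third network
}

# divide a combined input row among the modules
x = rng.normal(size=(1, 15))
slices = {"water_condensation_rate": x[:, :4],
          "flow_regime": x[:, 4:10],
          "corrosion_rate": x[:, 10:15]}

# concatenate the independent module outputs
module_outputs = np.concatenate(
    [module_forward(nets[k], slices[k]) for k in nets], axis=1)

# the intermediary combines the module outputs into one prediction
w_out = rng.normal(0, 0.5, (3, 1))
prediction = module_outputs @ w_out
```

This is offered only to show the structure of independent subtask networks feeding an intermediary, consistent with the cited passage.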
Regarding claim 62, De Masi in view of Cella teaches The computer program according to Claim 61, wherein step i) the input data comprises an empirical data (102) (Cella: [0967] “In embodiments, a monitoring system for data collection in an industrial environment may comprise: a data collector communicatively coupled to a plurality of input channels; a data storage structured to store a collector route template, sensor specifications for sensors that correspond to the input channels, wherein the collector route template comprises a sensor collection routine; a data acquisition circuit structured to interpret a plurality of detection values, each of the plurality of detection values corresponding to at least one of the input channels; and a data analysis circuit structured to receive output data from the plurality of input channels and evaluate the received output data with respect to a rule, wherein the data collector is configured to modify the sensor collection routine based on the application of the rule to the received output data.”) and a pipeline variable (104) (De Masi: page 20, “The FNN integrates all the above quantities as input values. Therefore, input variables are of three types: Geometrical pipeline characteristics (elevation, inclination and concavity) Fluid dynamic multiphase variables (flow regime, pressure, gas flow, total flow, liquid velocity, gas velocity) Deterministic models (de Waard and NORSOK)”).
Regarding claim 63, De Masi in view of Cella teaches The computer program according to Claim 62, wherein the empirical data (102) is selected from distances, pipe diameters, export pressures, export temperatures, gas flow rates, water flow rates, condensate flow rates, amounts of CO2, amounts of H2S, pipeline corrosion allowance, pipeline design life, pipeline nominal thickness, concrete thickness, insulation thickness, or combinations thereof (Cella: [1048] lines 19-37, “The data collector 10804 may include the data collection circuit 10808. The ambient environment condition or local sensors include one or more of a noise sensor, a temperature sensor, a flow sensor, a pressure sensor, a chemical sensor, a vibration sensor, an acceleration sensor, an accelerometer, a Pressure sensor, a force sensor, a position sensor, a location sensor, a velocity sensor, a displacement sensor, a temperature sensor, a thermographic sensor, a heat flux sensor, a tachometer sensor, a motion sensor, a magnetic field sensor, an electrical field sensor, a galvanic sensor, a current sensor, a flow sensor, a gaseous flow sensor, a non-gaseous fluid flow sensor, a heat flow sensor, a particulate flow sensor, a level sensor, a proximity sensor, a toxic gas sensor, a chemical sensor, a CBRNE sensor, a pH sensor, a hygrometer, a moisture sensor, a densitometer, an imaging sensor, a camera, an SSR, a triax probe, an ultrasonic sensor, a touch sensor, a microphone, a capacitive sensor, a strain gauge, an EMF meter, and the like.”; [2148] lines 8-13, “By way of these examples, the one or more sensors may be configured to measure partial pressure or particle count when sensing internal and/or external emission such as diatomic hydrogen, carbon dioxide, carbon monoxide, and other combustion byproducts.”).
Regarding claim 64, De Masi in view of Cella teaches The computer program according to Claim 62, wherein the pipeline variable (104) is obtained from the empirical data (102) by means selected from theoretical equation, algorithm, software simulation, or machine learning (De Masi: Equations 1 and 2; Cella: position sensor, location sensor). One of ordinary skill in the art would recognize that the pipeline inclination and concavity are determined by the equations and positional data, such as from the position or location sensors of Cella.
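As an illustration only (the profile and units are hypothetical, not taken from De Masi or Cella), obtaining a pipeline variable such as inclination or concavity from empirical elevation data by theoretical equation reduces to taking first and second derivatives along the pipeline:

```python
import numpy as np

def pipeline_geometry(distance, elevation):
    """Derive inclination and concavity from an elevation profile.

    Hypothetical illustration of obtaining a pipeline variable from
    empirical positional data by a theoretical equation.
    """
    inclination = np.gradient(elevation, distance)   # first derivative
    concavity = np.gradient(inclination, distance)   # second derivative
    return inclination, concavity

# synthetic elevation profile: a shallow parabola over 1 km of pipeline
distance = np.linspace(0.0, 1000.0, 101)            # meters along the line
elevation = 1e-4 * (distance - 500.0) ** 2          # meters of elevation

incl, conc = pipeline_geometry(distance, elevation)
```

For this quadratic profile the interior concavity is constant (2e-4 per meter), which matches the analytic second derivative.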
Regarding claim 65, De Masi in view of Cella teaches The computer program according to Claim 64, wherein the pipeline variable (104) is selected from gas velocities, liquid densities, liquid velocities, liquid viscosities, pressures, superficial gas velocities, superficial liquid velocities, temperatures, or combinations thereof (De Masi: “Fluid dynamic multiphase variables (flow regime, pressure, gas flow, total flow, liquid velocity, gas velocity)”).
Regarding claim 66, De Masi in view of Cella teaches The computer program according to Claim 62, wherein step i) the empirical data (102) and the pipeline variable (104) including gas velocities, pressures, temperatures and pipe diameters (Cella: sensors of paragraph [1048], including a temperature sensor, a flow sensor, a pressure sensor, a gaseous flow sensor, a hygrometer, a moisture sensor) are fed to the water condensation rate module (106).
Regarding claim 67, De Masi in view of Cella teaches The computer program according to Claim 62, wherein step i) the empirical data (102) and the pipeline variable (104) including liquid densities, liquid viscosities, superficial gas velocities, superficial liquid velocities, temperatures and pipe diameters are fed to the flow regime module (108) (Cella: [1457] lines 4-7, “Vibration sensors, flow sensors, pressure sensors, temperature sensors, acoustic sensors, and the like may be utilized by the system to generate data regarding the operation of the fluid pumping system.”; De Masi: page 19 “The pipeline has been characterized by its geometrical features: elevation, inclination and concavity.”).
Regarding claim 68, De Masi in view of Cella teaches The computer program according to Claim 62, wherein step i) the empirical data (102) and the pipeline variable (104) including liquid velocities, liquid viscosities, pressures, CO2 pressures and temperatures are fed to the corrosion rate module (110) (De Masi: “Fluid dynamic multiphase variables (flow regime, pressure, gas flow, total flow, liquid velocity, gas velocity)”; Cella: sensors of paragraph [1048], including flow sensor, a gaseous flow sensor, a non-gaseous fluid flow sensor, pressure sensor, temperature sensor; [2148] lines 8-13, “By way of these examples, the one or more sensors may be configured to measure partial pressure or particle count when sensing internal and/or external emission such as diatomic hydrogen, carbon dioxide, carbon monoxide, and other combustion byproducts.”).
Regarding claim 69, De Masi in view of Cella teaches The computer program according to Claim 62, wherein step i) the empirical data (102) is fed to the operating data module (112) (De Masi: Equations 1 and 2, Fig. 2).
Regarding claim 70, De Masi in view of Cella teaches The computer program according to Claim 61, wherein step i) the water condensation rate module (106), the flow regime module (108), the corrosion rate module (110) and the operating data module (112) comprise n hidden layers, where n is selected from an integer of 2 to 10 (Cella: [0927] lines 1-7, “In embodiments, methods and systems described herein that involve an expert system or self-organization capability may use a probabilistic neural network (“PNN”), which in embodiments may comprise a multi-layer (e.g., four-layer) feedforward neural network, where layers may include input layers, hidden layers, pattern/summation layers and an output layer.”). A neural network having multiple hidden layers encompasses the claimed range of 2 to 10 hidden layers.
Even if De Masi in view of Cella does not explicitly teach 2 to 10 hidden layers, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to select an appropriate number of hidden layers for each module, since it has been held that where the general conditions of a claim are disclosed in the prior art, discovering the optimum or workable ranges involves only routine skill in the art. In re Aller, 105 USPQ 233.
Regarding claim 73, De Masi in view of Cella teaches The computer program according to Claim 61, wherein step ii) the supervised machine learning technique is selected from a back propagation means, a gradient descent means or a logistic regression means (De Masi: page 20, “Several training algorithms were tested; finally the Levenberg-Marquardt back propagation algorithm was selected as the one producing best prediction”). The Levenberg-Marquardt back propagation algorithm is the back propagation means.
Regarding claim 74, De Masi in view of Cella teaches The computer program according to Claim 63, wherein the supervised machine learning technique is a back propagation means (De Masi: page 20, “Several training algorithms were tested; finally the Levenberg-Marquardt back propagation algorithm was selected as the one producing best prediction”). The Levenberg-Marquardt back propagation algorithm is the back propagation means.
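As context for the back propagation means discussed in claims 73 and 74, the following is a minimal sketch of plain gradient-descent back propagation on a one-hidden-layer network with mean-squared-error loss; it is not the Levenberg-Marquardt variant De Masi selects, and all names and sizes are illustrative assumptions.

```python
import numpy as np

def train_backprop(X, y, n_hidden=8, lr=0.1, epochs=2000, seed=0):
    # One-hidden-layer network trained by plain back propagation (MSE loss).
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0.0, 0.5, (X.shape[1], n_hidden))
    W2 = rng.normal(0.0, 0.5, (n_hidden, 1))
    for _ in range(epochs):
        h = np.tanh(X @ W1)            # hidden activations
        pred = h @ W2                  # linear output
        err = pred - y                 # error propagated backward
        W2 -= lr * h.T @ err / len(X)
        W1 -= lr * X.T @ ((err @ W2.T) * (1.0 - h ** 2)) / len(X)
    return W1, W2
```

Levenberg-Marquardt differs in using a damped Gauss-Newton update rather than first-order gradient steps, but both qualify as back propagation of error through the network.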
Regarding claim 75, De Masi in view of Cella teaches The computer program according to Claim 61, wherein step ii) the first neural network (600) is an artificial neural network (ANN) (De Masi: Abstract: artificial neural network (ANN); Figs. 6 and 7; Cella: [0904] “References to a neural net throughout this disclosure should be understood to encompass a wide range of different types of neural networks, machine learning systems, artificial intelligence systems, and the like, such as […] artificial neural networks”).
Regarding claim 76, De Masi in view of Cella teaches The computer program according to Claim 75, wherein the first neural network (600) (Cella: modular neural network) comprises steps of:
obtaining the empirical data (102) (Cella: [0967] “In embodiments, a monitoring system for data collection in an industrial environment may comprise: a data collector communicatively coupled to a plurality of input channels; a data storage structured to store a collector route template, sensor specifications for sensors that correspond to the input channels, wherein the collector route template comprises a sensor collection routine; a data acquisition circuit structured to interpret a plurality of detection values, each of the plurality of detection values corresponding to at least one of the input channels; and a data analysis circuit structured to receive output data from the plurality of input channels and evaluate the received output data with respect to a rule, wherein the data collector is configured to modify the sensor collection routine based on the application of the rule to the received output data.”) and the pipeline variable (104) (De Masi: page 20, “The FNN integrates all the above quantities as input values. Therefore, input variables are of three types: Geometrical pipeline characteristics (elevation, inclination and concavity) Fluid dynamic multiphase variables (flow regime, pressure, gas flow, total flow, liquid velocity, gas velocity) Deterministic models (de Waard and NORSOK)”);
generating an output layer (Cella: [0927] lines 1-7, “In embodiments, methods and systems described herein that involve an expert system or self-organization capability may use a probabilistic neural network (“PNN”), which in embodiments may comprise a multi-layer (e.g., four-layer) feedforward neural network, where layers may include input layers, hidden layers, pattern/summation layers and an output layer.”) by calculating the empirical data (102) and the pipeline variable (104) with a physical model of water condensation rate (Cella: [0318] lines 1-5, “In embodiments, the platform 100 may include the local data collection system 102 deployed in the environment 104 using machine learning to enable derivation-based learning outcomes from computers without the need to program them.”);
training the first neural network (600) (Cella: [0910] lines 1-11, “In many embodiments, an expert system or neural network may be trained, such as by a human operator or supervisor, or based on a data set, model, or the like. Training may include presenting the neural network with one or more training data sets that represent values, such as sensor data, event data, parameter data, and other types of data (including the many types described throughout this disclosure), as well as one or more indicators of an outcome, such as an outcome of a process, an outcome of a calculation, an outcome of an event, an outcome of an activity, or the like.”) with one or more hidden layers (Cella: [0927] lines 1-7, “In embodiments, methods and systems described herein that involve an expert system or self-organization capability may use a probabilistic neural network (“PNN”), which in embodiments may comprise a multi-layer (e.g., four-layer) feedforward neural network, where layers may include input layers, hidden layers, pattern/summation layers and an output layer.”) and a back propagation means (Cella: [0924] lines 1-7, “In embodiments, methods and systems described herein that involve an expert system or self-organization capability may use a feedforward, back-propagation multi-layer perceptron (MLP) neural network, such as for handling one or more remote sensing applications, such as for taking inputs from sensors distributed throughout various industrial environments.”); and
transforming weights of one or more hidden layers to be the initial weights of the water condensation rate module (106) (Cella: [1005] lines 15-23, “The expert system may (optionally using a neural net, machine learning system, deep learning system, or the like, which may occur under supervision by one or more supervisors (human or automated)) intelligently manage bands aligned with different goals and assign weights, parameter modifications, or recommendations based on a factor, such as a bias towards one goal or a compromise to allow better alignment with all goals being tracked, for example.”; [1674] “A further embodiment of any of the foregoing embodiments of the present disclosure may include situations wherein the expert system is a machine learning system that iteratively configures at least one of a set of inputs, a set of weights, and a set of functions based on feedback relating to the data channel.”). The configuring of weights corresponds to the claimed transforming of weights.
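The weight transfer recited in this step — using the hidden-layer weights of a trained network as the initial weights of a module — can be sketched as follows. This is a hypothetical illustration; the list-of-matrices representation and all names are assumptions, not taken from the claims or the cited references.

```python
import numpy as np

def transfer_hidden_weights(trained_weights, module_weights, n_layers=1):
    # Copy the first n_layers weight matrices of a trained network into a
    # module's weight list, so they serve as the module's initial weights.
    initialized = [w.copy() for w in module_weights]
    for i in range(n_layers):
        if trained_weights[i].shape != initialized[i].shape:
            raise ValueError("layer shapes must match for weight transfer")
        initialized[i] = trained_weights[i].copy()
    return initialized
```

This transfer-learning-style initialization is the general mechanism the Cella passages on configuring weights describe at a higher level of abstraction.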
Regarding claim 77, De Masi in view of Cella teaches The computer program according to Claim 61, wherein step ii) the second neural network (700) is an artificial neural network (ANN) (De Masi: Abstract: artificial neural network (ANN); Figs. 6 and 7; Cella: [0904] “References to a neural net throughout this disclosure should be understood to encompass a wide range of different types of neural networks, machine learning systems, artificial intelligence systems, and the like, such as […] artificial neural networks”).
Regarding claim 78, De Masi in view of Cella teaches The computer program according to Claim 77, wherein the second neural network (700) (Cella: modular neural network) comprises steps of:
obtaining the empirical data (102) (Cella: [0967] “In embodiments, a monitoring system for data collection in an industrial environment may comprise: a data collector communicatively coupled to a plurality of input channels; a data storage structured to store a collector route template, sensor specifications for sensors that correspond to the input channels, wherein the collector route template comprises a sensor collection routine; a data acquisition circuit structured to interpret a plurality of detection values, each of the plurality of detection values corresponding to at least one of the input channels; and a data analysis circuit structured to receive output data from the plurality of input channels and evaluate the received output data with respect to a rule, wherein the data collector is configured to modify the sensor collection routine based on the application of the rule to the received output data.”) and the pipeline variable (104) (De Masi: page 20, “The FNN integrates all the above quantities as input values. Therefore, input variables are of three types: Geometrical pipeline characteristics (elevation, inclination and concavity) Fluid dynamic multiphase variables (flow regime, pressure, gas flow, total flow, liquid velocity, gas velocity) Deterministic models (de Waard and NORSOK)”);
generating an output layer (Cella: [0927] lines 1-7, “In embodiments, methods and systems described herein that involve an expert system or self-organization capability may use a probabilistic neural network (“PNN”), which in embodiments may comprise a multi-layer (e.g., four-layer) feedforward neural network, where layers may include input layers, hidden layers, pattern/summation layers and an output layer.”) by calculating the empirical data (102) and the pipeline variable (104) with a physical model of water condensation rate (Cella: [0318] lines 1-5, “In embodiments, the platform 100 may include the local data collection system 102 deployed in the environment 104 using machine learning to enable derivation-based learning outcomes from computers without the need to program them.”);
training the second neural network (700) (Cella: [0910] lines 1-11, “In many embodiments, an expert system or neural network may be trained, such as by a human operator or supervisor, or based on a data set, model, or the like. Training may include presenting the neural network with one or more training data sets that represent values, such as sensor data, event data, parameter data, and other types of data (including the many types described throughout this disclosure), as well as one or more indicators of an outcome, such as an outcome of a process, an outcome of a calculation, an outcome of an event, an outcome of an activity, or the like.”) with one or more hidden layers (Cella: [0927] lines 1-7, “In embodiments, methods and systems described herein that involve an expert system or self-organization capability may use a probabilistic neural network (“PNN”), which in embodiments may comprise a multi-layer (e.g., four-layer) feedforward neural network, where layers may include input layers, hidden layers, pattern/summation layers and an output layer.”) and a back propagation means (Cella: [0924] lines 1-7, “In embodiments, methods and systems described herein that involve an expert system or self-organization capability may use a feedforward, back-propagation multi-layer perceptron (MLP) neural network, such as for handling one or more remote sensing applications, such as for taking inputs from sensors distributed throughout various industrial environments.”); and
transforming weights of one or more hidden layers to be the initial weights of the flow regime module (108) (Cella: [1005] lines 15-23, “The expert system may (optionally using a neural net, machine learning system, deep learning system, or the like, which may occur under supervision by one or more supervisors (human or automated)) intelligently manage bands aligned with different goals and assign weights, parameter modifications, or recommendations based on a factor, such as a bias towards one goal or a compromise to allow better alignment with all goals being tracked, for example.”; [1674] “A further embodiment of any of the foregoing embodiments of the present disclosure may include situations wherein the expert system is a machine learning system that iteratively configures at least one of a set of inputs, a set of weights, and a set of functions based on feedback relating to the data channel.”). The configuring of weights corresponds to the claimed transforming of weights.
Regarding claim 79, De Masi in view of Cella teaches The computer program according to Claim 61, wherein step ii) the third neural network (800) is an artificial neural network (ANN) (De Masi: Abstract: artificial neural network (ANN); Figs. 6 and 7; Cella: [0904] “References to a neural net throughout this disclosure should be understood to encompass a wide range of different types of neural networks, machine learning systems, artificial intelligence systems, and the like, such as […] artificial neural networks”).
Regarding claim 80, De Masi in view of Cella teaches The computer program according to Claim 79, wherein the third neural network (800) (Cella: modular neural network) comprises steps of:
obtaining the empirical data (102) (Cella: [0967] “In embodiments, a monitoring system for data collection in an industrial environment may comprise: a data collector communicatively coupled to a plurality of input channels; a data storage structured to store a collector route template, sensor specifications for sensors that correspond to the input channels, wherein the collector route template comprises a sensor collection routine; a data acquisition circuit structured to interpret a plurality of detection values, each of the plurality of detection values corresponding to at least one of the input channels; and a data analysis circuit structured to receive output data from the plurality of input channels and evaluate the received output data with respect to a rule, wherein the data collector is configured to modify the sensor collection routine based on the application of the rule to the received output data.”) and the pipeline variable (104) (De Masi: page 20, “The FNN integrates all the above quantities as input values. Therefore, input variables are of three types: Geometrical pipeline characteristics (elevation, inclination and concavity) Fluid dynamic multiphase variables (flow regime, pressure, gas flow, total flow, liquid velocity, gas velocity) Deterministic models (de Waard and NORSOK)”);
generating an output layer (Cella: [0927] lines 1-7, “In embodiments, methods and systems described herein that involve an expert system or self-organization capability may use a probabilistic neural network (“PNN”), which in embodiments may comprise a multi-layer (e.g., four-layer) feedforward neural network, where layers may include input layers, hidden layers, pattern/summation layers and an output layer.”) by calculating the empirical data (102) and the pipeline variable (104) with a physical model of water condensation rate (Cella: [0318] lines 1-5, “In embodiments, the platform 100 may include the local data collection system 102 deployed in the environment 104 using machine learning to enable derivation-based learning outcomes from computers without the need to program them.”);
training the third neural network (800) (Cella: [0910] lines 1-11, “In many embodiments, an expert system or neural network may be trained, such as by a human operator or supervisor, or based on a data set, model, or the like. Training may include presenting the neural network with one or more training data sets that represent values, such as sensor data, event data, parameter data, and other types of data (including the many types described throughout this disclosure), as well as one or more indicators of an outcome, such as an outcome of a process, an outcome of a calculation, an outcome of an event, an outcome of an activity, or the like.”) with one or more hidden layers (Cella: [0927] lines 1-7, “In embodiments, methods and systems described herein that involve an expert system or self-organization capability may use a probabilistic neural network (“PNN”), which in embodiments may comprise a multi-layer (e.g., four-layer) feedforward neural network, where layers may include input layers, hidden layers, pattern/summation layers and an output layer.”) and a back propagation means (Cella: [0924] lines 1-7, “In embodiments, methods and systems described herein that involve an expert system or self-organization capability may use a feedforward, back-propagation multi-layer perceptron (MLP) neural network, such as for handling one or more remote sensing applications, such as for taking inputs from sensors distributed throughout various industrial environments.”); and
transforming weights of one or more hidden layers to be the initial weights of the corrosion rate module (110) (Cella: [1005] lines 15-23, “The expert system may (optionally using a neural net, machine learning system, deep learning system, or the like, which may occur under supervision by one or more supervisors (human or automated)) intelligently manage bands aligned with different goals and assign weights, parameter modifications, or recommendations based on a factor, such as a bias towards one goal or a compromise to allow better alignment with all goals being tracked, for example.”; [1674] “A further embodiment of any of the foregoing embodiments of the present disclosure may include situations wherein the expert system is a machine learning system that iteratively configures at least one of a set of inputs, a set of weights, and a set of functions based on feedback relating to the data channel.”). The configuring of weights corresponds to the claimed transforming of weights.
Claims 11, 31, 51, and 71 are rejected under 35 U.S.C. 103 as being unpatentable over De Masi in view of Cella as applied to claims 1, 21, 41, and 61, respectively, above, and further in view of Amer et al. (US 20190094124 A1, provided by applicant).
Regarding claim 11, De Masi in view of Cella teaches The method according to Claim 1, wherein step i) four modules are concatenated to output the depth of metal loss rate (122) (De Masi: page 20, “The FNN integrates all the above quantities as input values”; Figs. 6-7).
De Masi in view of Cella does not teach the method, wherein modules are concatenated to a concatenate layer (114).
Amer teaches an analogous method of identifying corrosion, wherein modules are concatenated to a concatenate layer (114) ([0033] lines 1-6, “The output of the preprocessing phase is an amalgamation of all of the inputs. The amalgamation can be achieved using various techniques (or combinations thereof) including, for example: a) concatenating variables to each other (e.g., appending environmental variables to thermographs”).
It would be obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the concatenate layer of Amer in the model of De Masi in view of Cella because it would yield predictable results. The concatenate layer, wherein the separate input variables are appended into a vector, would yield the predictable result of supplying the neural network a vector input, which is a well-known technique in the art of neural networks.
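The rationale above — appending the separate module outputs into a single vector input for the downstream network — can be sketched as follows; the function name and calling convention are illustrative assumptions, not part of Amer's disclosure.

```python
import numpy as np

def concatenate_inputs(*module_outputs):
    # Append each module's output into one flat feature vector, as a
    # concatenate layer would before feeding a downstream neural network.
    return np.concatenate(
        [np.atleast_1d(np.asarray(m, dtype=float)).ravel() for m in module_outputs]
    )
```

Supplying the resulting vector to a neural network is the predictable result the rejection relies on: concatenation of heterogeneous inputs into a single input vector is a conventional preprocessing step.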
Regarding claim 31, De Masi in view of Cella teaches The machine-learning system according to Claim 21, wherein step i) four modules are concatenated to output the depth of metal loss rate (122) (De Masi: page 20, “The FNN integrates all the above quantities as input values”; Figs. 6-7).
De Masi in view of Cella does not teach the system, wherein modules are concatenated to a concatenate layer (114).
Amer teaches an analogous system of identifying corrosion, wherein modules are concatenated to a concatenate layer (114) ([0033] lines 1-6, “The output of the preprocessing phase is an amalgamation of all of the inputs. The amalgamation can be achieved using various techniques (or combinations thereof) including, for example: a) concatenating variables to each other (e.g., appending environmental variables to thermographs”).
It would be obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the concatenate layer of Amer in the model of De Masi in view of Cella because it would yield predictable results. The concatenate layer, wherein the separate input variables are appended into a vector, would yield the predictable result of supplying the neural network a vector input, which is a well-known technique in the art of neural networks.
Regarding claim 51, De Masi in view of Cella teaches The non-transitory computer readable medium according to Claim 41, wherein step i) four modules are concatenated to output the depth of metal loss rate (122) (De Masi: page 20, “The FNN integrates all the above quantities as input values”; Figs. 6-7).
De Masi in view of Cella does not teach the instructions, wherein modules are concatenated to a concatenate layer (114).
Amer teaches analogous instructions for identifying corrosion, wherein modules are concatenated to a concatenate layer (114) ([0033] lines 1-6, “The output of the preprocessing phase is an amalgamation of all of the inputs. The amalgamation can be achieved using various techniques (or combinations thereof) including, for example: a) concatenating variables to each other (e.g., appending environmental variables to thermographs”).
It would be obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the concatenate layer of Amer in the model of De Masi in view of Cella because it would yield predictable results. The concatenate layer, wherein the separate input variables are appended into a vector, would yield the predictable result of supplying the neural network a vector input, which is a well-known technique in the art of neural networks.
Regarding claim 71, De Masi in view of Cella teaches The computer program according to Claim 61, wherein step i) four modules are concatenated to output the depth of metal loss rate (122) (De Masi: page 20, “The FNN integrates all the above quantities as input values”; Figs. 6-7).
De Masi in view of Cella does not teach the computer program, wherein modules are concatenated to a concatenate layer (114).
Amer teaches an analogous computer program for identifying corrosion, wherein modules are concatenated to a concatenate layer (114) ([0033] lines 1-6, “The output of the preprocessing phase is an amalgamation of all of the inputs. The amalgamation can be achieved using various techniques (or combinations thereof) including, for example: a) concatenating variables to each other (e.g., appending environmental variables to thermographs”).
It would be obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the concatenate layer of Amer in the model of De Masi in view of Cella because it would yield predictable results. The concatenate layer, wherein the separate input variables are appended into a vector, would yield the predictable result of supplying the neural network a vector input, which is a well-known technique in the art of neural networks.
Allowable Subject Matter
Claims 12, 32, 52, and 72 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to BRIAN BUTLER GEISS whose telephone number is (571)270-1248. The examiner can normally be reached Monday - Friday 7:30 am - 4:30 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Catherine Rastovski can be reached at (571)270-0349. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/B.B.G./Examiner, Art Unit 2863
/Catherine T. Rastovski/Supervisory Primary Examiner, Art Unit 2863