DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Specification
The specification has been checked, but not to the extent necessary to determine the presence of all possible minor errors. Applicant’s cooperation is requested in correcting any errors of which applicant may become aware in the specification.
Drawings
The applicant’s submitted drawings appear to be acceptable for examination purposes. Applicant’s cooperation is requested in correcting any errors of which applicant may become aware in the drawings.
Information Disclosure Statement
As required by M.P.E.P. 609(c), the applicant's submission of the Information Disclosure Statement, dated 28 February 2023, is acknowledged by the examiner and the cited references have been considered in the examination of the claims now pending. As required by M.P.E.P. 609(c)(2), a copy of the PTOL-1449, initialed and dated by the examiner, is attached to the instant Office action.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim(s) 1-20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Langone et al. (LS-SVM based spectral clustering and regression for predicting maintenance of industrial machines, Oct 2014, pgs. 268-278) in view of Mallak (Comprehensive Machine and Deep Learning Fault Detection and Classification Approaches of Industry 4.0 Mechanical Machineries: With Application to A Hydraulic Test Rig, March 2021, pgs. i-183).
As per claim 1, Langone teaches a computer-implemented method for predictive maintenance [an LS-SVM-based kernel spectral clustering (KSC) method is used to predict, in advance, maintenance actions in an analyzed machine (pg. 268, abstract, etc.)], the method comprising:
establishing a station sequence that includes a machine at a station that a given part traverses [the KSC is used on sensor data coming from a vertical form fill and seal (VFFS) machine (pg. 268, abstract, etc.)], each station including at least one machine that performs at least one operation with respect to the given part [the KSC is used on sensor data coming from a vertical form fill and seal (VFFS) machine (pg. 268, abstract, etc.) which packs and seals the container (part) (pg. 271, fig. 2; etc.)];
receiving measurement data relating to attributes of a plurality of parts [the data sets used include sensor data from the machine as well as event data from the processed bags (the plurality of parts) (pgs. 271-272, section 4 (DS_I and DS_II discuss the data types))], the measurement data being obtained by one or more sensors at each station, the measurement data corresponding to a current process period [the data sets used include sensor data from the machine as well as event data from the processed bags (the plurality of parts) at a given sampling frequency (pgs. 271-272, section 4 (DS_I and DS_II discuss the data types)); which is the measurement data corresponding to a current process period];
generating, via a first machine learning model, latent representations, by encoding the measurement data into a latent space [the clustering model encodes the measured data to a high-dimensional feature space, producing projections representing the latent variables (pg. 269, section 2.1; etc.); which are the latent representations in the latent space by a first machine learning model];
generating, via the first machine learning model, at least machine states of the machine based on the latent representations [a set of binary clustering indicators are produced from the latent variable projections and form a code-book, where each code book is a binary word of length k-1 representing a cluster (pg. 269, section 2.1; etc.) which is the machine state based on the latent representations];
receiving machine observation data relating to the current process period, the machine observation data indicating conditions of the plurality of machines at the plurality of stations [the data sets used include sensor data from the machine as well as event data from the processed bags (the plurality of parts) at a given sampling frequency (pgs. 271-272, section 4 (DS_I and DS_II discuss the data types)); where the sensor data from the machine is the machine observation data relating to the current process period];
generating aggregated data that is based on the measurement data and the machine observation data [a windowing operation is used to include past sensor signals with the current process period signals (measurement and machine observation data – see above), and the signals are concatenated and used by the model to distinguish good working conditions from a faulty state (pg. 273, section 5.1.1; etc.); where the concatenation is generating aggregated data (see also claim 3, below)]; and
generating, via a second machine learning model, a maintenance prediction based on the aggregated data, the maintenance prediction corresponding to a next process period [from the KSC model multi-step recursive prediction is performed with another NAR model to predict necessary maintenance multiple steps ahead (pg. 270, section 3.1; pg. 275, section 5.2; etc.); which is based on the aggregated data, above].
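For illustration only, the windowing/concatenation and multi-step recursive (NAR-style) prediction referenced in the mappings above can be sketched as follows; the toy one-step model and all names here are hypothetical and do not reproduce the code of either cited reference:

```python
import numpy as np

def make_windows(signal, width):
    # Windowing: pair each time step with its recent past, so the model
    # sees current-period signals concatenated with past signals.
    return np.array([signal[i - width:i] for i in range(width, len(signal))])

def recursive_forecast(one_step_model, history, steps, width):
    # Multi-step recursive (NAR-style) prediction: each one-step-ahead
    # output is fed back as an input for the next step.
    buf = list(history[-width:])
    preds = []
    for _ in range(steps):
        y = one_step_model(np.array(buf[-width:]))
        preds.append(y)
        buf.append(y)
    return preds

# Toy stand-in for a trained one-step model: predicts the window mean.
mean_model = lambda w: float(np.mean(w))

series = np.arange(10, dtype=float)
windows = make_windows(series, width=3)                 # shape (7, 3)
preds = recursive_forecast(mean_model, series, steps=3, width=3)
```

The recursive structure is what allows a prediction "multiple steps ahead" from a model trained only for one-step-ahead prediction.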
While Langone teaches a system for maintenance prediction using sensor data for a plurality of parts and a machine at a station through which the parts are processed (see above), as well as multiple stations (see, e.g., Langone: pg. 276, section 6, “option 1”; etc.), it has not been relied upon as teaching a station sequence that includes a plurality of machines at a plurality of stations.
Mallak teaches a station sequence that includes a plurality of machines at a plurality of stations [a test rig can include a plurality of sensors collecting data from a plurality of machines or machine components (pg. v, abstract; pg. 2, second to fourth paragraphs; pg. 17, section 3.2; pgs. 62-63, table 4; etc.) in a production line (pg. 1, section 1; etc.)].
Langone and Mallak are analogous art, as they are within the same field of endeavor, namely using machine learning models for maintenance prediction from sensor data.
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to extend the machine and station sensor data to multiple machines in multiple stations that a part passes in a production line, as taught by Mallak, for the production line sensor data taught by Langone.
Because both Langone and Mallak teach utilizing sensor data from parts and machines in a production line to perform maintenance prediction, it would have been obvious to one of ordinary skill in the art to extend the machine and station sensor data to multiple machines in multiple stations that a part passes in a production line, as taught by Mallak, for the production line sensor data taught by Langone, to achieve the predictable result of allowing predictions for production lines of different sizes. Additionally, it has been held that the mere duplication of the essential working parts of a device involves only routine skill in the art. St. Regis Paper Co. v. Bemis Co., 193 USPQ 8.
As per claim 2, Langone/Mallak teaches the method, further comprising:
generating a feature map by performing feature extraction on the machine states [the first machine learning model performs feature extraction from the sensor data (Langone: pg. 273, section 5.1; etc.) including mapping of the inputs to a high-dimensional feature space (Langone: pg. 269, section 2.1; etc.) and may perform feature engineering (extraction) and generate a feature map (Mallak: pg. 45, Convolution Layer; etc.)],
wherein the aggregated data is generated by combining the feature map and the machine observation data [the first machine learning model performs mapping of the inputs to a high-dimensional feature space (Langone: pg. 269, section 2.1; etc.) and may perform feature engineering (extraction) and generate a feature map (Mallak: pg. 45, Convolution Layer; etc.) and data from multiple parts may be concatenated (aggregated) (Langone: pg. 273, section 5.1.1; etc.) as well as sensor data from multiple machines (Mallak: pg. v, abstract; pg. 2, second to fourth paragraphs; pg. 17, section 3.2; pgs. 62-63, table 4; etc.) and the feature map data may be combined in a pooling layer (Mallak: pg. 46, Pooling Layer; etc.); either of which generates aggregated data by combining the feature map and machine station sensor (observation) data].
As per claim 3, Langone/Mallak teaches wherein the feature map and the machine observation data are combined by concatenation or weighted averaging [the first machine learning model performs mapping of the inputs to a high-dimensional feature space (Langone: pg. 269, section 2.1; etc.) and may perform feature engineering (extraction) and generate a feature map (Mallak: pg. 45, Convolution Layer; etc.) and data from multiple parts may be concatenated (aggregated) (Langone: pg. 273, section 5.1.1; etc.) as well as sensor data from multiple machines (Mallak: pg. v, abstract; pg. 2, second to fourth paragraphs; pg. 17, section 3.2; pgs. 62-63, table 4; etc.) and the feature map data may be combined in a pooling layer (Mallak: pg. 46, Pooling Layer; etc.); either of which generates aggregated data by combining the feature map and machine station sensor (observation) data].
As per claim 4, Langone/Mallak teaches wherein the feature extraction is performed by passing the machine states through a self-attention layer or a fully connected layer to generate the feature map [the feature engineering phase can include multiple convolution+ReLU and pooling layers, followed by fully connected layers (Mallak: pgs. 45-46, fig. 8 and section 9.2; etc.)].
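For illustration only, the convolution + ReLU + pooling + fully connected feature-extraction pipeline referenced in the mapping above can be sketched as follows; the signal and weights are hypothetical (randomly initialized here, whereas a real system would learn them), and this is not the code of either cited reference:

```python
import numpy as np

def conv1d(x, kernel):
    # Valid 1-D convolution (cross-correlation) over a signal.
    k = len(kernel)
    return np.array([np.dot(x[i:i + k], kernel) for i in range(len(x) - k + 1)])

def relu(x):
    return np.maximum(x, 0.0)

def max_pool(x, size):
    # Non-overlapping max pooling.
    n = len(x) // size
    return x[:n * size].reshape(n, size).max(axis=1)

# Hypothetical machine-state signal and randomly initialized weights.
rng = np.random.default_rng(0)
states = rng.normal(size=16)
fmap = max_pool(relu(conv1d(states, np.array([1.0, -1.0, 0.5]))), size=2)
features = rng.normal(size=(4, len(fmap))) @ fmap       # fully connected layer
```

The feature map (`fmap`) is produced by the convolution/pooling stages, and the fully connected layer maps it to a fixed-size feature vector.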
As per claim 5, Langone/Mallak teaches wherein:
the second machine learning model includes a regression model [the second machine learning model can include a nonlinear autoregressive (NAR) model (Langone: abstract, etc.)];
the maintenance prediction includes prediction data for each station [from the KSC model multi-step recursive prediction is performed with another NAR model to predict necessary maintenance multiple steps ahead (Langone: pg. 270, section 3.1; pg. 275, section 5.2; etc.) to predict maintenance for multiple machines in the production line (Mallak: pg. v, abstract; pg. 2, second to fourth paragraphs; pg. 17, section 3.2; pgs. 62-63, table 4; etc.)]; and
the prediction data includes a time period before machine failure for the next process period [the data sets used include sensor data from the machine as well as event data from the processed bags (the plurality of parts) at a given sampling frequency, to predict maintenance/failure for a future period (Langone: pgs. 271-272, section 4 (DS_I and DS_II discuss the data types)) and can include different failure levels/classifications (Mallak: pg. 53, table 2; pg. 120, section C.3; etc.)].
As per claim 6, Langone/Mallak teaches wherein:
the second machine learning model includes a classification model [the feature engineering model/phase can be followed by a classification model/phase (Mallak: pg. 45, fig. 8; etc.)];
the maintenance prediction includes prediction data for each station [the data sets used include sensor data from the machine as well as event data from the processed bags (the plurality of parts) at a given sampling frequency, to predict maintenance/failure for a future period (Langone: pgs. 271-272, section 4 (DS_I and DS_II discuss the data types)) and can include different failure levels/classifications (Mallak: pg. 53, table 2; pg. 120, section C.3; etc.) for multiple machines (Mallak: pg. v, abstract; pg. 2, second to fourth paragraphs; pg. 17, section 3.2; pgs. 62-63, table 4; etc.)]; and
the prediction data includes a classification state for the next process period, the classification state being indicative of a faulty state or a non-faulty state [the data sets used include sensor data from the machine as well as event data from the processed bags (the plurality of parts) at a given sampling frequency, to predict maintenance/failure for a future period (Langone: pgs. 271-272, section 4 (DS_I and DS_II discuss the data types)) and can include different failure levels/classifications (Mallak: pg. 53, table 2; pg. 120, section C.3; etc.) for multiple machines (Mallak: pg. v, abstract; pg. 2, second to fourth paragraphs; pg. 17, section 3.2; pgs. 62-63, table 4; etc.); which includes classification states for future periods including faulty/non-faulty as well as levels of failure].
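For illustration only, a classification into faulty/non-faulty states as referenced in the mapping above can be sketched with a toy linear classifier; the weights and feature values here are hypothetical and unlearned, and this is not the code of either cited reference:

```python
import numpy as np

def classify_state(features, w, b, threshold=0.0):
    # Toy linear classifier over aggregated features: a score above the
    # threshold is labeled faulty, otherwise non-faulty.
    score = float(np.dot(w, features) + b)
    return "faulty" if score > threshold else "non-faulty"

# Hypothetical (unlearned) weights and a single feature vector.
w = np.array([0.5, -0.25, 1.0])
label = classify_state(np.array([1.0, 2.0, 0.1]), w, b=-0.4)
```

Finer-grained failure levels would follow the same pattern with a multi-class decision rule in place of the single threshold.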
As per claim 7, Langone/Mallak teaches wherein:
the measurement data is based on multimodal sensor data [multiple types of sensors/sensor data can be used, including accelerometers and thermal cameras (Langone: pg. 271, section 4; Mallak: pg. 170, section 2; etc.)]; and
the first machine learning model includes (i) an embedding model to encode the measurement data into latent representations [the first machine learning model performs feature extraction from the sensor data (Langone: pg. 273, section 5.1; etc.) including mapping of the inputs to a high-dimensional feature space (Langone: pg. 269, section 2.1; etc.) and may perform feature engineering (extraction) and generate a feature map (Mallak: pg. 45, Convolution Layer; etc.); which is an embedding model encoding the sensor measurement data into a latent representation] and (ii) a dynamics model to update the machine states based on the latent representations [the first machine learning model includes a decoding stage that takes the cluster indicators produced (above) and selects a nearest code-word (Langone: pg. 270, section 2.1; etc.) and the feature map data may be combined in a pooling layer (Mallak: pg. 46, Pooling Layer; etc.); either of which is a dynamics model updating a machine state based on the latent representations produced in the prior stage(s)].
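For illustration only, the two-stage structure referenced in the mapping above (an embedding into a latent space, followed by assignment to the nearest code-word as a discrete machine state) can be sketched as follows; the projection matrix and two-state code-book are hypothetical toy values, not the model of either cited reference:

```python
import numpy as np

def embed(x, W):
    # Embedding stage: project a measurement vector into a latent space.
    return np.tanh(W @ x)

def nearest_codeword(z, codebook):
    # Decoding stage: assign the latent vector to the nearest code-word,
    # i.e. select a discrete machine state from the cluster code-book.
    return int(np.argmin(np.linalg.norm(codebook - z, axis=1)))

rng = np.random.default_rng(1)
W = rng.normal(size=(2, 4))                      # hypothetical projection
codebook = np.array([[1.0, 1.0], [-1.0, -1.0]])  # two toy machine states
state = nearest_codeword(embed(np.ones(4), W), codebook)
```

The nearest-codeword step is what turns a continuous latent representation into a discrete state that a downstream prediction model can consume.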
As per claim 8, see the rejection of claim 1, above, wherein Langone/Mallak also teaches a system comprising:
a processor; and
a memory in data communication with the processor, the memory having computer readable data including instructions stored thereon that, when executed by the processor, cause the processor to perform [the method] [the models are implemented via Matlab (instructions) stored in a memory and processed by a CPU of a desktop (Langone: pg. 272, section 5; etc.)].
As per claim 9, see the rejection of claim 2, above.
As per claim 10, see the rejection of claim 3, above.
As per claim 11, see the rejection of claim 4, above.
As per claim 12, see the rejection of claim 5, above.
As per claim 13, see the rejection of claim 6, above.
As per claim 14, see the rejection of claim 7, above.
As per claim 15, see the rejection of claim 1, above, wherein Langone/Mallak also teaches a non-transitory computer readable medium having computer readable data including instructions stored thereon that, when executed by a processor, cause the processor to perform [the method] [the models are implemented via Matlab (instructions) stored in a memory (non-transitory computer readable medium) and processed by a CPU of a desktop (Langone: pg. 272, section 5; etc.)].
As per claim 16, see the rejection of claim 2, above.
As per claim 17, see the rejection of claim 3, above.
As per claim 18, see the rejection of claim 4, above.
As per claim 19, see the rejection of claim 5, above.
As per claim 20, see the rejection of claim 6, above.
Conclusion
The following is a summary of the treatment and status of all claims in the application as recommended by M.P.E.P. 707.07(i): claims 1-20 are rejected.
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Ao et al. (Advances in Machine Learning for Sensing and Condition Monitoring, Dec 2022, pgs. 1-23) – discloses various systems/methods utilizing machine learning models for condition monitoring from sensor data.
Buabeng et al. (Predictive Maintenance Model Based on Multisensor Data Fusion of Hybrid Fuzzy Rough Set Theory Feature Selection and Stacked Ensemble for Fault Classification, June 2022, pgs. 1-24) – discloses detection of performance anomalies and maintenance prediction using multiple machine learning models, including feature selection.
Kanawaday et al. (Machine Learning for Predictive Maintenance of Industrial Machines using IoT Sensor Data, April 2018, pgs. 87-90) – discloses maintenance prediction for industrial machines using data from multiple IoT sensors.
The examiner requests, in response to this Office action, that support be shown for language added to any original claims on amendment and any new claims. That is, indicate support for newly added claim language by specifically pointing to page(s) and line number(s) in the specification and/or drawing figure(s). This will assist the examiner in prosecuting the application.
When responding to this Office action, Applicant is advised to clearly point out the patentable novelty which he or she thinks the claims present, in view of the state of the art disclosed by the references cited or the objections made. He or she must also show how the amendments avoid such references or objections. See 37 CFR 1.111(c).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to GEORGE GIROUX whose telephone number is (571)272-9769. The examiner can normally be reached M-F 10am-6pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Omar Fernandez Rivas, can be reached at 571-272-2589. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/GEORGE GIROUX/Primary Examiner, Art Unit 2128