Prosecution Insights
Last updated: April 19, 2026
Application No. 17/560,403

COLLABORATIVE MONITORING OF INDUSTRIAL SYSTEMS

Non-Final OA: §101, §103
Filed: Dec 23, 2021
Examiner: RODEN, DONALD THOMAS
Art Unit: 2128
Tech Center: 2100 — Computer Architecture & Software
Assignee: International Business Machines Corporation
OA Round: 3 (Non-Final)
Grant Probability: 0% (At Risk)
Expected OA Rounds: 3-4
Time to Grant: 3y 3m
Grant Probability With Interview: 0%

Examiner Intelligence

Career Allow Rate: 0% (grants only 0% of cases; 0 granted / 2 resolved; -55.0% vs TC avg)
Interview Lift: +0.0% (minimal lift across resolved cases with interview)
Avg Prosecution: 3y 3m (typical timeline)
Total Applications: 27 across all art units (25 currently pending)

Statute-Specific Performance

§101: 36.5% (-3.5% vs TC avg)
§103: 44.1% (+4.1% vs TC avg)
§102: 6.5% (-33.5% vs TC avg)
§112: 7.7% (-32.3% vs TC avg)

Black line = Tech Center average estimate • Based on career data from 2 resolved cases

Office Action

Rejections: §101, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This action is made non-final. This Office Action is in response to the amendments filed on January 2, 2026. Claims 1-20 are pending in the case; claims 1, 8 and 15 have been amended.

Response to Amendment

The amendment filed on January 2, 2026 has been entered. Claims 1-20 remain pending in the application.

Response to Arguments

Applicant's arguments filed January 2, 2026 have been fully considered but they are not persuasive.

Regarding the §101 Arguments

On pages 8-9, Applicant argues: This invention solves, amongst other things, ways to improve the running of a manufacturing plant and avoid catastrophic shutdowns. None of this can be performed abstractly because the process is dynamic and each time, depending on the circumstances, a new and very different process will be designed. Currently there is no use of distributed analytics on top of graphs for collaborative monitoring and real-time detection of performance degradation. Several problems currently exist that have not been solved due to the complexity of the situation. For one, there is no efficient way of knowing the status of a real production plant based on information from neighboring stations, which could introduce major delays in anomaly detection. This is a complex issue that cannot be resolved abstractly, and much research and real data have been analyzed to come up with the methodology and system of the present amended claims. In addition, the current amended claims use a GCNN to take advantage of information from the neighborhood of a given station for monitoring, which is also not provided by any of the systems in the field. The present amended claims provide real-life, rapid, and efficient decision making whenever performance degradation in the production plant arises, as long as it has happened once at a given station.
Through exploitation of similarities and GCNN for improved predictive maintenance, and by combining machine learning with a learned or user-provided topology of the industrial setting and machine/sensor distribution, a more scalable and flexible collaborative monitoring of machines and associated diagnostics of deviations is provided. This is a very case-by-case specific issue and solution that does not have a single abstract solution. Additionally, when dealing with high-dimensionality data and the task of extracting important and discriminative features from large volumes of changeable issues and dynamic variables, the records and data will be fast changing and require advanced and dynamic computation and data handling processes that cannot be provided abstractly. To this end, amended claim 1 provides:

collecting, by a processor, data from each of two or more stations of a set of stations;
generating a dual system including a recorder and a reader for collaborative monitoring of the operation status of the station and detection of its performance degradation;
recording, using the recorder, various features of the station;
determining a subset of the set of stations that are related;
computing predictions of operational status of any related stations and monitoring any performance degradation of adjacent stations;
monitoring a residual for a machine learning model for each station in the subset of stations;
training and using tuning parameters for a machine learning model associated with the set of stations;
detecting a change in the operation of a first station of the subset of stations using said machine learning model; and
executing one or more operational changes using said machine learning model when a change in operation has been detected.

Independent claims 8 and 15 provide similar language.
Examiner Response: Although the applicant argues that the claimed invention improves operation of a manufacturing plant and avoids catastrophic shutdowns, no manufacturing plant is recited in the claims. Rather, the claims recite collecting data, recording features, determining related stations, computing predictions, monitoring residuals, detecting a change in operation, and using a machine learning model. Such limitations are directed to the collection and analysis of information and the detection of a condition based on that analysis, which falls within the abstract idea grouping of mental processes. See MPEP § 2106.04(a)(2)(III). Applicant’s further argument that the claimed subject matter cannot be performed abstractly because the process is dynamic, complex, or based on high-dimensional and fast-changing data is also not persuasive. The complexity or volume of data being analyzed does not remove the claim from the realm of abstract ideas, where the claim remains focused on analyzing information and identifying a result. The recitation of machine learning, a GCNN, and tuning parameters does not, by itself, demonstrate a practical application, where such limitations merely instruct that the abstract idea be carried out using generic computer technology. See MPEP § 2106.05(f). Accordingly, Applicant’s argument is not persuasive because the claims do not recite a specific technological improvement to a manufacturing plant or to computer technology, but instead recite generic computer implementation of data analysis for detecting operational changes.

On pages 9-10, Applicant argues: The 2019 Revised Patent Subject Matter Eligibility Guidance (hereinafter, merely "2019 Guidance") provides that an invention is not abstract if it is determined that additional elements are recited in the claim beyond an alleged judicial exception which integrate the exception into a practical application of the exception.
Specifically, the 2019 Guidance separates Step 2A of the two-part Alice framework for determining subject matter eligibility under 35 U.S.C. § 101 into a two-prong test that determines whether a claim recites an abstract idea and, if an abstract idea is recited, whether that abstract idea is integrated into a practical application. For the first prong, an analysis is performed to determine whether the claimed invention falls within one of the following categories: mathematical concepts, certain methods of organizing human activity, and mental processes. If the claim does not fall within one of these enumerated categories, then the claim does not recite an abstract idea absent rare circumstances. Applying the second prong of Step 2A, the 2019 Guidance states that "[a] claim that integrates a judicial exception into a practical application will apply, rely on, or use the judicial exception in a manner that imposes a meaningful limit on the judicial exception, such that the claim is more than a drafting effort designed to monopolize the judicial exception".

Step 2A, Prong 1: The Claimed Invention is not Directed to an Abstract Idea

Applicant's claimed invention is not simply directed to an abstract idea falling within the category of "Certain Methods of Organizing Human Activity." When the recitations of the claimed invention are viewed as a whole in light of the specification, it is clear that the claimed invention is directed to provide a practical application that includes a technical solution to overcome issues associated with overuse of computer resources when automatically mapping medical codes to extracted information from text in a narrative form. National and private healthcare systems around the world are supporting increasingly complex and expensive treatments. In any case, the insurance companies bear the burden of paying the amount paid to hospitals and doctors.
Technically, however, there is no closed information supply chain from diagnosis through one or more treatments, which are usually part of a frequently handwritten medical record, to insurance companies. Natural language processing (NLP) has been used to try to automate medical coding. The medical codes are typically organized hierarchically, i.e., as a sequence of characters comprising a main code and a respective sub-code. Although NLP technology may help with identifying some main codes, it often lacks accuracy to gain the more detailed sub-code. This is due to lack of data to train the NLP engine, and lack of context beyond the sentence and/or paragraphs that the NLP engine looks at. In addition, individual hospitals and/or individual doctors may have their own abbreviations for specific treatments. In enabling that computer functionality, a system that performs this hierarchical and artificial intelligence type teaching provides great improvements for patients in their treatment. The current amended claims include understanding and applying an amount of data beyond what may be comprehensible by a single person. (See paragraph [0022] of Applicant's specification). Therefore, the claimed invention is not directed to a judicial exception and, based on the first prong of the Alice framework, the claimed invention is directed to patent eligible subject matter. Furthermore, besides abstract ideas, there are no mathematical formulas involved in the present invention as reflected by the amended claims.

Examiner Response: Applicant’s arguments are not persuasive. The Examiner has applied the 2019 Revised Patent Subject Matter Eligibility Guidance in determining that the claims recite an abstract idea. In particular, the claims recite collecting data, determining related stations, computing predictions, monitoring residuals, and detecting changes, which fall within the abstract idea grouping of mental processes. See MPEP 2106.04(a)(2)(III).
Applicant’s discussion of certain methods of organizing human activity, medical coding, NLP, and healthcare systems is not commensurate with the claimed invention and therefore is not persuasive. Further, the absence of an explicit mathematical formula does not preclude the claims from reciting an abstract idea. The claims remain directed to analysis of information and detection of a result based on that analysis. Accordingly, the claims remain directed to an abstract idea.

On pages 11-12, Applicant argues:

Step 2A, Prong 2: The Claimed Invention Integrates the Alleged Exception into a Practical Application of the Exception

Without conceding that Applicant's claimed invention recites an abstract idea, Applicant submits that the claimed invention is integrated into a practical application of the alleged mental process by including additional elements that apply or use the judicial exception in some other meaningful way (described by the 2019 Guidance as an example limitation indicative of integration into a practical application). Applicant submits that the steps of the claimed invention have been narrowly tailored to illustrate elements which apply and use the judicial exception in a meaningful way. Additionally, Federal Circuit court decisions and USPTO direction have provided further guidance regarding the rejection of claims under 35 U.S.C. § 101. Specifically, McRO, Inc. dba Planet Blue v. Bandai Namco Games America Inc., 120 USPQ2d 1091 (Fed. Cir. 2016) held the claimed methods of automatic lip synchronization and facial expression animation using computer-implemented rules patent eligible under 35 U.S.C. § 101, because they were not directed to an abstract idea (Step 2A of the USPTO's SME guidance).
The McRO court relied on how the claimed rules within the McRO invention enabled the automation of specific animation tasks that previously could not be automated when determining that the claims were directed to improvements in computer animation instead of an abstract idea. Specifically, the claims in McRO were deemed patent eligible under 35 U.S.C. § 101 based on the fact that they outlined a specific way of improving computer technology which "allow[ed] for the improvement realized by the invention." Similarly, Applicant's claimed method is similar in that it improves a method to obtain medical data which allows for how information can be used from a plurality of sources to build a complex network of nodes and relationships, thereby delivering a sorted list of potential paths of medical diagnosis codes and related procedural codes (in particular, main and/or secondary diagnosis codes, as well as main procedure codes and secondary procedure codes) as a result of a query. Thus, "[a]n 'improvement in computer-related technology' is not limited to improvements in the operation of a computer or a computer network per se, but may also be claimed as a set of 'rules' (basically mathematical relationships) that improve computer-related technology by allowing computer performance of a function not previously performable by a computer." (Memorandum Regarding Recent Subject Matter Eligibility Decisions, issued November 2, 2016, pp. 2-3). Thus: An indication that a claim is directed to an improvement in computer-related technology may include (1) a teaching in the specification about how the claimed invention improves a computer or other technology (e.g., the McRO court relied on the specification's explanation of how the claimed rules enabled the automation of specific animation tasks that previously could not be automated when determining that the claims were directed to improvements in computer animation instead of an abstract idea).
In contrast, the court in Affinity Labs of TX v. DirecTV relied on the specification's failure to provide details regarding the manner in which the invention accomplished the alleged improvement when holding the claimed methods of delivering broadcast content to cellphones directed to an abstract idea. (2) a particular solution to a problem or a particular way to achieve a desired outcome defined by the claimed invention, as opposed to merely claiming the idea of a solution or outcome (e.g., McRO's claims defined a specific way, namely use of particular rules to set morph weights and transitions through phonemes, to solve the problem of producing accurate and realistic lip synchronization and facial expressions in animated characters, and thus were not directed to an abstract idea). In contrast, Electric Power Group's claimed method was directed to an abstract idea because it merely presented the results of collecting and analyzing information, without even identifying a particular tool for the presentation. (Id. at pp. 2-3). As such, the claims are directed to patent-eligible subject matter under McRO. Accordingly, Applicant respectfully submits that the claimed invention should be considered a practical application of the alleged abstract idea, and therefore is patent eligible.

Examiner Response: Applicant’s argument is not persuasive. The additional elements recited in the claims do not integrate the abstract idea into a practical application. Rather, the claims merely recite generic computer implementation, including use of a processor, machine learning model, and tuning parameters, to collect data, analyze relationships, compute predictions, monitor residuals, detect changes, and execute an operational change. Such limitations merely instruct that the abstract idea be applied using generic computer technology and do not impose a meaningful limit on the judicial exception. Applicant’s reliance on McRO is also not persuasive.
Unlike McRO, the present claims do not recite a specific set of rules or a particular technological mechanism that improves computer technology. Instead, the claims recite result-oriented use of a machine learning model to perform the abstract analysis. Further, Applicant’s discussion of medical data, diagnosis codes, and procedural codes is not commensurate with the claimed invention and is therefore not persuasive. Accordingly, the claims do not integrate the abstract idea into a practical application under Step 2A, Prong Two.

On pages 12-13, Applicant argues:

Step 2B: The Claimed Invention Amounts to Significantly More than the Alleged Judicial Exception

As held in the BASCOM Global Internet Services, Inc. v. AT&T Mobility LLC, Fed. Cir., No. 2015-1763, 6/27/16 decision, when the patent claim seeks to cover a judicial exception to patent eligibility, the final question asks whether the inventive concept covered in the claimed invention was "significantly more" than merely the judicial exception. In this case, the question was whether the claim added significantly more, such that more than a mere abstract idea would be captured. The Federal Circuit ruled that the claims did add significantly more and, therefore, the claims are patent eligible, and stated, "[a]s is the case here, an inventive concept can be found in the non-conventional and non-generic arrangement of known, conventional pieces." Applying BASCOM to the amended claims, the claimed subject matter improves medical technology. Therefore, for at least the above reasons, Applicant respectfully requests that the rejection under 35 U.S.C. § 101 be reconsidered and withdrawn. Applicant maintains that all claims are allowable for at least the reasons presented hereinabove. However, in the interests of brevity, this response does not comment on each and every comment made by the Examiner in the Office Action.
This should not be taken as acquiescence of the substance of those comments, and Applicant reserves the right to address such comments. Consequently, in view of the amendments made to the claims, Applicant respectfully requests the withdrawal of the rejection.

Examiner Response: Applicant’s arguments are not persuasive. The claims do not recite significantly more than the abstract idea. The additional elements, including the processor, machine learning model, and tuning parameters, are recited at a high level of generality and merely perform generic computer functions in connection with the abstract idea. See MPEP 2106.05(d). Applicant’s reliance on BASCOM is also not persuasive because the present claims do not recite a non-conventional or non-generic arrangement of elements. Instead, the claims merely use generic computer components to collect data, analyze information, detect a change, and apply the result. Further, Applicant’s discussion of medical technology is not commensurate with the claimed invention and is therefore not persuasive. Accordingly, the claims do not amount to significantly more than the judicial exception under Step 2B.

Regarding the §102 and §103 Arguments

Applicant’s arguments with respect to claim(s) 1-20 have been considered but are moot in view of the new ground of rejection set forth below.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title. To determine if a claim is directed to patent ineligible subject matter, the Court has guided the Office to apply the Alice/Mayo test, which requires: Step 1: Determining if the claim falls within a statutory category.
Step 2A: Determining if the claim is directed to a patent ineligible judicial exception consisting of a law of nature, a natural phenomenon, or an abstract idea. Step 2A is a two-prong inquiry. MPEP 2106.04(II)(A). Under the first prong, examiners evaluate whether a law of nature, natural phenomenon, or abstract idea is set forth or described in the claim. Abstract ideas include mathematical concepts, certain methods of organizing human activity, and mental processes. MPEP 2106.04(a)(2). The second prong is an inquiry into whether the claim integrates a judicial exception into a practical application. MPEP 2106.04(d).

Step 2B: If the claim is directed to a judicial exception, determining if the claim recites limitations or elements that amount to significantly more than the judicial exception. (See MPEP 2106).

Claims 1-20 is/are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1: Claims 1-7 are directed to a method (a process), Claims 8-14 are directed to a system (a machine), and Claims 15-20 are directed to a computer readable storage medium (a manufacture). Therefore, Claims 1-20 are directed to a process, machine, manufacture, or composition of matter.

Regarding claim 1

Step 2A Prong 1

Claim 1 recites the following mental processes, each of which, under the broadest reasonable interpretation, covers performance of the limitation in the mind (including observation, evaluation, judgment, opinion) or with the aid of pencil and paper but for the recitation of generic computer components (e.g., “processor”, “machine learning model”) [see MPEP 2106.04(a)(2)(III)]:
“determining a subset of the set of stations that are related” (e.g., a human can compare a set of stations written on paper and create subsets of ones that are related)

“computing predictions of operational status of any related stations and monitoring any performance degradation of adjacent stations” (e.g., a human can predict if related machines are still able to work depending on their current conditions)

“detecting a change in the operation of a first station of the subset of stations…” (e.g., a human can compare differences in operational status of a station from other stations in a subset)

Claim 1 further recites the following mathematical concept, which under the broadest reasonable interpretation covers performance of mathematical relationships, mathematical formulas or equations, and mathematical calculations but for the recitation of generic computer components (e.g., “processor”, “machine learning model”) [see MPEP 2106.04(a)(2)(I)]:

“monitoring a residual for a machine learning model for each station in the subset of stations” (e.g., the difference between observed and predicted values)

Accordingly, at Step 2A, prong one, the claim is directed to an abstract idea.

Step 2A Prong 2

The judicial exception is not integrated into a practical application. In particular, the claim recites the additional elements of a “processor” and “machine learning model” which are recited at a high level of generality such that they amount to no more than mere instructions to apply the exception using a generic computer component (See MPEP 2106.05(f)). Regarding the “collecting, by a processor, data from each of two or more stations of a set of stations” limitation, this additional element of collecting data is recited at a high level of generality and amounts to extra-solution activity of receiving data, i.e., pre-solution activity of gathering data for use in the claimed process (see MPEP 2106.05(g)).
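As context for the residual limitation, the examiner glosses a residual as the difference between observed and predicted values. A minimal sketch of that computation, using hypothetical station readings (not data from the application), might look like:

```python
from statistics import mean, stdev
from math import sqrt

def residuals(observed, predicted):
    """Residual = observed - predicted, element-wise per time step."""
    return [o - p for o, p in zip(observed, predicted)]

def is_deviating(res, threshold=3.0):
    """Flag a station whose mean residual drifts away from zero by more
    than `threshold` standard errors (an illustrative criterion only)."""
    se = stdev(res) / sqrt(len(res))
    return abs(mean(res)) > threshold * se

# Hypothetical per-station readings vs. model predictions
obs = [1.0, 1.1, 0.9, 1.0]
pred = [1.0, 1.0, 1.0, 1.0]
r = residuals(obs, pred)  # residuals hover around zero -> no degradation flagged
```

The names `residuals` and `is_deviating` and the 3-standard-error threshold are assumptions for illustration; the claims recite only "monitoring a residual," not any particular deviation test.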
Regarding the “generating a dual system including a recorder and a reader for collaborative monitoring of the operation status of the station and detection of its performance degradation” limitation, it is recited at a high level of generality such that it amounts to no more than mere instructions to apply the exception using generic computer components (See MPEP 2106.05(f)). Regarding the “recording using the recorder, various features of the station” limitation, it is recited at a high level of generality, merely states that the collected data be stored on a recorder, and merely generally links the use of the judicial exception to a particular technological environment or field of use (See MPEP 2106.05(h)). Regarding the “training and using tuning parameters for a machine learning model associated with the set of stations”, “…using said machine learning model”, and “executing one or more operational changes using said machine learning model when a change in operation has been detected” limitations, they are recited at a high level of generality and are a recitation of training and using a machine learning model to apply the abstract idea using generic computer components/technology. These are mere instructions to apply the exception using a generic computer component (See MPEP 2106.05(f)). Accordingly, at Step 2A, prong two, the additional elements individually or in combination do not integrate the judicial exception into a practical application.

Step 2B

In accordance with Step 2B, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above, the additional elements of a “processor” and “machine learning model” are recited at a high level of generality such that they amount to no more than mere instructions to apply the exception using a generic computer component (See MPEP 2106.05(f)).
Regarding the “collecting, by a processor, data from each of two or more stations of a set of stations” limitation, as discussed above, the additional element of collecting data is recited at a high level of generality and amounts to extra-solution activity of receiving data, i.e., pre-solution activity of gathering data for use in the claimed process. The courts have found limitations directed to obtaining information electronically, recited at a high level of generality, to be well-understood, routine, and conventional (see MPEP 2106.05(d)(II), “receiving or transmitting data over a network”, "electronic record keeping," and "storing and retrieving information in memory"). Regarding the “generating a dual system including a recorder and a reader for collaborative monitoring of the operation status of the station and detection of its performance degradation” limitation, it is recited at a high level of generality such that it amounts to no more than mere instructions to apply the exception using generic computer components (See MPEP 2106.05(f)). Regarding the “recording using the recorder, various features of the station” limitation, it is recited at a high level of generality, merely states that the collected data be stored on a recorder, and merely generally links the use of the judicial exception to a particular technological environment or field of use (See MPEP 2106.05(h)). Regarding the “training and using tuning parameters for a machine learning model associated with the set of stations”, “…using said machine learning model”, and “executing one or more operational changes using said machine learning model when a change in operation has been detected” limitations, they are recited at a high level of generality and are a recitation of training and using a machine learning model to apply the abstract idea using generic computer components/technology. These are mere instructions to apply the exception using a generic computer component (See MPEP 2106.05(f)).
Accordingly, at Step 2B, the additional elements individually or in combination do not amount to significantly more than the judicial exception.

Regarding claim 2

Step 2A Prong 1

The claim recites the following mental processes:

“identifying the subset of the set of stations that have similar operational dynamics” (e.g., a human can compare information from a station written on paper and create groups based on similar dynamics)

“determining, utilizing historical data regarding the subset of stations, operational metrics associated with performance of the subset of stations.” (e.g., a human can compare historical performance of a station written on paper and determine their metrics)

Step 2A Prong 2

In accordance with Step 2A, Prong 2, the claim does not include any additional elements and the judicial exception is not integrated into a practical application.

Step 2B

In accordance with Step 2B, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.

Regarding claim 3

Step 2A Prong 1

The claim recites the following mental process(es):

“determining a subset of the set of stations that are related” (e.g., a human can compare a set of stations written on paper and create subsets of ones that are related)

The claim recites the following mathematical concept(s):

“utilizing a graph convolutional neural network to determine an adjacency matrix.” (e.g., matrix manipulation, eigenvalue decomposition)

Step 2A Prong 2

In accordance with Step 2A, Prong 2, the claim does not include any additional elements and the judicial exception is not integrated into a practical application.

Step 2B

In accordance with Step 2B, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
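For readers unfamiliar with the graph convolutional neural network recited in claim 3, a single symmetric-normalized graph-convolution step (the standard GCN propagation rule) can be sketched as follows. The three-station line topology, the identity features, and the function name `gcn_layer` are hypothetical illustrations; the application's actual architecture is not reproduced in this Office Action:

```python
from math import sqrt

def gcn_layer(A, X, W):
    """One graph-convolution step: H = ReLU(D^-1/2 (A + I) D^-1/2 . X . W),
    the standard symmetric-normalized propagation rule."""
    n = len(A)
    # Add self-loops so each station also sees its own features
    A_hat = [[A[i][j] + (1.0 if i == j else 0.0) for j in range(n)] for i in range(n)]
    d = [sum(row) for row in A_hat]
    # Symmetric degree normalization of the adjacency matrix
    M = [[A_hat[i][j] / sqrt(d[i] * d[j]) for j in range(n)] for i in range(n)]

    def matmul(P, Q):
        return [[sum(P[i][k] * Q[k][j] for k in range(len(Q)))
                 for j in range(len(Q[0]))] for i in range(len(P))]

    H = matmul(matmul(M, X), W)
    return [[max(v, 0.0) for v in row] for row in H]  # ReLU

# Hypothetical topology: three stations in a line, 0 - 1 - 2
A = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
I3 = [[1.0 if i == j else 0.0 for j in range(3)] for i in range(3)]
H = gcn_layer(A, I3, I3)  # identity features/weights expose the normalized adjacency
```

With identity features and weights, the output is just the normalized adjacency, which is one way a learned station neighborhood could be surfaced.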
Regarding claim 4

Step 2A Prong 1

The claim recites the following mental process(es):

“detecting a change in the operation of a first station of the subset of stations” (e.g., a human can compare differences in operational status of a station from other stations in a subset)

The claim recites the following mathematical concept(s):

“using change point detection” (e.g., mean, variance)

Step 2A Prong 2

In accordance with Step 2A, Prong 2, the claim does not include any additional elements and the judicial exception is not integrated into a practical application.

Step 2B

In accordance with Step 2B, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.

Regarding claim 5

Step 2A Prong 1

The claim recites the following mathematical concept(s):

“identifying that a first residual for the first station is deviating from zero” (e.g., subtraction, comparison)

“adjusting tuning parameters for a first machine learning model associated with the first station.” (e.g., updating weights, modifying learning rates)

Step 2A Prong 2

In accordance with Step 2A, Prong 2, the claim does not include any additional elements and the judicial exception is not integrated into a practical application.

Step 2B

In accordance with Step 2B, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
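The change point detection recited in claim 4, which the examiner glosses as mean/variance analysis, can be illustrated with a basic offline single change-point search over a mean shift. This is one common illustrative method, not necessarily the one the application uses:

```python
def change_point(series):
    """Return the split index that maximizes the weighted squared mean shift
    between the two segments (a basic offline single change-point search)."""
    best_k, best_score = 1, float("-inf")
    for k in range(1, len(series)):
        left, right = series[:k], series[k:]
        ml = sum(left) / len(left)
        mr = sum(right) / len(right)
        # Weight by segment sizes so extreme splits are not favored
        score = len(left) * len(right) * (ml - mr) ** 2
        if score > best_score:
            best_k, best_score = k, score
    return best_k

# A hypothetical residual series with an operational change at index 10
series = [0.0] * 10 + [5.0] * 10
```

Running `change_point(series)` locates the index where the mean of the residual stream shifts, which is the kind of "change in the operation of a first station" the claim language describes at a high level.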
Regarding claim 6

Step 2A Prong 1

The claim recites the following mental process(es):

“detecting a change in the operation of a first station of the subset of stations” (e.g., a human can compare differences in operational status of a station from other stations in a subset)

The claim recites the following mathematical concept(s):

“identifying that a second residual associated with the second station is deviating from zero” (e.g., subtraction, comparison)

“adjusting tuning parameters for a second machine learning model associated with the second station.” (e.g., updating weights, modifying learning rates)

Step 2A Prong 2

In accordance with Step 2A, Prong 2, the claim does not include any additional elements and the judicial exception is not integrated into a practical application.

Step 2B

In accordance with Step 2B, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.

Regarding claim 7

Step 2A Prong 1

Claim 7 recites the same abstract idea as claim 5.

Step 2A Prong 2

The judicial exception is not integrated into a practical application. In particular, the claim recites the additional elements of a “machine learning model” and “edge computing device” which are recited at a high level of generality such that they amount to no more than mere instructions to apply the exception using a generic computer component (See MPEP 2106.05(f)). In particular, the recited “machine learning model” and “edge computing device” are merely generic computer components because they are merely recited to perform the function of “identifying that a first residual for the first station is deviating from zero; and adjusting tuning parameters for a first machine learning model associated with the first station”. Accordingly, at Step 2A, prong two, the additional elements individually or in combination do not integrate the judicial exception into a practical application.
Step 2B In accordance with Step 2B, the claim does not include additional elements that are sufficient to amount to significantly more that the judicial exception. As discussed above, the additional element of a “machine learning model” and “edge computing device” which are recited at a high-level of generality such that they amount to no more than mere instructions to apply the exception using a generic computer component (See MPEP 2106.05(f)). Accordingly, at Step 2B, the additional element individually or in combination does not amount to significantly more than the judicial exception. Regarding claims 8-14 Claims 8-14 recites a system. Each of these claims corresponds directly to the method Steps of claims 1-7, respectively, with the addition of generic hardware components such as a memory and a processor which are insufficient to render the claims subject matter eligible for the same reasons described above. Regarding claims 15-20 Claims 15-20 recites a computer program storage system. Each of these claims corresponds directly to the method Steps of claims 1-6, respectively, with the addition of generic hardware components such as computer readable storage medium, and a processor which are insufficient to render the claims subject matter eligible for the same reasons described above. Claim Rejections - 35 USC § 103 In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 
103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3, 8-10, and 15-17 are rejected under 35 U.S.C. 103 as being unpatentable over Putman et al. (US 20210320931 A1, referred to as Putman) in view of Verma et al. (US 20190312898 A1, referred to as Verma), further in view of Cella et al. (US 20210182996 A1, referred to as Cella).

Regarding claim 1

Putman teaches a computer-implemented method for determining an operational status of a station ([0033-0036] and [0050-0051]: describe a manufacturing monitoring system implemented using a deep learning processor/controller and associated data processing components that receive and analyze data from process stations in a manufacturing process; [0062-0064] and [0079-0080]: describe monitoring the inputs to and outputs of each process station, individually and together with those of other stations, and using conditioned machine learning algorithms to identify anomalous activity and associated confidence levels, which corresponds to determining the operational status of a station), the method comprising:

collecting, by a processor, data from each of two or more stations of a set of stations (FIG. 2, [0033-0036], and [0050-0051]: describe a system including a deep learning processor that obtains response data from a set of process stations performing operations as part of a manufacturing process. The process comprises different process stations that may operate in series or in parallel. The data processing server receives data generated by sensors coupled to or within the process stations, as well as data generated by station controllers);

generating a dual system including a recorder and a reader for collaborative monitoring of the operational status of the station and detection of its performance degradation ([0046-0048] and [0060-0064]: describe a manufacturing system having process stations and station controllers, together with a deep learning controller trained to identify anomalous activity based on response data from the process stations/controllers. A signal splitter divides a control signal output so that one divided signal is provided to the deep learning controller and another to the station controller, thereby establishing a dual monitoring arrangement. The reference further details monitoring the inputs to and outputs of other stations to dynamically identify anomalous activity);

recording, using the recorder, various features of the station ([0039-0042] and [0050-0051]: describe obtaining response data from a set of process stations in a manufacturing process, where the data includes values associated with the stations, including station values, control values, intermediate output values, and final output values, together with data generated by sensors coupled to or within the process stations. These values and sensor-generated data correspond to various features of the station being recorded);

executing one or more operational changes using said machine learning model when a change in operation has been detected ([0079-0080]: describe that the deep learning processor employs conditioned machine learning algorithms to analyze factory operation and control data, identify anomalous activity, and determine a confidence level associated with the anomalous activity. Thresholds are assigned to the confidence levels, and predefined actions are performed when a threshold is met, including taking immediate action, generating an alert, prompting operator review, or flagging the anomalous activity for further checking. These predefined actions correspond to executing one or more operational changes when a change in operation has been detected using the machine learning model).

Although Putman teaches collecting, by a processor, data from each of two or more stations of a set of stations; generating a dual system including a recorder and a reader for collaborative monitoring of the operational status of the station and detection of its performance degradation; recording, using the recorder, various features of the station; and executing one or more operational changes using said machine learning model when a change in operation has been detected, it does not teach determining a subset of the set of stations that are related; computing predictions of operational status of any related stations and monitoring any performance degradation of adjacent stations; monitoring a residual for a machine learning model for each station in the subset of stations; or detecting a change in the operation of a first station of the subset of stations using said machine learning model.

Verma teaches determining a subset of the set of stations that are related (Verma [0072-0073]: describe constructing a graph of the nodes in the network, with edges representing relationships between the nodes. A GCNN is applied to the graph to process and analyze the relationships between the nodes, corresponding to determining subsets of sets of nodes which are related).

Putman in view of Verma teaches computing predictions of operational status of any related stations and monitoring any performance degradation of adjacent stations (Putman teaches the prediction/status/degradation, [0054-0055] and [0062-0064]: describe a deep learning processor that generates expected behavioral pattern data for process stations and compares the expected behavioral pattern data with actual behavioral pattern data to identify anomalous activity, including unusual correlation, frequency, amplitude, trend, and/or rate-of-change patterns at a single station or across stations; Verma teaches related/adjacent stations, [0055-0060]: describe that nodes in the network have spatial relationships identified through the network topology, and further disclose a graph G=(V,E) in which edges represent dependency links between devices/nodes, which correspond to related stations and adjacent stations).

It would have been obvious to one of ordinary skill in the art at the time of the claimed invention to modify Putman's manufacturing system with Verma's machine learning model for the operational status of stations. Doing so would have enabled the system to improve the accuracy, contextual relevance, and reliability of station-level anomaly and degradation detection.

Putman in view of Verma teaches monitoring a residual for a machine learning model for each station in the subset of stations (Verma [0057-0060] and [0074]: describe detecting an anomaly by comparing a reconstruction error associated with the output of the convolutional long short-term memory recurrent neural network to a defined threshold. The reconstruction error corresponds to a residual of the machine learning model. The machine learning analysis is performed using sensor data from network nodes/stations represented in the network graph, thereby associating the residual monitoring with the stations under analysis); and

detecting a change in the operation of a first station of the subset of stations using said machine learning model (Verma [0051] and [0074-0075]: describe receiving sensor data from a plurality of nodes in a computer network and processing the sensor data using a graph convolutional neural network and a convolutional long short-term memory recurrent neural network. An anomaly is detected in the computer network by comparing a reconstruction error associated with the output of the machine learning model to a defined threshold).

Although Putman in view of Verma teaches determining a subset of the set of stations that are related; computing predictions of operational status of any related stations and monitoring any performance degradation of adjacent stations; monitoring a residual for a machine learning model for each station in the subset of stations; and detecting a change in the operation of a first station of the subset of stations using said machine learning model, they do not teach training and using tuning parameters for a machine learning model associated with the set of stations.

Cella teaches training and using tuning parameters for a machine learning model associated with the set of stations ([0556-0560]: describe that the "machine learning model 3000 may learn one or more functions via iterative optimization of an objective function, thereby learning to predict an output associated with new inputs." The iterative optimization involves adjusting and tuning parameters of the machine learning model during training in order to minimize the objective function and improve prediction accuracy).

It would have been obvious to one of ordinary skill in the art at the time of the claimed invention to modify Putman in view of Verma's machine learning model for the operational status of stations to incorporate Cella's training using tuned parameters. Doing so would improve prediction and anomaly-detection performance for a more accurate determination of outputs from new inputs.

Regarding claim 2

Putman in view of Verma, further in view of Cella, teaches the computer-implemented method of claim 1, and further teaches identifying the subset of the set of stations that have similar operational dynamics (Putman [0062-0064] and [0076-0077]: describe that a deep learning processor generates and analyzes behavioral pattern data associated with process stations, including learning behavioral patterns for subsets of response data that include values associated with multiple stations, such as station X and station Y. The processor analyzes operational behavior across multiple stations to identify anomalous activity and patterns. This grouping and analysis of stations based on learned behavioral patterns corresponds to identifying a subset of stations that have similar operational dynamics); and determining, utilizing historical data regarding the subset of stations, operational metrics associated with performance of the subset of stations (Putman [0039-0042] and [0072-0075]: describe conditioning machine learning algorithms using a robust data set generated from the process stations, including varying setpoints corresponding to control values of each process station, which corresponds to utilizing historical data regarding the stations. The deep learning processor receives and uses station values, control values, process values, and manufacturing performance metrics associated with the process stations, which correspond to operational metrics associated with performance of the subset of stations).

Regarding claim 3

Putman in view of Verma, further in view of Cella, teaches the computer-implemented method of claim 1. Verma further teaches wherein determining a subset of the set of stations that are related utilizes a graph convolutional neural network to determine an adjacency matrix (Verma [0060-0061]: describe that an anomaly detector can be constructed on top of a graph convolutional neural network, and that the graph structure can be represented in matrix form, typically as an adjacency matrix A).

Regarding claim 8

Claim 8 recites substantially the same limitations as claim 1, and further recites a memory and a processor in communication with the memory (Putman [0033-0036] and [0050-0051]: describe a data processing system including a processor configured to receive and process data from process stations, as well as associated computing components such as servers and controllers for implementing the machine learning operations. These computing systems inherently include memory in communication with the processor for storing and processing data) to perform the method steps of claim 1, and is therefore rejected on the same premise.

Regarding claim 9

Claim 9 recites substantially the same limitations as claim 2, and further recites a system (Putman [0033-0036] and [0050-0051]: describe a data processing system including a processor configured to receive and process data from process stations, as well as associated computing components such as servers and controllers for implementing the machine learning operations) to perform the method steps of claim 2, and is therefore rejected on the same premise.
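The adjacency-matrix representation referenced in the claim 3 analysis above can be sketched in a few lines. This is an illustration only; the station names, edges, and helper functions are hypothetical and not drawn from the claims or cited art:

```python
# Minimal illustrative sketch of representing station relationships as
# an adjacency matrix A. Station names and edges are hypothetical.
def adjacency_matrix(stations, edges):
    """Build a symmetric 0/1 adjacency matrix A for an undirected graph."""
    index = {s: i for i, s in enumerate(stations)}
    n = len(stations)
    A = [[0] * n for _ in range(n)]
    for u, v in edges:
        A[index[u]][index[v]] = 1
        A[index[v]][index[u]] = 1
    return A

def neighbors(A, stations, station):
    """Related (adjacent) stations are the nonzero entries in A's row."""
    row = A[stations.index(station)]
    return [stations[j] for j, linked in enumerate(row) if linked]

stations = ["press", "weld", "paint"]
A = adjacency_matrix(stations, [("press", "weld"), ("weld", "paint")])
print(neighbors(A, stations, "weld"))  # -> ['press', 'paint']
```

In graph-convolutional approaches such as the one Verma describes, a matrix of this shape encodes which nodes contribute neighborhood information to each node's representation.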
Regarding claim 10

Claim 10 recites substantially the same limitations as claim 3, and further recites a system (Putman [0033-0036] and [0050-0051]: describe a data processing system including a processor configured to receive and process data from process stations, as well as associated computing components such as servers and controllers for implementing the machine learning operations) to perform the method steps of claim 3, and is therefore rejected on the same premise.

Regarding claim 15

Claim 15 recites substantially the same limitations as claim 1, and further recites a computer readable storage medium (Putman [0102]: describes an apparatus which includes a computer program stored on a computer readable medium) to perform the method steps of claim 1, and is therefore rejected on the same premise.

Regarding claim 16

Claim 16 recites substantially the same limitations as claim 2, and further recites a computer program (Putman [0102]: describes an apparatus which includes a computer program stored on a computer readable medium) to perform the method steps of claim 2, and is therefore rejected on the same premise.

Regarding claim 17

Claim 17 recites substantially the same limitations as claim 3, and further recites a computer program (Putman [0102]: describes an apparatus which includes a computer program stored on a computer readable medium) to perform the method steps of claim 3, and is therefore rejected on the same premise.

Claims 4-7, 11-14, and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Putman et al. (US 20210320931 A1, referred to as Putman) in view of Verma et al. (US 20190312898 A1, referred to as Verma), further in view of Cella et al. (US 20210182996 A1, referred to as Cella), further in view of Maya et al. (US 10496515 B2, referred to as Maya).
Regarding claim 4

Putman in view of Verma, further in view of Cella, teaches the computer-implemented method of claim 1, wherein detecting a change in the operation of a first station of the subset of stations ([0074-0075]: as described above). Although Putman in view of Verma, further in view of Cella, teaches detecting a change in the operation of a first station of the subset of stations, the combination does not teach using change point detection.

Maya teaches change point detection (Col. 15, lines 12-42: describes that abnormality degree data in an abnormality determination period may be divided at a certain time point, and that a statistical test is performed on the divided pieces of abnormality degree data to determine whether an abnormality exists based on the resulting significance probability. The information amounts for the divided and undivided abnormality degree data are compared to determine whether the data should be divided. These operations correspond to using change point detection to detect a change in operation).

It would have been obvious to one of ordinary skill in the art at the time of the claimed invention to modify the method of Putman in view of Verma, further in view of Cella, to incorporate Maya's change point detection technique. Doing so would apply real-time deviation-based change-point analytics, enabling the system to flag shifts in a station's behavior sooner and trigger quicker downstream actions.

Regarding claim 5

Putman in view of Verma, further in view of Cella, teaches the computer-implemented method of claim 1, further comprising: identifying that a first residual for the first station is deviating from zero (Verma [0074]: describes that a reconstruction error is determined from a monitored node and compared to a threshold to decide whether the node is anomalous, which corresponds to a deviation from zero). Although Putman in view of Verma, further in view of Cella, teaches identifying that a first residual for the first station is deviating from zero, the combination does not teach adjusting tuning parameters for a first machine learning model associated with the first station.

Maya teaches adjusting tuning parameters for a first machine learning model associated with the first station (Col. 3, lines 49-58: "monitoring target may be devices such as a server, a home appliance, and the like" describes that devices/stations are monitored in conjunction with a machine learning model; Col. 10, lines 24-45: "…updates the parameters of the state estimation models so as to reduce the estimation error by using an error back propagation method…," which corresponds to tuning model parameters in response to an error in a monitored target).

It would have been obvious to one of ordinary skill in the art at the time of the claimed invention to modify the method of Putman in view of Verma, further in view of Cella, to incorporate Maya's tuning parameters. Doing so would improve the accuracy of abnormality detection, model adaptation, and continued detection performance.

Regarding claim 6

Putman in view of Verma, further in view of Cella, further in view of Maya, teaches the computer-implemented method of claim 5. Maya further teaches detecting a change in the operation of a second station of the subset of stations ([0074-0075]: describes how anomalous nodes or devices can be detected from a change in their operations); and identifying that a second residual associated with the second station is deviating from zero (Maya [0074]: describes that a reconstruction error is determined from monitored nodes and compared to a threshold to decide whether the node is anomalous, which corresponds to a deviation from zero for a second node).

Although Putman in view of Verma, further in view of Cella, teaches detecting a change in the operation of a second station of the subset of stations by identifying that a second residual associated with the second station is deviating from zero, the combination does not teach adjusting tuning parameters for a second machine learning model associated with the second station. Maya further teaches adjusting tuning parameters for a second machine learning model associated with the second station (Col. 3, lines 49-58: "monitoring target may be devices such as a server, a home appliance, and the like," indicating that more than one device is used, supporting a second station; Col. 4, lines 3-14: describes how each monitored target is estimated separately with a state estimation model tied to it; Col. 10, lines 24-56: "…updates the parameters of the state estimation models so as to reduce the estimation error by using an error back propagation method…," which corresponds to tuning model parameters in response to an error found in a target).

It would have been obvious to one of ordinary skill in the art at the time of the claimed invention to modify the method of Putman in view of Verma, further in view of Cella, to incorporate Maya's tuning parameters. Doing so would improve the accuracy of abnormality detection, model adaptation, and continued detection performance.

Regarding claim 7

Putman in view of Verma, further in view of Cella, further in view of Maya, teaches the computer-implemented method of claim 5. Verma further teaches operation by an edge computing device (Verma [0015-0016]: "a fog node is a functional node that is deployed close to fog endpoints to provide computing, storage, and networking resources and services," and [0026]: "various fog nodes/devices 122 (e.g., with fog modules, described below) may execute various fog computing resources on network edge devices," describing the deployment of resources at a network's edge to perform local computations, including monitoring and analytics tasks, which corresponds to operating edge devices with a machine learning model).

It would have been obvious to one of ordinary skill in the art at the time of the claimed invention to modify the method of Putman in view of Verma, further in view of Cella, to incorporate Maya's residual-based parameter tuning logic into the system. Doing so would let the model update its parameters directly on each edge device, keeping predictions responsive under network constraints and eliminating round-trip latency.

Regarding claims 11-14

Claims 11-14 recite substantially the same limitations as claims 4-7, and further recite a system (Putman [0033-0036] and [0050-0051]: describe a data processing system including a processor configured to receive and process data from process stations, as well as associated computing components such as servers and controllers for implementing the machine learning operations) to perform the method steps of claims 4-7, respectively, and are therefore rejected on the same premise.

Regarding claims 18-20

Claims 18-20 recite substantially the same limitations as claims 4-7, and further recite a computer program (Putman [0102]: describes an apparatus which includes a computer program stored on a computer readable medium) to perform the method steps of claims 4-7, respectively, and are therefore rejected on the same premise.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. See the attached PTO-892 for additional art, including: US 2020/0293917 A1 (anomaly detection); US 11374953 B2 (multiple model output combination); US 2019/0244012 A1 (collecting operational metrics).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DONALD T RODEN, whose telephone number is (571) 272-6441. The examiner can normally be reached Mon-Thu 8:00-5:00 EST.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Omar Fernandez Rivas, can be reached at (571) 272-2589. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/D.T.R./ Examiner, Art Unit 2128
/OMAR F FERNANDEZ RIVAS/ Supervisory Patent Examiner, Art Unit 2128
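The residual-monitoring and parameter-tuning pattern at the center of the claims 5-7 rejections can be sketched in a few lines. The class name, tolerance, and update rule below are illustrative assumptions only, not drawn from the claims or the cited references:

```python
# Minimal illustrative sketch of residual monitoring with parameter
# tuning: compare a model's prediction to the observed value and, when
# the residual deviates from zero beyond a tolerance, adjust a tuning
# parameter. Class name, tolerance, and update rule are assumptions.
class StationModel:
    def __init__(self, bias=0.0):
        self.bias = bias  # hypothetical tuning parameter

    def predict(self, x):
        return x + self.bias

def monitor(model, observed, expected_input, tolerance=0.5, rate=0.1):
    """Return the residual; nudge the model's bias when it deviates."""
    residual = observed - model.predict(expected_input)
    if abs(residual) > tolerance:
        model.bias += rate * residual  # simple corrective update
    return residual

model = StationModel()
residual = monitor(model, observed=2.0, expected_input=1.0)
print(residual, round(model.bias, 2))  # -> 1.0 0.1
```

A residual within the tolerance leaves the parameter untouched, which mirrors the claimed behavior of adjusting tuning parameters only when the residual deviates from zero.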

Prosecution Timeline

Dec 23, 2021
Application Filed
Jun 11, 2025
Non-Final Rejection — §101, §103
Aug 28, 2025
Interview Requested
Sep 12, 2025
Applicant Interview (Telephonic)
Sep 12, 2025
Examiner Interview Summary
Sep 15, 2025
Response Filed
Oct 22, 2025
Final Rejection — §101, §103
Jan 02, 2026
Response after Non-Final Action
Feb 18, 2026
Request for Continued Examination
Feb 28, 2026
Response after Non-Final Action
Mar 24, 2026
Non-Final Rejection — §101, §103 (current)


Prosecution Projections

3-4
Expected OA Rounds
0%
Grant Probability
0%
With Interview (+0.0%)
3y 3m
Median Time to Grant
High
PTA Risk
Based on 2 resolved cases by this examiner. Grant probability derived from career allow rate.
