Prosecution Insights
Last updated: April 19, 2026
Application No. 18/342,139

METHOD AND SYSTEM FOR EXCEPTION MANAGEMENT

Non-Final OA: §101, §103
Filed: Jun 27, 2023
Examiner: STANLEY, JEREMY L
Art Unit: 2127
Tech Center: 2100 — Computer Architecture & Software
Assignee: Tata Consultancy Services Limited
OA Round: 1 (Non-Final)
Grant Probability: 48% (Moderate)
Expected OA Rounds: 1-2
Est. Time to Grant: 3y 2m
Grant Probability With Interview: 92%

Examiner Intelligence

Career Allow Rate: 48% (131 granted / 276 resolved; -7.5% vs TC avg)
Interview Lift: strong, +44.7% higher allowance among resolved cases with an interview
Typical Timeline: 3y 2m avg prosecution; 28 applications currently pending
Career History: 304 total applications across all art units

Statute-Specific Performance

§101: 10.2% (-29.8% vs TC avg)
§103: 49.1% (+9.1% vs TC avg)
§102: 13.5% (-26.5% vs TC avg)
§112: 17.1% (-22.9% vs TC avg)
Comparison values are Tech Center average estimates • Based on career data from 276 resolved cases

Office Action

§101 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This action is responsive to the Application filed on June 27, 2023. Claims 1-12 are pending in the case. Claims 1, 5, and 9 are the independent claims. This action is non-final.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-12 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea (mental steps) without significantly more. This judicial exception is not integrated into a practical application because any additional elements amount to implementing the abstract idea on a generic computer. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception.

Regarding independent claims 1, 5, and 9, and relying on the evaluation flowchart in MPEP 2106:

Step 1 (Is the claim to a process, machine, manufacture, or composition of matter?): Yes. Claim 1 is a method (process). Claim 5 is a system (machine). Claim 9 is a storage medium (article of manufacture).

Step 2a Prong One (Does the claim recite an abstract idea?): Yes.
Claims 1, 5, and 9 recite: generating a plurality of data properties and a plurality of quality properties of the exception data (a mental process of observation and/or determination, including a human observing the exception data and mentally determining corresponding data and quality properties); determining an algorithm matching the obtained exception data, based on the data properties and the quality properties (a mental process of determination regarding an algorithm which matches the exception data based on the properties). Under the broadest reasonable interpretation, these steps may be performed mentally, using mental observation and mental determination, including by a human using a physical aid such as pen and paper, including a human mentally performing observations and mentally performing mathematical calculations, and therefore correspond to the Mental Processes grouping.

Step 2a Prong Two (Does the claim recite additional elements that integrate the judicial exception into a practical application?): No.
Claims 1, 5, and 9 additionally recite: a system for generating a data model for exception management, comprising: one or more hardware processors; a communication interface; and a memory storing a plurality of instructions, wherein the plurality of instructions when executed, cause the one or more hardware processors to perform a method; one or more non-transitory machine-readable information storage mediums comprising one or more instructions which when executed by one or more hardware processors cause performance of the method; and the processor implemented method of generating a data model for exception management (mere instructions to apply the exception using generic computer components as discussed in MPEP 2106.05(f)); obtaining exception data corresponding to a plurality of exceptions as input data, wherein the exception data comprising a) date of exception, b) type of exception, and c) a plurality of attributes of exception (insignificant extra-solution activity as discussed in MPEP 2106.05(g)); that the generating of the plurality of data properties and plurality of quality properties is by executing a plurality of classification algorithms on the obtained exception data (mere instructions to apply the exception using generic computer components as discussed in MPEP 2106.05(f)); generating the data model utilizing a training data comprising a) the data properties, b) the quality properties, and c) information on the algorithm matching the obtained exception data (insignificant extra-solution activity as discussed in MPEP 2106.05(g) and mere instructions to apply the exception using generic computer components as discussed in MPEP 2106.05(f)).
Therefore, in view of the considerations set forth in MPEP 2106.04(d), 2106.05(a)-(c) and (e)-(h), the additional elements as disclosed above alone or in combination do not integrate the judicial exception into a practical application as they are mere insignificant extra-solution activity, combined with implementing the abstract idea using generic computer components.

Step 2b (Does the claim recite additional elements that amount to significantly more than the judicial exception?): No. Relying on the same analysis as Step 2a Prong Two (see MPEP 2106.05.I.A: Limitations that the courts have found not to be enough to qualify as “significantly more” when recited in a claim with a judicial exception include:…Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, e.g., a limitation indicating that a particular function such as creating and maintaining electronic records is performed by a computer, as discussed in Alice Corp., 573 U.S. at 225-26, 110 USPQ2d at 1984 (see MPEP 2106.05(f));…Simply appending well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception...; Adding insignificant extra-solution activity to the judicial exception, as discussed in MPEP 2106.05(g);…)), claims 1, 5, and 9 do not recite any additional elements that amount to significantly more than the abstract idea.
As discussed above, Claims 1, 5, and 9 additionally recite: a system for generating a data model for exception management, comprising: one or more hardware processors; a communication interface; and a memory storing a plurality of instructions, wherein the plurality of instructions when executed, cause the one or more hardware processors to perform a method; one or more non-transitory machine-readable information storage mediums comprising one or more instructions which when executed by one or more hardware processors cause performance of the method; and the processor implemented method of generating a data model for exception management (mere instructions to apply the exception using generic computer components as discussed in MPEP 2106.05(f)); obtaining exception data corresponding to a plurality of exceptions as input data, wherein the exception data comprising a) date of exception, b) type of exception, and c) a plurality of attributes of exception (insignificant extra-solution activity as discussed in MPEP 2106.05(g), which can be reevaluated to include well-understood, routine, conventional activity such as receiving or transmitting data over a network, gathering data, storing and retrieving information in memory, etc. as discussed in MPEP 2106.05(d)); that the generating of the plurality of data properties and plurality of quality properties is by executing a plurality of classification algorithms on the obtained exception data (mere instructions to apply the exception using generic computer components as discussed in MPEP 2106.05(f)); generating the data model utilizing a training data comprising a) the data properties, b) the quality properties, and c) information on the algorithm matching the obtained exception data (insignificant extra-solution activity as discussed in MPEP 2106.05(g), which can be reevaluated to include well-understood, routine, conventional activity such as storing information in memory, outputting results, etc.
as discussed in MPEP 2106.05(d), and mere instructions to apply the exception using generic computer components as discussed in MPEP 2106.05(f)). The additional elements as discussed above, in combination with the abstract idea, are not sufficient to amount to significantly more than the judicial exception as they are well-understood, routine, and conventional activity as discussed above, in combination with generic computer functions and components used to implement the abstract idea.

Regarding dependent claims 2, 6, and 10:

Step 2a Prong One: incorporates the rejection of claims 1, 5, and 9. Step 2a Prong Two: the claims additionally recite wherein the data properties and the quality properties comprise a) coverage, b) confidence, c) persistence, and d) recency (field of use and technological environment as discussed in MPEP 2106.05(h)). Step 2b: the claims additionally recite wherein the data properties and the quality properties comprise a) coverage, b) confidence, c) persistence, and d) recency (field of use and technological environment as discussed in MPEP 2106.05(h)).
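Read on the claim language, the method of claims 1, 5, and 9 (using the property names recited in claims 2, 6, and 10) amounts to a four-step pipeline: obtain exception data, derive data and quality properties, match an algorithm, and assemble training data. The following sketch is purely illustrative; every type, field name, threshold, and value is invented and is not drawn from the application itself:

```python
# Illustrative sketch only: a hypothetical reading of the claimed pipeline
# (obtain exception data -> derive data/quality properties -> match an
# algorithm -> assemble training data). All names are invented.
from dataclasses import dataclass, field

@dataclass
class ExceptionRecord:  # claim 1: date, type, and attributes of the exception
    date: str
    kind: str
    attributes: dict = field(default_factory=dict)

def generate_properties(records):
    """Stand-in for 'executing a plurality of classification algorithms'
    to derive data properties and quality properties."""
    numeric = sum(1 for r in records
                  for v in r.attributes.values() if isinstance(v, (int, float)))
    total = sum(len(r.attributes) for r in records) or 1
    data_props = {"dimension": max(len(r.attributes) for r in records),
                  "numeric_ratio": numeric / total}
    # claims 2, 6, and 10 name coverage, confidence, persistence, recency;
    # the values here are placeholders
    quality_props = {"coverage": 1.0, "confidence": 0.9,
                     "persistence": 0.5, "recency": records[-1].date}
    return data_props, quality_props

def match_algorithm(data_props, quality_props):
    """Stand-in for 'determining an algorithm matching the obtained
    exception data' from the derived properties."""
    return "CART" if 0 < data_props["numeric_ratio"] < 1 else "regression_trees"

records = [ExceptionRecord("2023-06-01", "late_invoice",
                           {"amount": 120.0, "vendor": "A"})]
d, q = generate_properties(records)
# claim 1: training data = data properties + quality properties + matched algorithm
training_row = {**d, **q, "algorithm": match_algorithm(d, q)}
```

The point of the sketch is only to make the claimed data flow concrete; it does not reflect how the application actually computes any of these properties.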
Regarding dependent claims 3, 7, and 11:

Step 2a Prong One: incorporates the rejection of claims 1, 5, and 9; the claims further recite wherein determining the algorithm matching the obtained exception data comprises: selecting classification with entropy as the algorithm if the data properties and the quality properties indicate that the exception has dimension exceeding a threshold of dimension and has a highly skewed population (a mental process of selecting a classification algorithm based on the corresponding data set, such as a human mentally selecting/determining that the classification algorithm should be classification with entropy based on data and quality properties indicating higher dimensions and population skew); selecting classification with gini index as the algorithm if the data properties and the quality properties indicate that the exception has dimension below a threshold of dimension and has a uniform population (a mental process of selecting a classification algorithm based on the corresponding data set, such as a human mentally selecting/determining that the classification algorithm should be classification with gini index based on data and quality properties indicating lower dimensions and population uniformity); selecting regression trees as the algorithm if numeric attributes in the exception exceeds a threshold of numeric attributes (a mental process of selecting a classification algorithm based on the corresponding data set, such as a human mentally selecting/determining that the classification algorithm should be regression tree based on higher levels of numeric attributes); and selecting classification and regression trees (CART) as the algorithm if the exception contains a mix of attributes (a mental process of selecting a classification algorithm based on the corresponding data set, such as a human mentally selecting/determining that the classification algorithm should be CART based on a mix of attributes).
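The selection rules recited in claims 3, 7, and 11 read as a small decision procedure. The sketch below is only an illustration of the claim language as the rejection paraphrases it; the thresholds, property keys, and the impurity helpers are all invented for the example and are not from the application:

```python
import math
from collections import Counter

# Invented helpers: the two impurity measures named in the claims.
def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def gini(labels):
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

# Hypothetical encoding of the claimed selection rules; the thresholds
# ("threshold of dimension", "threshold of numeric attributes") are invented.
def select_algorithm(props, dim_threshold=10, numeric_threshold=0.8):
    if props["dimension"] > dim_threshold and props["population"] == "highly_skewed":
        return "classification_with_entropy"
    if props["dimension"] < dim_threshold and props["population"] == "uniform":
        return "classification_with_gini_index"
    if props["numeric_ratio"] > numeric_threshold:
        return "regression_trees"
    return "CART"  # a mix of attributes falls through to CART

choice = select_algorithm({"dimension": 15, "population": "highly_skewed",
                           "numeric_ratio": 0.2})
```

On these invented inputs the high-dimension, highly skewed branch fires, matching the first rule of the claim.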
Step 2a Prong Two: the claims do not recite any other limitations in addition to the abstract idea discussed above. Step 2b: the claims do not recite any other limitations in addition to the abstract idea discussed above.

Regarding dependent claims 4, 8, and 12:

Step 2a Prong One: incorporates the rejection of claims 1, 5, and 9; the claims further recite identifying at least one exception associated with the at least one rule by processing the input (a mental process of determination that an exception is associated with a rule); performing a root cause analysis of the identified at least one exception to identify at least one cause of the identified at least one exception (a mental process of determination of a root cause of an exception); performing a categorization of the at least one cause as related to one of a) process improvements, or b) potential mavericks (a mental process of determination regarding a category of the cause); generating at least one recommendation based on the categorization of the at least one cause (a mental process of determination regarding a recommendation based on the category of the cause). Step 2a Prong Two: the claims additionally recite fetching at least one rule and corresponding data properties and quality properties as input (insignificant extra-solution activity as discussed in MPEP 2106.05(g)); that the identifying and performing steps are performed using the data model (mere instructions to apply the exception using generic computer components as discussed in MPEP 2106.05(f)). Step 2b: the claims additionally recite fetching at least one rule and corresponding data properties and quality properties as input (insignificant extra-solution activity as discussed in MPEP 2106.05(g), which can be reevaluated to include well-understood, routine, conventional activity such as receiving or transmitting data over a network, gathering data, storing and retrieving information in memory, etc.
as discussed in MPEP 2106.05(d)); that the identifying and performing steps are performed using the data model (mere instructions to apply the exception using generic computer components as discussed in MPEP 2106.05(f)).

Claim Rejections – 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims under pre-AIA 35 U.S.C. 103(a), the examiner presumes that the subject matter of the various claims was commonly owned at the time any inventions covered therein were made absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and invention dates of each claim that was not commonly owned at the time a later invention was made in order for the examiner to consider the applicability of pre-AIA 35 U.S.C.
103(c) and potential pre-AIA 35 U.S.C. 102(e), (f), or (g) prior art under pre-AIA 35 U.S.C. 103(a).

Claims 1, 2, 5, 6, 9, and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Liu et al. (US 20210271582 A1) in view of Subbiah et al. (US 20210012242 A1).

With respect to claims 1, 5, and 9, Liu teaches a system for generating a data model for exception management, comprising: one or more hardware processors; a communication interface; and a memory storing a plurality of instructions, wherein the plurality of instructions when executed, cause the one or more hardware processors to perform a method; one or more non-transitory machine-readable information storage mediums comprising one or more instructions which when executed by one or more hardware processors cause performance of the method; and the processor implemented method of generating a data model for exception management, comprising: obtaining exception data corresponding to a plurality of exceptions as input data, wherein the exception data comprising a) date of exception, b) type of exception, and c) a plurality of attributes of exception (e.g. paragraph 0020, acquiring a plurality of types of log source data and storing; paragraph 0027, plurality of types of log source data include application system log, operating system resource status log, exception log data, streaming log data, detailed operation and maintenance record, and third-party labeling data; i.e. log source data is obtained, including exception data; paragraph 0049-0053, obtaining log data including real-time acquisition of exception log, etc.; storing data as log source data, operation and maintenance result, labeling result, etc., including storing exception and fault labeling result; paragraphs 0099-0100, step 501, performing data acquisition including acquiring exception logs; paragraphs 0036, 0058, 0064, sorting log data in chronological order; i.e.
where the resulting chronological ordering of the data provides an indication of the date of the exception); generating a plurality of data properties and a plurality of quality properties of the exception data by executing a plurality of classification algorithms on the obtained exception data (e.g. paragraph 0022, continuously performing exception and fault labeling on source data in data storage and storing labeling result; paragraph 0029, performing exception and fault labeling on source data including detailed operation and maintenance record and third-party labeling data, exception and fault confirmed through checking and outputted from result checking module, and data used by automatic model training for training, test and verification; paragraph 0030-0034, indicating a plurality of different manners (i.e. algorithms) in which the exception and fault labeling module performs exception and fault labeling, including a manual manner which is labeling detailed operation and maintenance record according to fault occurrence, fault type, and fault cause, a semi-manual manner which is labeling exception and fault confirmed through checking and outputted from the result checking module according to fault occurrence, fault type, and fault cause, a semi-supervised learning manner which is labeling the data used for training, test, and verification using a semi-supervised learning algorithm and some labeled samples, and a transfer learning manner in which labeling data is generated using transfer learning technology; paragraph 0054-0056, continuously performing exception and fault labeling on source data and storing labeling results; i.e. the data (including fault/exception data) is labeled for faults/exceptions according to at least fault type and fault cause, using a plurality of different algorithms for performing the labeling (where labeling as a particular cause, type, etc.
is analogous to classification); paragraphs 0103-0104, step 503, performing exception and fault labeling, including confirming exception data acquired in the system and labeling real exception event, labeling fault data and fault type acquired by the system, etc.); determining an algorithm matching the obtained exception data, based on the data properties and the quality properties (e.g. paragraph 0038, selecting appropriate algorithm from supervised, unsupervised, and semi-supervised model according to task type, exception and fault labeling data, and training, test, and verification data; paragraph 0060, selecting appropriate algorithm according to task type, exception and fault labeling data, and training, test and verification data; for example exception mode detection task has a small quantity of samples due to variety of exception modes and relatively low occurrence frequency and is therefore generally based on unsupervised model, while fault location and early fault warning are generally based on supervised model and supplemented by semi-supervised model; paragraphs 0105-0105, step 504, performing automatic model training and assessment including single model training and assessment which is selecting one or more algorithms according to current status of log source and labeling result data storage and a task type (automatic exception detection, rapid fault location, early fault warning)); and generating the data model utilizing a training data comprising a) the data properties, b) the quality properties, and c) information on the algorithm matching the obtained exception data (e.g.
paragraph 0038, indicating that the selection of the appropriate algorithm is to perform training and assessment to generate a model; paragraph 0058-0059, continuously generating and updating operation and maintenance models; paragraphs 0105-0105, step 504, performing automatic model training and assessment, including performing training, test, and assessment and storing formed algorithm etc. as a model or knowledge base). Assuming arguendo that Liu does not explicitly disclose generating a plurality of data properties and a plurality of quality properties of the exception data by executing a plurality of classification algorithms on the obtained exception data; determining an algorithm matching the obtained exception data, based on the data properties and the quality properties; and generating the data model utilizing a training data comprising a) the data properties, b) the quality properties, and c) information on the algorithm matching the obtained exception data; Subbiah teaches generating a plurality of data properties and a plurality of quality properties of the exception data by executing a plurality of classification algorithms on the obtained exception data (e.g. 
paragraph 0017, obtaining records of measurement data; paragraph 0018, obtaining labels for measurement data; paragraph 0021, determining plausibility of measurement data; paragraph 0043, determining measures for quality of measurement data, type and strength of disturbances contained therein, etc.; paragraph 0045-0046, supplying measurement data to machine learning model configured to produce classification value as measure of quality, such as whether there is noise/blurring in an image, crackling or hum superimposed on audio measurements, values indicating comprehensive indicators of quality, representing strengths of individual disturbances, etc.; paragraph 0047, additional dimensions for classification/regression of quality of measurement data including alignment between different collected signals, missing data, signal-to-noise ratio, and probability of erroneous readings); determining an algorithm matching the obtained exception data, based on the data properties and the quality properties (e.g. paragraph 0041, selecting machine learning model whose figure of fitness meets predetermined criterion; paragraph 0043, determining figure of fitness for machine learning model to process measurement data that are of the quality according to the measures, that contain disturbances of the determined type and strength, etc.); and generating the data model utilizing a training data comprising a) the data properties, b) the quality properties, and c) information on the algorithm matching the obtained exception data (e.g. 
paragraph 0022, record of measurement data and label included in set of training data on which the machine-learning model is to be trained, based on plausibility of measurement data meeting criteria; paragraph 0037, different machine learning models created starting from given set of labeled records of measurement data; paragraph 0040, plurality of trained machine learning models obtained); Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention having the teachings of Liu and Subbiah in front of him to have modified the teachings of Liu (directed to operation and maintenance system and method, i.e. using trained machine learning models), to incorporate the teachings of Subbiah (directed to assessing conditions of industrial equipment and processes, i.e. using trained machine learning models) to include the capability to generate the plurality of data properties and a plurality of quality properties of the exception data using the classification algorithms, determine/select an algorithm/model based on the properties, and generate the model utilizing training data comprising the properties and determined algorithm/model. One of ordinary skill would have been motivated to perform such a modification in order to provide a finally-obtained machine learning model which is more likely to deliver an accurate assessment of equipment or a process based on measurement data as described in Subbiah (paragraph 0023). With respect to claims 2, 6, and 10, Liu in view of Subbiah teaches all of the limitations of claims 1, 5, and 9 as previously discussed, and Liu further teaches wherein the data properties and the quality properties comprise d) recency (e.g. paragraph 0036, 0058, 0064, sorting log data in chronological order; i.e. where the resulting chronological ordering of the data provides an indication of recency). 
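In the mapping above, Liu's chronological ordering of log data is read as supplying the recency property of claims 2, 6, and 10. As a rough, invented illustration of deriving a recency-style quality property (plus a simple coverage measure) from ordered log records, with every field name and value hypothetical:

```python
# Invented illustration only: deriving 'recency'-style quality properties
# from chronologically ordered exception logs, per the mapping above.
from datetime import date

def derive_quality(logs, today):
    """logs: list of (date, payload) tuples; sorted into chronological order,
    mirroring the chronological log ordering the rejection points to."""
    ordered = sorted(logs, key=lambda entry: entry[0])
    recency_days = (today - ordered[-1][0]).days  # newest entry -> recency
    coverage = len([p for _, p in ordered if p is not None]) / len(ordered)
    return {"recency_days": recency_days, "coverage": coverage}

logs = [(date(2023, 5, 1), "timeout"), (date(2023, 6, 20), None),
        (date(2023, 6, 25), "timeout")]
props = derive_quality(logs, today=date(2023, 6, 27))
```

This is only a sketch of the analogy the examiner draws; neither Liu nor the application is asserted to compute these particular values.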
Subbiah further teaches wherein the data properties and the quality properties comprise a) coverage, b) confidence, c) persistence, and d) recency (e.g. paragraph 0021, determining plausibility of measurement data; paragraph 0043, determining measures for quality of measurement data, type and strength of disturbances contained therein, etc.; paragraph 0045-0046, classification value as measure of quality, such as whether there is noise/blurring in an image, crackling or hum superimposed on audio measurements, values indicating comprehensive indicators of quality, representing strengths of individual disturbances, etc.; paragraph 0047, additional dimensions for classification/regression of quality of measurement data including alignment between different collected signals, missing data, signal-to-noise ratio, and probability of erroneous readings; i.e. where type and strength, alignment between signals, indications of missing data, signal to noise ratio, probability of erroneous readings, plausibility, etc., are indicative of, and analogous to, at least coverage (i.e. alignment, signal to noise ratio, missing data), confidence (i.e. probability of erroneous readings, plausibility of measurement data), and persistence (i.e. strength)). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention having the teachings of Liu and Subbiah in front of him to have modified the teachings of Liu (directed to operation and maintenance system and method, i.e. using trained machine learning models), to incorporate the teachings of Subbiah (directed to assessing conditions of industrial equipment and processes, i.e. using trained machine learning models) to include the capability to generate the plurality of data properties and a plurality of quality properties of the exception data, including properties which comprise at least coverage (i.e. alignment, signal to noise ratio, missing data), confidence (i.e. 
probability of erroneous readings, plausibility of measurement data), and persistence (i.e. strength). One of ordinary skill would have been motivated to perform such a modification in order to provide a finally-obtained machine learning model which is more likely to deliver an accurate assessment of equipment or a process based on measurement data as described in Subbiah (paragraph 0023).

Claims 3, 7, and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Liu in view of Subbiah, further in view of Gulati, Pooja & Sharma, Amita & Gupta, Manish. (2016). Theoretical Study of Decision Tree Algorithms to Identify Pivotal Factors for Performance Improvement: A Review. International Journal of Computer Applications. 141. 19-25. 10.5120/ijca2016909926. (hereinafter Gulati).

With respect to claims 3, 7, and 11, Liu in view of Subbiah teaches all of the limitations of claims 1, 5, and 9 as previously discussed. Liu and Subbiah do not explicitly disclose wherein determining the algorithm matching the obtained exception data comprises: selecting classification with entropy as the algorithm if the data properties and the quality properties indicate that the exception has dimension exceeding a threshold of dimension and has a highly skewed population; selecting classification with gini index as the algorithm if the data properties and the quality properties indicate that the exception has dimension below a threshold of dimension and has a uniform population; selecting regression trees as the algorithm if numeric attributes in the exception exceeds a threshold of numeric attributes; and selecting classification and regression trees (CART) as the algorithm if the exception contains a mix of attributes.
However, Gulati teaches wherein determining the algorithm matching the obtained exception data comprises: selecting classification with entropy as the algorithm if the data properties and the quality properties indicate that the exception has dimension exceeding a threshold of dimension and has a highly skewed population (e.g. pages 19-20, Table 1, indicating that classification trees are selected to classify a dataset into different classes, are mainly used for categorical target variables, use information gain for splitting the dataset, and can use entropy for splitting to find homogeneity in the dataset; page 20, section 2.1 and Fig. 1, indicating that if the response variable has multiple categories, the C4.5 classification algorithm can be used; page 21, section 4.1.1, indicating that entropy is a measure of disorder in a system; entropy measures impurity of dataset that means the higher the entropy value the more the information content; entropy used for when disorder of the dataset is high; i.e. where the data set has properties indicating a large number of dimensions/categories (i.e. exceeding a threshold amount), and is highly disordered (i.e. skewed), a classification tree algorithm, using entropy, may be selected); selecting classification with gini index as the algorithm if the data properties and the quality properties indicate that the exception has dimension below a threshold of dimension and has a uniform population (e.g. pages 19-20, Table 1, indicating that classification trees are selected to classify a dataset into different classes, are mainly used for categorical target variables, use information gain for splitting the dataset, and can use gini index for splitting to find homogeneity in the dataset; page 20, section 2.1 and Fig. 
1, indicating that if the response variable has only two categories, a standard classification tree may be used; page 21, section 4.1.2, indicating that information gain splits the dataset into homogenous classes; page 22, section 4.1.4, indicating that gini index is an alternative of information gain; i.e. where the data set has properties indicating a small number of dimensions/categories (i.e. below a threshold amount), and is homogenous (i.e. uniform), a classification tree algorithm, using gini index may be selected); selecting regression trees as the algorithm if numeric attributes in the exception exceeds a threshold of numeric attributes (e.g. pages 19-20, Table 1, indicating that regression trees are selected where the dataset is to be classified for a range of numbers, are used for numeric and continuous target variables, etc.; i.e. where the dataset to be classified includes mostly or all numeric values/attributes, a regression tree is selected); and selecting classification and regression trees (CART) as the algorithm if the exception contains a mix of attributes (e.g. page 22, Fig. 2 row 4, indicating that the CART algorithm is useful for handling data with missing values and noisy data, and is biased towards multivalued attributes; i.e. where the dataset to be classified includes missing values, noisy data, and/or multivalued attributes (analogous to having a mix of attributes), CART is selected).

Claims 4, 8, and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Liu in view of Subbiah, further in view of Vaissiere et al. (US 20170261972 A1).

With respect to claims 4, 8, and 12, Liu in view of Subbiah teaches all of the limitations of claims 1, 5, and 9 as previously discussed.
Liu and Subbiah do not explicitly disclose wherein the method comprises deriving one or more actionable recommendations using the data model by: fetching at least one rule and corresponding data properties and quality properties as input; identifying at least one exception associated with the at least one rule, by processing the input using the data model; performing a root cause analysis of the identified at least one exception to identify at least one cause of the identified at least one exception, using the data model; performing a categorization of the at least one cause as related to one of a) process improvements, or b) potential mavericks, using the data model; and generating at least one recommendation based on the categorization of the at least one cause.

However, Vaissiere teaches the method comprising deriving one or more actionable recommendations using the data model by:

fetching at least one rule and corresponding data properties and quality properties as input (e.g. paragraph 0119, modeling properties of measurement results to be expected during faultless performance of the respective step; paragraph 0170, a fault is indicated where at least one of the properties determined based on measurement results exceeds the corresponding reference range; paragraph 0188, data sets determined based on information for selected types of disturbances, including information regarding root causes, related actions for determination of their presence, remedies for resolving them, and additional diagnostic information including rules for determining root causes; i.e. rules which correspond to disturbances (i.e. exceptions), which are identified based on corresponding data properties (such that the rules correspond with the data properties associated with the disturbance/exception), are stored in, and retrievable from, data sets);

identifying at least one exception associated with the at least one rule, by processing the input using the data model (e.g. paragraph 0170, a fault is indicated where at least one of the properties determined based on measurement results exceeds the corresponding reference range; paragraph 0171, whether a disturbance is recognized by the monitoring unit as a fault depends on the impact of the disturbance on the monitored properties in relation to the corresponding reference ranges, using primary, secondary, and tertiary models to detect faults due to a disturbance large enough to cause a property to exceed its reference range);

performing a root cause analysis of the identified at least one exception to identify at least one cause of the identified at least one exception, using the data model (e.g. paragraphs 0061-0069, performing diagnoses regarding faults using data sets comprising the type of disturbance, including a list of root causes of the respective disturbances and corresponding actions; searching for data sets stored for disturbances which match the properties determined for the present faulty performance, determining the root causes in those data sets as possible root causes, and performing the diagnosis based on the determined possible root causes);

performing a categorization of the at least one cause as related to one of a) process improvements, or b) potential mavericks, using the data model (e.g. paragraphs 0184-0185, different types of disturbances have different impacts on the monitored properties; the properties can be applied in order to determine the root causes that caused the detected faults; data sets are related to known types of disturbances, where each data set comprises a known type of disturbance and its impact on each of the monitored properties, and each data set comprises a list of possible root causes known to cause this type of disturbance; paragraph 0194, the root cause to be resolved is identified based on a fault detected based on properties matching the respective type of disturbance; i.e. the disturbance can be determined to be a particular type of disturbance (i.e. anomaly, potential maverick, etc., which is indicated by data outside of an expected range) based on the impacts to the monitored properties); and

generating at least one recommendation based on the categorization of the at least one cause (e.g. paragraph 0064, a particular action for the listed root cause; the action is given by a remedy suitable for resolving the respective root cause; paragraph 0069; paragraph 0189, information including root causes for causing impairments of the device type, actions for determination of their presence, remedies for resolving them, etc.; paragraph 0194, application of the remedy for the corresponding root cause).

Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention, having the teachings of Liu, Subbiah, and Vaissiere in front of him, to have modified the teachings of Liu (directed to an operation and maintenance system and method, i.e. using trained machine learning models) and Subbiah (directed to assessing conditions of industrial equipment and processes, i.e. using trained machine learning models) to incorporate the teachings of Vaissiere (directed to monitoring processes in an industrial site, i.e. using models for monitoring data properties with respect to expected ranges) to include the capability to identify rules corresponding to data and quality properties, such as rules for identifying exceptions/disturbances/faults; perform the identification of at least one exception/disturbance/fault associated with a rule; perform a root cause analysis, including a categorization of a type of a root cause of the exception/disturbance/fault; and generate a recommendation/solution/remedy based on this. One of ordinary skill would have been motivated to perform such a modification in order to provide prompt, reliable, and efficient detection of disturbances based on readily available information, as described in Vaissiere (paragraph 0013).
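The Vaissiere-style pipeline the rejection relies on above (properties checked against reference ranges, matching stored disturbance data sets, categorized root causes, remedies) can be sketched minimally as follows. Every name, property, reference range, and data set entry here is invented for illustration; none comes from the record or from Vaissiere itself.

```python
# Hypothetical disturbance data sets: each pairs a signature of impacted
# properties with a categorized root cause and a remedy (all values invented).
DATA_SETS = [
    {"disturbance": "sensor drift", "impacted": {"temperature"},
     "category": "process improvement", "remedy": "recalibrate sensor"},
    {"disturbance": "manual override", "impacted": {"valve_position"},
     "category": "potential maverick", "remedy": "review operator action"},
]

def detect_exceptions(measured, reference_ranges):
    """Flag properties whose measured value falls outside its reference range."""
    return {prop for prop, value in measured.items()
            if not (reference_ranges[prop][0] <= value <= reference_ranges[prop][1])}

def diagnose(exceptions):
    """Match impacted properties against stored data sets and return
    (disturbance, category, remedy) tuples as categorized recommendations."""
    return [(d["disturbance"], d["category"], d["remedy"])
            for d in DATA_SETS if d["impacted"] <= exceptions]
```

Under these assumptions, a temperature reading outside its reference range is flagged as an exception, matched to the "sensor drift" data set, categorized as a process improvement, and mapped to a recalibration remedy, tracing the fetch/identify/analyze/categorize/recommend chain the rejection attributes to Vaissiere.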
It is noted that any citation to specific pages, columns, lines, or figures in the prior art references and any interpretation of the references should not be considered to be limiting in any way. “The use of patents as references is not limited to what the patentees describe as their own inventions or to the problems with which they are concerned. They are part of the literature of the art, relevant for all they contain.” In re Heck, 699 F.2d 1331, 1332-33, 216 USPQ 1038, 1039 (Fed. Cir. 1983) (quoting In re Lemelson, 397 F.2d 1006, 1009, 158 USPQ 275, 277 (CCPA 1968)). Further, a reference may be relied upon for all that it would have reasonably suggested to one having ordinary skill in the art, including nonpreferred embodiments. Merck & Co. v. Biocraft Laboratories, 874 F.2d 804, 10 USPQ2d 1843 (Fed. Cir.), cert. denied, 493 U.S. 975 (1989). See also Upsher-Smith Labs. v. Pamlab, LLC, 412 F.3d 1319, 1323, 75 USPQ2d 1213, 1215 (Fed. Cir. 2005); Celeritas Technologies Ltd. v. Rockwell International Corp., 150 F.3d 1354, 1361, 47 USPQ2d 1516, 1522-23 (Fed. Cir. 1998).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure. Any inquiry concerning this communication or earlier communications from the examiner should be directed to JEREMY L STANLEY, whose telephone number is (469) 295-9105. The examiner can normally be reached Monday-Friday from 9:00 AM to 5:00 PM CST. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Abdullah Al Kawsar, can be reached at telephone number (571) 270-3169. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of an application may be obtained from Patent Center and the Private Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from Patent Center or Private PAIR.
Status information for unpublished applications is available through Patent Center and Private PAIR for authorized users only. Should you have questions about access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) Form at https://www.uspto.gov/patents/uspto-automated-interview-request-air-form. /JEREMY L STANLEY/ Primary Examiner, Art Unit 2127

Prosecution Timeline

Jun 27, 2023
Application Filed
Feb 07, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591827
ETHICAL CONFIDENCE FABRICS: MEASURING ETHICAL ALGORITHM DEVELOPMENT
2y 5m to grant Granted Mar 31, 2026
Patent 12580783
CONFIGURING 360-DEGREE VIDEO WITHIN A VIRTUAL CONFERENCING SYSTEM
2y 5m to grant Granted Mar 17, 2026
Patent 12572266
ACCESSING AND DISPLAYING INFORMATION CORRESPONDING TO PAST TIMES AND FUTURE TIMES
2y 5m to grant Granted Mar 10, 2026
Patent 12561041
Systems, Methods, and Graphical User Interfaces for Interacting with Virtual Reality Environments
2y 5m to grant Granted Feb 24, 2026
Patent 12555684
ASSESSING A TREATMENT SERVICE BASED ON A MEASURE OF TRUST DYNAMICS
2y 5m to grant Granted Feb 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
48%
Grant Probability
92%
With Interview (+44.7%)
3y 2m
Median Time to Grant
Low
PTA Risk
Based on 276 resolved cases by this examiner. Grant probability derived from career allow rate.
