Prosecution Insights
Last updated: April 19, 2026
Application No. 18/549,811

INFORMATION PROCESSING DEVICE AND INFORMATION PROCESSING METHOD

Non-Final OA: §§ 101, 102, 103, 112
Filed: Sep 08, 2023
Examiner: KASSIM, HAFIZ A
Art Unit: 3623
Tech Center: 3600 (Transportation & Electronic Commerce)
Assignee: Panasonic Intellectual Property Management Co., Ltd.
OA Round: 1 (Non-Final)
Grant Probability: 44% (Moderate)
Expected OA Rounds: 1-2
Time to Grant: 2y 11m
Grant Probability with Interview: 98%

Examiner Intelligence

Career Allow Rate: 44% (148 granted / 338 resolved; -8.2% vs TC avg)
Interview Lift: +53.7% (strong lift on resolved cases with interview)
Typical Timeline: 2y 11m avg prosecution; 29 applications currently pending
Career History: 367 total applications across all art units

Statute-Specific Performance

§101: 40.9% (+0.9% vs TC avg)
§103: 32.6% (-7.4% vs TC avg)
§102: 7.8% (-32.2% vs TC avg)
§112: 14.0% (-26.0% vs TC avg)

Comparisons are against a Tech Center average estimate, based on career data from 338 resolved cases.

Office Action

Rejections: §§ 101, 102, 103, 112
DETAILED ACTION

This is a non-final, first office action on the merits. Claims 1-10 are pending. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Acknowledgment is made of applicant's claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy has been filed in parent Application No. JP2021-042839 (Japan), filed on 03/16/2021.

Status of Claims

Applicant's preliminary amendment dated 06/27/2019 amended claims 3-5, 7-8, and 10.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-8 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for pre-AIA, the applicant) regards as the invention.

Independent claim 1 recites "[an] information processing device" but does not positively recite any structural elements. The body of the claim recites a series of steps like a method claim. As such, it is unclear whether the claim is directed to an apparatus, to a method for using the system, or is some attempt to claim both a system and a method untied to any positively recited element(s) of the system. Dependent claims 2-8 inherit the deficiency of their parent claim.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C.
112(d):
(d) REFERENCE IN DEPENDENT FORMS.—Subject to subsection (e), a claim in dependent form shall contain a reference to a claim previously set forth and then specify a further limitation of the subject matter claimed. A claim in dependent form shall be construed to incorporate by reference all the limitations of the claim to which it refers.

The following is a quotation of pre-AIA 35 U.S.C. 112, fourth paragraph:
Subject to the following paragraph [i.e., the fifth paragraph of pre-AIA 35 U.S.C. 112], a claim in dependent form shall contain a reference to a claim previously set forth and then specify a further limitation of the subject matter claimed. A claim in dependent form shall be construed to incorporate by reference all the limitations of the claim to which it refers.

Claim 10 is rejected under 35 U.S.C. 112(d) or pre-AIA 35 U.S.C. 112, 4th paragraph, as being of improper dependent form for failing to further limit the subject matter of the claim upon which it depends, or for failing to include all the limitations of the claim upon which it depends. Claim 10 fails to further limit claim 9, from which it depends, and is therefore an improper dependent claim. For example, claim 10 can be infringed without infringing claim 9, since there is no requirement in claim 10 that the method of claim 9 actually be performed. Applicant may cancel the claim(s), amend the claim(s) to place them in proper dependent form, rewrite the claim(s) in independent form, or present a sufficient showing that the dependent claim(s) comply with the statutory requirements.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-10 are rejected under 35 U.S.C.
101 because the claimed invention is directed to non-statutory subject matter. Specifically, claims 1-10 are directed to an abstract idea without additional elements amounting to significantly more than the abstract idea.

With respect to Step 2A Prong One of the framework, claims 1 and 9-10 recite an abstract idea. Claims 1 and 9-10 include "evaluating quality of a plurality of instances of first data to generate a first evaluation result; performing, using the plurality of instances of first data, to generate a model for detecting an anomaly; evaluating quality of a plurality of instances of second data to generate a second evaluation result; comparing the first evaluation result and the second evaluation result and detecting a concept drift, based on a comparison result; and applying the model to the plurality of instances of second data to estimate whether an anomaly is present in the plurality of instances of second data". The limitations above recite an abstract idea under Step 2A Prong One. More particularly, the elements above recite mental processes (concepts performed in the human mind, including an observation, evaluation, judgment, or opinion) because the elements describe a process for estimating an anomaly. As a result, claims 1 and 9-10 recite an abstract idea under Step 2A Prong One. Claims 2-8 further describe the process for estimating an anomaly. As a result, claims 2-8 recite an abstract idea under Step 2A Prong One for the same reasons as stated above with respect to claims 1 and 9-10.

With respect to Step 2A Prong Two of the framework, claims 1 and 9-10 do not include additional elements that integrate the abstract idea into a practical application. Claims 1 and 9-10 include additional elements that do not recite an abstract idea under Step 2A Prong One.
The additional elements of claims 1 and 9-10 include a processing device, an evaluator, a learner that performs machine learning, a detector, an estimator, a machine learning model, and a non-transitory computer-readable recording medium. When considered in view of the claim as a whole, the additional elements do not integrate the abstract idea into a practical application because the additional computing elements are generic computing elements that are merely used as a tool to perform the recited abstract idea. As a result, claims 1 and 9-10 do not include additional elements that integrate the abstract idea into a practical application under Step 2A Prong Two.

Claims 2-3 and 6 do not include any additional elements beyond those recited with respect to claims 1 and 9-10. As a result, claims 2-3 and 6 do not include additional elements that integrate the abstract idea into a practical application under Step 2A Prong Two for the same reasons as stated above with respect to claims 1 and 9-10.

Claims 4-5 and 7-8 include additional elements that do not recite an abstract idea under Step 2A Prong One. The additional elements of claims 4-5 and 7-8 include a processing device, an evaluator, a learner that performs machine learning, a detector, an obtainer, a pre-processor, a machine learning model, and a notifier. When considered in view of the claims as a whole, the additional elements do not integrate the abstract idea into a practical application because the additional computing elements do no more than generally link the use of the recited abstract idea to a particular technological environment. As a result, claims 4-5 and 7-8 do not include additional elements that integrate the abstract idea into a practical application under Step 2A Prong Two.

With respect to Step 2B of the framework, claims 1 and 9-10 do not include additional elements amounting to significantly more than the abstract idea.
As noted above, claims 1 and 9-10 include additional elements that do not recite an abstract idea under Step 2A Prong One. The additional elements of claims 1 and 9-10 include a processing device, an evaluator, a learner that performs machine learning, a detector, an estimator, a machine learning model, and a non-transitory computer-readable recording medium. The additional elements do not amount to significantly more than the abstract idea because the additional computing elements are generic computing elements that are merely used as a tool to perform the recited abstract idea. Further, looking at the additional elements as an ordered combination adds nothing that is not already present when considering the additional elements individually. As a result, independent claims 1 and 9-10 do not include additional elements that amount to significantly more than the abstract idea under Step 2B.

Claims 2-3 and 6 do not include any additional elements beyond those recited with respect to claims 1 and 9-10. As a result, claims 2-3 and 6 do not include additional elements that amount to significantly more than the abstract idea under Step 2B for the same reasons as stated above with respect to claims 1 and 9-10.

Claims 4-5 and 7-8 include additional elements that do not recite an abstract idea under Step 2A Prong One. The additional elements of claims 4-5 and 7-8 include a processing device, an evaluator, a learner that performs machine learning, a detector, an obtainer, a pre-processor, a machine learning model, and a notifier. The additional elements do not amount to significantly more than the abstract idea because the additional computing elements do no more than generally link the use of the recited abstract idea to a particular technological environment. Further, looking at the additional elements as an ordered combination adds nothing that is not already present when considering the additional elements individually.
As a result, claims 4-5 and 7-8 do not include additional elements that amount to significantly more than the abstract idea under Step 2B.

Therefore, the claims are directed to an abstract idea without additional elements amounting to significantly more than the abstract idea. Accordingly, claims 1-10 are rejected under 35 U.S.C. 101 as being directed to non-statutory subject matter.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1 and 5-10 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Maughan et al. (US Pub No. 2017/0330109) (hereinafter Maughan).

Regarding claims 1 and 9-10, Maughan discloses an information processing device comprising: an evaluator that evaluates quality of a plurality of instances of first data to generate a first evaluation result and evaluates quality of a plurality of instances of second data to generate a second evaluation result (Fig.
5, paras [0125]-[0127], wherein generate and evaluate learned functions with different sets of features, the predictive correlation module 518 may determine which features and/or instances of features correlate with higher confidence metrics, are most effective, or the like based on metadata from the metadata library 514…..determines the relationship of a feature's predictive qualities for a specific outcome or result based on each instance of a particular feature); a learner that performs machine learning, using the plurality of instances of first data, to generate a machine learning model for detecting an anomaly (para [0063], wherein the drift detection module 204 may compare outcomes from the prediction module 202 (e.g., machine learning predictions based on workload data) to outcomes identified in the evaluation metadata described below, in order to determine whether a drift phenomenon has occurred (e.g., an anomaly in the results, a ratio change in classifications, a shift in values of the results, or the like)); evaluating quality of a plurality of instances of second data to generate a second evaluation result (paras [0125]-[0127], wherein generate and evaluate learned functions with different sets of features, the predictive correlation module 518 may determine which features and/or instances of features correlate with higher confidence metrics, are most effective, or the like based on metadata from the metadata library 514…..determines the relationship of a feature's predictive qualities for a specific outcome or result based on each instance of a particular feature); a detector that compares the first evaluation result and the second evaluation result and detects a concept drift, based on a comparison result (para [0063], wherein the drift detection module 204 may compare outcomes from the prediction module 202 (e.g., machine learning predictions based on workload data) to outcomes identified in the evaluation metadata described below; and paras [0101], 
[0058] & [0095]-[0096], wherein a modified predictive result based on reapplying a model to modified workload data, where the modified predictive result includes a comparison between an updated result and a corresponding non-updated result….in comparison to training data or past workload data. Similarly, an output drift phenomenon may include a change in predictive results from a model, relative to actual outcomes (included in the training data and/or obtained for past workload data) or relative to prior predictive results); and an estimator that applies the machine learning model to the plurality of instances of second data to estimate an anomaly in the plurality of instances of second data (paras [0073]-[0078], wherein the predict-time fix module 206 may estimate or otherwise determine an impact of the missing features and/or records on the original machine learning and/or on the retrained machine learning, and may provide the impact to a user or other client 104. For example, the predict-time fix module 206 may make multiple predictions or other results using data in a normal and/or expected range, and compare the predictions or other results to those made without the data, to determine an impact of missing the data on the predictions or other results; and para [0081], wherein retrain machine learning excluding one or more feature and retrain machine learning replacing drifted, changed, and/or missing values with expected values, comparing and/or evaluating predictions or other results from both and selecting the most accurate retrained machine learning for use). 
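For orientation, the evaluator/learner/detector/estimator flow that the rejection maps onto Maughan can be sketched as a minimal Python example. Everything here is an illustrative stand-in of ours: the quality metrics, the 3-sigma anomaly model, and the mean-shift drift test are not recited in the claims and are not Maughan's implementation.

```python
import numpy as np

def evaluate_quality(batch):
    """Toy 'evaluator': per-batch quality summary (metrics are our choice;
    the claim does not fix any particular metric)."""
    return {"missing_rate": float(np.mean(np.isnan(batch))),
            "mean": float(np.nanmean(batch)),
            "std": float(np.nanstd(batch))}

def detect_concept_drift(first_eval, second_eval, threshold=0.5):
    """Toy 'detector': compare the first and second evaluation results."""
    shift = abs(second_eval["mean"] - first_eval["mean"])
    return shift / max(first_eval["std"], 1e-9) > threshold

rng = np.random.default_rng(0)
first = rng.normal(0.0, 1.0, 1000)    # training-time data
second = rng.normal(3.0, 1.0, 1000)   # later data whose distribution shifted

e1, e2 = evaluate_quality(first), evaluate_quality(second)

# Toy 'learner': a 3-sigma bound model for anomaly detection, fit on first data
lo, hi = e1["mean"] - 3 * e1["std"], e1["mean"] + 3 * e1["std"]

drift = detect_concept_drift(e1, e2)       # detector compares the two results
anomalies = (second < lo) | (second > hi)  # 'estimator' applies the model
```

The point of the sketch is the claimed ordering: both batches are quality-evaluated, drift is detected by comparing the two evaluation results (not the model outputs), and the learned model is still applied to the second batch for anomaly estimation.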
Regarding claim 5, Maughan discloses the information processing device according to claim 1, further comprising: an obtainer that obtains a plurality of instances of data (paras [0025] & [0040], wherein the operational data may be collected as a single data set, or may be distributed over different locations including over different storage devices…..the ability to collect, analyze and mine these huge repositories of structured, unstructured, and/or semi-structured data is now possible); and a pre-processor that performs pre-processing on the plurality of instances of data to generate the plurality of instances of first data and the plurality of instances of second data (paras [0103]-[0105], wherein module that may pre-process, reformat, or otherwise prepare the data for the predictive analysis module 102. The data receiver module 402 may support structured data, unstructured data, semi-structured data, or the like…..initialization data may comprise labeled data. In a further embodiment, initialization data may comprise unlabeled data (e.g., for semi-supervised learning or the like)….initialization data and/or workload data may be labeled (e.g., may already include predictions, in order to validate and/or detect drift in another machine learning model, or the like). Regarding claim 6, Maughan discloses the information processing device according to claim 5, wherein the pre-processing includes data cleansing and at least one of data coupling or data conversion (paras [0103]-[0105], wherein module that may pre-process, reformat (i.e., data cleansing), or otherwise prepare the data for the predictive analysis module 102. The data receiver module 402 may support structured data, unstructured data, semi-structured data, or the like…; para [0099], wherein detects drift for a feature affecting multiple records, the modification module 310 may remove that feature from the modified workload data. 
For example, if an age range in the workload data is inconsistent with an age range in the training data, the modification module 310 may modify the workload data by removing out-of-range age values, or by removing all age values). Regarding claim 7, Maughan discloses the information processing device according to claim 1, further comprising: a notifier that provides a notification indicating that a concept drift has been detected, when the detector detects the concept drift (para [0069], wherein notify a user or other client 104 of a drift or other change. In certain embodiments, the predict-time fix module 206 may allow the prediction module 202 to provide a prediction or other result, despite a detected drift or other change). Regarding claim 8, Maughan discloses the information processing device according to claim 1, wherein when the detector detects a concept drift, the learner performs machine learning, using a plurality of instances of data that are different from the plurality of instances of first data, to generate the machine learning model anew (paras [0060] & [0064], wherein the drift detection module 204 may use machine learning and/or a statistical analysis of the one or more monitored inputs and/or outputs to detect and/or predict drift; para [0049], wherein the predictive analytics module 102 may retrain a predictive model (e.g., generate a new/retrained predictive ensemble, generate new/retrained learned functions, or the like), in response to detecting the drift phenomenon…; and para [0078], wherein the retrain module 302 may modify existing training data to produce the updated training data for retraining (e.g., without obtaining additional data from a user. The retrain module 302, in certain embodiments, may retrain one or more ensembles or other machine learning for a user or other client 104 without additional data from the user or other client 104, by excluding records and/or features for which values have drifted or otherwise changed). 
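The claim-8 behaviour discussed above (regenerating the model anew from data other than the first batch when drift is detected, with Maughan's paras [0078] and [0099] excluding drifted features or records) can be sketched roughly as follows. The per-feature mean-shift test, the threshold, and all names are our illustrative assumptions, not Maughan's disclosed implementation.

```python
import numpy as np

def feature_drifted(train_col, new_col, threshold=0.5):
    """Flag a feature whose mean has shifted notably between batches
    (an illustrative statistic; Maughan does not prescribe this test)."""
    return abs(new_col.mean() - train_col.mean()) / max(train_col.std(), 1e-9) > threshold

def retrain_excluding_drifted(train, new):
    """On detected drift, generate the model anew from the newer batch,
    excluding features whose values have drifted (cf. para [0078])."""
    keep = [j for j in range(train.shape[1])
            if not feature_drifted(train[:, j], new[:, j])]
    # the 'model' here is just per-feature mean/std bounds over kept features
    return keep, new[:, keep].mean(axis=0), new[:, keep].std(axis=0)

rng = np.random.default_rng(1)
train = rng.normal(0.0, 1.0, (500, 3))
new = rng.normal(0.0, 1.0, (500, 3))
new[:, 1] += 4.0                      # feature 1 drifts between batches

keep, means, stds = retrain_excluding_drifted(train, new)   # keep == [0, 2]
```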
Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 2-3 are rejected under 35 U.S.C. 103 as being unpatentable over Maughan et al. (US Pub No. 2017/0330109) (hereinafter Maughan) in view of Walter et al. (US Pub No. 2020/0012900) (hereinafter Walter).
Regarding claim 2, Maughan discloses the information processing device according to claim 1, wherein the evaluator includes a basic evaluator that evaluates each of the plurality of instances of first data and each of the plurality of instances of second data, based on evaluation item is at least one of a data type, a character code, or an anomalous value (see Maughan, paras [0114]-[0118], wherein a subset of features of initialization data, a subset of features of initialization data, a subset of both features and instances of initialization data, or the like. Varying the features and/or instances used to train different learned functions, in certain embodiments, may further increase the likelihood that at least a subset of the generated learned functions are useful, suitable, and/or effective……the predictive compiler module 406 evaluates learned functions from the function generator module 404 using test data to generate evaluation metadata. The predictive compiler module 406, in a further embodiment, may evaluate combined learned functions, extended learned functions, combined-extended learned functions, additional learned functions, or the like using test data to generate evaluation metadata; and para [0063], wherein compare outcomes from the prediction module 202 (e.g., machine learning predictions based on workload data) to outcomes identified in the evaluation metadata described below, in order to determine whether a drift phenomenon has occurred (e.g., an anomaly in the results, a ratio change in classifications, a shift in values of the results), the first evaluation result includes an evaluation result on each of the plurality of instances of first data, the evaluation result (see Maughan, para [0059], wherein one or more predictive results may affect one or more records. In one embodiment, a drift phenomenon may or pertain to a single record of workload data, or affect a single result. 
For example, if the training data establishes or suggests an expected range for a data value, the drift detection module 204 may detect an out-of-range value), and the second evaluation result includes an evaluation result on each of the plurality of instances of second data, the evaluation result (see Maughan, para [0058], wherein an input drift or workload data drift phenomenon may include a change in workload data, in comparison to training data or past workload data. Similarly, an output drift phenomenon may include a change in predictive results from a model, relative to actual outcomes (included in the training data and/or obtained for past workload data) or relative to prior predictive results). Maughan et al. fails to explicitly disclose based on a first profile . Analogous art Walter discloses based on a first profile whose evaluation item is at least one of a data type, a character code, or an anomalous value (see Walter, para [0184], wherein detecting data drift includes determining a difference between the data profile of the predicted data and the data profile of the event data. 
For example, drift may be detected based on a difference between the covariance matrix of the predicted data and a covariance matrix of the event data; para [0046], wherein generating data models (e.g., type of data model); para [0073], wherein the actual data can include unstructured data (e.g., character strings, tokens, and the like); and para [0182], wherein event data (e.g., variance, sampling rate, detecting a measured value falls inside or outside a particular range or above or below particular threshold, etc.)), Analogous art Walter discloses the first evaluation result includes an evaluation result on each of the plurality of instances of first data, the evaluation result being based on the first profile (see Walter, para [0184], wherein detecting data drift includes determining a difference between the data profile of the predicted data and the data profile of the event data; para [0040], wherein the cloud computing instances can be general-purpose computing devices; para [0130], wherein evaluate a number of duplicate elements in each of the synthetic dataset and reference data stream dataset), and Analogous art Walter discloses the second evaluation result includes an evaluation result on each of the plurality of instances of second data, the evaluation result being based on the first profile (see Walter, para [0184], wherein detecting data drift includes determining a difference between the data profile of the predicted data and the data profile of the event data; para [0110], wherein system 100 can compare a synthetic dataset to a normalized reference dataset, a synthetic dataset to an actual (unnormalized) dataset, or to compare two datasets according to a similarity metric consistent with disclosed embodiments. For example, in some embodiments, model optimizer 107 can be configured to perform such comparisons). Maughan directed to a system for drift detection and correction for predictive analytics. 
Walter directed to detecting data drift for data used in machine learning models. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Maughan, regarding the System for Predictive Drift Detection and Correction , to have included based on a first profile whose evaluation item is at least one of a data type, a character code, or an anomalous value, the first evaluation result includes an evaluation result on each of the plurality of instances of first data, the evaluation result being based on the first profile, and the second evaluation result includes an evaluation result on each of the plurality of instances of second data, the evaluation result being based on the first profile because both inventions teach improving the quality of the synthetic data model. Further, the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. Regarding claim 3, Maughan discloses the information processing device according to claim 1, wherein the evaluator includes a statistics evaluator that evaluates statistics of the plurality of instances of first data and statistics of the plurality of instances of second data (see Maughan, para [0064], wherein the drift detection module 204 may break up and/or group results from the prediction module 202 into classes or sets ( e.g., by row, by value, by time, or the like) and may perform a statistical analysis of the classes or sets. 
For example, the drift detection module 204 may determine that a size and/or ratio of one or more classes or sets has changed and/or drifted over time), the first evaluation result includes an evaluation result on each of the plurality of instances of first data, the evaluation result (see Maughan, para [0059], wherein one or more predictive results may affect one or more records. In one embodiment, a drift phenomenon may or pertain to a single record of workload data, or affect a single result. For example, if the training data establishes or suggests an expected range for a data value, the drift detection module 204 may detect an out-of-range value), and the second evaluation result includes an evaluation result on each of the plurality of instances of second data, the evaluation result (see Maughan, para [0058], wherein an input drift or workload data drift phenomenon may include a change in workload data, in comparison to training data or past workload data. Similarly, an output drift phenomenon may include a change in predictive results from a model, relative to actual outcomes (included in the training data and/or obtained for past workload data) or relative to prior predictive results). Maughan et al. fails to explicitly disclose based on a second profile whose evaluation item is at least one statistic . Analogous art Walter discloses the evaluator includes a statistics evaluator that evaluates statistics of the plurality of instances of first data and statistics of the plurality of instances of second data, based on a second profile whose evaluation item is at least one statistic (see Walter, paras [0191]-[0192], wherein a baseline data metric of the data profile of the baseline synthetic data is determined, the data profile including the data schema and a statistical profile. For example, step 1906 may include determining a baseline covariance matrix of the baseline synthetic data…. 
Current input data may be entirely composed of actual data, entirely composed of synthetic data, or include a mix of synthetic data and actual data), Analogous art Walter discloses the first evaluation result includes an evaluation result on each of the plurality of instances of first data, the evaluation result being based on the second profile (see Walter, paras [0191]-[0192], wherein a baseline data metric of the data profile of the baseline synthetic data is determined, the data profile including the data schema and a statistical profile. For example, step 1906 may include determining a baseline covariance matrix of the baseline synthetic data…. Current input data may be entirely composed of actual data, entirely composed of synthetic data, or include a mix of synthetic data and actual data; para [0184], wherein detecting data drift includes determining a difference between the data profile of the predicted data and the data profile of the event data; para [0040], wherein the cloud computing instances can be general-purpose computing devices; and para [0130], wherein evaluate a number of duplicate elements in each of the synthetic dataset and reference data stream dataset), and Analogous art Walter discloses the second evaluation result includes an evaluation result on each of the plurality of instances of second data, the evaluation result being based on the second profile (see Walter, paras [0191]-[0192], wherein a baseline data metric of the data profile of the baseline synthetic data is determined, the data profile including the data schema and a statistical profile. For example, step 1906 may include determining a baseline covariance matrix of the baseline synthetic data…. 
Current input data may be entirely composed of actual data, entirely composed of synthetic data, or include a mix of synthetic data and actual data; para [0184], wherein detecting data drift includes determining a difference between the data profile of the predicted data and the data profile of the event data; and para [0110], wherein system 100 can compare a synthetic dataset to a normalized reference dataset, a synthetic dataset to an actual (unnormalized) dataset, or to compare two datasets according to a similarity metric consistent with disclosed embodiments. For example, in some embodiments, model optimizer 107 can be configured to perform such comparisons). One of ordinary skill in the art would have recognized that applying the known technique of Walter would have yielded predictable results and resulted in an improved system for the same reasons as stated above with respect to claim 2.

Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Maughan et al. (US Pub No. 2017/0330109) (hereinafter Maughan) in view of Olgiati et al. (US Pub No. 2021/0097431) (hereinafter Olgiati).

Regarding claim 4, Maughan discloses the information processing device according to claim 1, wherein the evaluator includes a learning evaluator that evaluates the plurality of instances of second data (see Maughan, para [0116], wherein a predictive ensemble comprises an organized set of a plurality of learned functions.
Providing a classification, a confidence metric, an inferred function, a regression function, an answer, a prediction, a recognized pattern, a rule, a recommendation, or another result using a predictive ensemble), and the second evaluation result includes an evaluation result on each of the plurality of instances of second data (see Maughan, para [0070], wherein a record granularity (e.g., indicating which record(s) include one or more drifted values) … the predict-time fix module 206 provides a drift flag or other indicator indicating an importance and/or priority of the drifted record and/or feature (e.g., a ranking of the drifted record and/or feature relative to other records and/or features in order of importance or impact on a prediction or other result, an estimated or otherwise determined impact of the drifted record and/or feature on a prediction or other result)).

Maughan fails to explicitly disclose the limitations "based on a third profile whose evaluation item is at least one feature in the machine learning" and "based on the third profile."

Analogous art Olgiati discloses wherein the evaluator includes a learning evaluator that evaluates the plurality of instances of second data based on a third profile whose evaluation item is at least one feature in the machine learning (see Olgiati, para [0027], wherein, in profiling of the model training process, the analysis may use the collected profiling data for rule evaluation to detect performance conditions or performance problems, e.g., bottlenecks in the training cluster, and the analysis system may automatically generate alarms when particular conditions are detected or initiate actions to modify the training; and para [0091], wherein debugging of machine learning model training uses tensor data, including version comparison using tensor data from a current run and tensor data from a prior run, according to some embodiments; in some embodiments, two different versions or instances of a model training process may be implemented).

Analogous art Olgiati further discloses that the second evaluation result includes an evaluation result on each of the plurality of instances of second data, the evaluation result being based on the third profile (see Olgiati, para [0027], wherein, in profiling of the model training process, the analysis may use the collected profiling data for rule evaluation to detect performance conditions or performance problems, e.g., bottlenecks in the training cluster, and the analysis system may automatically generate alarms when particular conditions are detected or initiate actions to modify the training; and paras [0090]-[0091], wherein debugging of machine learning model training using tensor data includes modification of model training according to results of an analysis, according to some embodiments; the machine learning analysis system 1270 may include a component 1474 for training modification, and the training modification 1474 may represent one or more actions taken to remediate detected problems and/or improve the training).

Maughan is directed to a system for drift detection and correction for predictive analytics. Olgiati is directed to debugging and profiling of machine learning model training. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Maughan, regarding the system for predictive drift detection and correction, to have included a learning evaluator that evaluates the plurality of instances of second data based on a third profile whose evaluation item is at least one feature in the machine learning, with the second evaluation result including an evaluation result on each of the plurality of instances of second data, the evaluation result being based on the third profile, because both inventions teach improving the quality of the synthetic data model.
Further, the claimed invention is merely a combination of old elements; in the combination, each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.

Regarding claim 9, claim 9 is rejected based upon the same rationale as the rejection of claim 1, since it is a method claim corresponding to the information processing device claim. Claim 9 recites the additional feature of evaluating quality of a plurality of instances of second data to generate a second evaluation result (see Maughan, paras [0125]-[0127]).

Regarding claim 10, claim 10 is rejected based upon the same rationale as the rejection of claim 1, since it is a non-transitory computer-readable recording medium claim corresponding to the information processing device claim. Claim 10 recites additional features such as a non-transitory computer-readable recording medium (see Maughan, para [0027]).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: US Pub. No. 2021/0224696; US Pub. No. 2016/0342903; US Pub. No. 2022/0188694; US Pub. No. 2017/0372232; US Pub. No. 2022/0036201; US Pub. No. 2020/0151619; S. Saurav, P. Malhotra, V. TV, N. Gugulothu, "Online anomaly detection with concept drift adaptation using recurrent neural networks," … conference on data …, 2018, dl.acm.org; and F. Stertz, S. Rinderle-Ma, J. Mangler, "Analyzing process concept drifts based on sensor event streams during runtime," International Conference on Business …, 2020, Springer.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to HAFIZ KASSIM, whose telephone number is (571) 272-8534. The examiner can normally be reached Mon - Fri, 8am - 5pm EST. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Rutao Wu, can be reached at (571) 272-6045.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/HAFIZ A KASSIM/
Primary Examiner, Art Unit 3623
03/16/2026
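For context on the cited art: the drift-detection approach the examiner attributes to Walter and Maughan (computing a statistical profile of baseline synthetic data, including means and a baseline covariance matrix, comparing it against the profile of current input data, and flagging drift at record granularity) can be sketched roughly as follows. This is an illustrative reconstruction, not code from any cited reference; the function names, the combined mean-plus-covariance score, and the z-score threshold are all hypothetical choices for the sketch.

```python
import numpy as np

def data_profile(data: np.ndarray) -> dict:
    """Simple statistical profile of a dataset: per-column means and a
    covariance matrix (analogous to Walter's 'baseline covariance matrix')."""
    return {"mean": data.mean(axis=0), "cov": np.cov(data, rowvar=False)}

def drift_score(baseline: dict, current: np.ndarray) -> float:
    """Dataset-level drift: distance between baseline and current profiles.
    Here, Frobenius norm of the covariance difference plus the Euclidean
    distance between the column means (one possible similarity metric)."""
    cur = data_profile(current)
    cov_diff = np.linalg.norm(baseline["cov"] - cur["cov"], ord="fro")
    mean_diff = np.linalg.norm(baseline["mean"] - cur["mean"])
    return float(cov_diff + mean_diff)

def flag_drifted_records(baseline: dict, current: np.ndarray,
                         z_thresh: float = 3.0) -> np.ndarray:
    """Record-granularity flags (cf. Maughan's drift flag indicating which
    records include drifted values): mark rows whose z-score against the
    baseline mean/std exceeds the threshold in any column."""
    std = np.sqrt(np.diag(baseline["cov"]))
    z = np.abs((current - baseline["mean"]) / std)
    return (z > z_thresh).any(axis=1)

# Example: a shifted batch scores higher than a batch drawn from the
# baseline distribution, and an outlying record is flagged individually.
rng = np.random.default_rng(0)
profile = data_profile(rng.normal(0.0, 1.0, size=(2000, 3)))
in_dist = rng.normal(0.0, 1.0, size=(500, 3))
shifted = rng.normal(3.0, 1.0, size=(500, 3))
print(drift_score(profile, in_dist) < drift_score(profile, shifted))
print(flag_drifted_records(profile, np.array([[0.0, 0.0, 0.0],
                                              [10.0, 0.0, 0.0]])))
```

A real system along Walter's lines would also compare the data schema, not only the statistical profile, and would choose thresholds empirically; this sketch covers only the numeric comparison.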

Prosecution Timeline

Sep 08, 2023
Application Filed
Mar 15, 2026
Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602638
RISK MANAGEMENT SYSTEM AND RISK MANAGEMENT METHOD
2y 5m to grant Granted Apr 14, 2026
Patent 12586008
MANAGING HOTEL GUEST HOUSEKEEPING WITHIN AN AUTOMATED GUEST SATISFACTION AND SERVICES SCHEDULING SYSTEM
2y 5m to grant Granted Mar 24, 2026
Patent 12561706
SYSTEMS AND METHODS FOR MANAGING VEHICLE OPERATOR PROFILES BASED ON RELATIVE TELEMATICS INFERENCES VIA A TELEMATICS MARKETPLACE
2y 5m to grant Granted Feb 24, 2026
Patent 12548038
Realtime Busyness for Places
2y 5m to grant Granted Feb 10, 2026
Patent 12541724
SYSTEMS AND METHODS FOR TIME-SERIES FORECASTING
2y 5m to grant Granted Feb 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
44%
Grant Probability
98%
With Interview (+53.7%)
2y 11m
Median Time to Grant
Low
PTA Risk
Based on 338 resolved cases by this examiner. Grant probability derived from career allow rate.
