Prosecution Insights
Last updated: April 19, 2026
Application No. 18/348,793

METHOD AND SYSTEM FOR INTEGRATED MONITORING OF CHATBOTS FOR CONCEPT DRIFT DETECTION

Non-Final OA — §101, §103
Filed: Jul 07, 2023
Examiner: MCCORD, PAUL C
Art Unit: 2692
Tech Center: 2600 — Communications
Assignee: PwC Product Sales LLC
OA Round: 3 (Non-Final)
Grant Probability: 69% (Favorable)
OA Rounds: 3-4
To Grant: 3y 5m
With Interview: 96%

Examiner Intelligence

Career Allow Rate: 69%, above average (393 granted / 569 resolved; +7.1% vs TC avg)
Interview Lift: +26.6% among resolved cases with interview (strong)
Typical Timeline: 3y 5m avg prosecution; 41 currently pending
Career History: 610 total applications across all art units

Statute-Specific Performance

§101: 10.5% (-29.5% vs TC avg)
§103: 54.0% (+14.0% vs TC avg)
§102: 6.8% (-33.2% vs TC avg)
§112: 20.9% (-19.1% vs TC avg)
Tech Center averages are estimates • Based on career data from 569 resolved cases

Office Action

§101 §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Applicant's amendments to Claims 1, 2, 4, 8-12, 16, 24, 25 as filed 8/8/25 suffice to obviate the 35 U.S.C. 101 rejection thereof.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 2, 4, 7-11, 13-16, 18-20, 24, 25 are rejected under 35 U.S.C. 103 as being unpatentable over Khatami: 20210406726 (hereinafter Kha) in view of Huang: 20210141862 (hereinafter Hua) and further in view of Gopalan: 20180285772 (hereinafter Gop).

Regarding claim 1, Kha teaches: A method for monitoring concept drift in a trained model, the method comprising: receiving a first dataset representing model operations executed by the trained model at a first time (Kha: ¶ 32, 34, 36-39, 42; fig 1-4: system gets a first and/or subsequent window of upcoming data; performing a look-back analysis and an MSE analysis thereon, such as on a window-by-window basis; such as by proceeding stepwise through subsequent windows to predict values at each time step), wherein the first dataset comprises at least one of user input data, vector data representative of the user input data, and model output data (Kha: ¶ 37-39; fig 1, 2: drift detection at 220 receives model output data in the form of MSE values determined from model predictions); applying a data processing operation to the first dataset to determine a first result data based on the first dataset (Kha: ¶ 37-39; fig 1, 2: system determines predicted values and averaging function results, MSE results, etc. of predictions upon the first data); receiving a second dataset representing model operations executed by the trained model at a second time (Kha: ¶ 32, 34, 36-39, 42; fig 1-4: such as by practicing the method upon one or more subsequent time windows); applying a data processing operation to the second dataset to determine a second result data based on the second dataset (Kha: ¶ 32, 34, 36-39, 42; fig 1-4: such as by practicing the method upon one or more subsequent time windows to predict window values upon the time step corresponding to the one or more subsequent time windows); determining, based on the first result data, a difference between the first result data and the second result data (Kha: ¶ 32, 37-39; fig 2: system determines and excludes outliers by comparison of predicted values, and averaging, MSE, etc. results with real values for each of the data); determining, based on the difference, whether concept drift has occurred (Kha: ¶ 32, 37-39; fig 2, 7: drift of a model determined based on iterative comparisons of the input data and determined data), wherein determining whether concept drift has occurred comprises determining that a characteristic of a distribution associated with the trained model has changed (Kha: ¶ 37-39; fig 1, 2: MSE distribution beyond a threshold similarity of a previous distribution determined to comprise model drift); and in accordance with a determination that concept drift has occurred, transmitting an instruction to update training of the trained model; and retraining the trained model (Kha: ¶ 6, 32, 37-39; fig 2, 7: if drift is detected the model is updated, such as based on or emergent from the detected drift), and additionally comprises conducting the training, retraining, etc. based on applying one or more labels to data in the first, subsequent, etc. dataset (Kha: ¶ 53, etc.; Fig 6: such as labeling determined values as an outlier).

Kha discusses the determining of first, second, etc. results, differences therebetween, and model update based thereon dependent on an estimation of model drift, and does not explicitly teach determining concept drift wherein determining concept drift comprises determining that a distribution associated with an intent classification of the trained model has changed, nor does Kha explicitly teach a system, method, etc. wherein retraining the trained model comprises: applying one or more labels to data in the first dataset, the one or more labels associated with an intent classification of the data and comprising at least one new intent classification; and updating the trained model based at least in part on the labeled first dataset.

In a related field of endeavor Hua teaches a system and method for intent classification comprising an estimation of model drift by determining that new intents and/or entities are discovered (Hua: Abstract; ¶ 3, 21, 35: such as based on additional questions from users upon different time windows, user contexts, etc.) and thereby updating dialog paths, rules, other data structures, etc. based thereon to generate new intent/dialog pairs (Hua: ¶ 3, 21, 35, 44; Fig 3: encounter by a chatbot of a new intent, entity, etc. mandates adjustment of controlling information, such as at least update of an extant dialog path, intent/entity set, etc. which comprise new labelled data), which trigger a retraining of the model based on the discovered new intent, entity, etc. (Hua: ¶ 35, 46, 56). It would have been obvious to one of ordinary skill in the art before the effective filing date of the instant application to retrain the Kha model based on the determination of intent-based dialog contexts as taught or suggested by Hua, such as for maintaining an estimate of drift against the emergence of new intents, entities, etc. and updating the model and triggering retraining thereof based thereon, and for at least the purpose of adapting the system, method, etc. to emergent data upon the determining, predicting, etc. of drift, such as to track drift with respect to new, emergent, etc. user data. One of ordinary skill in the art would have expected only predictable results therefrom.

Kha in view of Hua does not explicitly discuss determining that a characteristic of a distribution associated with an intent classification of the trained model has changed. In a related field of endeavor Gop teaches a system for dynamic update of models wherein the system computes and processes distributions of a training data set comprising labelled data, processes new data to determine a likelihood of the new data with respect thereto, and implements training when the likelihood of the new data exceeds a characteristic of the distribution, such as by the accumulation of low-likelihood data points (Gop: Abstract; ¶ 12, 27, 32). It would have been obvious to one of ordinary skill in the art before the effective filing date of the instant application to modify the drift monitoring and retraining framework of Kha in view of Hua by triggering the retraining system thereof using a distribution-based trigger such as that of Gop; Kha in view of Hua recognizes that new intents appear over time, and Gop teaches or suggests a method to improve based on the teaching that additional data should be collected with respect thereto and used to retrain the model, such as for collecting new training data in the form of the emergent intents, entities, etc. of Kha in view of Hua, and for at least the purpose of maintaining distributions of parameters to detect divergence therein to track incremental shifts of the input data from the model; one of ordinary skill in the art would have expected only predictable results therefrom.
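As an illustrative sketch (not code from the application or the cited art) of the window-by-window MSE comparison described above: score each incoming window of predictions, then flag drift when a window's error departs from the baseline error distribution. The three-standard-deviation threshold and all values are assumptions.

```python
import numpy as np

def mse(y_true, y_pred):
    """Mean squared error over one window of model predictions."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.mean((y_true - y_pred) ** 2))

def drift_detected(baseline_mses, window_mse, k=3.0):
    """Flag drift when a new window's MSE sits more than k standard
    deviations from the mean of previously observed window MSEs."""
    mu, sigma = np.mean(baseline_mses), np.std(baseline_mses)
    return bool(abs(window_mse - mu) > k * max(sigma, 1e-12))

# Four stable baseline windows, then one consistent and one degraded window.
baseline = [0.9, 1.1, 1.0, 0.95]
print(drift_detected(baseline, 1.05))  # False: within the usual spread
print(drift_detected(baseline, 4.0))   # True: window error has jumped
```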
Regarding claim 2, Kha in view of Hua in view of Gop teaches or suggests: The method of claim 1, wherein the data processing operation comprises a statistical analysis operation and wherein the first result data comprises a first statistical output (Kha: ¶ 6, 32, 37-39; fig 1, 2, 7: such as by computation of a mean square error on the first dataset and/or second dataset). The claim is considered obvious over Kha as modified by Hua and Gop as addressed in the base claim, as it would have been obvious to apply the further teaching of Kha, Hua, and/or Gop to the modified device of Kha, Hua, and Gop; one of ordinary skill in the art would have expected only predictable results therefrom.

Regarding claim 4, Kha in view of Hua in view of Gop teaches or suggests: The method of claim 1, wherein the data processing operation comprises a model error rate determination analysis and wherein the first result data comprises a first error rate (Kha: ¶ 31-39: system maintains an error distribution over time to determine changes in time series data and a need for update of a model, upon which the system invokes the update). The claim is considered obvious over Kha as modified by Hua and Gop as addressed in the base claim, as it would have been obvious to apply the further teaching of Kha, Hua, and/or Gop to the modified device of Kha, Hua, and Gop; one of ordinary skill in the art would have expected only predictable results therefrom.

Regarding claim 7, Kha in view of Hua in view of Gop teaches or suggests: The method of claim 6, wherein the user input data comprises natural language data (Hua: ¶ 20, 37: such as incoming chat, dialog, etc. data); (Gop: ¶ 3, 10, 21: such as within incoming text data).
The claim is considered obvious over Kha as modified by Hua and Gop as addressed in the base claim, as it would have been obvious to apply the further teaching of Kha, Hua, and/or Gop to the modified device of Kha, Hua, and Gop; one of ordinary skill in the art would have expected only predictable results therefrom.

Regarding claim 8, Kha in view of Hua in view of Gop teaches or suggests: The system of claim 6, wherein the model output data comprises an intent classification generated based on user input data of the first dataset (Hua: ¶ 20, 37, 44, 48, etc.: such as for classifying incoming textual data). The claim is considered obvious over Kha as modified by Hua and Gop as addressed in the base claim, as it would have been obvious to apply the further teaching of Kha, Hua, and/or Gop to the modified device of Kha, Hua, and Gop; one of ordinary skill in the art would have expected only predictable results therefrom.

Regarding claim 9, Kha in view of Hua in view of Gop teaches or suggests: The method of claim 1, wherein receiving the first dataset comprises receiving the first dataset from a shared memory in communication with the trained model (Kha: ¶ 31-39, etc.; Fig 2, 3, etc.: model utilizes first, second, etc. windows of data from a database). The claim is considered obvious over Kha as modified by Hua and Gop as addressed in the base claim, as it would have been obvious to apply the further teaching of Kha, Hua, and/or Gop to the modified device of Kha, Hua, and Gop; one of ordinary skill in the art would have expected only predictable results therefrom.

Regarding claim 10, Kha in view of Hua in view of Gop teaches or suggests: The method of claim 1, wherein the first dataset comprises a predetermined amount of data (Kha: ¶ 31-39, etc.; Fig 2, 3, etc.: model utilizes first, second, etc. windows of data from a database); (Gop: ¶ 33: such as by utilizing data with respect to a determined time window). The claim is considered obvious over Kha as modified by Hua and Gop as addressed in the base claim, as it would have been obvious to apply the further teaching of Kha, Hua, and/or Gop to the modified device of Kha, Hua, and Gop; one of ordinary skill in the art would have expected only predictable results therefrom.

Regarding claim 11, Kha in view of Hua in view of Gop teaches or suggests: The method of claim 1, wherein the first dataset comprises data received during a predetermined period of time (Kha: Abstract; ¶ 2-6, 31-39: window-based technique for determining concept or topic drift over time by quantifying model drift); (Gop: ¶ 33: such as by utilizing data with respect to a determined time window). The claim is considered obvious over Kha as modified by Hua and Gop as addressed in the base claim, as it would have been obvious to apply the further teaching of Kha, Hua, and/or Gop to the modified device of Kha, Hua, and Gop; one of ordinary skill in the art would have expected only predictable results therefrom.

Regarding claim 12, Kha in view of Hua in view of Gop teaches or suggests: The method of claim 1, wherein the method further comprises: receiving a second dataset representing model operations executed by the trained model; and applying a data processing operation to the second dataset to determine the second result data (Kha: Abstract; ¶ 2-6, 31-39, 42; Fig 2: such as determination of MSE with the exclusion of outliers, and determination of drift by comparisons based thereon, such as by practicing the method upon one or more subsequent time windows to predict window values upon the time step corresponding to the one or more subsequent time windows). The claim is considered obvious over Kha as modified by Hua and Gop as addressed in the base claim, as it would have been obvious to apply the further teaching of Kha, Hua, and/or Gop to the modified device of Kha, Hua, and Gop; one of ordinary skill in the art would have expected only predictable results therefrom.
Regarding claim 13, Kha in view of Hua in view of Gop teaches or suggests: The method of claim 12, wherein the second dataset comprises a training dataset that was used to train the trained model (Kha: Abstract; ¶ 2-6, 31-39; Fig 1, 2: comparison of subsequent data windows after retraining includes data used to train a current iteration of the model); (Hua: ¶ 3, 21, 35, 44, 46, 56; Fig 3: such as by iteratively improving the first dataset based thereon); (Gop: Abstract; ¶ 12, 27: such as by retraining on the first, second, etc. dataset). The claim is considered obvious over Kha as modified by Hua and Gop as addressed in the base claim, as it would have been obvious to apply the further teaching of Kha, Hua, and/or Gop to the modified device of Kha, Hua, and Gop; one of ordinary skill in the art would have expected only predictable results therefrom.

Regarding claim 14, Kha in view of Hua in view of Gop teaches or suggests: The method of claim 12, wherein the second dataset comprises a dataset having a similar distribution to a training dataset that was used to train the trained model (Kha: Abstract; ¶ 2-6, 31-39; Fig 1, 2: comparison of subsequent data windows after retraining includes data used to train a current iteration of the model and as such comprises at least a similar distribution, as only outlying values are excluded); (Hua: ¶ 3, 21, 35, 44, 46, 56; Fig 3: model updates with respect to emergent, new, etc. data, thus producing a similarly distributed dataset save for the inclusion of the new intent, entity, etc.); (Gop: Abstract; ¶ 12, 27, 32: model updates based on new data and thus produces an amended training set with a similar distribution). The claim is considered obvious over Kha as modified by Hua and Gop as addressed in the base claim, as it would have been obvious to apply the further teaching of Kha, Hua, and/or Gop to the modified device of Kha, Hua, and Gop; one of ordinary skill in the art would have expected only predictable results therefrom.

Regarding claim 15, Kha in view of Hua in view of Gop teaches or suggests: The method of claim 12, wherein the second dataset comprises an inference dataset processed by the trained model at a different time than the first dataset (Kha: Abstract; ¶ 2-6, 31-39; Fig 1, 2: second dataset generated based on and subsequent to the determination of the first dataset); (Gop: Abstract; ¶ 12, 27, 32: system practices inference based on a sequence of trained, retrained, etc. models at subsequent times). The claim is considered obvious over Kha as modified by Hua and Gop as addressed in the base claim, as it would have been obvious to apply the further teaching of Kha, Hua, and/or Gop to the modified device of Kha, Hua, and Gop; one of ordinary skill in the art would have expected only predictable results therefrom.

Regarding claim 16, Kha in view of Hua in view of Gop teaches or suggests: The method of claim 1, wherein the difference comprises a difference between a first characteristic of the first result data and a second characteristic of the second result data (Kha: Abstract; ¶ 2-6, 31-39; Fig 1, 2: comparison of first and second generated data results from operations and comparisons on characteristics of the first and second data to determine when it becomes different enough to invoke retraining). The claim is considered obvious over Kha as modified by Hua and Gop as addressed in the base claim, as it would have been obvious to apply the further teaching of Kha, Hua, and/or Gop to the modified device of Kha, Hua, and Gop; one of ordinary skill in the art would have expected only predictable results therefrom.
Regarding claim 18, Kha in view of Hua in view of Gop teaches or suggests: The method of claim 1, wherein determining whether concept drift has occurred comprises determining whether the difference is greater than a threshold value (Kha: Abstract; ¶ 2-6, 31-39, 45; Fig 1, 2: system determines if prediction error, MSE, KS distance between first and second data exceeds a threshold); (Gop: Abstract, etc.: difference of detected drift with respect to a predetermined difference threshold iteratively updates the model). The claim is considered obvious over Kha as modified by Hua and Gop as addressed in the base claim, as it would have been obvious to apply the further teaching of Kha, Hua, and/or Gop to the modified device of Kha, Hua, and Gop; one of ordinary skill in the art would have expected only predictable results therefrom.

Regarding claim 19, Kha in view of Hua in view of Gop teaches or suggests: The method of claim 1, wherein transmitting the instruction to update training of the trained model comprises transmitting executable program code configured to cause the trained model to be retrained when the code is executed by one or more processors (Kha: ¶ 6, 32, 37-39; fig 2, 7: if drift is detected the model is updated); (Hua: Abstract: detection of novel intent, entity, etc. instances generates update instructions which iteratively improve a dialog graph, path, etc. and retrain a model); (Gop: Abstract, etc.: difference of detected drift with respect to a predetermined difference threshold iteratively instructs the model to update). The claim is considered obvious over Kha as modified by Hua and Gop as addressed in the base claim, as it would have been obvious to apply the further teaching of Kha, Hua, and/or Gop to the modified device of Kha, Hua, and Gop; one of ordinary skill in the art would have expected only predictable results therefrom.

Regarding claim 20, Kha in view of Hua in view of Gop teaches or suggests: The method of claim 1, wherein the instruction comprises an indication of the character or magnitude of the detected concept drift (Kha: ¶ 6, 32, 37-39; fig 2, 7: first and second/subsequent MSE calculations comprise indicia of the character of a detected drift); (Gop: Abstract: a count of data drift maintained and resolved against a threshold). The claim is considered obvious over Kha as modified by Hua and Gop as addressed in the base claim, as it would have been obvious to apply the further teaching of Kha, Hua, and/or Gop to the modified device of Kha, Hua, and Gop; one of ordinary skill in the art would have expected only predictable results therefrom.

Regarding claim 23, Kha in view of Hua in view of Gop teaches or suggests: The method of claim 1, wherein the trained model is a chatbot (Hua: ¶ 2, 5, 6, etc.: model operative with respect to a chatbot).

Regarding claims 24, 25 – the claims are considered to recite substantially similar subject matter to that of claim 1 and are similarly rejected.

Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Khatami: 20210406726 (hereinafter Kha) in view of Huang: 20210141862 (hereinafter Hua) and further in view of Gopalan: 20180285772 (hereinafter Gop) as applied to claims 1, 2, 4, 7-11, 13-16, 18-20, 24, 25 supra, and further in view of Ackerman, “Theory and Practice of Quality Assurance for Machine Learning Systems” (copy provided by Examiner, published 4/2022 and hereinafter Ack).
Regarding claim 3, Kha in view of Hua in view of Gop teaches or suggests: The method of claim 2, wherein the statistical analysis operation comprises one or more of the following: a Kolmogorov-Smirnov (KS) test (Kha: ¶ 6, 32, 37-39, 44, 56; fig 1, 2, 7: system compares detected first, second MSE values such as based on a KS approach), a maximum mean discrepancy (MMD) test, a least-squares density difference (LSDD) test, a KMeans and chi square test, an equal intensity KMeans (EIKMeans) and chi square test, a Jensen-Shannon (JS) divergence test, and an uncertainty classifier. Examiner has taken official notice of the well-known nature of such algorithms in performing the types of statistical tests claimed, which Applicant failed to timely and specifically traverse in the response to the NF action filed by Applicant 8/8/25, and the well-known nature was accepted as Admitted Prior Art in the final action of 10/8/25. In the arguments accompanying the RCE filed 2/9/26 Applicant traverses, arguing against the well-known nature of the claimed features. As instant and unquestionable demonstration consider Ack, which teaches the utility of MMD to determine drift by tracking a distance between two distributions, such as for chatbot drift (Ack: § 6.2.1, 6.2.2), which would have comprised an obvious inclusion such as for at least the purpose of conducting statistical testing upon the determined data. It would have been obvious to one of ordinary skill in the art before the effective filing date of the instant application to select among well-known tests, such as one or more of KS and/or the Ack taught or suggested MMD, for computation of drift to thereby improve the Kha in view of Hua in view of Gop chatbot drift detection system and method; one of ordinary skill in the art would have expected only predictable results therefrom.
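As a rough sketch of the kind of two-sample distribution test recited in claim 3 (here the KS test via SciPy; the window sizes, shift magnitude, and significance level are illustrative assumptions, not values from the application):

```python
import numpy as np
from scipy.stats import ks_2samp

def ks_drift(baseline_errors, new_errors, alpha=0.05):
    """Two-sample Kolmogorov-Smirnov test: report drift when the two
    windows of per-example errors are unlikely to share a distribution."""
    result = ks_2samp(baseline_errors, new_errors)
    return bool(result.pvalue < alpha)

rng = np.random.default_rng(42)
baseline = rng.normal(0.0, 1.0, 500)  # errors under the original regime
shifted = rng.normal(1.5, 1.0, 500)   # errors after the data has shifted

print(ks_drift(baseline, shifted))  # True: distributions clearly differ
```

An MMD or JS-divergence check would slot into the same comparison point; KS is shown only because it is the simplest of the listed tests to demonstrate.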
Please see additionally Feng: 20260012274: ¶ 195-199, which discusses fine tuning by determining drift based on distance and cites the utility of well-known statistical analysis operations such as MMD, JS divergence, KL divergence, etc.; and Kamulete: 20200410403: ¶ 54, which discusses determining drift based on distance and cites the utility of well-known statistical analysis operations such as MMD, KS divergence, KL divergence, etc.

Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Khatami: 20210406726 (hereinafter Kha) in view of Huang: 20210141862 (hereinafter Hua) and further in view of Gopalan: 20180285772 (hereinafter Gop) as applied to claims 1, 2, 4, 7-11, 13-16, 18-20, 24, 25 supra, and further in view of Rafael de Lima Cabral, “Concept drift detection based on Fisher’s Exact test” (copy provided by Examiner, published 2018 and hereinafter Cab).

Regarding claim 5, Kha in view of Hua in view of Gop teaches or suggests: The method of claim 4, wherein the model error rate determination analysis comprises one or more of the following: Fisher’s test, and a statistical test of equal proportions (STEPD). Examiner has taken official notice of the well-known nature of such algorithms in performing the types of statistical tests claimed, which Applicant failed to timely and specifically traverse in the response to the NF action filed by Applicant 8/8/25, and the well-known nature was accepted as Admitted Prior Art (APA: please see MPEP 2144.03) in the final action of 10/8/25. In the arguments accompanying the RCE filed 2/9/26 Applicant traverses, arguing against the well-known nature of the claimed features. As instant and unquestionable demonstration consider Cab, which teaches that Fisher’s test and a statistical test of equal proportions comprise well-known algorithms by which to detect concept drift (Cab: Abstract, etc.).
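A minimal sketch of the Fisher's-exact-test error-rate comparison the Cab reference is cited for: compare error counts from two windows via a 2x2 contingency table. The counts and the 0.05 level below are invented for illustration.

```python
from scipy.stats import fisher_exact

def error_rate_drift(err_old, n_old, err_new, n_new, alpha=0.05):
    """Fisher's exact test on [[errors, correct]] counts for two windows:
    flag drift when the two error proportions differ significantly."""
    table = [[err_old, n_old - err_old], [err_new, n_new - err_new]]
    _, p_value = fisher_exact(table)
    return bool(p_value < alpha)

# 5% errors in the reference window vs. 20% in the newest window.
print(error_rate_drift(10, 200, 40, 200))  # True: error rate has drifted
print(error_rate_drift(10, 200, 12, 200))  # False: 5% vs. 6% is noise
```

A STEPD-style check would replace the exact test with a normal-approximation test of equal proportions over the same two windows.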
It would have been obvious to one of ordinary skill in the art before the effective filing date of the instant application to utilize well-known tests, such as the Cab taught or suggested Fisher’s test and STEPD, for computation of drift, such as by quantification of errors, error rate, etc., to thereby improve the Kha in view of Hua in view of Gop chatbot drift detection system and method; one of ordinary skill in the art would have expected only predictable results therefrom. Please see additionally the “Boschloo’s Test” Wikipedia page provided by Examiner and available at least 1/28/22, which relates Fisher’s test to a statistical determination of equal proportion.

Claims 17, 22 are rejected under 35 U.S.C. 103 as being unpatentable over Khatami: 20210406726 (hereinafter Kha) in view of Huang: 20210141862 (hereinafter Hua) and further in view of Gopalan: 20180285772 (hereinafter Gop) as applied to claims 1, 2, 4, 7-11, 13-16, 18-20, 24, 25 supra, and further in view of Bjorelind, “Clustering and Summarization of Chat Dialogues” (copy provided by Examiner, published 2021 and hereinafter Bjo).

Regarding claim 17, Kha in view of Hua in view of Gop teaches or suggests: The method of claim 1, wherein the difference comprises a difference between a first determined number of clusters of the first dataset and a second determined number of clusters in a second dataset (Kha: ¶ 52, 56, 92: system detects anomalous subsets of data up to and including the entire set of time series data, such as by detection of a vector distance between first and second data characteristics, MSE, KS distance, etc. thereof); (Gop: ¶ 11, 13, 29-31: system operates over various clustering algorithms to determine features with respect to a model which is retrained based on emergently determined features). While Kha in view of Hua in view of Gop does not explicitly teach intent, topic, concept, etc. determination based on clustering of parameters about a particular topic neighborhood in an overall state space, Examiner has taken official notice of the well-known nature of such algorithms in performing the types of statistical tests claimed, which Applicant failed to timely and specifically traverse in the response to the NF action filed by Applicant 8/8/25, and the well-known nature was accepted as Admitted Prior Art in the final action of 10/8/25. In the arguments accompanying the RCE filed 2/9/26 Applicant traverses, arguing against the well-known nature of the claimed features. As instant and unquestionable demonstration consider Bjo, which teaches an Adjusted Rand Index as a manner of comparing two sets of clusters with respect to the number of clusters in each, such as for determining a similarity therebetween (Bjo: § 2.2.5.2, 3.6.2: such as available in the well-known scikit-learn library). It would have been obvious to one of ordinary skill in the art before the effective filing date of the instant application to utilize well-known clustering packages, such as the Bjo taught or suggested scikit-learn package including the Adjusted Rand Index, for analysis of clustering parameters to thereby improve the Kha in view of Hua in view of Gop chatbot drift detection system and method; one of ordinary skill in the art would have expected only predictable results therefrom. Please see also scikit-learn documentation provided by Examiner and available at least 6/10/2022, particularly § 2.3.10.1: Rand Index for numerical cluster comparison and analysis; and § 2.3.7: DBSCAN for spatial clustering.
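The Adjusted Rand Index comparison the action draws from Bjo and the scikit-learn documentation can be sketched as follows; the toy cluster labelings below are invented for illustration, standing in for intent clusters computed over two analysis windows.

```python
from sklearn.metrics import adjusted_rand_score

# Cluster assignments for the same six utterances in different windows.
# ARI is 1.0 for identical partitions (even with permuted label names)
# and falls toward 0 (or below) as the partitions diverge.
window_a = [0, 0, 1, 1, 2, 2]
window_b = [1, 1, 2, 2, 0, 0]  # same partition, relabeled
window_c = [0, 1, 0, 1, 0, 1]  # unrelated partition

print(adjusted_rand_score(window_a, window_b))  # 1.0
print(adjusted_rand_score(window_a, window_c))  # about -0.36
```

A low ARI between consecutive windows (or a changed cluster count) would serve as the kind of clustering-based drift signal claim 17 recites.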
Regarding claim 22, Kha in view of Hua in view of Gop teaches or suggests: The method of claim 21, wherein the one or more labels applied to the data are determined based on a spatial clustering analysis of the first dataset (Kha: ¶ 52, 56, 92: system detects anomalous subsets of data up to and including the entire set of time series data, such as by detection of a vector distance between first and second data characteristics, MSE, KS distance, etc. thereof); (Gop: ¶ 11, 13, 29-31: system operates over various clustering algorithms to determine features with respect to a model which is retrained based on emergently determined features). While Kha in view of Hua in view of Gop does not explicitly teach intent, topic, concept, etc. determination based on clustering of parameters, Examiner has taken official notice of the well-known nature of such algorithms in performing the types of statistical tests claimed, which Applicant failed to timely and specifically traverse in the response to the NF action filed by Applicant 8/8/25, and the well-known nature was accepted as Admitted Prior Art in the final action of 10/8/25. In the arguments accompanying the RCE filed 2/9/26 Applicant traverses, arguing against the well-known nature of the claimed features. As instant and unquestionable demonstration consider Bjo, which teaches Density-Based Spatial Clustering (DBSCAN) as a manner of spatially clustering textual data (Bjo: § 1.4.2, 3.6.1: such as available in the well-known scikit-learn library), such as for clustering textual data while minimizing noise (Bjo: § 2.2.3, 2.2.4, 3.6.1, etc.).
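A minimal sketch of the scikit-learn DBSCAN clustering the action cites from Bjo and the scikit-learn documentation (§ 2.3.7). The 2-D points stand in for real utterance embeddings, and the eps/min_samples settings are assumptions chosen for the toy data.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Two tight groups of "embedding" points plus one far-away point;
# DBSCAN assigns the groups labels 0 and 1 and marks the stray
# point as noise with the label -1.
points = np.array([
    [0.0, 0.0], [0.1, 0.0], [0.0, 0.1],  # group near the origin
    [5.0, 5.0], [5.1, 5.0], [5.0, 5.1],  # group near (5, 5)
    [20.0, 20.0],                        # isolated point -> noise
])
labels = DBSCAN(eps=0.5, min_samples=2).fit_predict(points)
print(labels.tolist())  # [0, 0, 0, 1, 1, 1, -1]
```

In the claimed setting, each discovered cluster could receive an intent label while the -1 noise points are held out, matching the labeling-by-spatial-clustering step of claim 22.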
It would have been obvious to one of ordinary skill in the art before the effective filing date of the instant application to utilize well-known clustering packages, such as the Bjo taught or suggested spatial clustering using the scikit-learn package in concert with the Bjo taught or suggested Adjusted Rand Index, for analysis of clustering parameters to thereby improve the Kha in view of Hua in view of Gop chatbot drift detection system and method; one of ordinary skill in the art would have expected only predictable results therefrom. Please see also scikit-learn documentation provided by Examiner and available at least 6/10/2022, particularly § 2.3.10.1: Rand Index for numerical cluster comparison and analysis; and § 2.3.7: DBSCAN for spatial clustering.

Response to Arguments

Applicant’s arguments in concert with amendments to the claims, see Remarks and Claims, filed 2/9/26, with respect to the rejection(s) of claim(s) 1-25 under 35 USC 103 over Khatami in view of Erhard in view of He have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Khatami in view of Huang in view of Gopalan, and/or Khatami in view of Huang in view of Gopalan in view of Ackerman, and/or Khatami in view of Huang in view of Gopalan in view of Rafael de Lima Cabral, and/or Khatami in view of Huang in view of Gopalan in view of Bjorelind.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to PAUL C MCCORD whose telephone number is (571) 270-3701. The examiner can normally be reached 7:30-6:30 M-F. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, CAROLYN EDWARDS, can be reached at (571) 270-7136. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/PAUL C MCCORD/
Primary Examiner, Art Unit 2692

Prosecution Timeline

Jul 07, 2023
Application Filed
May 02, 2025
Non-Final Rejection — §101, §103
Jul 29, 2025
Interview Requested
Aug 07, 2025
Examiner Interview Summary
Aug 07, 2025
Applicant Interview (Telephonic)
Aug 08, 2025
Response Filed
Oct 06, 2025
Final Rejection — §101, §103
Feb 09, 2026
Request for Continued Examination
Feb 18, 2026
Response after Non-Final Action
Mar 23, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603094
ADAPTIVE PROCESSING WITH MULTIPLE MEDIA PROCESSING NODES
2y 5m to grant Granted Apr 14, 2026
Patent 12592238
INFORMATION PROCESSING METHOD, INFORMATION PROCESSING DEVICE, AND NON-TRANSITORY COMPUTER READABLE RECORDING MEDIUM STORING INFORMATION PROCESSING PROGRAM
2y 5m to grant Granted Mar 31, 2026
Patent 12593192
MEDIA PLAYBACK BASED ON SENSOR DATA
2y 5m to grant Granted Mar 31, 2026
Patent 12572323
DYNAMIC AUDIO CONTENT GENERATION
2y 5m to grant Granted Mar 10, 2026
Patent 12567003
TECHNOLOGIES FOR DECENTRALIZED FLEET ANALYTICS
2y 5m to grant Granted Mar 03, 2026
Based on the 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
69%
Grant Probability
96%
With Interview (+26.6%)
3y 5m
Median Time to Grant
High
PTA Risk
Based on 569 resolved cases by this examiner. Grant probability derived from career allow rate.
