Prosecution Insights
Last updated: April 19, 2026
Application No. 18/620,812

CONTENT GENERATION RELATED POLICY DRIFT

Non-Final OA — §101, §103, §112
Filed
Mar 28, 2024
Examiner
GEBREMICHAEL, BRUK A
Art Unit
3715
Tech Center
3700 — Mechanical Engineering & Manufacturing
Assignee
The Toronto-Dominion Bank
OA Round
3 (Non-Final)
22%
Grant Probability
At Risk
3-4
OA Rounds
4y 5m
To Grant
47%
With Interview

Examiner Intelligence

Grants only 22% of cases
22%
Career Allow Rate
152 granted / 680 resolved
-47.6% vs TC avg
Strong +25% interview lift
+25.0%
Interview Lift
Allow rate with vs. without an interview, based on resolved cases with interview
Typical timeline
4y 5m
Avg Prosecution
61 currently pending
Career history
741
Total Applications
across all art units
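
The headline figures in this panel appear to reduce to simple arithmetic on the examiner's career record. The short sketch below shows the assumed derivation; treating the interview lift as an additive percentage-point adjustment is an assumption about the dashboard's method, not a documented formula.

# Hypothetical sketch of how the displayed figures appear to be derived.
# Assumption: the "+25.0% Interview Lift" is added as percentage points to the baseline.
granted, resolved = 152, 680
career_allow_rate = granted / resolved               # 0.2235... -> displayed as 22%
interview_lift = 0.25                                # "+25.0% Interview Lift" (assumed additive)
with_interview = career_allow_rate + interview_lift  # 0.4735... -> displayed as 47%
print(f"baseline {career_allow_rate:.0%}, with interview {with_interview:.0%}")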

Statute-Specific Performance

§101
23.8%
-16.2% vs TC avg
§103
36.6%
-3.4% vs TC avg
§102
6.4%
-33.6% vs TC avg
§112
27.9%
-12.1% vs TC avg
Deltas shown relative to the Tech Center average estimate • Based on career data from 680 resolved cases
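
A worked reading of the deltas above, assuming each "vs TC avg" figure is a percentage-point difference from the Tech Center average (an assumption about the chart's convention, not a stated definition):

# Hypothetical back-calculation of the implied Tech Center average per statute,
# assuming the "vs TC avg" deltas are percentage-point differences.
examiner_rate = {"§101": 23.8, "§103": 36.6, "§102": 6.4, "§112": 27.9}
delta_vs_tc   = {"§101": -16.2, "§103": -3.4, "§102": -33.6, "§112": -12.1}
for statute, rate in examiner_rate.items():
    implied_tc_avg = rate - delta_vs_tc[statute]     # e.g. 23.8 - (-16.2) = 40.0
    print(f"{statute}: examiner {rate}% vs implied TC avg {implied_tc_avg:.1f}%")

Under that reading, every statute's implied baseline works out to 40.0%, consistent with a single Tech Center average estimate being used across the chart.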

Office Action

§101 §103 §112
DETAILED ACTION

1. The present application, filed on or after March 16, 2016, is being examined under the first inventor to file provisions of the AIA.

2. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Continued Examination Under 37 CFR 1.114

3. A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant’s submission filed on 12/09/2025 has been entered.

4. Currently claims 1-6, 9-14, 17-22 have been amended; claims 7 and 15 have been canceled. Therefore, claims 1-6, 8-14 and 16-22 are pending in this application.

Claim Objections

5. Claim 12 is objected to for the following informality: the term, “in the from the”, in line 2 of the claim, is considered to be a typographical error for --from the--; and thus, appropriate correction is required.

Claim Rejections - 35 USC § 101

6. Non-Statutory (Directed to a Judicial Exception without an Inventive Concept/Significantly More) 35 U.S.C.101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

● Claims 1-6, 8-14 and 16-22 are rejected under 35 U.S.C.101 because the claimed invention is directed to an abstract idea without significantly more. (Step 1) The current claims fall within one of the four statutory categories of invention (MPEP 2106.03). (Step 2A) → Prong-One: The claim(s) recite a judicial exception, namely an abstract idea, as shown below: — Considering each of claims 1, 9 and 17 as the representative claim, the following claimed limitations recite an abstract idea: receive an identifier of a rule based on a request; identify a subset that includes an identifier of the rule within metadata of the subset; detect a drift between current chat content associated with the rule and the rule itself based on [analyzing] the subset and the rule itself; generate materials which describe corrections to the current chat content of the rule based on [analyzing] the drift. Thus, the limitations identified above recite an abstract idea since the limitations correspond to certain methods of organizing human activity, and/or mental processes, which are part of the enumerated groupings of abstract ideas identified according to the current eligibility standard (see MPEP 2106.04(a)). For instance, the current claims correspond to managing personal behavior.
In particular, as a user is discussing a given topic/rule during a chat session, the content of the chat is evaluated in order to determine whether there is a drift between the chat content associated with the rule and the rule itself—such as, by comparing the content of the chat that relates to the rule with the actual content of the rule, etc., and subsequently, when a drift is detected, one or more content items, which provide information for correcting the detected drift, are drafted and utilized to provide correct information/response regarding the rule, etc. Similarly, given the limitations that recite the process of: receiving an identifier of a rule based on a request; identifying a subset that includes an identifier of the rule within metadata of the subset; detecting a drift between current chat content associated with the rule and the rule itself, etc., the claims also correspond to mental processes; such as, an evaluation, an observation and/or a judgment process, etc.

(Step 2A) → Prong-Two: The claim(s) recite additional element(s), wherein a computer system, which implements a processor, a memory, etc., is utilized to facilitate the recited functions/steps regarding: collecting information (e.g., receiving an identifier of a rule based on a request); analyzing the collected information using one or more algorithms, which include AI models (e.g., identify a subset of vectors within a vector database that include an identifier of the rule stored within metadata of the subset of vectors; detecting a drift between current chat content associated with the rule and the rule itself based on execution of an artificial intelligence (AI) model on the subset of vectors and text content of the rule); generating one or more content items (e.g., generating digital training materials which describe corrections to the current chat content of the rule based on execution of a second AI model on the drift, wherein the AI model triggers the second AI model to generate the digital training materials via an automated feedback loop), etc. However, the claimed additional element(s) fail to integrate the abstract idea into a patent-eligible practical application since the additional element(s) are utilized merely as a tool to facilitate the abstract idea. Accordingly, when each of the claims is considered as a whole, the additional element(s) fail to impose meaningful limits on practicing the abstract idea. For instance, when each of the claims is considered as a whole, none of the claims provides an improvement over the relevant existing technology. The observations above confirm that the claims are indeed directed to an abstract idea.

(Step 2B) Accordingly, when the claim(s) is considered as a whole (i.e., considering all claim elements both individually and in combination), the claimed additional elements do not provide meaningful limitations to transform the abstract idea into a patent eligible application of the abstract idea such that the claim(s) amounts to “significantly more” than the abstract idea itself (also see MPEP 2106). The claimed additional elements are directed to conventional computer elements, which are serving merely to perform conventional computer functions. Accordingly, none of the current claims, when considered as a whole, recites an element—or a combination of elements—directed to an inventive concept.
It is also worth noting—per the original disclosure—that the claimed invention is directed to a conventional and generic arrangement of the additional elements. For instance, the specification describes a system that comprises one or more commercially available conventional computing devices—such as, a laptop computer, a desktop computer, etc. ([0179]; [0180]); wherein the conventional computing device(s) communicates, over the conventional communication network (e.g., the Internet), with at least one server of a service provider; and thereby, the system provides a user with relevant information based on the analysis of collected interactions ([0188] to [0192]). Of course, the system above executes one or more known algorithms, including artificial intelligence and/or machine learning algorithms, in order to analyze the collected information/conversations (e.g., see [0037]; [0051] to [0054]). In addition, the use of the existing computer/network technology to facilitate the process of providing/updating a relevant information/content to a user(s), based on the analysis of collected of interactions or conversations, including executing one or more artificial intelligence and/or machine learning algorithms to perform the analysis of the collected interaction, etc., is already directed to a well-known, routine, conventional activity in the art (US 2017/0316326; US 2014/0074688; US 2008/0114737, etc.). Note also that: (a) the use of one or more artificial intelligence (AI) models to adapt the output/result that a computer is generating to a user, including the process of generating training data to retrain one or more of the AI models in order to enhance the accuracy of the results that the computer is generating to the user (e.g., US 8,775,332; US 10,257,225; US 2019/0236417, etc.); (b) the use of a generative artificial intelligence—as an interactive tool—to facilitate a more realistic human-like natural conversations (e.g., US 2018/0020093; US 2018/0376002; also “Deep Reinforcement Learning For Dialog Generation”, the Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1192–1202, Austin, Texas, November 1-5, 2016. ©2016 Association for Computational Linguistics), etc., are already part of the conventional computer/network technology. The above observation confirms that the current claims fail to amount to “significantly more” than an abstract idea. It is worth noting that the above analysis already encompasses each of the current dependent claims (i.e., claims 2-6, 8, 10-14, 16 and 18-22). Particularly, each of the dependent claims also fails to amount to “significantly more” than the abstract idea since each dependent claim is directed to a further abstract idea, and/or a further conventional computer element(s) utilized to facilitate the abstract idea. Accordingly, the findings above demonstrate that none of the claims implements an element—or a combination of elements—directed to an inventive concept (e.g., none of the current claims is reciting an element—or a combination of elements—that provides a technological improvement over the existing/conventional technology). ● Claims 17-20 further fail to comply with 35 U.S.C.101 since these claims are directed to non-statutory subject matter. Particularly, claims 17-20 are directed to a computer readable storage medium. It is worth noting that a computer readable storage medium broadly covers both statutory and non-statutory subject matter (e.g., signal per se). 
However, claims 17-20 do not positively exclude the non-statutory subject matter. Also see MPEP 2106.03(I) (emphasis added), Non-limiting examples of claims that are not directed to any of the statutory categories include: • Products that do not have a physical or tangible form, such as information (often referred to as “data per se”) or a computer program per se (often referred to as “software per se”) when claimed as a product without any structural recitations; Accordingly, claims 17-20 further fail to comply with the statutory requirement per section §101. Note also that the original disclosure also appears to encompass both statutory and non-statutory subject matter, “[a] computer program may be embodied on a computer readable medium, such as a storage medium. For example, a computer program may reside in random access memory ("RAM"), flash memory . . . or any other form of storage medium known in the art” (see [0176], emphasis added). ► Applicant’s arguments directed to section §101 have been fully considered (the RCE filed on 12/09/2025, which includes the arguments filed on 11/10/2025); However, the arguments are not persuasive at least for the following reasons: Firstly, while attempting to summarize the 2019 PEG, including MPEP 2106, Applicant is asserting that “when Claim 1 is ‘viewed as a whole’, Claim 1 is not directed to an abstract concept, but rather a specific computer-implemented system architecture that manages chatbot rule accuracy using artificial intelligence models and vectorized data structures. The claimed steps include identifying a subset of vectors within a vector database through metadata filtering, executing an AI model to detect drift between embeddings of chat content with the vectors and rule text, and generating digital training materials describing corrective updates through a second AI model triggered by the first model in an automated feedback loop” (emphasis added). However, despite alleging “specific computer-implemented system architecture”, Applicant fails to show an element (if any)—or a combination of elements (if any)—that is considered to be an advance over the existing computer/network technology. Instead, while specifying the objective of the claimed system/method, Applicant is restating the steps being performed in order to achieve the objective. However, neither the objective of the claimed system/method, nor the sequence of steps being performed to achieve the objective—alone or in combination—demonstrates a technological improvement over the existing computer/network technology. This is because the use of one or more AI models to analyze collected data and generate relevant results (e.g., one or more responses that a chatbot provides to a user via a chat interface, etc.), including retraining one or more of the models in order to maintain accuracy and consistency of the results being generated, etc., is already part of the existing computer/network technology. In this regard, neither the particular environment in which one is using the existing technology, nor the particular objective that one is attempting to accomplish using the existing technology, demonstrates a technological improvement over the existing technology. 
The observation above demonstrates that Applicant’s conclusory assertion, “[the] claimed steps, taken together, define a machine-learning control process that improves the operation of the underlying computer system by autonomously detecting and resolving drift in chatbot behavior through an automated feedback loop which is something that cannot be performed as a mental process or by a human using pen and paper” (emphasis added), is not persuasive. This is because no structural and/or functional features (or operations) of the computer system are being improved. Instead, the claims are utilizing the existing computer/network technology—merely as a tool—to facilitate an abstract idea; such as, providing pertinent information based on the analysis of collected information (e.g., providing one or more responses that are relevant to one’s request, etc.). In particular, the existing computer/network technology already utilizes one or more AI models; and accordingly, the existing technology not only automatically analyzes a request received from a user (e.g., a request in the form of text, voice, etc.), but also automatically generates one or more relevant responses (e.g., displaying words/phrases via a chat interface; generating audible data via a speaker, etc.); and furthermore, based on discrepancy data and/or new information being gathered (e.g., one or more feedback schemes), one or more of the AI models is also retrained (e.g., automatically updating one or more parameters and/or logics of the algorithm, etc.) in order to maintain accuracy and consistency of the results that the system is providing. Thus, the core feature that Applicant is asserting, the so-called “autonomously detecting and resolving drift in chatbot behavior through an automated feedback loop”, is already part of the existing computer/network technology. Consequently, Applicant’s arguments are not persuasive. Applicant also appears to confuse the inquiry of prong-two of Step 2A with that of prong-one. For instance, when determining whether the claim (e.g., claim 1) is reciting a mental process, prong-one of Step 2A does not require one to consider any of the computer elements, which are part of the additional elements. Accordingly, it is irrelevant whether one assumes that a human cannot mentally (or using a pen and paper) perform the automated computer functions; such as, an AI model generating digital training material via the so-called automated feedback loop, etc. Instead, while considering ONLY the limitations that recite the abstract idea (e.g., see above the limitations identified under prong-one of Step 2A), one has to evaluate whether those identified limitations can be performed in the human mind (and/or using a pen and paper). Thus, Applicant’s arguments are once again not persuasive.
Applicant further asserts that “the claim focuses on the technological implementation of drift detection and correction through inter-model coordination within an automated feedback loop and vector-database operations, not on any abstract concept of ‘analyzing information.’ The involvement of vector embeddings, metadata indexing, and AI model execution ties the invention to a particular technological environment that achieves a specific improvement for maintaining consistent, accurate chatbot outputs relative to evolving chat content associated with various rules the claim is directed to a technical improvement in computer functionality, similar to the eligible inventions in Enfish and McRO, it is not ‘directed to’ a judicial exception” (emphasis added). However, once again while emphasizing the objective of the claimed system/method (e.g., an objective to detect and correct drift), Applicant is describing the components involved and/or the process that claimed system/method is performing; such as: the use of vector embeddings, metadata indexing, execution of one or more AI models, etc., in order to maintain consistent and accurate chatbot outputs. However, the eligibility issue (prong-two of Step 2A and Step 2B) is not questioning the number and/or type of components (or operations) that are involved and/or the one or more objectives (if any) that the claimed system/method is attempting to achieve. Instead, the eligibility issue is inquiring whether any of the current claims, when considered as whole, is implementing an element—or a combination of elements—that provides a technological improvement over the relevant existing technology. So far, despite alleging “a specific improvement” or “improvement in computer functionality”, Applicant sill fails to point out an element (if any)—or a combination of elements (if any)—that provides a technological improvement over the existing computer/network technology. Thus, unlike the cases of McRO and Enfish, none of the current claims satisfies the eligibility criteria set forth per section §101. Secondly, while referring to parts of the 2019 PEG, Applicant asserts, “[the] claims impose meaningful, practical, and succinct operations that can improve computer system performance. In Claim 1, the first AI model and second AI model operate within an automated feedback loop that detects, analyzes, and corrects drift between chatbot outputs and governing rule content without human intervention. This arrangement transforms what might otherwise be conceptual information analysis into a self-adaptive machine process that automatically maintains policy alignment and generates digital training materials. Such continuous, autonomous operation provides for automated document generation when feedback indicates such documents are necessary . . . the claim recites concrete data handling and model-execution steps tied to specific computing resources including a vector database, embedded metadata, and AI-driven generation processes which show that the invention is technologically implemented rather than generically applied. The automated triggering between models constitutes a defined control mechanism that limits any alleged abstraction to a particular, practical context and yields a real-world improvement to how AI-based chat systems function. As such, the claim satisfies Prong 2A, Step 2, because it meaningfully integrates any potential abstract idea into a specific technological application that enhances computer functionality” (emphasis added). 
However, here also Applicant fails to point out the alleged technological improvement (if any) that the claimed system/method is providing. For instance, as repeatedly pointed out above, the use of two or more AI models to automatically analyze collected data (e.g., a request from a user, etc.) and generate one or more pertinent results (e.g., one of more pertinent responses via a chat interface, etc.), including retraining one or more of the AI modes based on feedback data—such as, detected discrepancy data and/or newly collected information, etc., is already part of the existing computer/network technology. In fact, even basic common sense dictates that an AI model, which is a form of a machine-learning algorithm, iteratively or continually learns based on analyzing feedback data (e.g., previous results, newly collected data, etc.) in order to improve the accuracy of the results being generated. However, despite the fact above, Applicant appears to be making repetitive attempts to paint this existing technology as a new/advanced technology that the claimed (or the originally disclosed) system/method is implementing. Consequently, none of Applicant’s conclusory assertions regarding the alleged “automated feedback loop” in which the first AI model and the second AI model operates, and/or the alleged “self-adaptive machine process” that is considered to automatically maintain policy alignment and generates digital training materials, etc., demonstrates any technological improvement over the relevant existing technology. Of course, the same is true regarding Applicant’s alleged “concrete data handling and model-execution steps”, which are supposedly tied to the so-called “specific computing resources” (i.e., a vector database, embedded metadata, and AI-driven generation processes”). In particular, given the lack of technological improvement per the claimed (and the disclosed) system/method, the above is effectively demonstrating the use of the existing computer/network technology—merely as a tool—to facilitate the claimed abstract idea (see the abstract idea under prong-one of Step 2A). So far, Applicant fails to show an element (if any)—or a combination of elements (if any)—that provides a technological improvement over the existing computer/network technology. Instead, while emphasizing the objective of the claimed system/method, Applicant is once again reiterating the various components being recited, and/or the steps that the claimed system/method is executing, etc. Consequently, none of Applicant’s conclusory assertions, including the alleged “meaningful, practical, and succinct operations that can improve computer system performance”, the alleged “defined control mechanism that limits any alleged abstraction to a particular, practical context and yields a real-world improvement to how AI-based chat systems function”, and/or the alleged “specific technological application that enhances computer functionality”, etc., demonstrates a technological improvement over the existing computer/network technology. In addition, while referring to the recent decision of Ex parte Desjardins, per the Appeals Review Panel, Applicant asserts, “Claim 1 recites use of a first AI model to detect content drift with respect to a rule and a second AI model to correct the drift through training material generation. The process is automated. This process ensures that training materials are automatically generated when such a drift in chat content is detected . . . 
just as in the case of Ex parte Desjardins, the first AI model performs a first task of identifying the drift from chat content in a chat window, and optimizes performance of a second task of generating training materials performed by a second AI model based on the identified drift. This optimizes the training material content to specifically what is being performed incorrectly within an organization” (emphasis added). However, except for the attempt made to simply correlate the current claims with the recent decision of the Appeals Review Panel (hereinafter ARP), Applicant appears to fail to appreciate the core issue discussed per the decision above. In particular, the analysis per the ARP does not rely on the use of two or more AI models, wherein each AI model is configured to perform a respective task, to substantiate the finding regarding the technological improvement. Instead, while referring to part of the specification, the ARP points out the technological improvement achieved based on training the same machine-learning model on multiple tasks, as opposed to utilizing two machine-learning models (e.g., a first AI model and a second AI model). In particular, such training of the same machine-learning model on multiple tasks reduces the storage capacity that the computer system requires, besides reducing the complexity of the system. In contrast, Applicant’s claimed—and disclosed—system/method relies on multiple AI models (or multiple machine-learning models); and therefore, it is effectively the opposite of the case discussed per the decision above. Consequently, Applicant’s attempt to substantiate the alleged technical improvement, while misapplying or misconstruing the decision of the ARP, is not persuasive. The observation above further confirms that Applicant’s conclusory assertion, “[the] numerous claim limitations would clearly integrate an alleged abstract idea into a practical application that does not monopolize a judicial exception and are thereby patent eligible because the practical application of Applicant's claims allow for a real-world benefit through computing systems” (emphasis added), is not persuasive. Note that Applicant’s assumption directed to Step 2B is also not persuasive. Applicant is asserting that “under the second step (2B) of Alice the ordered combination of elements in the independent claims are sufficient to ensure that the claim amounts to significantly more than the judicial exception” (emphasis added). However, Applicant fails to demonstrate whether any of the current claims is directed to a non-generic and non-conventional arrangement of the additional elements. In contrast, per the claimed—and the disclosed—system/method, each of the current claims, when considered as a whole, is directed to the conventional and generic arrangement of the additional elements. Of course, the lack of technological improvement also supports the finding above regarding the conventional and generic arrangement of the additional elements. Thus, regardless of whether Applicant assumes that the current claims do not “monopolize a judicial exception”, or the current claims are providing “real-world benefit through computing systems”, none of Applicant’s assertions above—alone or in combination—demonstrates a technological improvement over the relevant existing technology.
Thus, at least for the reasons above, the Office concludes that none of the current claims, when considered as a whole, implements an inventive concept that amounts to “significantly more” than an abstract idea. Claim Rejections - 35 USC § 112 7. The following is a quotation of the first paragraph of 35 U.S.C.112(a): (a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention. The following is a quotation of the first paragraph of pre-AIA 35 U.S.C.112: The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention. ● Claims 1-6, 8-14 and 16-22 are rejected under 35 U.S.C.112(a) or 35 U.S.C.112 (pre-AIA ), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for pre-AIA the inventor(s), at the time the application was filed, had possession of the claimed invention. Each of claims 1, 9 and 17 currently recites, “. . . generating digital training materials which describe corrections to the current chat content of the rule based on execution of a second AI model on the drift, wherein the AI model triggers the second AI model to generate the digital training materials via an automated feedback loop” (emphasis added). However, the original disclosure does not have sufficient written description regarding the above claimed feature(s). Note that the original disclosure appears to treat machine-learning (ML) models and Artificial intelligence (AI) models as two different models. For instance, the original disclosure describes a scenario in which a ML framework (630) may trigger a drift detection process each time it is generating a new policy implementation vector; and wherein the ML framework (630) provides the vector with labels to a drift detection ML model (650) (see [0130], [0131]); and furthermore, when the drift detection ML model (650) detects a drift in policy, it automatically triggers the generation of a training material (see [0132] and [0134]). Of course, as further example that relates to a chatbot that provides responses, the drift detection ML model (650) may trigger a training process when it detects a drift in policy; wherein two of the components, i.e., (a) the current implementation of the policy and (b) the policy content with the correct implementation, are provided as input to train the ML (674) that is generating the chat responses; and wherein the training or the retraining process may be carried out by an AI engine (672), which triggers execution of the ML model (674) of the input content (see [0136]). 
The original specification also describes an apparatus that retrains an AI model to generate responses associated with a policy, based on executing the AI model on the policy content in response to an identified drift; and wherein the retraining of the AI model is triggered in response to identified drifts in policy implementation; so that the retrained AI model improves policy compliance by adapting its outputs to address the identified drifts (see [0142]; [0143]). However, none of the descriptions above, alone or in combination with the rest of the paragraphs in the specification, provides sufficient support regarding the limitations identified above, which positively requires the process of (i) detecting, based on execution of a first AI model on the subset of vectors and the text content of the rule, a drift between current chat content (associated with a rule) and the rule itself; and (ii) the first AI model above further triggers a second AI model to generate the digital training materials via an automated feedback loop. Accordingly, at least for the reasons above, the current claims constitute new subject matter. Claim Rejections - 35 USC § 103 8. The following is a quotation of 35 U.S.C.103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness. Note that the one or more citations (paragraphs or columns) presented in this office action regarding the teaching of a cited reference(s) are exemplary only. Accordingly, such citation(s) are not intended to limit/restrict the teaching of the reference(s) to the cited portion(s) only. Applicant is required to evaluate the entire disclosure of each reference; such as additional portions that teach or suggest the claimed limitations. ● Claims 1-6, 8-14 and 16-22 are rejected under 35 U.S.C.103 as being unpatentable over Voyles 2024/0012841 in view of Ramsey 2023/0419287. Regarding each of claims 1, 9 and 11, Voyles teaches the following claimed limitations: an apparatus comprising: a memory; and a processor coupled to the memory, the processor (“a method”, per claim 9; and a “computer-readable storage medium . . . 
cause the processor to perform”, per claim 17) (see [0052]; FIG 2: e.g., a computer-based system/method for monitoring and detection of a concept drift related to one or more trained models; such as chatbots; and accordingly, such computer-based system already incorporates a processor and a memory): receive an identifier of a rule based on a request, identify a subset of vectors within a vector database that include an identifier of the rule stored within metadata of the subset of vectors ([0053], [0054]; [0055] lines 1-30: e.g., the system receives at least one request from the user; such as, a query in the form of a natural language; and the system extracts—from the received query—pieces of information representing the various aspects of the user’s query as tokens; such as, a token representing the content of the query, a token representing the context of the query, etc., and thereby, the system creates one or more vectors that are representative of the user’s query. Subsequently, the system attempts to identify, from a plurality of intent classifications stored in its database, one or more intent classifications relevant to the user’s query based on the tokens generated above. Thus, one of the extracted pieces of information or token, such as, the token identifying the context—i.e., the token identifying the intent/concept—corresponds to the identifier of the rule that the system has received based on the request; whereas, the one or more intent classifications, which the system identifies from its database based on one or more of the tokens, correspond to the subset of vectors within a vector database that include the identifier of the rule stored within metadata of the subset of vectors), detect a drift between current chat content associated with the rule and the rule itself based on execution a [machine-learning] model on the subset of vectors and text content of the rule ([0055] lines 30-37; [0056]; [0061]; [0079]; [0080] lines 1-25: e.g., the system already implements one or more machine-learning models, including a drift detection system; and thereby, it determines a drift/evolution of the intent/concept, based on comparing the intent/concept raised in the chat with the intent/concept specified per one or more of intent/concept classifications identified from its database. For instance, the drift detection system detects a drift when it determines, after searching its database, that no relevant intent/concept classification exists that corresponds to the chat content specific to: (i) the tolls that cars pay when traveling over the Golden Gate Bridge, and/or (ii) the new bridge that connects the city of San Francisco to a part of Oakland, etc. 
The above indicates the process of detecting a drift between current chat content associated with the rule and the rule itself based on execution of a [machine-learning] model on the subset of vectors and text content of the rule), and generate digital training materials which describe corrections to the current chat content of the rule based on execution of an [algorithm] on the drift, wherein the [machine-learning] model triggers the [algorithm] to generate the digital training materials via an automated feedback loop (see [0080] lines 25-29; [0087]; [0097] lines 1-7; [0100]; [0102] lines 1-20; [0103] lines 10-22; [0105]: e.g., when a drift/evolution is detected as discussed above, the chatbot model is retrained automatically to address the drift/evolution without downtime or disruption of its operation; wherein, the system generates instructions, which configures the chatbot model to be retrained automatically, including: updating training files based on the detected concept drift/evolution, incorporating one or more new intent classifications based on the detected concept drift/evolution, etc. Thus, the system generates digital training materials which describe corrections to the current chat content of the rule based on execution of an algorithm on the drift, wherein the machine-learning model triggers the algorithm to generate the digital training materials via an automated feedback loop). Although Voyles already contemplates the implementation of one or more machine-learning models as discussed above, Voyles does not expressly describe an artificial intelligence (AI) model, including a second AI model that generates the digital training materials; and wherein the AI model supposedly triggers the second AI model to generate the training materials. However, Ramsey discloses a system that allows a user/customer to interact with an automated chatbot, wherein the chatbot implements an AI model to provide pertinent responses to the user; and the system also incorporates an intent-extraction system that implements an AI model; wherein the intent-extraction system evaluates the interaction between the chatbot and the user in order to determine whether the response, which the chatbot is providing to the user, is pertinent to the user’s query, etc.; and thereby, the intent-extraction system retrains the chatbot based on feedback collected during one or more interactions, including: one or more negative feedback that points out the detected incorrect response(s), one or more resolutions that must be applied to correct the incorrect response(s), etc. ([0054]; [0055]; [0076]; [0077]; [0095]).
Accordingly, given the above teaching, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the invention of Voyles in view of Ramsey; for example, by implementing one or more AI models to upgrade the system’s algorithm(s), wherein each of the drift detection system and the chatbot is also implemented as an AI model; and wherein, based on the drift/evolution detected regarding the concept in the chat, the drift detection system commands one or more of the AI models to customize training data based on: the analysis of the chatbot’s interactions, interactions gathered from other chatbots and/or users, etc., wherein such training data also includes one or more negative feedback that points out the inaccurate response(s), one or more resolutions that must be applied to correct the detected incorrect response(s), etc., so that the chatbot is retrained based on such training data; and this minimizes the chatbot’s chance of providing an irrelevant and/or inaccurate response regarding one or more issues that the user is raising during interaction with the chatbot. Regarding each of claims 2, 10 and 18, Voyles in view of Ramsey teaches the claimed limitations as discussed above per claims 1, 9 and 17 respectively. The limitation directed to the process of determining a difference between the current chat content of the rule and text content described within the rule, and generating descriptive content about the difference based on execution of the second AI model, is already addressed above per the modification discussed with respect to claims 1, 9 and 17. In particular, the training data, which is utilized to retrain the chatbot, already includes a negative feedback that points out the detected inaccurate response during the chat. Accordingly, the AI model already generates text data regarding the difference between (a) the current chat content that relates to the rule and (b) the text of the rule (note also that the modification discussed above applies to each of claims 2, 10 and 18). Regarding each of claims 3, 11 and 19, Voyles in view of Ramsey teaches the claimed limitations as discussed above per claims 1, 9 and 17 respectively. Here also the limitation per each of the above claims, which is directed to the process of identifying—from the current chat content—a step of the rule being performed incorrectly, and generating a description of how to correctly perform the step based on the execution of the AI model, is already addressed per the modification discussed above with respect to claims 1, 9 and 17. This is because the system not only identifies the incorrect response that the chatbot has provided during the interaction (i.e., a step of a rule performed incorrectly), but also provides one or more resolutions that must be applied in order to correct the detected incorrect response (i.e., generating a description of how to correctly perform the incorrect step detected above). Note that the modification discussed per claims 1, 9 and 17 also applies to each of claims 3, 11 and 19. Regarding each of claims 4, 12 and 20, Voyles in view of Ramsey teaches the claimed limitations as discussed above per claims 1, 9 and 17 respectively.
Although the modification above does not expressly address the limitation directed to identifying a step of a rule being omitted from the chat content and generating a description of the step of the rule being omitted, Ramsey already teaches that the chatbot interacts with the user by carrying out proper interaction steps; such as, the chatbot first provides a welcoming/greeting phrase, which is followed by a phrase that politely inquires the user’s intention/goal, etc. ([0075]). Of course, as already discussed per each of claim 1, 9 and 17, Ramsey also teaches that the intent-extraction system evaluates whether one or more of the responses, which the chatbot is providing to the user, is pertinent to user’s query, etc., and; and thereby, the intent-extraction system retrains the chatbot based feedback collected during one or more interactions, including: one or more negative feedback that points out the detected incorrect response(s), one or more resolutions that must be applied to correct the incorrect response(s), etc. ([0054]; [0055]; [0076]; [0077]; [0095]). Accordingly, given the above teaching, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to further modify Voyles’s system by updating the system’s algorithm; so that the drift detection system further evaluates, based on one or more relevant templates in its database, whether the chatbot is generating one or more of the phrases according to one or more desired sequences—such as, generating a greeting phrase, which should be followed by a polite inquiry, etc.; and when the drift detection system detects a drift/evolution in the chat content—such as, detecting that the chatbot is inquiring the user without first greeting the user, the drift detection system commands one or more of the AI models to customize further training data based on: the analysis of the chatbot’s interactions, interactions gathered from other chatbots and/or users, etc., wherein such training data also includes one or more negative feedback that points out the incorrect sequences or steps of interactions, one or more resolutions that must be applied to correct the incorrect sequences/steps of interactions, etc., so that the chatbot is retrained based on such training data; so that, the chatbot’s chance of omitting or skipping one or more desired interaction sequences/steps is minimized; and this helps the user to be more comfortable during interaction. Regarding each of claim 5 and 13, Voyles in view of Ramsey teaches the claimed limitations as discussed above per claims 1, 9 respectively. Voyles further teaches, retrieving the text content of the rule from a document stored within a storage device ([0046]; [0055] lines 24-30: e.g., the system already stores intent classifications in its database, which the system uses to retrieve a relevant intent/concept that is relevant to the user’s query in the chat. Accordingly, the above indicates the process of retrieving the text content of the rule from a document stored within the storage device). Regarding each of claim 6 and 14, Voyles in view of Ramsey teaches the claimed limitations as discussed above per claims 1, 9 respectively. 
Voyles further teaches, receive an identifier of a geographic location associated with the rule; and identify the subset of vectors based on a comparison of the geographic location and the metadata of the subset of vectors ([0054] to [0056]: e.g., the system is identifying one or more relevant intent/concept classifications from its database, based on the interaction that the user is making with the chatbot; and wherein such interaction involves phrases and/or numbers that form the query and/or issue that the user is making. For instance, when the user provides the query, “what is the name of the famous bridge in San Francisco”, the system identifies the geographic identifier, namely, San Francisco; and thereby, it attempts to identify one or more intent/content classifications that are relevant to the above geographic location. Note also that the intent classifications are stored as vectors). Regarding each of claims 8 and 16, Voyles in view of Ramsey teaches the claimed limitations as discussed above per claims 1 and 9 respectively. Voyles further teaches that the identifying process comprises querying the vector database with the identifier of the rule to identify the subset of vectors within the vector database ([0053], [0054]; [0055] lines 1-30: e.g., as already discussed per each of claims 1 and 9, the system extracts—from the query it receives from the user—pieces of information representing the various aspects of the user’s query as tokens; and thereby, creates one or more vectors representative of the user’s query. Subsequently, the system identifies from a plurality of intent/concept classifications, which are stored as vectors in the database, one or more intent classifications that are relevant to the user’s query based on the tokens generated above. Thus, the identifying process already comprises querying the vector database with the identifier of the rule to identify the subset of vectors within the vector database). Regarding claim 21, Voyles in view of Ramsey teaches the claimed limitations as discussed above per claim 1. The limitation, “the digital training materials comprises actions for correctly implementing the rule, and the processor is, further configured to generate a digital document with a description of the actions for correctly implementing the rule embedded therein”, is already addressed per the modification discussed with respect to claim 1. In particular, responsive to identifying the incorrect responses that the chatbot has provided during the chat interaction, the drift detection system commands one or more of the AI models to generate training data based on collected interactions, which includes one or more resolutions that must be applied in order to correct the detected incorrect response (i.e., the digital training materials already comprise actions for correctly implementing the rule); and wherein the chatbot is retrained based on the training data generated above; so that the chatbot’s chance of providing an irrelevant and/or inaccurate response regarding one or more issues that the user is raising during interaction with the chatbot is minimized (i.e., the processor generates a digital document with a description of the actions for correctly implementing the rule embedded therein). Regarding claim 22, Voyles in view of Ramsey teaches the claimed limitations as discussed above per claim 1.
Voyles further teaches, the processor is further configured to display a warning on a graphical user interface (GUI) of a software application in response to detection of the drift ([0097]; [0098]: e.g., once detecting a drift, the drift detection system generates a drift summary report to an administrator or a developer; wherein the report identifies the concept drift/evolution identified; and wherein the report is displayed via a user interface. The above indicates that the processor is already configured to display a warning on a graphical user interface (GUI) of a software application in response to detection of the drift). Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to BRUK A GEBREMICHAEL whose telephone number is (571) 270-3079. The examiner can normally be reached on 7:00AM-3:00PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, DAVID LEWIS can be reached on (571) 272-7673. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /BRUK A GEBREMICHAEL/Primary Examiner, Art Unit 3715
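For orientation, the independent claims as characterized in the rejection above recite a metadata-filtered lookup of vectors tied to a rule, an AI-model drift check of chat content against the rule text, and a triggered second model that drafts corrective training materials. The sketch below is only an illustrative reading of that pipeline; the names, the cosine-similarity drift test, and the 0.8 threshold are hypothetical and are not taken from the application or from the cited Voyles/Ramsey references.

# Illustrative sketch only; not the applicant's disclosed implementation.
from dataclasses import dataclass

@dataclass
class VectorRecord:
    embedding: list[float]   # embedded chat content associated with a rule
    metadata: dict           # assumed to carry a "rule_id" key in this sketch

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def detect_drift(db, rule_id, rule_embedding, threshold=0.8):
    # (i) metadata filter: the "subset of vectors" whose metadata stores the rule identifier
    subset = [r for r in db if r.metadata.get("rule_id") == rule_id]
    # (ii) compare chat-content embeddings with the rule-text embedding; low similarity
    # is treated as drift (a stand-in for the claimed first AI model)
    return any(cosine(r.embedding, rule_embedding) < threshold for r in subset)

def feedback_loop(db, rule_id, rule_embedding, generate_training_materials):
    # (iii) a detected drift triggers a second model (here, any callable) that drafts
    # corrective training materials, i.e. the claimed "automated feedback loop"
    if detect_drift(db, rule_id, rule_embedding):
        return generate_training_materials(rule_id)
    return None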

Prosecution Timeline

Mar 28, 2024
Application Filed
Sep 12, 2024
Response after Non-Final Action
Apr 19, 2025
Non-Final Rejection — §101, §103, §112
May 12, 2025
Examiner Interview Summary
May 12, 2025
Applicant Interview (Telephonic)
Jul 03, 2025
Response Filed
Sep 06, 2025
Final Rejection — §101, §103, §112
Nov 10, 2025
Response after Final Action
Dec 09, 2025
Request for Continued Examination
Dec 21, 2025
Response after Non-Final Action
Feb 07, 2026
Non-Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12165542
MOTION PLATFORM
2y 5m to grant Granted Dec 10, 2024
Patent 12008914
SYSTEMS AND METHODS TO SIMULATE JOINING OPERATIONS
2y 5m to grant Granted Jun 11, 2024
Patent 11990055
SURGICAL TRAINING MODEL FOR LAPAROSCOPIC PROCEDURES
2y 5m to grant Granted May 21, 2024
Patent 11837105
PSEUDO FOOD TEXTURE PRESENTATION DEVICE, PSEUDO FOOD TEXTURE PRESENTATION METHOD, AND PROGRAM
2y 5m to grant Granted Dec 05, 2023
Patent 11810467
FINGER RECOGNITION SYSTEM AND METHOD FOR USE IN TYPING
2y 5m to grant Granted Nov 07, 2023
Study what changed to get past this examiner. Based on 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

3-4
Expected OA Rounds
22%
Grant Probability
47%
With Interview (+25.0%)
4y 5m
Median Time to Grant
High
PTA Risk
Based on 680 resolved cases by this examiner. Grant probability derived from career allow rate.
