Prosecution Insights
Last updated: April 19, 2026
Application No. 18/091,840

Smart Generation and Display of Conversation Reasons in Dialog Processing

Status: Non-Final Office Action (§101, §103, §112)
Filed: Dec 30, 2022
Examiner: DASGUPTA, SHOURJO
Art Unit: 2144
Tech Center: 2100 (Computer Architecture & Software)
Assignee: Discourse AI Inc.
OA Round: 3 (Non-Final)

Grant Probability: 65% (Favorable)
Projected OA Rounds: 3-4
Projected Time to Grant: 3y 1m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 65% (293 granted / 449 resolved; +10.3% vs TC avg, above average)
Interview Lift: +38.1% (resolved cases with interview)
Avg Prosecution: 3y 1m (typical timeline)
Currently Pending: 32
Total Applications: 481 (career history, across all art units)

Statute-Specific Performance

§101: 11.8% (-28.2% vs TC avg)
§103: 56.8% (+16.8% vs TC avg)
§102: 12.2% (-27.8% vs TC avg)
§112: 15.6% (-24.4% vs TC avg)

Tech Center averages are estimates; based on career data from 449 resolved cases.
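As a sanity check, the headline figures above are internally consistent. The sketch below (illustrative only; the rounding conventions are an assumption) recomputes them from the raw counts:

```python
# Recompute the examiner's headline statistics from the raw counts above.
granted, resolved = 293, 449

allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")  # ~65.3%, displayed as 65%

# The "+10.3% vs TC avg" delta implies a Tech Center 2100 average of roughly:
tc_avg = allow_rate * 100 - 10.3
print(f"Implied TC average: {tc_avg:.1f}%")    # ~55.0%
```

The same arithmetic applied to the statute-specific deltas (e.g., §103 at 56.8%, +16.8 points) implies a TC §103 average of about 40%.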

Office Action

Rejections under §101, §103, and §112
Notice of Pre-AIA or AIA Status

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

2. A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office Action has been withdrawn pursuant to 37 CFR 1.114.

Claim Rejections - 35 USC § 112

3. The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

4. Claims 1-21 are rejected under 35 U.S.C. 112(a) as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, at the time the application was filed, had possession of the claimed invention.
The independent claims 1, 10, and 16 feature a limitation (or a variant thereof):

responsive to determining, by computer processor, that no explicit statement of reason, no statement of goal and no statement of intention is in the one or more data records: inferencing, by a computer processor, at least one reason using an artificial intelligence engine; automatically pre-selecting, by a computer processor, at least one of the allowable reason notations according to the at least one inferred reason;

Regarding the limitation above, the Examiner has considered and performed keyword searches of Applicants' specification and related applications, including those incorporated by reference (e.g., as detailed in [0001] and [0003]-[0009] of Applicants' published specification). In particular, the Examiner can find no concrete or clear teaching of the conditional logic that treats the absence of each of an explicit statement of reason, a statement of goal, and a statement of intention in the data records as a precondition for the additional inferencing and automatically pre-selecting steps. On this basis, the Examiner rejects the independent claims for failing the written description requirement and thereby lacking proper support. The dependent claims, which include the limitation discussed here, are likewise rejected under the same rationale. If Applicants can make a credible showing of this conditional logic with proper support, the Examiner will withdraw this rejection.

5. The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

6. Claims 1-21 are rejected under 35 U.S.C. 112(b) as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor regards as the invention.
The independent claims 1, 10, and 16 feature a limitation (or a variant thereof):

responsive to determining, by computer processor, that no explicit statement of reason, no statement of goal and no statement of intention is in the one or more data records: inferencing, by a computer processor, at least one reason using an artificial intelligence engine; automatically pre-selecting, by a computer processor, at least one of the allowable reason notations according to the at least one inferred reason; providing to an agent user, on a computer display by a computer processor, the pre-selected reason notation.

Based on Applicants' claims as drafted, the providing step is not indented such that it would be understood to be required under the conditional limitation "responsive to determining ..." However, the providing step features "the pre-selected reason notation," which is only produced within the conditional limitation when that limitation is met. Hence, because the Examiner does not believe the claim can require "the pre-selected reason notation" unless the conditional limitation "responsive to determining ..." is satisfied, it follows that the providing limitation should also be indented together with the steps for inferencing and automatically pre-selecting. In the absence of the indenting suggested here by the Examiner, it is unclear how there can even be a pre-selected reason notation in the providing step. On this basis, the independent claims are rejected for being vague and indefinite. The dependent claims, which include the limitation discussed here, are likewise rejected under the same rationale.

7. Claims 3-7, 9, 12-13, 15, 18-19, and 21 are rejected under 35 U.S.C. 112(b) as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor regards as the invention.
The independent claims 1, 10, and 16 introduce "an agent user." The dependent claims listed just above feature terms such as "a user dialog," "a user-entered change," and "user selection" without any clear indication of whether they refer to the aforementioned agent user or some other user. Accordingly, these terms, lacking a clear tie to any user delineated in the claims, render the listed dependent claims vague and indefinite. The ambiguity is compounded by the recitation of steps performed by "a computer processor" throughout essentially every claim, which fails to provide the additional context that could explain whether a common user, a common processor, or even a common computing device performs these steps. Moreover, Applicants' arguments attempt to distinguish the claimed invention from the cited art of record on the basis of which user is involved, hence their introduction of the clarifying "agent user." However, other features of the claims, as the Examiner has indicated here, remain vague with respect to which user is involved or required. Hence, for Applicants' arguments to have any opportunity to be considered, Applicants' claims must first be presented with the logic and consistency they would use to exclude the prior art.

Claim Rejections - 35 USC § 101

8. 35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

9. Claims 1-4, 8, 10-14, and 16-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
For example, independent claim 1 recites:

A method implemented on a computing device for automatically managing computer-based conversation reason notations related to a digitally-recorded interlocutor conversation session, the method comprising:
  receiving, by a computer processor, one or more data records selected from a group consisting of a narrative structure, a human-readable summary, and a conversation labeled dataset, of a digitally-recorded conversation of a text-based interlocutory conversation;
  accessing, by a computer processor, a set of predetermined allowable reason notations;
  responsive to determining, by computer processor, that no explicit statement of reason, no statement of goal and no statement of intention is in the one or more data records:
    inferencing, by a computer processor, at least one reason using an artificial intelligence engine;
    automatically pre-selecting, by a computer processor, at least one of the allowable reason notations according to the at least one inferred reason; and
  providing to an agent user, on a computer display by a computer processor, the pre-selected reason notation.

In the claim provided just above, the Examiner has bolded the features that are directed to an abstract idea and underlined the features directed to additional elements. In accordance with that coding of claim features, the claim can be seen to involve a method for determining whether there is an explicit reason/goal/intention in text data, which is an evaluation or judgement that can be performed by a human with their mind as they read the text data. Based on that determination, a human can then infer a reason, where an inference can be understood to be an evaluation/judgement. Based on the inferencing, they can select an allowable reason from a set, the reason being associated with a notation.
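To make the disputed limitation concrete, the conditional flow that the rejection characterizes as mental steps can be sketched as follows. This is a hypothetical sketch only: the reason set, the keyword-marker check, and the `infer_reason` callable are illustrative assumptions, not anything taught by the application or the cited art.

```python
# Hypothetical sketch of claim 1's conditional inferencing/pre-selecting steps.
ALLOWABLE_REASONS = {"greeting", "charge_dispute", "billing_question"}  # assumed set

def preselect_reason(data_records, infer_reason):
    """If no explicit reason/goal/intention appears in the records,
    infer a reason and pre-select a matching allowable notation."""
    text = " ".join(data_records).lower()
    explicit = any(marker in text for marker in ("reason:", "goal:", "intention:"))
    if explicit:
        return None  # precondition not met: no inferencing, no pre-selection
    inferred = infer_reason(data_records)  # stand-in for the AI engine
    return inferred if inferred in ALLOWABLE_REASONS else None

# Example: no explicit statement, so the engine's inference is pre-selected
# and would then be provided to the agent user on a display.
notation = preselect_reason(["customer says a charge appeared twice"],
                            lambda records: "charge_dispute")
print(notation)  # -> charge_dispute
```

The written-description dispute in paragraph 4 is precisely about whether the specification supports the `if explicit: return None` branch, i.e., the absence check as a precondition for the inferencing and pre-selecting steps.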
The claim clarifies selecting as pre-selecting, the Examiner notes, but the selection as the Examiner has construed it can simply be prior (or "pre") to an additional display/output step, e.g., the one the claim requires per the providing step, which the Examiner will discuss below in relation to additional elements. Hence, in view of this interpretation, the Examiner believes that the claim is fairly directed to an abstract idea with some additional elements.

The additional elements do not have the effect of integrating the abstract idea into a practical application. The claim's mention of a computing device, or its use of a computer processor to perform the steps, merely has the effect of taking the abstract idea and applying it on a general purpose computer. The mention of some aspects of the claim being automatic, e.g., selecting, is understood in the same way; in other words, practicing what is essentially a mental step on a general purpose computer. The claim's mention of receiving data records, for consideration by the exercise of the abstract idea, is merely an input step, and hence extra-solution activity. The same is true for the providing of the pre-selected reason notation, which is merely an output step and therefore also extra-solution activity. Additional elements such as these are not understood to successfully integrate an abstract idea into a practical application.

The claim's clarification of what the data records might be or constitute does not meaningfully change the Examiner's analysis. The data records, regardless of type, are understood to be text data that is otherwise subject to evaluation/judgement steps as characterized by the Examiner. Moreover, they are merely subject to receiving, and any context as to their actual generation is passive or outside the scope of the abstract idea as applied. The claim's mention of accessing the set of predetermined allowable reason notations is recited at a high level.
Here, accessing could be receiving electronically, which is essentially an extra-solution input step. Or, accessing could simply be reading and/or comprehending what are essentially criteria or rules to be applied, mapped, or selected, which again is something amenable to performance by a human via their mind. The additional elements are not sufficient to amount to significantly more than the judicial exception.

Independent claims 10 and 16 feature the same limitations and are hence rejected under the same rationale discussed above in relation to claim 1. Dependent claims 2-4, 8, 11-14, and 17-20 are likewise rejected:

Claims 2, 11, and 17 are directed to clarifying the receiving of data records to comprise, essentially, access of a data corpus having a plurality of digitally-recorded conversations of text-based conversations. The actual recording and other detail presented herein is passive and not meaningful to the present analysis. What is essential is the understanding that text data is actively received and that text data is actively subject to the abstract idea's mental, evaluation, and judgement steps. Where the data comes from and what it entails, when recited passively in this manner, does not make the characterized abstract idea any more practically integrated in its application or otherwise provide significantly more to the abstract idea.

Claims 3, 12, and 18 are directed to clarifying the extra-solution outputting/providing as discussed above in claim 1 to now comprise a visual indicator in an onscreen user dialog window. This subject matter remains concretely extra-solution activity even with this further clarification, and for that reason does not make the characterized abstract idea any more practically integrated in its application or otherwise provide significantly more to the abstract idea.
Claims 4, 13, and 19 are similarly directed to a further clarification of the extra-solution outputting/providing per claim 1, and further detail that the onscreen user dialog window comprises a particular known UI control type. As with claims 3, 12, and 18 just discussed, this subject matter remains concretely extra-solution activity even with this further clarification, and for that reason does not make the characterized abstract idea any more practically integrated in its application or otherwise provide significantly more to the abstract idea.

Claims 8, 14, and 20 are directed to clarifying the extra-solution inputting/receiving and outputting/providing as discussed above in claim 1 to now comprise additional details such as requiring a particular type of text input and a particular type of onscreen output. This subject matter remains concretely extra-solution activity even with this further clarification, and for that reason does not make the characterized abstract idea any more practically integrated in its application or otherwise provide significantly more to the abstract idea.

Claim Rejections - 35 USC § 103

10. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

11. The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office Action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

12. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

13. Claims 1-6, 10-13, and 16-20 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Application Publication No. 2020/0265339 ("Eisenzopf") in view of CN 113597607 A ("Petrovykh").

Regarding claim 1, Eisenzopf teaches a method implemented on a computing device ([0056] and FIG. 1, teaching a "networked computer environment" featuring many of the listed functional modules that together provide for the exploration, and the related advantages, of a large collection of digitally recorded conversations per [0039]) for automatically managing computer-based conversation reason notations related to a digitally-recorded interlocutor conversation session ([0039], discussing "insightful discovery of the most common goals, patterns, flows and results of those collections of conversations," the conversations recorded between two interlocutors as [0039] further discusses, where the digitally recorded conversations can be text messages / chat conversations as [0039] mentions, and where the discovered "most common goals, patterns, flows, and results" provided by the analysis of these conversations include features akin to the recited "conversation reason notations"; for example, the conversation can be segmented and subject to classification in accordance with [0069]'s ontology entities such as "a greeting 701, topic negotiation 702, a discussion about a topic comprised of a series of turns 709 between the interlocutors that may contain a corresponding question 703 and answer followed by an end 705 or change of topic 708 followed by an end of conversation 706" (where these entities are examples of "conversation reasons" as recited, and the conversation segments being classified in accordance with these entities are akin to a "notation" of a "conversation reason" as recited)), the method comprising:

receiving, by a computer processor, one or more data records ([0045]: "receiving conversation data from transcribed conversations, such as between two people, an online chat or a text messaging system, a speech recognition system, or a chatbot or voicebot system"; the conversation as analyzed results in a segmentation, where either the conversation as a whole or the segments which constitute the conversation can be understood to equate to the "one or more data records") selected from a group consisting of a narrative structure, a human-readable summary, and a conversation labeled dataset, of a digitally-recorded conversation of a text-based interlocutory conversation (the conversations as received are subject to segmentation and classification/annotation to generate a conversation model per [0041]-[0048], and in that process the conversation segments, when classified, can be understood to constitute "a conversation labeled dataset");

accessing, by a computer processor, a set of predetermined allowable reason notations and ... automatically pre-selecting, by a computer processor, at least one of the allowable reason notations according to ... at least one inferred reason (conversations as received can be segmented and subject to classification in accordance with [0069]'s ontology entities as quoted above, where the ontological entities can constitute "previously-known intents, topics, and outcomes" (i.e., "predetermined" as recited), and further the ontological entities are applied to classify conversation segments and in that sense can be understood to be subject to selection and assignment to a segment in accordance with the classifying/analysis); and

providing ... by a computer processor, the pre-selected reason notation (FIGs. 17 and 22-24, for example, showing examples of screenshots that result from this conversation analysis, including display of aspects of the conversation as processed and labeled; see, e.g., identifications of greetings, requests, charge disputes, and so forth, which constitute identifiable reasons within the scope of the recorded, processed, and analyzed conversation).

Regarding the above, Eisenzopf, at [0039], [0069], and [0078], and when read as a whole, contemplated an AI-driven framework that is able to identify conversational ontology (which the Examiner reasons to be akin to a "conversation reason" as recited) based on an automated, continuous, computer-driven analysis process. The ontologies as taught are understood to be pre-defined, per [0069], as subject to Eisenzopf's classification aspect. In view of this, the Examiner believes Eisenzopf's classification to determine an ontology fairly reads on the pre-selection of an allowable reason, with or without the use of an AI engine, i.e., reading on the action steps of the limitations provided above. One of ordinary skill in the art would understand an AI-driven machine-learned classifier per Eisenzopf, which is taught to select an allowable reason based on its analysis, per the first through third limitations provided above.

Eisenzopf does not well address the conditional aspect of the further limitation: performing its classification/analysis (discussed above, per Eisenzopf) responsive to determining, by computer processor, that no explicit statement of reason, no statement of goal and no statement of intention is in the one or more data records, i.e., inferencing at least one reason and automatically pre-selecting a reason notation in accordance with that analysis/classification (both discussed above, per Eisenzopf) only upon that determination. That is to say, the Examiner believes Eisenzopf provides the teaching to do classification/analysis of text input to arrive at an inferred intent, but does not explicitly teach only doing this work if the intent is not readily provided in an explicit sense.
However, the Examiner believes it would be a simple and obvious design choice to modify Eisenzopf in that way so that, if the information is already given, the work to obtain that information does not have to be performed, e.g., thereby conserving computation resources and processing time (especially in a real-time scenario). To the extent that Eisenzopf alone is not sufficient, the Examiner relies upon PETROVYKH to teach what Eisenzopf otherwise lacks. See, e.g., Petrovykh's comparable real-time agent-consumer scenario, where intent is inferred based on initial, and possibly even further, information provided by the consumer (page 4, 4th-5th paragraphs), such that the fruitful inference that delivers the intent result could in some instances be arrived at later if the intent is not already readily provided or ascertained. Further, regarding Applicants' amendment to clarify that the providing is to an agent user, it would be obvious for this same information to be advantageously provided to an agent who is assuming the customer service obligation (5th paragraph), much like the information as inferred is provided to a comparable agent/representative in Eisenzopf's framework (see, e.g., the discussion provided above in relation to the providing step). Both references are directed to comparable customer service scenarios where an agent/representative is in service of a customer-type end user, and in both frameworks classification/inference is performed to better understand the end user via the immediate engagement/session. Hence, the references are similarly directed and therefore analogous.
It would have been obvious to one of ordinary skill in the art to incorporate Petrovykh's more persistent inference of intent into a framework like Eisenzopf's, with a reasonable expectation of success, such that further input and related processing to arrive at the useful intent information could be selectively performed if such information was not already provided or obtained, thereby providing the attendant agent/representative with more information as may be deemed useful in a state of the art where customer/consumer intent is worthy of inference to improve the experience.

Regarding claim 2, Eisenzopf in view of Petrovykh teaches the method as set forth in claim 1, wherein the receiving one or more data records comprises accessing, by a computer processor, a data corpus having a plurality of digitally-recorded conversations of text-based interlocutory conversations (Eisenzopf: the framework is clearly directed to exploration, by way of analysis, of "a large collection of digitally recorded conversations" per [0039], with [0101] clarifying that the typical corpus may contain hundreds or thousands of conversations, where the conversations as discussed are clearly described as text/chat based and between two identifiable interlocutors (all further per [0039])). The motivation for combining the references is as discussed above in relation to claim 1.

Regarding claim 3, Eisenzopf in view of Petrovykh teaches the method as set forth in claim 1, wherein the providing the pre-selected reason notation on a computer display comprises providing a visual indicator of the pre-selected reason notation on a user dialog on the computer display (Eisenzopf: FIGs. 17 and 22-24, for example, showing examples of screenshots that result from this conversation analysis, including display of aspects of the conversation as processed and labeled; see, e.g., identifications of greetings, requests, charge disputes, and so forth, which constitute identifiable reasons within the scope of the recorded, processed, and analyzed conversation and are in this way visually displayed/"indicated" via the generated screenshots). The motivation for combining the references is as discussed above in relation to claim 1.

Regarding claim 4, Eisenzopf in view of Petrovykh teaches the method as set forth in claim 3, wherein the user dialog comprises a drop-down list dialog (Eisenzopf: [0118]-[0119]: "drop-down list" and "drop-down dialog"). The motivation for combining the references is as discussed above in relation to claim 1.

Regarding claim 5, Eisenzopf in view of Petrovykh teaches the method as set forth in claim 1, further comprising: receiving, by a computer processor, a user-entered change to the pre-selected reason notation; updating, by a computer processor, the one or more data records to reflect the user-entered change to the reason notation; and modifying, by a computer processor, training data for the artificial intelligence engine to reflect the user-entered change to the reason notation (Eisenzopf: [0099]'s discussion that the generated graphical depiction of the conversations as received, processed, and analyzed is subject to a user's exploration, which includes the user's ability to select and exclude portions of a conversation that are then in turn used to develop a resulting AI chatbot, for example (i.e., permitting a filtering of what constitutes the training data for the AI chatbot); further, per [0140], a user may edit the conversation's representation, such as to delete, duplicate, etc., features such as a turn or intent in the conversation (which the Examiner understands to be a change by the user of a labelled/annotated representation of the conversation, where this change as made by the user propagates onwards into the use of the conversation data in this form to provide for further training of a chatbot, for example)). The motivation for combining the references is as discussed above in relation to claim 1.

Regarding claim 6, Eisenzopf in view of Petrovykh teaches the method as set forth in claim 1, further comprising: providing, by a computer processor, a search dialog on the computer display; receiving, by a computer processor, a user-entered search criteria; searching, by a computer processor, the set of predetermined allowable reason notations for exact or close matches to the user-entered search criteria; and, responsive to finding one or more exact or close reason notation matches, providing, by a computer processor, the one or more exact or close reason notation matches on the computer display, available for user selection (Eisenzopf: searching and filtering per [0118], such that a user providing a search criterion, e.g., filtering by goals as one example, will result in a search of the corpus, or of the representation of the corpus, to find appropriate matches; for example, the "Conversation Insights Filters portion" as taught per [0118] may constitute the recited "search dialog," and making a selection of a goal as discussed, to effectively apply a filter, is akin to the recited "user-entered search criteria" and facilitates a search using the ontology entities (i.e., equivalent to "predetermined allowable reason notations")). The motivation for combining the references is as discussed above in relation to claim 1.

Regarding claim 10, the claim includes the same or similar limitations as claim 1 discussed above, and is therefore rejected under the same rationale.
The claim additional recites a computer program product comprising non-transitory computer readable memory device, which is further taught per Eisenzopf’s [0056] (“... a processor running computer instructions”) and [0071] (“... tangible, computer readable memory devices to realize computer program products ...”). Regarding claim 11, the claim includes the same or similar limitations as claim 2 discussed above, and is therefore rejected under the same rationale. Regarding claim 12, the claim includes the same or similar limitations as claim 3 discussed above, and is therefore rejected under the same rationale. Regarding claim 13, the claim includes the same or similar limitations as claim 4 discussed above, and is therefore rejected under the same rationale. Regarding claim 16, the claim includes the same or similar limitations as claim 1 discussed above, and is therefore rejected under the same rationale. The claim additional recites one or more computer processors and at least one tangible, non-transitory computer-readable memory device, which are further taught per Eisenzopf’s [0056] (“... a processor running computer instructions”) and [0071] (“... tangible, computer readable memory devices to realize computer program products ...”). Regarding claim 17, the claim includes the same or similar limitations as claim 2 discussed above, and is therefore rejected under the same rationale. Regarding claim 18, the claim includes the same or similar limitations as claim 3 discussed above, and is therefore rejected under the same rationale. Regarding claim 19, the claim includes the same or similar limitations as claim 4 discussed above, and is therefore rejected under the same rationale. Regarding claim 20, the claim includes the same or similar limitations as claim 5 discussed above, and is therefore rejected under the same rationale. 14. Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Eisenzopf in view of Petrovykh and further in view of U.S. 
Patent Application Publication No. 2010/0280860 (“Iskold”). Regarding claim 7, Eisenzopf in view of Petrovykh teaches the method as set forth in claim 1, as discussed above. The aforementioned reference teaches receiving, by a computer processor, a agent user-entered reason notation entry and searching, by a computer processor, the set of predetermined allowable reason notations for exact or close matches to the user-entered search criteria (Eisenzopf: searching and filtering per [0118] such that a user provides a search criterion, e.g., filter by goals as one example, will result in a search of the corpus or the representation of the corpus to find appropriate matches, where for example the “Conversation Insights Filters portion” as taught per [0118] may constitute the recited “search dialog”, and making a selection of a goal as discussed to effectively apply a filter is akin to the recited “user-entered search criteria” and facilitates a search using the ontology entities (i.e., equivalent to “predetermined allowable reason notations”)). That said, while Eisenzopf does contemplate that new “intents” can be added per its [0096], Eisenzopf does not appear to contemplate a dialog/UI for it such that Eisenzopf’s user-entered reason notation that is searched for is also one that is in accordance with a reason notation to be added, e.g. per the further limitations for providing, by a computer processor, an add reason dialog on an agent user’s computer display and responsive to receiving the user-entered reason notation entry, adding, by a computer processor, the user-entered reason notation entry to the set of predetermined allowable reason notations. Rather, the Examiner relies upon ISKOLD to teach what Eisenzopf etc. otherwise lacks, see e.g. 
Iskold’s [0075], discussing a provision to search for and selectively add, on the basis of a good match, elements from a database to an existing taxonomy-based project that would likewise benefit from appropriate updating to reflect new relationships and the like in the data. Both Eisenzopf and Iskold relate to taxonomy-driven frameworks that permit additions or modifications to a project to better reflect aspects of the underlying data. Hence, the aforementioned references are similarly directed and therefore analogous. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to permit a change or addition to Eisenzopf’s taxonomy and/or related data in the more manual and user-facing manner taught by Iskold, with a reasonable expectation of success, such that a user of Eisenzopf has a more apparent way to change the underlying data after it has been collected and analyzed but prior to its subsequent use in training models.

15. Claims 8-9, 14-15, and 20-21 are rejected under 35 U.S.C. 103 as being unpatentable over Eisenzopf in view of Petrovykh and further in view of U.S. Patent Application Publication No. 2019/0280996 (“Dahir”).

Regarding claim 8, Eisenzopf in view of Petrovykh teaches the method as set forth in claim 1, as discussed above. The aforementioned reference clearly teaches a framework where a representation of a digitally-recorded conversation, e.g., via transcription or via chat/text-messaging implementations, is received, processed, and used to generate a model that is then used to improve chat bots. See, e.g., Eisenzopf: [0045] and [0093], where processing the conversation’s representation results in a graphical visualization thereof. That said, Eisenzopf does not clearly teach that the conversation’s representation as received may feasibly be a summary of it, e.g.,
per the further limitations wherein the receiving one or more data records of a digitally-recorded conversation of a text-based interlocutory conversation comprises receiving a text-based summary of a conversation, and wherein the providing on a display comprises providing the summary. Rather, the Examiner relies upon DAHIR to teach what Eisenzopf otherwise lacks; see, e.g., Dahir’s [0033]-[0055], discussing the active identification and logging of conversations using machine learning such that conversation summaries are generated on the basis of keyword analysis and the like. That is to say, it is possible that some of the analysis that Eisenzopf already performs may be performed by other platforms, such that what is stored may be conversations, as Eisenzopf contemplates, and what may be stored to some specific advantage may be summaries of conversations, as Dahir contemplates. Both references relate to frameworks that rely upon the receipt of conversation information/data that is then subject to analysis via machine-learning techniques to provide their users with some benefit/advantage. Hence, they are similarly directed and therefore analogous. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to extend Eisenzopf’s intake/ingestion aspect to also receive conversation summaries, as Dahir provides for, such that a wider representation of conversation information/data can feasibly be processed by Eisenzopf, and in some cases to bypass some of the processing by receiving an already processed and analyzed version of the information/data, in the spirit of how preprocessing data can sometimes be a benefit in the state of the art before the data is provided to particular analysis pipelines.
Regarding claim 9, Eisenzopf in view of Petrovykh and further in view of Dahir teaches the method as set forth in claim 8, further comprising: receiving, by a computer processor, a user-entered change to the summary; updating, by a computer processor, the one or more data records to reflect the user-entered change to the summary; and modifying, by a computer processor, training data for the artificial intelligence engine to reflect the user-entered change to the summary (Eisenzopf’s [0099], discussing that the generated graphical depiction of the conversations as received, processed, and analyzed is subject to a user’s exploration, which includes the user’s ability to select and exclude portions of a conversation that are then in turn used to develop a resulting AI chatbot, for example (i.e., permitting a filtering of what constitutes the training data for the AI chatbot); further, per [0140], it is discussed that a user may edit the conversation’s representation, such as to delete, duplicate, etc. features such as a turn or intent in the conversation (which the Examiner understands to be a change by the user of a labelled/annotated representation of the conversation, where this change as made by the user propagates onwards into the use of the conversation data in this form to provide for further training of a chatbot, for example), and feasibly this editable aspect per Eisenzopf can be used to modify a version of the same framework when modified to accept and process summaries of conversations per Eisenzopf in view of Dahir as discussed above per claim 8). The motivation for combining the cited prior art references is as discussed above in relation to claim 8.

Regarding claim 14, the claim includes the same or similar limitations as claim 8 discussed above, and is therefore rejected under the same rationale.
Regarding claim 15, the claim includes the same or similar limitations as claim 9 discussed above, and is therefore rejected under the same rationale.

Regarding claim 20, the claim includes the same or similar limitations as claim 8 discussed above, and is therefore rejected under the same rationale.

Regarding claim 21, the claim includes the same or similar limitations as claim 9 discussed above, and is therefore rejected under the same rationale.

Conclusion

16. Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHOURJO DASGUPTA, whose telephone number is (571) 272-7207. The examiner can normally be reached M-F, 8am-5pm CST. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Tamara Kyle, can be reached at (571) 272-4241. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SHOURJO DASGUPTA/
Primary Examiner, Art Unit 2144
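The claim 7 rejection turns on a search of a predetermined set of allowable reason notations for "exact or close matches" to an agent's entry. Purely as illustration, such a matcher can be sketched with Python's standard-library `difflib`; the notation set, function name, and cutoff below are hypothetical examples, not drawn from the application or the cited references.

```python
import difflib

# Hypothetical set of predetermined allowable reason notations
# (for illustration only; not from the application or prior art).
ALLOWED_REASONS = ["billing dispute", "password reset", "order status", "cancel service"]

def search_reasons(entry, allowed=ALLOWED_REASONS, cutoff=0.6):
    """Return an exact match if present, otherwise close matches via difflib."""
    entry = entry.strip().lower()
    if entry in allowed:
        return [entry]  # exact match
    # get_close_matches ranks candidates by similarity ratio above the cutoff
    return difflib.get_close_matches(entry, allowed, n=3, cutoff=cutoff)

print(search_reasons("password resett"))  # close match -> ['password reset']
```

An empty result from such a search is the natural trigger for the claimed "add reason dialog", after which the new entry would be appended to the allowed set.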

Prosecution Timeline

Dec 30, 2022
Application Filed
Dec 23, 2024
Non-Final Rejection — §101, §103, §112
Mar 27, 2025
Response Filed
Jun 14, 2025
Final Rejection — §101, §103, §112
Dec 18, 2025
Request for Continued Examination
Jan 06, 2026
Response after Non-Final Action
Jan 09, 2026
Non-Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591802
GENERATING ESTIMATES BY COMBINING UNSUPERVISED AND SUPERVISED MACHINE LEARNING
2y 5m to grant · Granted Mar 31, 2026
Patent 12586371
SENSOR DATA PROCESSING
2y 5m to grant · Granted Mar 24, 2026
Patent 12578979
VISUALIZATION OF APPLICATION CAPABILITIES
2y 5m to grant · Granted Mar 17, 2026
Patent 12572782
SCALABLE AND COMPRESSIVE NEURAL NETWORK DATA STORAGE SYSTEM
2y 5m to grant · Granted Mar 10, 2026
Patent 12549397
MULTI-USER CAMERA SWITCH ICON DURING VIDEO CALL
2y 5m to grant · Granted Feb 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
65%
Grant Probability
99%
With Interview (+38.1%)
3y 1m
Median Time to Grant
High
PTA Risk
Based on 449 resolved cases by this examiner. Grant probability derived from career allow rate.
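The headline figures above follow directly from the examiner's career counts (293 granted of 449 resolved). A minimal sketch, assuming the displayed grant probability is simply the career allow rate rounded to a whole percent (the interview-lift figure is taken as given from the resolved-case data):

```python
# Deriving the dashboard's grant probability from the examiner's career counts.
granted = 293
resolved = 449

career_allow_rate = granted / resolved              # ~0.6526
grant_probability = round(career_allow_rate * 100)  # 65

print(f"Career allow rate: {grant_probability}%")   # prints "Career allow rate: 65%"
```

How the +38.1% interview lift compounds into the 99% with-interview figure is not specified on this page, so it is not reconstructed here.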
