Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
Status of the Application
The following is a Non-Final Office Action in response to Request for Continued Examination (RCE) filed 11/14/2025. Claims 1-3, 5, 8, 10-12, 14, 17, 19, and 22-25 are pending in this application.
Examiner notes that Examiner Kiersten Summers is taking over examination for the previous examiner of record, Jan Mincarelli.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 11/14/2025 has been entered.
Response to Amendment
Applicant’s amendments to claims 1, 3, 10, and 19 are acknowledged. Applicant’s cancellation of claims 4, 6-7, 9, 13, 15-16, 18, and 20-21 is acknowledged. Applicant’s addition of new claims 22-25 is acknowledged.
Response to Arguments
On Remarks page 10, Applicant argues that the claims merely involve an exception rather than reciting an exception. Specifically, Applicant argues the claims are directed to using natural language processing to digitize voice data in order to generate a plurality of document matrices in order to identify newly appearing terms for machine learning analysis and labeling. Further, Applicant argues the claims recite a practical application and/or significantly more. The Examiner understands Applicant’s arguments; however, the Examiner respectfully disagrees.
Specifically, the claims recite mental process and/or certain methods of organizing human activity steps, namely collecting data, standardizing data for comparison by transcribing audio/speech to written text, comparing data during different time periods according to a model (set of rules), determining, by a model (set of rules), new data that emerged based on the occurrence of the new data in a second time period but not in a first time period, storing results of the comparison and the determination, and providing the results of the comparison to another entity, as broadly recited in the claims. These are a part of the abstract idea.
The additional elements beyond the abstract idea, namely that the transcribing is performed by “natural language processing”, that the model (set of rules) is “machine learning”, and that the text data is “digitized”, merely result in “apply it” or generally linking the abstract idea to the field of computers, which is not enough to qualify as a practical application and/or significantly more, as previously found by the courts (see MPEP 2106.05(f) and 2106.05(h)).
Specifically, the claims’ additional elements invoke computers or other machinery merely as a tool to perform an existing process. Use of a computer or other machinery in its ordinary capacity for economic or other tasks (e.g., to receive, store, or transmit data), or simply adding a general purpose computer or computer components after the fact to an abstract idea, does not integrate a judicial exception into a practical application or provide significantly more. Further, as recited in the claims, this can be considered nothing more than generally linking the abstract idea to the field of computers.
Further, with respect to the additional elements of “natural language processing” and “machine learning models”, the claims provide no details about how the particular “natural language processing” and “machine learning models” operate to derive the information, other than that they are generally being used to determine information. The “natural language processing” and “machine learning models” are generally used to apply the abstract idea without placing any limitations on how they derive the information. In addition, the limitations recite only the idea of determining speech and emerging topics via “natural language processing” and “machine learning models” without any details on how this is actually accomplished. The claims omit any details as to how the “natural language processing” and “machine learning models” solve a technical problem and instead recite only the idea of a solution or outcome. The claims invoke the “natural language processing” and “machine learning models” merely as a tool for performing the recited steps rather than purporting to improve the technology or a computer. This can also be viewed as nothing more than generally linking the abstract idea to the field of computers (additionally, see USPTO Example 48).
On Remarks page 11, Applicant argues the claims clearly impose a meaningful limitation on the judicial exception such that the claim is “more than a drafting effort designed to monopolize the judicial exception.” The Examiner respectfully disagrees.
As discussed above in this response to arguments section, the claims merely recite limitations alone and in combination that amount to no more than apply it or generally linking it to the field of computers, with respect to the abstract idea.
See MPEP 2106 (cited herein):
“The courts have also identified limitations that did not integrate a judicial exception into a practical application:
• Merely reciting the words "apply it" (or an equivalent) with the judicial exception, or merely including instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea, as discussed in MPEP § 2106.05(f);
• Adding insignificant extra-solution activity to the judicial exception, as discussed in MPEP § 2106.05(g); and
• Generally linking the use of a judicial exception to a particular technological environment or field of use, as discussed in MPEP § 2106.05(h).”
Therefore, as these specific limitations have previously been found not to be enough to qualify as a practical application or significantly more, the Examiner respectfully disagrees that such limitations, as currently recited in the claims, would amount to a practical application and/or significantly more.
On Remarks page 12, Applicant argues the claim features are necessarily rooted in computer technology.
Here it appears Applicant is arguing the findings of the DDR decision. MPEP 2106.05(f) discusses this decision, cited herein:
In contrast, other cases have found that additional elements are more than “apply it” or are not “mere instructions” when the claim recites a technological solution to a technological problem. In DDR Holdings, the court found that the additional elements did amount to more than merely instructing that the abstract idea should be applied on the Internet. DDR Holdings, LLC v. Hotels.com, L.P., 773 F.3d 1245, 1259, 113 USPQ2d 1097, 1107 (Fed. Cir. 2014). The claims at issue specified how interactions with the Internet were manipulated to yield a desired result—a result that overrode the routine and conventional sequence of events ordinarily triggered by the click of a hyperlink. 773 F.3d at 1258; 113 USPQ2d at 1106.
Here, DDR and the present application are not of similar subject matter; therefore, the argument is not persuasive.
Rather, the functions as broadly recited in the claims, of collecting data, standardizing data for comparison by transcribing audio/speech to written text, comparing data during different time periods according to a model (set of rules), determining, by a model (set of rules), new data that emerged based on the occurrence of the new data in a second time period but not in a first time period, storing results of the comparison and the determination, and providing the results of the comparison to another entity, exist outside the realm of computers and computer networks. These are mental process and certain methods of organizing human activity steps, and are accordingly part of the abstract idea. Therefore, they are not a technological solution to a technical problem.
Beyond the abstract idea above, the additional elements, namely that these steps are instead performed by a “computing” platform and by software running on a computer (“at least one processor; a communication interface communicatively coupled to the at least one processor and a memory storing computer-readable instructions that, when executed by the at least one processor, cause the computing platform to:”), that the text is “digitized”, that audio or speech is transcribed to text via “natural language processing”, that the model (set of rules) that determines the new topics is a “machine learning model”, and that information is stored in a “database”, merely result in “apply it” and generally linking the abstract idea to the field of computers, as discussed in section 5 above, which are limitations previously found by the courts not to be a practical application or significantly more (see section 6 above). Therefore, this is not a technological solution to a technical problem.
Therefore the Examiner respectfully disagrees.
On Remarks pages 12-13, Applicant argues the prior art with respect to Can and Applicant’s amendments. The Examiner strongly disagrees.
Can is in the art of determining emerging topics based on call center interactions (see abstract). Can clearly teaches determining topics at time (t) (the first time period in Applicant’s claims) and new, disappearing, and active topics at time (t+1) (the second time period in Applicant’s claims) based on the frequency of occurrence (interpreted as the number of appearances in Applicant’s claims) of the keyword (interpreted as a term in Applicant’s claims) compared to the previous time period (see column 6 lines 9-34 and lines 50-65).
Can teaches topics are predicted for a call transcript instance or aggregated call transcript instances (see column 12 lines 50-57), where a transcript relates to an interaction between a customer and a call center (see column 5 lines 8-10).
It is noted that Can states that topic and theme can be used interchangeably in the reference (see column 2 lines 27-28).
Therefore the Examiner respectfully disagrees.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-3, 5, 8, 10-12, 14, 17, 19, and 22-25 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
The claim(s) recite(s) collecting data, standardizing data for comparison by transcribing audio/speech to written text, comparing data during different time periods according to a model (set of rules), determining, by a model (set of rules), new data that emerged based on the occurrence of the new data in a second time period but not in a first time period, storing results of the comparison and the determination, and providing the results of the comparison to another entity.
Collecting data, standardizing data for comparison by transcribing audio/speech to written text, comparing data during different time periods according to a model (set of rules), determining, by a model (set of rules), new data that emerged based on the occurrence of the new data in a second time period but not in a first time period, storing results of the comparison and the determination, and providing the results of the comparison to another entity recite observations, evaluations, judgments, and opinions that can be performed in the human mind or with a physical aid like pen and paper; accordingly, the claims recite a mental process (see MPEP 2106.04(a)(2)).
Further, the claimed idea of collecting data, standardizing data for comparison by transcribing audio/speech to written text, comparing data during different time periods according to a model (set of rules), determining, by a model (set of rules), new data that emerged based on the occurrence of the new data in a second time period but not in a first time period, storing results of the comparison and the determination, and providing the results of the comparison to another entity is related to the subject matter of managing personal behavior or relationships or interactions between people, including social activities, teaching, and following rules or instructions, as this specific data relates to call center communications. This managing personal behavior or relationships or interactions between people, including social activities, teaching, and following rules or instructions, is a certain method of organizing human activity (see MPEP 2106.04(a)(2)).
Mental processes and certain methods of organizing human activity are in the groupings of enumerated abstract ideas, and hence the claims recite an abstract idea.
This judicial exception is not integrated into a practical application because the claims merely recite limitations that are not indicative of integration into a practical application in that the claims merely recite:
(1) Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea (see MPEP 2106.05(f)); and (2) generally linking the use of the judicial exception to a particular technological environment or field of use (see MPEP 2106.05(h)).
Specifically as recited in the claims:
Examiner notes that the additional elements have been bolded and underlined. Limitations not bolded and underlined are considered a part of the abstract idea.
1. (Currently Amended) A computing platform, comprising:
at least one processor;
a communication interface communicatively coupled to the at least one processor;
and a memory storing computer-readable instructions that, when executed by the at least one processor, cause the computing platform to:
receive customer data, wherein the customer data includes voice data from one or more interactions between customers and call center associates;
convert, using natural language processing, the customer data including the voice data to generate digitized text data associated with a plurality of customer issues;
for a first time period:
generate a first time period body of terms identified from the digitized text data, the first time period body of terms being identified from the digitized text data captured during the first time period, wherein generating the first time period body of terms includes pre-processing the digitized text data including stemming terms in the digitized text data and wherein generating the first time period body of terms includes generating a first document matrix for the first time period wherein the first document matrix identifies terms and a number of appearances of each term appearing in each document representing a respective interaction between a customer and a call center associate during the first time period;
for a second time period subsequent to the first time period: generate a second time period body of terms identified from the digitized text data, the second time period body of terms being identified from the digitized text data captured during the second time period and wherein generating the second time period body of terms includes generating a second document matrix for the second time period wherein the second document matrix identifies terms and a number of appearances of each term appearing in each document representing a respective interaction between a customer and a call center associate during the second time period;
subtract the first time period body of terms document matrix from the second time period body document matrix of terms to identify filtered terms appearing in the second time period body of terms document matrix but not in the first time period body of terms document matrix;
identify the filtered terms appearing in the second time period body of terms document matrix but not in the first time period body of terms document matrix as emerging terms for the second time period;
execute a machine learning model, executing the machine learning model including using the filtered terms as inputs in the machine learning model to output, by the machine learning model, an emerging topic label;
and store the emerging topic label in a database of labels available to categorize subsequent customer issues.
2. (Original) The computing platform of claim 1, wherein the machine learning model uses Latent Dirichlet Allocation (LDA) to analyze the filtered terms.
3. (Currently Amended) The computing platform of claim 2, further including instruction that, when executed, cause the computing platform to: executing execute the machine learning model on the first time period body of terms to identify recurring terms, wherein executing the machine learning model on the first time period body of terms includes using the first time period body of terms as inputs in the machine learning model to output, by the machine learning model, the recurring terms.
4. (Cancelled)
5. (Previously Presented) The computing platform of claim 1, wherein pre- processing the digitized text data further includes at least one of: converting all text to lowercase, removing stop words, or removing special characters.
6. (Cancelled)
7. (Cancelled)
8. (Original) The computing platform of claim 1, further including instructions that, when executed, cause the computing platform to:
transmit the emerging topic label to a computing device, wherein transmitting the emerging topic label to the computing device causes the emerging topic label to display on a display of the computing device.
9. (Cancelled)
10. (Currently Amended) A method, comprising:
receiving, by a computing platform, the computing platform having at least one processor, and memory, customer data, wherein the customer data includes voice data from one or more interactions between customers and call center associates;
converting, by the at least one processor and using natural language processing, the customer data including the voice data to generate digitized text data associated with a plurality of customer issues;
for a first time period:
generating, by the at least one processor, a first time period body of terms identified from the digitized text data, the first time period body of terms being identified from the digitized text data captured during the first time period, wherein generating the first time period body of terms includes pre-processing the digitized text data including stemming terms in the digitized text data and wherein generating the first time period body of terms includes generating a first document matrix for the first time period wherein the first document matrix identifies terms and a number of appearances of each term appearing in each document representing a respective interaction between a customer and a call center associate during the first time period;
for a second time period subsequent to the first time period:
generating, by the at least one processor, a second time period body of terms identified from the digitized text data, the second time period body of terms being identified from the digitized text data captured during the second time period and wherein generating the second time period body of terms includes generating a second document matrix for the second time period wherein the second document matrix identifies terms and a number of appearances of each term appearing in each document representing a respective interaction between a customer and a call center associate during the second time period;
subtracting, by the at least one processor, the first time period body of terms document matrix from the second document matrix time period body of terms to identify filtered terms appearing in the second document matrix time period body of terms but not in the first document matrix time period body of terms;
identifying, by the at least one processor, the filtered terms appearing in the second document matrix time period body of terms but not in the first document matrix time period body of terms as emerging terms for the second time period;
executing, by the at least one processor, a machine learning model, executing the machine learning model including using the filtered terms as inputs in the machine learning model to output, by the machine learning model, an emerging topic label;
and storing, by the at least one processor, the emerging topic label in a database of labels available to categorize subsequent customer issues.
11. (Original) The method of claim 10, wherein the machine learning model uses Latent Dirichlet Allocation (LDA) to analyze the filtered terms.
12. (Original) The method of claim 11, further including: executing, by the at least one processor, the machine learning model on the first time period body of terms to identify recurring terms, wherein executing the machine learning model on the first time period body of terms includes using the first time period body of terms as inputs in the machine learning model to output, by the machine learning model, the recurring terms.
13. (Cancelled)
14. (Previously Presented) The method of claim 10, wherein pre-processing the digitized text data further includes at least one of: converting all text to lowercase, removing stop words, or removing special characters.
15. (Cancelled)
16. (Cancelled)
17. (Original) The method of claim 10, further including: transmitting, by the at least one processor, the emerging topic label to a computing device, wherein transmitting the emerging topic label to the computing device causes the emerging topic label to display on a display of the computing device.
18. (Cancelled)
19. (Currently Amended) One or more non-transitory computer-readable media storing instructions that, when executed by a computing platform comprising at least one processor, memory, and a communication interface, cause the computing platform to:
receive customer data, wherein the customer data includes voice data from one or more interactions between customers and call center associates;
convert, using natural language processing, the customer data including the voice data to generate digitized text data associated with a plurality of customer issues;
for a first time period:
generate a first time period body of terms identified from the digitized text data, the first time period body of terms being identified from the digitized text data captured during the first time period, wherein generating the first time period body of terms includes pre-processing the digitized text data including stemming terms in the digitized text data and wherein generating the first time period body of terms includes generating a first document matrix for the first time period wherein the first document matrix identifies terms and a number of appearances of each term appearing in each document representing a respective interaction between a customer and a call center associate during the first time period;
for a second time period subsequent to the first time period:
generate a second time period body of terms identified from the digitized text data, the second time period body of terms being identified from the digitized text data captured during the second time period and wherein generating the second time period body of terms includes generating a second document matrix for the second time period wherein the second document matrix identifies terms and a number of appearances of each term appearing in each document representing a respective interaction between a customer and a call center associate during the second time period;
subtract the first document matrix time period body of terms from the second document matrix time period body of terms to identify filtered terms appearing in the second document matrix time period body of terms but not in the first document matrix time period body of terms;
identify the filtered terms appearing in the second document matrix time period body of terms but not in the first document matrix time period body of terms as emerging terms for the second time period;
execute a machine learning model, executing the machine learning model including using the filtered terms as inputs in the machine learning model to output, by the machine learning model, an emerging topic label; and store the emerging topic label in a database of labels available to categorize subsequent customer issues.
20. (Cancelled)
21. (Cancelled)
22. (New) The one or more non-transitory computer-readable media of claim 19, wherein the machine learning model uses Latent Dirichlet Allocation (LDA) to analyze the filtered terms.
23. (New) The one or more non-transitory computer-readable media of claim 19, further including instruction that, when executed, cause the computing platform to:
execute the machine learning model on the first time period body of terms to identify recurring terms, wherein executing the machine learning model on the first time period body of terms includes using the first time period body of terms as inputs in the machine learning model to output, by the machine learning model, the recurring terms.
24. (New) The one or more non-transitory computer-readable media of claim 19, wherein pre-processing the digitized text data further includes at least one of: converting all text to lowercase, removing stop words, or removing special characters.
25. (New) The one or more non-transitory computer-readable media of claim 19, further including instructions that, when executed, cause the computing platform to: transmit the emerging topic label to a computing device, wherein transmitting the emerging topic label to the computing device causes the emerging topic label to display on a display of the computing device.
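For illustration only, the document-matrix subtraction recited in claims 1, 10, and 19 (identifying terms that appear in the second time period's matrix but not the first's) can be sketched as follows. This is a minimal hypothetical sketch; the function and variable names are the Examiner's illustration, and the claims themselves do not specify any particular implementation:

```python
from collections import Counter

def document_matrix(documents):
    """One row (Counter) per document: term -> number of appearances."""
    return [Counter(doc.lower().split()) for doc in documents]

def emerging_terms(first_period_docs, second_period_docs):
    """Terms appearing in the second period's matrix but not in the first's."""
    first_vocab = set()
    for row in document_matrix(first_period_docs):
        first_vocab.update(row)
    second_vocab = set()
    for row in document_matrix(second_period_docs):
        second_vocab.update(row)
    # "Subtracting" the first matrix from the second reduces, for this
    # filtering purpose, to a vocabulary set difference.
    return second_vocab - first_vocab
```

For example, emerging_terms(["card declined at store"], ["card declined", "mobile app crash"]) yields the filtered terms {"mobile", "app", "crash"}.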
As per claim 1, the claims recite collecting data, standardizing data for comparison by transcribing audio/speech to written text and stemming data, comparing data during different time periods according to a model (set of rules), determining, by a model (set of rules), new data that emerged based on the occurrence of the new data in a second time period but not in a first time period, storing results of the comparison and the determination, and providing the results of the comparison to another entity, which are mental process and certain methods of organizing human activity steps. These are part of the abstract idea.
The additional elements, namely that these steps are instead performed by a “computing” platform and by software running on a computer (“at least one processor; a communication interface communicatively coupled to the at least one processor and a memory storing computer-readable instructions that, when executed by the at least one processor, cause the computing platform to:”), that the text is “digitized”, that audio or speech is transcribed to text via “natural language processing”, that the model that determines the new topics is a “machine learning model”, and that information is stored in a “database”, merely result in “apply it”.
Here the additional elements in the claim invoke computers or other machinery merely as a tool to perform an existing process. Use of a computer or other machinery in its ordinary capacity for economic or other tasks (e.g., to receive, store, or transmit data), or simply adding a general purpose computer or computer components after the fact to an abstract idea, does not integrate a judicial exception into a practical application or provide significantly more. Further, this can be considered nothing more than generally linking the abstract idea to the field of computers.
Further, with respect to “natural language processing” and “machine learning models”, the claims provide no details about how the particular “natural language processing” and “machine learning models” operate to derive the information, other than that they are generally being used to determine that information. The “natural language processing” and “machine learning models” are generally used to apply the abstract idea without placing any limitations on how they derive the information. In addition, the limitation recites only the idea of determining speech and emerging topics via “natural language processing” and “machine learning models” without any details on how this is accomplished. The claim omits any details as to how the “natural language processing” and “machine learning models” solve a technical problem and instead recites only the idea of a solution or outcome. The claim invokes the “natural language processing” and “machine learning models” merely as a tool for performing the recited steps rather than purporting to improve the technology or a computer. This can also be viewed as nothing more than generally linking the abstract idea to the field of computers (additionally, see USPTO Example 48).
As per claim 2, the claim recites using a model to analyze the filtered terms. This is part of the abstract idea, as this recites both a mental process and a certain method of organizing human activity. The additional element that merely describes the model as being a machine learning model that uses Latent Dirichlet Allocation (LDA) merely results in “apply it”.
Here there are no details about a particular machine learning model or how it operates to derive the information, other than that it uses LDA. The machine learning model is used to generally apply the abstract idea without placing any limitation on how the machine learning model operates to derive the information, other than that it uses LDA. In addition, the limitation recites only the idea of analyzing the filtered terms using a machine learning model with LDA, without details on how this is accomplished. The claim omits any details as to how the machine learning model with LDA solves a technical problem and instead recites only the idea of a solution or outcome. The claim invokes the machine learning model with LDA merely as a tool for performing the recited step rather than purporting to improve the technology or a computer. This can also be viewed as nothing more than generally linking it to the field of computers (additionally, see USPTO Example 48).
The additional element that this is being performed by a “computing” platform is addressed above in claim 1.
As per claim 3, the claim recites using a model to identify recurring terms, where one of the inputs into the model is the terms in the first time period. This is part of the abstract idea, as this recites both a mental process and a certain method of organizing human activity.
The additional element that merely describes the model as being a machine learning model merely results in “apply it” and generally linking it to the field of computers, as discussed above in claim 1.
The additional element that this is being performed by a “computing” platform and software running on a computer “including instruction that, when executed, cause the computing platform” are addressed above in claim 1.
As per claim 5, the claims recite the mental process or certain method of organizing human activity step of removing unwanted information from transcripts. The additional elements that the text is “digitized” and that this is performed by a “computing” platform are discussed above in claim 1.
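The pre-processing options recited in claims 5, 14, and 24 (lowercasing, stop word removal, special-character removal, plus the stemming of claim 1) can be sketched for illustration as follows. The stemmer here is a deliberately naive suffix-stripper standing in for a real one (e.g., Porter), and all names and the stop word list are hypothetical:

```python
import re

STOP_WORDS = {"the", "a", "an", "to", "of", "and", "or", "is", "my"}

def naive_stem(term):
    """Crude suffix stripping as a stand-in for a real stemming algorithm."""
    for suffix in ("ing", "ed", "s"):
        if term.endswith(suffix) and len(term) > len(suffix) + 2:
            return term[: -len(suffix)]
    return term

def preprocess(text):
    text = text.lower()                       # convert all text to lowercase
    text = re.sub(r"[^a-z0-9\s]", " ", text)  # remove special characters
    tokens = [t for t in text.split() if t not in STOP_WORDS]  # drop stop words
    return [naive_stem(t) for t in tokens]    # stem remaining terms
```

For example, preprocess("My card was DECLINED!") yields ["card", "was", "declin"].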
As per claim 8, the claims recite the mental process or certain method of organizing human activity step of transmitting results to another entity for display. The additional elements that this is performed by software running on a computer (“further including instructions that, when executed, cause the computing platform to:”) and by a “computing” platform result in the same as discussed above in claim 1.
The additional element that the entity is a “computing device” with a display merely results in “apply it”. Here the claim invokes computers or other machinery merely as a tool to perform an existing process. Use of a computer or other machinery in its ordinary capacity for economic or other tasks (e.g., to receive, store, or transmit data), or simply adding a general purpose computer or computer components after the fact to an abstract idea, does not integrate a judicial exception into a practical application or provide significantly more. Further, this can be considered nothing more than generally linking it to the field of computers.
As per claim 10, the claims recite collecting data; standardizing data for comparison by transcribing audio/speech to written text and stemming data; comparing data during different time periods according to a model (set of rules); determining, by a model (set of rules), new data that emerged based on the occurrence of the new data in a second time period but not a first time period; storing results of the comparison and the determination; and providing the results of the comparison to another entity, which are mental process and certain method of organizing human activity steps. These are part of the abstract idea.
The additional elements that these steps are instead performed by a “computing” platform with at least one processor and memory, that the text is “digitized,” that audio or speech is transcribed to text via “natural language processing,” that the model that determines the new topics is a “machine learning model,” and that information is stored in a “database” merely result in “apply it” and generally linking the use of the exception to the field of computers, as discussed above in claim 1.
As per claim 11, the claim recites using a model to analyze the filtered terms. This is part of the abstract idea, as this recites both a mental process and a certain method of organizing human activity. The additional element that merely describes the model as being a machine learning model that uses Latent Dirichlet Allocation (LDA) merely results in “apply it” and generally linking the use of the exception to the field of computers, as discussed above in claim 2.
As per claim 12, the claim recites using a model to identify recurring terms, where one of the inputs into the model is the terms in the first time period. This is part of the abstract idea, as this recites both a mental process and a certain method of organizing human activity. The additional elements that merely describe the model as being a machine learning model and a processor performing the functions merely result in “apply it” and generally linking the use of the exception to the field of computers, as discussed above in claims 1 and 10.
As per claim 14, the claims recite the mental process and certain method of organizing human activity step of removing unwanted information from transcripts. The additional element that the text is “digitized” is discussed above in claims 1 and 10.
As per claim 17, the claims recite the mental process and certain method of organizing human activity step of transmitting results to another entity for display. The additional elements that this is performed by a processor and that the entity is a “computing device” merely result in “apply it” and generally linking the use of the exception to the field of computers, as discussed above in claims 1 and 8.
As per claim 19, the claims recite collecting data; standardizing data for comparison by transcribing audio/speech to written text and stemming data; comparing data during different time periods according to a model (set of rules); determining, by a model (set of rules), new data that emerged based on the occurrence of the new data in a second time period but not a first time period; storing results of the comparison and the determination; and providing the results of the comparison to another entity, which are mental process and certain method of organizing human activity steps. These are part of the abstract idea. The additional elements that these steps are instead performed by software running on a computer (“one or more non-transitory computer-readable media storing instructions that, when executed by a computing platform comprising at least one processor, memory, and a communication interface, cause the computing platform to:”), that the text is “digitized,” that audio or speech is transcribed to text via “natural language processing,” that the model that determines the new topics is a “machine learning model,” and that information is stored in a “database” merely result in “apply it” and generally linking the use of the exception to the field of computers, as discussed above in claim 1.
As per claim 22, the claim recites using a model to analyze the filtered terms. This is part of the abstract idea, as this recites both a mental process and a certain method of organizing human activity. The additional element that merely describes the model as being a machine learning model that uses Latent Dirichlet Allocation (LDA) merely results in “apply it” and generally linking the use of the exception to the field of computers, as discussed above in claim 2. The claim being a computer-readable medium claim is addressed above in claim 19.
As per claim 23, the claim recites using a model to identify recurring terms, where one of the inputs into the model is the terms in the first time period. This is part of the abstract idea, as this recites both a mental process and a certain method of organizing human activity. The additional element that merely describes the model as being a machine learning model merely results in “apply it” and generally linking the use of the exception to the field of computers, as discussed above in claims 1 and 10. The claim being a computer-readable medium claim is addressed above in claim 19.
As per claim 24, the claims recite the mental process or certain method of organizing human activity step of removing unwanted information from transcripts. The additional element that the text is “digitized” is discussed above in claims 1 and 10. The claim being a computer-readable medium claim is addressed above in claim 19.
As per claim 25, the claims recite the mental process or certain method of organizing human activity step of transmitting results to another entity for display. The additional element that the entity is a “computing device” merely results in “apply it” and generally linking the use of the exception to the field of computers, as discussed above in claim 8. The claim being a computer-readable medium claim is addressed above in claim 19.
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the claims merely recite limitations that are not indicative of an inventive concept (“significantly more”), in that the claims merely recite:
(1) adding the words “apply it” (or an equivalent) to the judicial exception, mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea (see MPEP 2106.05(f)); and (2) generally linking the use of the judicial exception to a particular technological environment or field of use (see MPEP 2106.05(h)), as detailed above with respect to the practical application step.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-3, 5, 8, 10-12, 14, 17, 19, and 22-25 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Can (United States Patent No. US 11,849,069).
As per claim 1, Can teaches A computing platform, comprising: (see abstract, Examiner’s note: system to determine emerging topics from call center based communications).
at least one processor; (see Figure 12, Examiner’s note: reference character 1204 describes a processor).
a communication interface communicatively coupled to the at least one processor; (see Figure 12, Examiner’s note: reference characters 1206, 1203, and 1224 are all communication interfaces communicatively coupled in the computer system 1200 to the processor).
and a memory storing computer-readable instructions that, when executed by the at least one processor, cause the computing platform to: (see column 17, line 58 through column 18, line 3, Examiner’s note: non-transitory computer-readable medium having software stored thereon to cause the processing devices to operate as described).
receive customer data, wherein the customer data includes voice data from one or more interactions between customers and call center associates; (see column 3 lines 38-42 and column 3 lines 50-60, Examiner’s note: teaches that call center calls are routed to a call agent through a call router, which may analyze a user’s previous call interactions, and that once a call agent is selected, a speech recognizer may analyze speech in real time by analyzing utterances).
convert, using natural language processing, the customer data including the voice data to generate digitized text data associated with a plurality of customer issues; (see column 5 lines 7-12 and column 7 lines 52-57, Examiner’s note: incoming calls are transcribed automatically into transcripts using the speech recognizer (see column 5 lines 7-12), and NLP is further used for text generation or word, phrase, or sentence analysis or communication (see column 7 lines 52-57)).
for a first time period: generate a first time period body of terms identified from the digitized text data, the first time period body of terms being identified from the digitized text data captured during the first time period, wherein generating the first time period body of terms includes pre-processing the digitized text data including stemming terms in the digitized text data and wherein generating the first time period body of terms includes generating a first document matrix for the first time period wherein the first document matrix identifies terms and a number of appearances of each term appearing in each document representing a respective interaction between a customer and a call center associate during the first time period; for a second time period subsequent to the first time period: generate a second time period body of terms identified from the digitized text data, the second time period body of terms being identified from the digitized text data captured during the second time period and wherein generating the second time period body of terms includes generating a second document matrix for the second time period wherein the second document matrix identifies terms and a number of appearances of each term appearing in each document representing a respective interaction between a customer and a call center associate during the second time period; (see column 5 lines 8-10, column 6 lines 9-34, column 9 lines 1-5, and column 12 lines 50-57, Examiner’s note: teaches determining topics at time (t) (first time period as claimed) and new topics at time (t+1) (Second time period as claimed) based on frequency (number of appearances as claimed) of the keyword (term as claimed) (see column 6 lines 9-34), further teaches using stemming to perform analysis on the call center records (see column 9 lines 1-5), and teaches topics are predicted in a call transcript instance or aggregates call transcript instances (see column 12 lines 50-57), where a transcript 
relates to an interaction between a customer and a call center (see column 5 lines 8-10)).
subtract the first time period body of terms document matrix from the second time period body of terms document matrix to identify filtered terms appearing in the second time period body of terms document matrix but not in the first time period body of terms document matrix; identify the filtered terms appearing in the second time period body of terms document matrix but not in the first time period body of terms document matrix as emerging terms for the second time period; (see column 2 lines 52-65 and column 6 line 49 through column 7 line 5, Examiner’s note: teaches comparing current keywords to a previous time frame to understand what new themes appear that were previously not found; the example here is COVID emerging).
execute a machine learning model, executing the machine learning model including using the filtered terms as inputs in the machine learning model to output, by the machine learning model, an emerging topic label; (see column 6 lines 21-34 and lines 63-65, Examiner’s note: teaches using a Latent Dirichlet Allocation (LDA) technique to compare different topic keywords over time, and an example of a new cluster would be COVID topic).
and store the emerging topic label in a database of labels available to categorize subsequent customer issues (see column 12 lines 62-67, Examiner’s note: teaches storing the results of the machine learning in a database).
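For illustration only (not a characterization of the claims or of Can, and using hypothetical transcripts and term names), the matrix-subtraction step mapped above can be sketched as follows: build a per-period document-term count structure, then keep the terms whose first appearance is in the second period.

```python
from collections import Counter

# Hypothetical transcripts (one string per call) for each time period.
period_1_docs = ["card payment declined", "payment late fee question"]
period_2_docs = ["card payment declined", "covid branch closure question"]

def document_matrix(docs):
    # One Counter per document: term -> number of appearances,
    # i.e. one row of a document-term matrix.
    return [Counter(doc.split()) for doc in docs]

matrix_1 = document_matrix(period_1_docs)
matrix_2 = document_matrix(period_2_docs)

# "Subtracting" the first-period matrix from the second-period matrix,
# at the vocabulary level, leaves only the terms appearing in the
# second period but not the first: the emerging terms.
terms_1 = {term for row in matrix_1 for term in row}
terms_2 = {term for row in matrix_2 for term in row}
emerging_terms = sorted(terms_2 - terms_1)
print(emerging_terms)  # → ['branch', 'closure', 'covid']
```

The sketch treats "subtraction" as a set difference over the matrices' vocabularies; a count-level difference would work analogously.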
As per claim 2, Can teaches
wherein the machine learning model uses Latent Dirichlet Allocation (LDA) to analyze the filtered terms. (see column 6 lines 21-34 and lines 63-65, Examiner’s note: teaches using a Latent Dirichlet Allocation (LDA) technique to compare different topic keywords over time, and an example of a new cluster would be COVID topic).
As per claim 3, Can teaches
further including instruction that, when executed, cause the computing platform to: (see column 17, line 58 through column 18, line 3, Examiner’s note: non-transitory computer-readable medium having software stored thereon to cause the processing devices to operate as described).
execute the machine learning model on the first time period body of terms to identify recurring terms, wherein executing the machine learning model on the first time period body of terms includes using the first time period body of terms as inputs in the machine learning model to output, by the machine learning model, the recurring terms. (see column 2 lines 50-60, column 6 lines 20-35, and column 6 lines 56-65, Examiner’s note: teaches comparing the highest-ranked keywords, where highest rank is based on frequency of occurrence (e.g., recurring), with previous highest-ranked keywords (e.g., recurring) to determine a new keyword. Teaches this previous occurrence may be, for example, last week or last month. Further teaches new or emerging themes may be provided, as well as themes that are not alive anymore (e.g., not recurring), and teaches certain clusters are still prevalent (e.g., recurring)).
As per claim 5, Can teaches
wherein pre- processing the digitized text data further includes at least one of: converting all text to lowercase, removing stop words, or removing special characters (see column 13 lines 40-44, Examiner’s note: teaches stop word filtering).
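The pre-processing alternatives recited here (lowercasing, stop-word removal, special-character removal, plus the stemming of claim 1) can be sketched as follows. The stop-word list and the toy suffix stripper are hypothetical stand-ins (a real pipeline would use a full stemmer such as Porter's):

```python
import re

# Hypothetical stop-word list for illustration.
STOP_WORDS = {"the", "a", "an", "to", "is", "my", "are"}

def naive_stem(word: str) -> str:
    # Toy suffix stripper standing in for a real stemmer (e.g., Porter).
    for suffix in ("ing", "ed", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def preprocess(text: str) -> list:
    text = text.lower()                       # convert all text to lowercase
    text = re.sub(r"[^a-z0-9\s]", " ", text)  # remove special characters
    tokens = [t for t in text.split() if t not in STOP_WORDS]  # remove stop words
    return [naive_stem(t) for t in tokens]    # stem the remaining terms

print(preprocess("My card PAYMENTS are failing!"))  # → ['card', 'payment', 'fail']
```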
As per claim 8, Can teaches
further including instructions that, when executed, cause the computing platform to: transmit the emerging topic label to a computing device, wherein transmitting the emerging topic label to the computing device causes the emerging topic label to display on a display of the computing device (see column 15 lines 40-47, Examiner’s note: topics labeled as emerging may be rendered in a graphical user interface (GUI) and provided to various components of the call center including manager displays and call agent displays).
As per claim 10, Can teaches A method, comprising: (see abstract, Examiner’s note: method to determine emerging topics from call center based communications).
receiving, by a computing platform, the computing platform having at least one processor, (see Figure 12, Examiner’s note: reference character 1204 describes a processor).
and memory, (see column 17, line 58 through column 18, line 3, Examiner’s note: non-transitory computer-readable medium having software stored thereon to cause the processing devices to operate as described).
customer data, wherein the customer data includes voice data from one or more interactions between customers and call center associates; (see column 3 lines 38-42 and column 3 lines 50-60, Examiner’s note: teaches that call center calls are routed to a call agent through a call router, which may analyze a user’s previous call interactions, and that once a call agent is selected, a speech recognizer may analyze speech in real time by analyzing utterances).
converting, by the at least one processor and using natural language processing, the customer data including the voice data to generate digitized text data associated with a plurality of customer issues; (see column 5 lines 7-12 and column 7 lines 52-57, Examiner’s note: incoming calls are transcribed automatically into transcripts using the speech recognizer (see column 5 lines 7-12), and NLP is further used for text generation or word, phrase, or sentence analysis or communication (see column 7 lines 52-57)).
for a first time period: generating, by the at least one processor, a first time period body of terms identified from the digitized text data, the first time period body of terms being identified from the digitized text data captured during the first time period, wherein generating the first time period body of terms includes pre-processing the digitized text data including stemming terms in the digitized text data and wherein generating the first time period body of terms includes generating a first document matrix for the first time period wherein the first document matrix identifies terms and a number of appearances of each term appearing in each document representing a respective interaction between a customer and a call center associate during the first time period; for a second time period subsequent to the first time period: generating, by the at least one processor, a second time period body of terms identified from the digitized text data, the second time period body of terms being identified from the digitized text data captured during the second time period and wherein generating the second time period body of terms includes generating a second document matrix for the second time period wherein the second document matrix identifies terms and a number of appearances of each term appearing in each document representing a respective interaction between a customer and a call center associate during the second time period; (see column 5 lines 8-10, column 6 lines 9-34, column 9 lines 1-5, and column 12 lines 50-57, Examiner’s note: teaches determining topics at time (t) (first time period as claimed) and new topics at time (t+1) (Second time period as claimed) based on frequency (number of appearances as claimed) of the keyword (term as claimed) (see column 6 lines 9-34), further teaches using stemming to perform analysis on the call center records (see column 9 lines 1-5), and teaches topics are predicted in a call transcript instance or aggregates call 
transcript instances (see column 12 lines 50-57), where a transcript relates to an interaction between a customer and a call center (see column 5 lines 8-10)).
subtracting, by the at least one processor, the first time period body of terms document matrix from the second time period body of terms document matrix to identify filtered terms appearing in the second time period body of terms document matrix but not in the first time period body of terms document matrix; identifying, by the at least one processor, the filtered terms appearing in the second time period body of terms document matrix but not in the first time period body of terms document matrix as emerging terms for the second time period; (see column 2 lines 52-65 and column 6 line 49 through column 7 line 5, Examiner’s note: teaches comparing current keywords to a previous time frame to understand what new themes appear that were previously not found; the example here is COVID emerging).
executing, by the at least one processor, a machine learning model, executing the machine learning model including using the filtered terms as inputs in the machine learning model to output, by the machine learning model, an emerging topic label; (see column 6 lines 21-34 and lines 63-65, Examiner’s note: teaches using a Latent Dirichlet Allocation (LDA) technique to compare different topic keywords over time, and an example of a new cluster would be COVID topic).
and storing, by the at least one processor, the emerging topic label in a database of labels available to categorize subsequent customer issues. (see column 12 lines 62-67, Examiner’s note: teaches storing the results of the machine learning in a database).
As per claim 11, Can teaches
wherein the machine learning model uses Latent Dirichlet Allocation (LDA) to analyze the filtered terms. (see column 6 lines 21-34 and lines 63-65, Examiner’s note: teaches using a Latent Dirichlet Allocation (LDA) technique to compare different topic keywords over time, and an example of a new cluster would be COVID topic).
As per claim 12, Can teaches
further including: executing, by the at least one processor, the machine learning model on the first time period body of terms to identify recurring terms, wherein executing the machine learning model on the first time period body of terms includes using the first time period body of terms as inputs in the machine learning model to output, by the machine learning model, the recurring terms. (see column 2 lines 50-60, column 6 lines 20-35, and column 6 lines 56-65, Examiner’s note: teaches comparing the highest-ranked keywords, where highest rank is based on frequency of occurrence (e.g., recurring), with previous highest-ranked keywords (e.g., recurring) to determine a new keyword. Teaches this previous occurrence may be, for example, last week or last month. Further teaches new or emerging themes may be provided, as well as themes that are not alive anymore (e.g., not recurring), and teaches certain clusters are still prevalent (e.g., recurring)).
As per claim 14, Can teaches
wherein pre-processing the digitized text data further includes at least one of: converting all text to lowercase, removing stop words, or removing special characters. (see column 13 lines 40-44, Examiner’s note: teaches stop word filtering).
As per claim 17, Can teaches
further including: transmitting, by the at least one processor, the emerging topic label to a computing device, wherein transmitting the emerging topic label to the computing device causes the emerging topic label to display on a display of the computing device. (see column 15 lines 40-47, Examiner’s note: topics labeled as emerging may be rendered in a graphical user interface (GUI) and provided to various components of the call center including manager displays and call agent displays).
As per claim 19, Can teaches One or more non-transitory computer-readable media storing instructions that, when executed by a computing platform comprising at least one processor, memory, and a communication interface, cause the computing platform to: (see column 17, line 58 through column 18, line 3, Examiner’s note: non-transitory computer-readable medium having software stored thereon to cause the processing devices to operate as described).
receive customer data, wherein the customer data includes voice data from one or more interactions between customers and call center associates; (see column 3 lines 38-42 and column 3 lines 50-60, Examiner’s note: teaches that call center calls are routed to a call agent through a call router, which may analyze a user’s previous call interactions, and that once a call agent is selected, a speech recognizer may analyze speech in real time by analyzing utterances).
convert, using natural language processing, the customer data including the voice data to generate digitized text data associated with a plurality of customer issues; (see column 5 lines 7-12 and column 7 lines 52-57, Examiner’s note: incoming calls are transcribed automatically into transcripts using the speech recognizer (see column 5 lines 7-12), and NLP is further used for text generation or word, phrase, or sentence analysis or communication (see column 7 lines 52-57)).
for a first time period: generate a first time period body of terms identified from the digitized text data, the first time period body of terms being identified from the digitized text data captured during the first time period, wherein generating the first time period body of terms includes pre-processing the digitized text data including stemming terms in the digitized text data and wherein generating the first time period body of terms includes generating a first document matrix for the first time period wherein the first document matrix identifies terms and a number of appearances of each term appearing in each document representing a respective interaction between a customer and a call center associate during the first time period; for a second time period subsequent to the first time period :generate a second time period body of terms identified from the digitized text data, the second time period body of terms being identified from the digitized text data captured during the second time period and wherein generating the second time period body of terms includes generating a second document matrix for the second time period wherein the second document matrix identifies terms and a number of appearances of each term appearing in each document representing a respective interaction between a customer and a call center associate during the second time period; (see column 5 lines 8-10, column 6 lines 9-34, column 9 lines 1-5, and column 12 lines 50-57, Examiner’s note: teaches determining topics at time (t) (first time period as claimed) and new topics at time (t+1) (Second time period as claimed) based on frequency (number of appearances as claimed) of the keyword (term as claimed) (see column 6 lines 9-34), further teaches using stemming to perform analysis on the call center records (see column 9 lines 1-5), and teaches topics are predicted in a call transcript instance or aggregates call transcript instances (see column 12 lines 50-57), where a transcript 
relates to an interaction between a customer and a call center (see column 5 lines 8-10)).
subtract the first time period body of terms document matrix from the second time period body of terms document matrix to identify filtered terms appearing in the second time period body of terms document matrix but not in the first time period body of terms document matrix; identify the filtered terms appearing in the second time period body of terms document matrix but not in the first time period body of terms document matrix as emerging terms for the second time period; (see column 2 lines 52-65 and column 6 line 49 through column 7 line 5, Examiner’s note: teaches comparing current keywords to a previous time frame to understand what new themes appear that were previously not found; the example here is COVID emerging).
execute a machine learning model, executing the machine learning model including using the filtered terms as inputs in the machine learning model to output, by the machine learning model, an emerging topic label; (see column 6 lines 21-34 and lines 63-65, Examiner’s note: teaches using a Latent Dirichlet Allocation (LDA) technique to compare different topic keywords over time, and an example of a new cluster would be COVID topic).
and store the emerging topic label in a database of labels available to categorize subsequent customer issues. (see column 12 lines 62-67, Examiner’s note: teaches storing the results of the machine learning in a database).
As per claim 22, Can teaches
wherein the machine learning model uses Latent Dirichlet Allocation (LDA) to analyze the filtered terms. (see column 6 lines 21-34 and lines 63-65, Examiner’s note: teaches using a Latent Dirichlet Allocation (LDA) technique to compare different topic keywords over time, and an example of a new cluster would be COVID topic).
As per claim 23, Can teaches
further including instruction that, when executed, cause the computing platform to: execute the machine learning model on the first time period body of terms to identify recurring terms, wherein executing the machine learning model on the first time period body of terms includes using the first time period body of terms as inputs in the machine learning model to output, by the machine learning model, the recurring terms. (see column 2 lines 50-60, column 6 lines 20-35, and column 6 lines 56-65, Examiner’s note: teaches comparing the highest-ranked keywords, where highest rank is based on frequency of occurrence (e.g., recurring), with previous highest-ranked keywords (e.g., recurring) to determine a new keyword. Teaches this previous occurrence may be, for example, last week or last month. Further teaches new or emerging themes may be provided, as well as themes that are not alive anymore (e.g., not recurring), and teaches certain clusters are still prevalent (e.g., recurring)).
As per claim 24, Can teaches
wherein pre-processing the digitized text data further includes at least one of: converting all text to lowercase, removing stop words, or removing special characters. (see column 13 lines 40-44, Examiner’s note: teaches stop word filtering).
As per claim 25, Can teaches
further including instructions that, when executed, cause the computing platform to: transmit the emerging topic label to a computing device, wherein transmitting the emerging topic label to the computing device causes the emerging topic label to display on a display of the computing device. (see column 15 lines 40-47, Examiner’s note: topics labeled as emerging may be rendered in a graphical user interface (GUI) and provided to various components of the call center including manager displays and call agent displays).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Hardage et al. (United States Patent Number: US 11,869,016) teaches identifying topics being discussed on various platforms like telephone lines or web chats and then providing information on a timeline of when that topic appeared on various platforms (see abstract).
Ankan et al. (United States Patent Number: US 8,909,643) teaches a system for inferring evolving and emerging topics in a dataset (see abstract).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to KIERSTEN SUMMERS whose telephone number is (571)272-6542. The examiner can normally be reached Monday - Friday 7am-3:30pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Nathan Uber, can be reached at 571-270-3923. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users.
To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/KIERSTEN V SUMMERS/Primary Examiner, Art Unit 3626