DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
35 USC § 101
Claims 1-20 were evaluated under 35 USC § 101 and have been determined to be statutory. Although the claim elements, as a whole, could arguably be characterized as a mental process, the human mind would not be able to process different machine-learning models for differing topics/keywords, present these keywords to identify skills needed and suggested to support agents, and/or identify and quantify trends on certain products by the user/caller. Even if a mental process may be considered at a higher/superficial level, the amount of decision making and trend identification, as claimed, would not be possible in a timely manner as a mental process; hence, the claims do pass under the Step 2A Prong One test (see MPEP 2106.04 II; for abstract ideas and mental steps, see MPEP 2106.04(a)(2) II, Certain Methods of Organizing Human Activity). Furthermore, per MPEP 706(I), the standard to be applied in all cases is the "preponderance of evidence" test. Lastly, as to the computer product claims 15-20, these claims include a non-transitory computer readable medium tied to program code executable by one or more processors; hence, these claims are statutory and are not considered to be computer program claims per se.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claim(s) 1-20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Ellison et al. (20210350385).
As per claim 1, Ellison et al. (20210350385) teaches a system for a machine-learning pipeline for ontology generation via large language models (as generating machine-learned models in a caller/agent environment – see Fig. 1, subblock 128, for the database generation, and Fig. 2, ML model training/generation), the system comprising:
one or more processors; and a non-transitory computer readable medium storing a plurality of instructions, which when executed (para 0068, processor, executing stored instructions – para 0069), cause the one or more processors to:
extract, by each of the plurality of types of machine-learning models, historical keywords from historical communications between support agents and customers, in response to receiving the historical communications (as extracting keywords – para 0021, using ML models – para 0043, in communications between agents/callers – para 0034);
select keywords which were extracted by at least a specific number of the plurality of types of machine-learning models; identify some of the selected keywords from communications between support agents and customers, in response to receiving the communications (as tracking the results of the scoring determined by the ML model – para 0051);
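For illustration only (not part of the record), the claimed selection step – retaining only keywords that were extracted by at least a specific number of the machine-learning models – amounts to a voting filter across model outputs. The sketch below is hypothetical; the model names, keywords, and threshold are assumptions, not drawn from Ellison or from the claims:

```python
from collections import Counter

def select_keywords(extractions, min_models=2):
    """Keep keywords extracted by at least `min_models` of the models.

    `extractions` maps a model name to the set of keywords it extracted.
    All names and the threshold value here are illustrative only.
    """
    votes = Counter()
    for keywords in extractions.values():
        votes.update(set(keywords))  # each model votes at most once per keyword
    return {kw for kw, n in votes.items() if n >= min_models}

# Hypothetical outputs from three types of models
extracted = {
    "statistical": {"refund", "billing", "router"},
    "unsupervised": {"billing", "router", "latency"},
    "llm": {"router", "refund"},
}
```

Raising `min_models` tightens the filter: at 2, any keyword confirmed by two model types survives; at 3, only keywords all three models agree on remain.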
and apply the identified keywords to at least one of recognizing skills required by a support agent to handle an open case, a trend in cases related to at least one of a product or a skill, or identifying skills for which support agents require additional training (examiner notes that this claim language is in the alternative; mapping Ellison et al. (20210350385) to the claim feature of "recognizing skills required by a support agent to handle an open case", see para 0051, and in this instance, tracking keywords that reflect call features, predicting the responses that were effective to the caller, and selecting by the ML model possible responses to present to the agent to achieve the agent's goals – para 0051).
As per claim 2, Ellison et al. (20210350385) teaches the system of claim 1, wherein the historical communications are received based on at least one of a time range or a maximum number of most recent historical communications (as tracking a date range of the historical communications – "conversation last 7 days", as an example – para 0054).
As per claim 3, Ellison et al. (20210350385) teaches the system of claim 1, wherein extracting historical keywords from historical communications between support agents and customers comprises
determining for each individual historical keyword that a count of historical communications which include the individual historical keyword exceeds a threshold number of historical communications (as using a scoring system to track a score satisfying a threshold, tied to keywords for a certain topic – para 0080).
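For illustration only, the thresholding recited in claim 3 – counting, for each keyword, how many historical communications contain it and keeping those whose count exceeds a threshold – can be sketched as below. The communications, keywords, and threshold value are hypothetical examples, not from the record:

```python
def frequent_keywords(historical_comms, threshold=2):
    """Return keywords appearing in more than `threshold` communications.

    `historical_comms` is a list of keyword sets, one per communication;
    the threshold value is an illustrative assumption.
    """
    counts = {}
    for comm in historical_comms:
        for kw in set(comm):  # count each communication at most once per keyword
            counts[kw] = counts.get(kw, 0) + 1
    return {kw for kw, n in counts.items() if n > threshold}

# Hypothetical keyword sets extracted from three historical communications
comms = [
    {"password", "reset"},
    {"password", "login"},
    {"password", "reset", "email"},
]
```

Here the per-communication count (not raw term frequency) drives the decision, matching the claim's "count of historical communications which include the individual historical keyword".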
As per claim 4, Ellison et al. (20210350385) teaches the system of claim 1, wherein each machine-learning model identifies any number of historical keywords in each historical communication as a set of historical keywords for the historical communication (as analyzing a change in keywords to detect a topic change – para 0053; examiner notes that in determining topic change, the use of stored/historical keywords tied to a topic is necessary for the topic change determination to work; also see para 0018 for historical data).
As per claim 5, Ellison et al. (20210350385) teaches the system of claim 1, wherein each historical communication comprises:
at least one of:
a textual communication or an audio communication, and is associated with a communication length that is at least one of as long as a minimum length or as short as a maximum length (as the call analytics include call length – see para 0054).
As per claim 6, Ellison et al. (20210350385) teaches the system of claim 1, wherein each historical communication comprises
at least one of:
an initial comment, an inbound comment from a customer to a support agent, an outbound comment from a support agent to a customer, an internal note on a case made by a support agent for other support personnel, or metadata (as the historical communication is based on conversations between customers and agents – see para 0018, "historical conversations" tied to customer feedback).
As per claim 7, Ellison et al. (20210350385) teaches the system of claim 1, wherein the plurality of types of machine-learning models are based on at least one of a statistical model, an unsupervised machine-learning model, a supervised machine-learning model, or at least one large language model (as the ML model may be supervised or unsupervised, among other types of machine-learning models – para 0046).
Claims 8-14 are method claims whose steps are performed by the apparatus claims 1-7 above and as such, claims 8-14 are similar in scope and content to claims 1-7 above; therefore, claims 8-14 are rejected under similar rationale as presented against claims 1-7 above.
Claims 15-20 are computer product claims with a non-transitory computer readable medium whose stored instructions, when executed, perform the steps performed by apparatus claims 1-7 above; as such, claims 15-20 are similar in scope and content to claims 1-7 above. Therefore, claims 15-20 are rejected under similar rationale as presented against claims 1-7 above. Furthermore, Ellison et al. (20210350385) teaches storage media storing computer instructions to implement the disclosed techniques – para 0069.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Please see related art listed on the PTO 892 form.
Furthermore, the following references teach elements disclosed in applicant's specification and claims:
Le et al. (20220414684) teaches the use of machine-learning models to predict customer intent – para 0044.
Alikov et al. (20220215452) teaches searchable keywords via multiple machine-learning models (abstract) in evaluating customer feedback (para 0028).
Krishnan (20220131875) teaches sentiment analysis from transcribed text; training and validating the data – Fig. 1; using machine-learning models – Fig. 2; interpreting customer reactions – Fig. 2, subblock 214; and combining both metadata and call features – Fig. 2, subblock 210.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Michael Opsasnick, telephone number (571)272-7623, who is available Monday-Friday, 9am-5pm.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Mr. Richemond Dorvil, can be reached at (571)272-7602. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
/Michael N Opsasnick/Primary Examiner, Art Unit 2658 12/13/2025