Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
This is in response to Applicant's amendment filed on 11/24/2025, which has been entered. No claims have been amended, cancelled, or added. Claims 1-21 are pending in this application, with claims 1, 8, and 15 being independent.
Response to Arguments
Applicant’s arguments, see Remarks, filed 11/24/2025, with respect to claims 5, 12 and 19 have been fully considered and are persuasive. The rejection of those claims has been withdrawn.
Applicant's arguments with respect to claims 1, 8, 15 and corresponding dependent claims have been fully considered but they are not persuasive.
Regarding claim 1, Applicant submits that the applied art does not disclose or suggest the claim 1 requirement of a “generative review model” separate from the “call classification model,” nor classifying a call based on both the generative review model and the call classification model. Examiner respectfully disagrees. Lemus discloses a method performed in two stages. In the first stage, spoken interactions are converted into structured text to determine significant features and are then segmented for categorizing. This functionality corresponds to the claimed generating of a call classification model. In addition, Lemus discloses generating a natural language summary from multiple reviews, which corresponds to the claimed generative review model. While Lemus does not present the two as entirely separate, both classification and generative review are performed as the claims recite, and the two are used in conjunction to perform the call classification. Claims 8 and 15 recite similar subject matter, and the rejections of those claims, along with their corresponding dependent claims, are likewise maintained.
Regarding the arguments concerning claims 7, 14 and 21, there is no specific language in those claims that ties the determination of an undesirable call identifier to the classifying recited in the independent claims. A person of ordinary skill in the art would have been motivated to modify Lemus with Li's suspicious-call criteria because of the benefit of identifying an unwanted caller identifier to prevent unwanted communications. Combining two well-known call classification methods in parallel would have been met with a reasonable expectation of success by a person of ordinary skill in the art. Additionally, these claims depend from claims 5, 12 and 19, which recite the “weight their relative contributions” limitation discussed in the Remarks.
Claim Objections
Claims 5, 12 and 19 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-7 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claims recite a series of steps for collecting call data and classifying the call for which the call data was collected, and therefore fall within the statutory category of a process.
The limitations of “extracting significant features, generating a call classification model, extracting a text review, and generating a generative review model,” as drafted, recite a process that, under its broadest reasonable interpretation, covers performance of the limitations in the mind. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claim recites an abstract idea.
Additionally, this judicial exception is not integrated into a practical application because the claim merely recites classifying the call, which is considered insignificant extra-solution activity that does not integrate the method into a practical application; the classification is merely a step to organize information. The process claims are also ineligible because the methods are not tied to a particular machine or apparatus and do not transform a particular article. Under 35 U.S.C. § 101, a process must (1) be tied to another statutory class (such as a particular apparatus) or (2) transform underlying subject matter (such as an article or material) to a different state or thing. Therefore, the claims are not patent eligible.
Dependent claims 2-7 do not include additional elements that are sufficient to amount to significantly more than the judicial exception and are also ineligible.
Claim Rejections - 35 USC § 103
The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.
Claims 1-3, 6, 8-10, 13, 15-17, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent No. 10,841,424 (“Lemus et al.”).
Regarding claim 1, Lemus et al. discloses a method for classifying calls, the method comprising:
collecting call data for each call, wherein each call is associated with a unique call identifier
(fig. 2, step 202 “receive phone call,” step 204 “generate transcript for the call”; col. 1, the transcript contains text that corresponds to the audio for each person on the call);
extracting significant features from the collected call data (col. 1, para. 4, call monitoring device
is configured to perform sentiment analysis to generate metadata for the call); generating a call classification model based on the extracted significant features, wherein the call classification model comprises a set of rules based on which a predetermined call class is assigned to the call (col. 1, para. 4, the transcript is input with the metadata into a machine learning model to generate a call profile, the call profile identifies a call classification for the phone call based on the information from the model);
extracting a text review from the collected call data (col. 3, para. 5, the transcripts 126 comprise text that corresponds with the dialog for people on a phone call 112. As an example, a transcript 126 may identify each speaker on a phone call 112 and may comprise text that corresponds with statements that are made by each speaker on the phone call 112); generating a generative review model based on the extracted text review, the generative review model used for correlating text reviews with a call class (col. 4, para. 1, The machine learning models 128 are generally configured to classify phone calls 112. In one embodiment, a machine learning model 128 is configured to receive a transcript 126 and metadata 136 for a phone call 112 as inputs and to output a call profile 138 based on text from the transcript 126 and the metadata 136); and
classifying the call for which the call data was collected based on the call classification model generated and the generative review model (fig. 2, step 212, step 218, step 226, described in col. 9-col. 11; at step 212 the classification engine 134 generates a feedback report 142 based on differences between the call profile and the call log 124 for the phone call 112; at step 218 the user feedback indicates whether the call classification determined by the machine learning module is correct; at step 226 the classification engine updates to include any additional instructions/information provided from the quality control unit).
Claim 1 recites generating a generative review model and classifying the call for which the call data was collected based on the call classification model and the generative review model. Lemus et al. discloses using one model in two stages: the transcript is input with the metadata into a machine learning model, which is then updated with the feedback report. However, a person of ordinary skill in the art before the effective filing date of the claimed invention would have known how to implement two models, each receiving one of those inputs, instead of one model receiving the same two inputs of extracted features and user feedback, to arrive at the claimed invention.
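By way of illustration only (not part of the prior-art record), the two-model arrangement recited in claim 1 might be sketched as follows. All interfaces, rules, and keyword sets below are hypothetical; the keyword-overlap scorer is a toy stand-in for a generative review model:

```python
def rule_based_class(features, rules):
    # Call classification model: apply the first matching rule;
    # each rule pairs a predicate over the features with a call class.
    for predicate, call_class in rules:
        if predicate(features):
            return call_class
    return "unclassified"

def review_class(text_review, class_keywords):
    # Generative review model stand-in: score each class by keyword
    # overlap with the text review and return the best-scoring class.
    words = set(text_review.lower().split())
    return max(class_keywords, key=lambda c: len(words & class_keywords[c]))

def classify_call(features, text_review, rules, class_keywords):
    # Classify using both models: prefer the rule-based result when a
    # rule fires, otherwise fall back to the review-based result.
    result = rule_based_class(features, rules)
    if result == "unclassified":
        result = review_class(text_review, class_keywords)
    return result
```

For example, with a hypothetical rule flagging calls shorter than five seconds as "spam," a short call is classified by the rules alone, while a longer call falls through to the review-based class.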
Claim 8 recites a system for classifying calls, comprising: one or more hardware processors; one or more memory media; a combination of the one or more processors configured to perform the method of claim 1. Lemus et al. discloses one or more hardware processors; one or more memory media; a combination of the one or more processors (Fig. 3), thus claim 8 is rejected in view of Lemus et al. for the same reasons discussed with respect to claim 1.
Claim 15 recites a non-transitory computer readable medium storing thereon computer executable instructions for classifying calls, including instructions for performing the method of claim 1. Lemus et al. discloses a non-transitory computer readable medium storing thereon computer executable instructions (claim 17), thus claim 15 is rejected in view of Lemus et al. for the same reasons discussed with respect to claim 1.
Regarding claims 2, 9 and 16, Lemus et al. discloses the method of claim 1, wherein the unique call identifier comprises: a phone number; or a unique identifier for the caller when using instant messaging and Voice over Internet Protocol (VoIP) services (col. 3, para. 2, “the call log 124 may comprise a call identifier 148 for the phone call 112”; “examples of phone calls 112 include voice over internet protocol (VOIP) calls”).
Regarding claims 3, 10 and 17, Lemus et al. discloses the method of claim 1, wherein the call data includes at least one of: a significant feature that includes at least one of: a call identifier; a duration of the call; a time of the call; which participant of the call ended the call; and whether data was transferred between a calling party and a receiving party of the call; and a text review received from one or more users, the text review comprising at least one of: a list of classifications of calls; and information about classification errors by an automated system (col. 3, para. 5, “the call log may comprise a call identifier for the phone call, a timestamp, a text description, call classification, caller’s name, caller’s phone number, call receiver’s name, user identifier for a call receiver, or any other information associated with a phone call”).
Regarding claims 6, 13 and 20, Lemus et al. discloses the method of claim 5, wherein the classifying of the call is further based on data on previously classified calls including at least one of: identification of the call by which the classification was made; a dictionary with word forms, the word forms being based on text reviews from users, the text reviews being associated with call identifiers with which the classification was performed; and previously formed classifying heuristics (col. 4, para. 2, “the machine learning model 128 has been previously trained using training data 130”).
Claims 4, 11, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent No. 10,841,421 (“Lemus et al.”) in view of U.S. Publication No. 2022/0201121 (“Kane”).
Regarding claims 4, 11 and 18, Lemus et al. does not specify the limitation of the method of claim 1 wherein a Latent Dirichlet Allocation (LDA) generative model is used as the generative review model.
In the same field of endeavor, Kane discloses that the text associated with each call is treated as a “document.” This dataset of documents can be used as input to a topic modeling algorithm, for example, one based on Latent Dirichlet Allocation (LDA). Latent Dirichlet Allocation may be a generative statistical model; for example, the model posits that each document is a mixture of a small number of topics and that each word's presence is attributable to one of the document's topics ([0197]).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to implement a Latent Dirichlet Allocation model, a generative statistical model that allows sets of observations to be explained by unobserved groups that explain why some parts of the data are similar, because doing so allows the algorithm to provide a topic label for each call.
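Purely as an illustrative aside, the generative story LDA posits, that each document is a mixture of topics and each word is drawn from one of those topics, can be sketched in a few lines. The topic names, vocabularies, and mixture weights below are all hypothetical:

```python
import random

random.seed(0)

# Hypothetical topics, each with a small vocabulary (a stand-in for a
# per-topic word distribution).
topics = {
    "billing": ["invoice", "payment", "charge", "refund"],
    "support": ["error", "reset", "password", "install"],
}

def generate_call_document(topic_mixture, n_words=8):
    """Sample a toy 'document' under LDA's generative story: for each
    word, draw a topic from the document's topic mixture, then draw a
    word from that topic's vocabulary."""
    names = list(topic_mixture)
    weights = [topic_mixture[name] for name in names]
    words = []
    for _ in range(n_words):
        topic = random.choices(names, weights=weights)[0]
        words.append(random.choice(topics[topic]))
    return words

# A call transcript that is mostly about billing, per its mixture.
doc = generate_call_document({"billing": 0.8, "support": 0.2})
```

Fitting LDA runs this story in reverse: given only the documents, it infers the per-topic word distributions and each document's topic mixture.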
Claims 7, 14, and 21 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent No. 10,841,421 (“Lemus et al.”) in view of U.S. Publication No. 2023/0012008 (“Li et al.”).
Regarding claims 7, 14 and 21, Lemus et al. does not specify the method of claim 1, wherein the unique call identifier is recognized as undesirable when at least one of the following conditions is satisfied: a call duration associated with a suspicious call identifier being less than a predetermined threshold; and a party ending the call being the receiving party of the call.
In the same field of endeavor, Li discloses a method of processing calls. The screening data 118 may comprise a number of suspicious calls associated with the user. A call may be identified as suspicious based on any criteria. Example criteria may comprise criteria associated with identifying one-ring scam calls. One criterion may be a call duration of the call being below a first threshold of time. For example, the first threshold may have a value for detecting calls that are one ring or less in duration ([0033]).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to monitor call duration as disclosed by Li et al., where the threshold of time may be one associated with detecting one-ring scam calls or other similar nuisance calls. A person of ordinary skill in the art would have been motivated to combine the two references to more accurately solve the problem of identifying suspicious calls.
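As an illustration only, the claimed undesirability conditions read onto a simple disjunctive check. The threshold value below is an assumption; Li's "first threshold" is not given a specific value:

```python
# Hypothetical threshold; a value small enough to catch one-ring calls.
ONE_RING_THRESHOLD_SECONDS = 6

def is_undesirable(call_duration_seconds, ended_by_receiving_party):
    """Flag a call identifier as undesirable when at least one claimed
    condition holds: (a) the call duration is below a predetermined
    threshold, or (b) the receiving party was the one who ended the call."""
    return (call_duration_seconds < ONE_RING_THRESHOLD_SECONDS
            or ended_by_receiving_party)
```

Because the conditions are joined by "at least one of," a long call ended by the receiving party is flagged just as a one-ring call is.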
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JIRAPON TULOP whose telephone number is (571)270-7491. The examiner can normally be reached Monday to Friday, 10:00AM-6:00PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ahmad Matar can be reached at 571-272-7488. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JIRAPON TULOP/Examiner, Art Unit 2693