Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
Claims 1-3 and 5-12 are pending. Claims 1 and 11-12 are independent and are amended. Claim 4 is canceled and its limitations are incorporated into the independent Claims.
This Application was published as U.S. 20240153529.
Apparent priority: 9 November 2022 and 11 September 2023.
Applicant’s amendments and arguments have been considered but are either unpersuasive or moot in view of the new grounds of rejection, which were necessitated by the amendments to the Claims.
This action is Final.
Response to Amendments and Arguments
Applicant’s arguments are directed to the amended language and are moot in view of the modified grounds of rejection. While one limitation is derived from canceled Claim 4, the modifications to the subsequent limitation present a new claim scope.
Claim 1 is amended as follows and the other independent Claims are amended similarly:
1. An information input support system comprising:
a communication terminal operated by an operator; and
an information processing apparatus communicable with the communication terminal,
the information processing apparatus including circuitry configured to:
acquire information on the operator and activity information that includes information on an activity of the operator for a customer through a dialogue with the operator;
determine a degree of progress of the activity based on the activity information;
determine needs of the customer by using, as input information, the activity information and the determined degree of progress; and
transmit speech information to the communication terminal, the speech information including the needs of the customer for display to the operator,
the communication terminal including another circuitry configured to output the speech information.
Examiner had mapped the “degree of progress of the activity” of canceled Claim 4 to the “score for each behavioral metric” of step 308 of Figure 3. Edwards teaches that for some metrics a low score is bad and for others a low score may be good. For example, if the Empathy Score is low, it means that the caller is not happy with the progress of the call. This is shown in the “This is frustrating” example in Edwards 7:4-14. Further, Examiner cited Figure 3, 308, and the description of this step at 6:25-7:14 explains how these scores can be considered good, bad, or neutral.
Applicant argues that the “behavioral scores” of Edwards should not be equated with a “degree” of progress of the activity. Response, 6-7.
In reply: the scores are numeric and thus do indicate a degree. Further, the examples provided in the description of element 308 of Figure 3, as quoted above, include one in which the caller says “This is frustrating”; the resulting scores, whether graded high or low depending on the metric, indicate that progress is not good.
Additionally, for Claim 6, which recites the limitation more expressly, the Wright reference was added; Wright expressly uses the claimed terminology by measuring “the degree of progress achieved toward a defined goal” during a phone conversation.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-3, 5-8, and 10-12 are rejected under 35 U.S.C. 103 as being unpatentable over Edwards (U.S. 11283928) in view of Wright (US 20180165723).
Regarding Claim 1, Edwards teaches:
1. An information input support system comprising: [Edwards, Figure 5 shows the system and its components. Figure 3 shows the flowchart of the operation.]
a communication terminal operated by an operator; and [Edwards, Figure 1, showing the terminals associated with “Customer 102”/ “customer” and “Agent 104” / “operator” communicating over a “PSTN 106.”]
an information processing apparatus communicable with the communication terminal, [Edwards, Figure 1, “contact service platform 100.”]
the information processing apparatus including circuitry configured to: [Edwards, Figure 5, “processing component 504.”]
acquire information on the operator and activity information that includes information on an activity of the operator for a customer through a dialogue with the operator; [Edwards, Figure 1, the “Agent 104” / “operator” is speaking with the “Customer 102” / “customer” over the telephone network PSTN 106, and Platform 100 is monitoring and collecting information. “Very generally, a customer 102 contacts a contact center by placing one or more telephone calls through a telecommunication network, for example, via the public switched telephone network (PSTN) 106. In some implementations, the customer 102 may also contact the contact center by initiating data-based communications through a data network (not shown), for example, via the Internet by using voice over internet protocol (VoIP) technology.” 3:5-12. “Upon receiving an incoming request, a control module 120 in the monitoring and processing engine 110 uses a switch 124 to route the call interaction to a contact center agent 104. Once call connections are established, a media gateway 126 is used to convert voice streams transmitted from the PSTN 106 or from the data network (not shown) into a suitable form of media data compatible for use by a media processing module 134.” 3:13-20. “In some applications, the media processing module 134 records call interactions received from the media gateway 126 and stores them as media data 144 in a storage module 140….” 3:20-23.] (The operator of the instant App is a human operator in conversation with a human customer, and the system of the instant App tries to help the operator by providing him with information regarding the customer.)
determine a degree of progress of the activity based on the activity information; [Edwards, Figure 3, 308. The “score” of Edwards is based on behavioral metrics which show the progress in the conversation/activity between the caller and the agent and teaches the “degree of progress” of the Claim: “At step 308, agent assist engine 220, via behavioral models module 224, computes a score for each of a plurality of behavioral metrics based at least in part on the words provided by word spotting module 222. The behavioral metrics to be scored are typically either manually selected in advance or are selected based on the words provided by the word spotting module 222. In various embodiments, the plurality of behavioral metrics includes two or more of sentiment, active listening, empathy, demonstration of ownership, building rapport, setting expectations, effective questioning, promotion of self-service, speech velocity, or interruption. In some embodiments, behavioral models module 224 also takes into account non-text based attributes of the call interaction, such as volume, pitch, and/or tone. In certain embodiments, behavioral models module 224 computes a score for about 7-12 behavioral metrics.” 6:25-40.]
determine needs of the customer by using, as input information, the activity information and the determined degree of progress; and [Edwards, Figures 1 and 2. “Monitoring and Processing Engine 110” is shown as 210 in Figure 2 and provides information on the collected conversations to “Agent Assist Engine 220.” “The monitoring and processing engine 110 also includes a call management module 132 that obtains descriptive information about each call interaction based on data supplied by the control module 120. ….” 3:32-43. Figure 3, 310-316: The system uses the scores and the rules obtained at step 314 and decides what to present to the operator to address the customer’s concerns. “At step 316, agent assist engine 220, via knowledge article selection module 226, evaluates a combination of the phrase and the scores of the plurality of behavioral metrics against each of the plurality of knowledge selection rules. For example, knowledge article selection module 226 evaluates the rule provided in step 314 against the scores computed in step 308 and the phrase provided in step 310.” 7:30-36. “The recognized phrases and behavioral metrics are then passed to a knowledge article selection module 226, which accesses a knowledge base 244 in a contact center database 240 to select a set of knowledge articles for presentation to the agent…” 5:8-11. To find the knowledge article to present to the operator, the customer’s feelings (frustration), reflected in the behavioral scores (showing lack of progress), are taken into consideration.]
transmit speech information to the communication terminal, the speech information including the needs of the customer for display to the operator, [Edwards, Figure 1, “Agent Assist 162.” “One example of an application engine 160 is an agent assist engine 162 configured for monitoring conversations in real-time to provide agents with targeted messages and knowledge articles relevant to the ongoing conversation and/or the present caller. Agent assist engine 162 can help contact center agents reduce talk time and improve service by delivering the right information efficiently to the customer at the right time. It can also help the contact center or associated business entities to maximize revenue opportunities (e.g., by prompting the agents to suggest accessories, promotions, incentives, and other business information relevant to each such customer).” 4:6-17.] (Note that the “speech information” of the instant Application is not speech: “[0047] … The speech information includes, for display to the operator, a greeting sentence, a question sentence, answer candidates, needs of the customer, information to be proposed to the customer, and the progress of the dialogue, for example.”)
the communication terminal including another circuitry configured to output the speech information. [Edwards, Figure 2, “output unit 230” and Figure 5, “Display component 514.” “Once a knowledge article or a set of knowledge articles is determined, knowledge article selection module 226 outputs data representing the identified knowledge article(s) to knowledge article presentation module 228. Knowledge article presentation module 228 forms a visual representation of the identified knowledge article(s) for presentation on an output unit 230 (e.g., on the agent's or supervisor's computer screen). The knowledge article can be presented in various forms to deliver real-time support to one or more agents.” 5:62-6:5.]
Edwards: [Figure reproduced from Edwards; media_image1.png, greyscale.]
Evaluation of a degree of progress is implicit in the behavioral scores of Edwards, which show whether the caller is happy or frustrated with the course (progress) of the call.
Edwards does not expressly recite the phrase “degree of progress.”
Wright teaches:
1. An information input support system comprising:
a communication terminal operated by an operator; and [Wright, Figure 1, “Communication Platforms” are shown as messaging, sms, livechat, email and voice, and the “user’s other systems” indicate a user engaging in communications. Wright may use a human agent/operator but its goal is to avoid human agents: “[0006] Such providers may wish to identify particular categories of customer. For example, a business may wish to identify high value customers and prioritize engagement with them via human agents, or low value customers and prioritize automated interactions with them. …” “[0018] … It can be expensive to hire, train and manage teams of human agents to carry out these interactions…”]
an information processing apparatus communicable with the communication terminal, [Wright, “system” shown in communication with the “communication platforms” and with the “user’s other systems.”]
the information processing apparatus including circuitry configured to:
acquire information on the operator and activity information that includes information on an activity of the operator for a customer through a dialogue with the operator; [Wright includes human agents engaged in natural language conversation with customers: “[0021] Embodiments of the natural language processing system (“the system”) described herein generally relate to deriving structured data from unstructured natural language interactions between two or more entities, typically a provider of goods or services and a consumer, and using that data to understand characteristics of the interaction and the entities. In some embodiments, the provider of good or services (e.g., a merchant, retailer, etc.) acts as a user of the system. In some embodiments the customer, or other forms of end-user, acts as a user of the system. The characteristics may be the topics covered during the interaction or conversation, the progress and quality of the conversation, the category or segment of customer, as well as its outcome as it relates to defined goals. The system further uses the interaction characteristic data and other data to optimize interactions by making recommendations, providing automated responses or providing said data to external applications. The system thereby enhances the ability to identify effective and ineffective interaction trends between customers, potential customers, customer service agents, and other interaction participants.”]
determine a degree of progress of the activity based on the activity information; [Wright, “Characteristics of interaction” include “degree of progress toward a defined goal”: “[0021] … deriving structured data from unstructured natural language interactions between two or more entities, … and using that data to understand characteristics of the interaction and the entities. … The characteristics may be the topics covered during the interaction or conversation, the progress and quality of the conversation, ….” “[0023] automate the process of generating structured data to describe natural language interactions including describing the topics of interest discussed, the degree of progress achieved toward a defined goal, the overall quality of the interaction and the factors contributing to overall quality….”]
determine needs of the customer by using, as input information, the activity information and the determined degree of progress; and [Wright, the goal of Wright is to let a service/goods provider to serve the needs of his customer: “[0034] … The system may use the mapped data to determine topics of interest mentioned, progress made through a series of steps or toward resolving an issue or making a purchase, and the overall quality of the interaction….” The resolution of the issue and making a purchase are both needs of the customer.]
transmit speech information to the communication terminal, the speech information including the needs of the customer for display to the operator, [Wright, the reports output by the system help better address the needs of the provider or the customer: “[0032] In another example, the product or service SKU(s) the customer has purchased may be joined with natural language topic data and natural language user sentiment or satisfaction data to determine what products are leading to the most complaints about product quality, defective products or poor product fit. In this way, the merchant may assess whether they should consider making changes to their merchandizing mix, negotiate compensation or pricing terms with their vendors, or change the SKU being provided to their customer to one that may be a better fit for their needs….” See [0048] and “[0061] The process described above may be repeated one or more times during an initial set up period with or without feedback provided by a user (for example a business owner) until output tagging at a sufficient level of performance is achieved to meet the provider's needs for reporting, recommending or automating responses or marketing messages. …” Under Agent Assistant: “[0116] In another example, the information provided may include recommendations on what products or services the Target User should consider purchasing based off information on the Target User as described in the preceding example, or actions a human agent should take to better meet a Target User's needs for example switching to an allergy-friendly service.”]
the communication terminal including another circuitry configured to output the speech information. [Wright, Figure 1, “voice” as a communication platform requires a speaker/ circuitry to output speech.]
Edwards and Wright pertain to evaluation of natural language interactions which include interactions between operators and customers of a company and it would have been obvious to modify the system of Edwards which provides the necessary information to an operator for his use with the teachings of Wright to permit the operator access to more crucial information for his review and use. This combination falls under combining prior art elements according to known methods to yield predictable results or use of known technique to improve similar devices (methods, or products) in the same way. See MPEP 2141, KSR, 550 U.S. at 418, 82 USPQ2d at 1396.
Regarding Claim 2, Edwards teaches:
2. The information input support system according to claim 1, wherein the activity information includes information on at least one of a company name of the customer, names of persons of the customer, a business meeting between the operator and the customer, or participants in the business meeting. [Edwards, “The monitoring and processing engine 110 also includes a call management module 132 that obtains descriptive information about each call interaction based on data supplied by the control module 120. Examples of such information includes customer identifier (e.g., phone number, IP address, customer number), agent identifiers, call duration, transfer records, day and time of the call, and general categorization of calls (e.g., as determined based on touchtone input), all of which can be saved as metadata 142 in the storage module 140. In some applications, information stored in the metadata 142 is also cross linked, for instance, agent IDs can be mapped to additional data such as supervisor IDs, sites, etc.” 3:32-45.]
Regarding Claim 3, Edwards teaches:
3. The information input support system according to claim 1, wherein the circuitry is configured to:
determine proposal information to be proposed to the customer based on the needs of the customer; and [Edwards, Figure 3, 316, 318, and 320. “In addition to the word spotting results, knowledge article selection module 226 can also make use of metadata associated with the call interaction to identify relevant knowledge articles. For instance, by obtaining caller ID information, analyzing caller-specific history of contact center interactions, and/or analyzing caller-specific history of product acquisition, knowledge article selection module 226 can extract information about a particular caller's concerns and interests. Using such caller-specific information in conjunction with the content of the conversation, personalized knowledge article selection can be performed.” 5:51-61. “… It can also help the contact center or associated business entities to maximize revenue opportunities (e.g., by prompting the agents to suggest accessories, promotions, incentives, and other business information relevant to each such customer).” 4:6-18.]
transmit the speech information including the proposal information to the communication terminal. [Edwards, Figure 3, 322 and 324. “At step 324, agent assist engine 220, via knowledge article presentation module 228, presents in real-time the visual representation on a graphical user interface. In some embodiments, the visual representation includes an alert, a message, a score, or a combination thereof.” 7:50-55. See also Figure 4 for a depiction of the presentation.]
Regarding Claim 5, Edwards presents the information to the Operator/Agent for his use and review while the Operator/Agent is helping the customer, and therefore the check with the operator is part of the system of Edwards. However, Edwards does not mention evaluating the progress of the activity.
Wright teaches:
5. The information input support system according to claim 1, wherein the circuitry is further configured to control the dialogue so as to check with the operator whether the activity information, the degree of progress of the activity, and the needs of the customer are correct. [Wright is directed to an automatic method of keeping track of progress of the call and the achievement of the goal, but teaches that the previous methods had the agent/operator handle such tasks: “[0023] automate the process of generating structured data to describe natural language interactions including describing the topics of interest discussed, the degree of progress achieved toward a defined goal, the overall quality of the interaction and the factors contributing to overall quality. Prior approaches to solving these problems relied on manual review of natural language data, for example by customer service agents and manual data entry of resulting metadata, ….”]
Edwards and Wright pertain to evaluation of natural language interactions which include interactions between operators and customers of a company and it would have been obvious to modify the system of Edwards which provides the necessary information to an operator for his use with the teachings of Wright to permit the operator access to more crucial information for his review and use. This combination falls under combining prior art elements according to known methods to yield predictable results or use of known technique to improve similar devices (methods, or products) in the same way. See MPEP 2141, KSR, 550 U.S. at 418, 82 USPQ2d at 1396.
Regarding Claim 6, Edwards does not expressly discuss evaluating the degree of progress although that would be implicit in the evaluation of the interaction and the information that is provided to the operator.
Wright teaches:
6. The information input support system according to claim 1, wherein the circuitry is configured to transmit the speech information including the needs of the customer input by the operator, the degree of progress of the activity, and a progress of the dialogue to the communication terminal. [Wright keeps track of the degree of progress toward the goal which is based on the progress in the conversation and conveys that to the agent/operator for his use: “[0022] The system described herein can provide one or more of the following benefits: [0023] automate the process of generating structured data to describe natural language interactions including describing the topics of interest discussed, the degree of progress achieved toward a defined goal, the overall quality of the interaction and the factors contributing to overall quality….”]
Edwards and Wright pertain to evaluation of natural language interactions which include interactions between operators and customers of a company, and it would have been obvious to modify the system of Edwards, which provides the necessary information to an operator, with the system of Wright to include the progress of the call and the desired goal of the customer as another datapoint that is helpful to the operator for performing his job and to the managers for evaluating the operator. (See Wright: “[0003] Such providers may have the need to perform analysis on such interactions for a variety of purposes, such as to measure the effectiveness of their natural language interactions with customers in achieving a desired goal such as a sale, or reducing customer defections. This information may be used for a variety of purposes, which may include the performance management of teams of customer agents, providing information to different parts of their organization to help in product decision making, marketing, or to increase operational efficiency or the management of vendors.”) This combination falls under combining prior art elements according to known methods to yield predictable results or use of known technique to improve similar devices (methods, or products) in the same way. See MPEP 2141, KSR, 550 U.S. at 418, 82 USPQ2d at 1396.
Regarding Claim 7, Edwards teaches:
7. The information input support system according to claim 1, wherein the circuitry inputs a speech made by the operator using voice during the dialogue. [Edwards uses ASR to generate a transcript of the conversation. “The systems and methods described herein improve contact center agent performance by integrating real-time call monitoring with speech analytics to present agents with information useful to handling of current calls. Real-time automatic speech recognition (ASR) techniques are applied to process live audio streams into text. The text is then scored for a plurality of behavioral metrics. Behavioral metrics measure the actions of the agent….” 2:13-20.]
Regarding Claim 8, Edwards teaches:
8. The information input support system according to claim 1, wherein the circuitry is configured to obtain the needs of the customer from the activity information using a language model having been trained by using character strings on which sequence labeling is performed. [Edwards uses a machine learning model (ML) that is trained by labeled data and teaches the “language model” of the Claim: “In several embodiments, a machine learning (ML) model is trained to output a score for each behavioral metric based on features extracted from previous call interactions. The model may be trained in a training phase using labeled or tagged call interactions, e.g., call interactions that were associated or annotated with one or more behavioral labels, grades, scores or ratings that may grade the call with respect to one trait or attribute of a set of attributes or behavioral metrics. The trained ML model may then be used to calculate behavioral analytics of incoming call interactions. The behavioral analytics may include a plurality of behavioral metrics or attributes, and various embodiments may provide a score or rating for each of these behavioral metrics or attributes for each analyzed call interaction.” 6:41-54.]
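As a purely illustrative aside, not part of the record or of any cited reference: the “character strings on which sequence labeling is performed” recited in Claim 8 commonly refers to token-level tagging of training text, for example BIO tagging. A minimal sketch, with a hypothetical utterance, tag set, and helper function:

```python
def bio_label(tokens, need_phrase):
    # Tag each token as B-NEED (beginning), I-NEED (inside), or O (outside)
    # depending on whether it falls within the target "customer need" span.
    # Hypothetical labeling helper for illustration only.
    labels = ["O"] * len(tokens)
    n = len(need_phrase)
    for i in range(len(tokens) - n + 1):
        if tokens[i:i + n] == need_phrase:
            labels[i] = "B-NEED"
            for j in range(i + 1, i + n):
                labels[j] = "I-NEED"
    return labels

tokens = "I want a faster internet plan".split()
labels = bio_label(tokens, ["faster", "internet", "plan"])
# labels -> ['O', 'O', 'O', 'B-NEED', 'I-NEED', 'I-NEED']
```

Strings labeled in this manner serve as training data from which a model learns to extract the tagged spans (here, a customer need) from new utterances.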
Regarding Claim 10, Edwards teaches:
10. The information input support system according to claim 8, wherein the language model is trained again using the needs of the customer modified by the operator. [Edwards, Figure 4 shows the presentation of material to be used by the operator/agent and takes into account the changing needs and interests of the user and the recent interactions between the user and the company. The interactions include responses of the operator/agent and teach the use of “needs of the customer modified by the operator” of the Claim. The operations are performed by a ML model that is trained by “previous call interactions.” 6:41-43. “FIG. 4 is an exemplary visual representation 400 of multiple knowledge articles that may be presented to an agent or an agent supervisor on a graphical user interface in real-time. Visual representation 400 includes a real-time customer satisfaction score 405, various alerts 410, and several real-time guidance scores 415. As seen in FIG. 4, the customer satisfaction score 405 may be visually indicated by a sad face and/or red text, indicating a low or poor score. The alerts 410 include an article name and a brief description of the alert, along with the time the alert was provided. In some embodiments, the user interface can be configured to provide various functionalities to help agents respond rapidly to changing business needs and consumer interests. For example, the content of knowledge articles can account for and include promotions, incentives, service alerts, new products, product accessories, and/or service enhancements. In another embodiment, the content of knowledge articles can further account for recent interactions between the customer and the company. Guidance scores 415 provide the scores for a plurality of behavioral metrics, and can be color-coded to reflect a low (e.g., red), neutral (e.g., yellow), or high (e.g., green) score.” 7:55-8:10.]
Claim 11 is an apparatus claim with limitations corresponding to the limitations of Claim 1, but broader and simpler, and is rejected under a similar rationale.
11. An information processing apparatus communicable with a communication terminal operated by an operator, the information processing apparatus comprising circuitry configured to:
acquire information on the operator and activity information that includes information on an activity of the operator for a customer through a dialogue with the operator;
determine needs of the customer based on the activity information; and
transmit speech information to the communication terminal, the speech information including the needs of the customer for display to the operator.
Claim 12 is a computer program product claim with limitations corresponding to the limitations of Claim 11 and is rejected under a similar rationale.
12. A non-transitory recording medium storing a plurality of program codes which, when executed by one or more processors, causes the one or more processors to perform a method, the method comprising:
acquiring information on an operator and activity information that includes information on an activity of the operator for a customer through a dialogue with the operator;
determining needs of the customer based on the activity information; and
transmitting speech information to a communication terminal, the speech information including the needs of the customer for display to the operator.
Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Edwards and Wright in view of Can (U.S. 20240073321) and Okura (U.S. 20200117710).
Regarding Claim 9, Edwards teaches:
9. The information input support system according to claim 3,
wherein the circuitry is configured to determine needs of another customer similar to the needs of the customer or information for satisfying the needs of the customer as the proposal information [Edwards, “Generally, the present methods allow contact center agents and managers to review and relate ongoing and past call interactions (for instance, occurred with the same customer, or occurred with the same agent and one or more other customers under similar circumstances) to gain valuable business insights….” 2:40-55.]
based on information on a sentence vector whose degree of similarity to the needs of the customer is higher than a threshold value, the vector sentence being obtained by using a language model, the language model outputting a higher degree of similarity as meaning is closer.
Edwards teaches that its data can be used to serve customers that are similar to the customer whose call activity data has been collected and is being used. Edwards does not teach the particular method of determining similarity.
Wright also mentions similar topics ([0058]) and the use of similar language ([0067]), but does not teach the use of vector distance.
Can teaches:
wherein the circuitry is configured to determine needs of another customer similar to the needs of the customer or information for satisfying the needs of the customer as the proposal information [Can clusters similar customers such that it can use the data of one for providing advice to the customer service representative when dealing with a similar customer. See Figure 4 where the model is trained on pervious calls and then responds to a similar caller based on data of previous but similar callers. See Figure 9 that shows the phrases used by similar customer and by a dissimilar customers when handling a current customer. “[0036] Continuing with the example, the one or more tokens may be generated based on a variety of criteria or schemes that may be used to convert characters or text to numerical values. For example, in one embodiment, each word of a text string can be mapped to a vector of real values. The word may then be converted to one or more tokens based on a mapping of the word via a tokenization process. Tokenization processes are known in the art and will not be further discussed in detail here.” “[0039] After collecting data points regarding the customer and call as previously described above, the system aggregates this information into, for example, a single unit of analysis, to generate a customer profile 116. The customer profile may, in some embodiments, contain both metadata related to the customer, collected in an offline manner, as well as information collected by the various predictive models, which is iteratively updated as the call proceeds. A customer profile may contain a mix of data types, which are vectorized as part of any similar-customer comparison. All data types are vectorized and then concatenated to form a single fixed-length vector….” “[0044] To link customers by their profiles, the system relies on a family of approaches standard in product recommender systems. 
This involves vectorizing customer information into a common format and using vector-based similarity metrics to cluster these customers….”]
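The cited passage of Can (paragraph [0039]) describes vectorizing a mix of profile data types and concatenating them into a single fixed-length vector. A minimal illustrative sketch of that mechanism follows; all names and field choices here are hypothetical and are not taken from the reference:

```python
# Illustrative sketch of Can [0039]: mixed customer-profile fields
# (numeric metadata, model scores, product flags) are each vectorized
# and concatenated into one fixed-length feature vector.
# All names are hypothetical, not from the reference.

def vectorize_profile(age, sentiment_scores, product_flags):
    """Concatenate numeric metadata, model-score history, and one-hot
    product flags into a single fixed-length feature vector."""
    return [float(age)] + list(sentiment_scores) + [float(f) for f in product_flags]

profile_a = vectorize_profile(34, [0.2, 0.7], [1, 0, 1])
profile_b = vectorize_profile(41, [0.1, 0.9], [1, 1, 0])
# Equal, fixed length is what enables vector-based similarity comparison.
assert len(profile_a) == len(profile_b) == 6
```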
based on information on a sentence vector whose degree of similarity to the needs of the customer is higher than a threshold value, the sentence vector being obtained by using a language model, the language model outputting a higher degree of similarity as meaning is closer. [Can, Figure 2, “semantic analyzer module 206” converts the input sentences into vectors of numerical tokens. “[0035] … The contextualized vectors are generated through the processes and methods used in language models such as the BERT and RoBERTa language models, which are known in the art. For the purposes of discussion throughout this application it is assumed that the contextualized vectors are generated based on such processes and methods.”. “[0045] In general, customer similarity can be viewed as customers who interact with company products/services in a similar way (own the same credit cards, have similar spending habits, etc.). These features may be embedded in a vector space and with similarities computed across a customer base. Model-score-based similarity provides that, given a current call, the system may calculate previously mentioned features (e.g., sentiment score, call reason, complaint detection, etc.). Calculating these on each utterance allows the system to obtain a distribution over time. This information may be vectorized and compared with previous calls (e.g., cosine distance). The most similar calls may be provided as a reference, particularly the previous call agent's notes and actions. This can give the current agent a suggestion as to what the best next steps are.”]
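Can's paragraph [0045] compares vectorized call features using cosine distance, under which a higher similarity value corresponds to closer meaning. A minimal sketch of that comparison, using only the standard library (the function name is the examiner's illustration, not Can's code):

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity of two equal-length vectors: a higher value
    indicates closer direction (closer meaning for embedding vectors),
    with 1.0 for identical directions and 0.0 for orthogonal vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Identical vectors are maximally similar.
assert abs(cosine_similarity([1.0, 2.0], [1.0, 2.0]) - 1.0) < 1e-9
# Orthogonal vectors have zero similarity.
assert abs(cosine_similarity([1.0, 0.0], [0.0, 1.0])) < 1e-9
```

Cosine distance, as mentioned in Can, is simply one minus this similarity value.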
Edwards/Wright and Can pertain to operator and agent interactions, and both use customer profiles to expedite service to a new customer based on his similarity to previous customers. It would have been obvious to use the particular similarity-evaluation method of Can, which uses vector distance, with the system of Edwards, which does not specify the method of measuring similarity. This combination falls under combining prior art elements according to known methods to yield predictable results, or simple substitution of one known element for another to obtain predictable results. See MPEP 2141; KSR, 550 U.S. at 418, 82 USPQ2d at 1396.
Can uses clustering that could be based on a threshold distance, but Can does not expressly teach the use of a threshold.
Okura teaches:
based on information on a sentence vector whose degree of similarity to the needs of the customer is higher than a threshold value, the sentence vector being obtained by using a language model, the language model outputting a higher degree of similarity as meaning is closer. [Okura uses comparison with a threshold to find sentences that are similar to a current input sentence. “[0068] For example, the search unit 126 computes a context filter for each word instance included in an input sentence and computes an extended vector by concatenating the context filter to a corresponding word vector, in the same way as the context information generation unit 125 does. The search unit 126 computes a similarity index value (or a distance index value), such as cosine similarity, between extended vectors stored in the vector storage unit 122 and the extended vectors of the input sentence. Here, context filters are also taken into account for the similarity. The search unit 126 extracts a sentence with an extended vector whose similarity exceeds a threshold (or an extended vector whose distance is less than a threshold)….”]
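Okura's paragraph [0068] extracts stored sentences whose similarity to the input exceeds a threshold. A minimal sketch of that threshold-based retrieval step (illustrative only; the function names, example vectors, and threshold value are the examiner's assumptions, not Okura's code):

```python
import math

def _cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def find_similar(query_vec, stored, threshold=0.8):
    """Return stored sentences whose cosine similarity to the query
    vector exceeds the threshold (cf. Okura [0068], where a sentence
    is extracted when its similarity exceeds a threshold)."""
    return [text for text, vec in stored if _cosine(query_vec, vec) > threshold]

stored = [("refund request", [1.0, 0.9]), ("weather chat", [-1.0, 0.2])]
# Only the sentence whose vector points in a similar direction survives.
assert find_similar([1.0, 1.0], stored) == ["refund request"]
```

The equivalent distance-based formulation mentioned in Okura keeps a sentence when its distance falls below a threshold instead.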
Edwards/Wright/Can and Okura both involve comparing sentence vectors to identify similar sentences. It would have been obvious to use Okura's threshold similarity measure, in which sentences closer than the threshold are grouped together, with the system of the combination, which already teaches clustering of similar sentences, as one known method of generating groupings. This combination falls under combining prior art elements according to known methods to yield predictable results, or simple substitution of one known element for another to obtain predictable results. See MPEP 2141; KSR, 550 U.S. at 418, 82 USPQ2d at 1396.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to FARIBA SIRJANI whose telephone number is (571)270-1499. The examiner can normally be reached 9 to 5, M-F.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Pierre Desir can be reached at 571-272-7799. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Fariba Sirjani/
Primary Examiner, Art Unit 2659