DETAILED ACTION
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1 – 20 are rejected under 35 U.S.C. 103 as being unpatentable over Reece et al. (Publication: US 2021/0264909 A1) in view of Kneller et al. (Publication: US 2020/0082810 A1).
Regarding claim 1, see the rejection of claim 20 below.
Regarding claim 2, Reece in view of Kneller discloses all the limitations of claim 1.
Reece discloses determining that the first input comprises at least one prompt for the trained ML model (
[0075] - wherein the conversation analysis indicators include a series of instant scores. More specifically, the system can prompt the coach and/or mentee.
[0093], [0095] - Fig. 6: the user's input utterance (602) produces an utterance output that is received, as concatenated speaker data, by the sequential machine learning system.
[0052] - a user interface configured for annotating/labeling utterances, input by user.
[0064] FIG. 4 illustrates a conversation analytics system with interfaces for human and/or machine learning data first level and synthesized conversation features for a multiparty conversation, conversation analysis using sequential machine learning, and interfaces and mappings to determine actions to take based on sequential machine learning results.).
Regarding claim 3, Reece in view of Kneller discloses all the limitations of claim 2.
Reece discloses generating an input confidence score associated with the first input ([0099] - a first user speaking sternly and a second user yelling may both be confidently identified with the emotional label of anger, and the second user may have a higher intensity score. Genuineness scores may define the veracity of emotional labels. For example, some users may intentionally appear excited (e.g., excessive smiling, abrupt change in tone) indicating a reduced genuineness score.); and
determining that the input confidence score exceeds a predetermined threshold ([0187] - conversation analysis indicators may be determined continuously throughout a conversation, and the conversation highlights may be determined where there are threshold levels of change (e.g., deltas between sequential conversation analysis indicators being over a threshold, slope of a line connecting multiple conversation analysis indicators being over a threshold, or such a slope changing between positive and negative)).
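For illustration only, and not as a characterization of Reece's actual implementation, the thresholding described at [0187] (flagging conversation highlights where the delta between sequential indicators exceeds a threshold, or where the slope changes sign) can be sketched roughly as follows in Python; the function and variable names and the threshold value are hypothetical assumptions.

# Illustrative sketch only; names and threshold value are hypothetical, not from Reece.
from typing import List

def find_highlights(scores: List[float], delta_threshold: float = 0.2) -> List[int]:
    """Flag indices where sequential conversation-analysis indicators jump by more
    than a threshold, or where the trend reverses sign (cf. Reece [0187])."""
    highlights = []
    for i in range(1, len(scores)):
        delta = scores[i] - scores[i - 1]
        if abs(delta) > delta_threshold:
            highlights.append(i)                      # large change between indicators
        elif i >= 2 and (scores[i - 1] - scores[i - 2]) * delta < 0:
            highlights.append(i)                      # slope flips between positive and negative
    return highlights

print(find_highlights([0.10, 0.15, 0.60, 0.55, 0.20]))  # -> [2, 3, 4]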
Regarding claim 4, Reece in view of Kneller discloses all the limitations of claim 1.
Reece discloses generating an output confidence score that indicates whether the digital content item is responsive to at least one of the first input or the second input (
[0080] FIG. 5 illustrates machine learning system 504 preprocessing utterance 502 to generate utterance output 512. Utterance 502 represents a segment of a conversation between at least two users (e.g., a coach and a mentee). In the illustrated implementation, utterance 502 includes acoustic data, video data, and text data, “digital content”.
[0081] Machine learning system 504 generates utterance output 512 based on utterance 502. In the illustrated implementation, machine learning system 504 is a neural network system, including individual neural network components. Utterance output 512 is a computational representation of features identified from utterance 502, and is an intermediate result used for further processing, as described in FIG. 6, “first input, second input”.
[0085] - the video feature output may include a set of emotional labels each with a confidence score (e.g., happy 0.78, aggressive 0.32, sad 0.08). In other words, video processing component 506 may include a neural network trained on labeled images of faces. Video processing component 506 may identify facial expressions, such as smiling, crying, laughing, and grimacing, and further determine the associated emotion. The confidence score indicates a relative confidence in the accuracy of the emotional label determined).
Regarding claim 5, see the rejection of claim 14 below.
Regarding claim 6, Reece in view of Kneller discloses all the limitations of claim 5.
Reece discloses wherein the first weight value is designated to a first user of the first client device and the second weight value is designated to a second user of the second client device ([0085] - the video feature output may include a set of emotional labels each with a confidence score (e.g., happy 0.78, aggressive 0.32, sad 0.08). In other words, video processing component 506 may include a neural network trained on labeled images of faces. Video processing component 506 may identify facial expressions, such as smiling, crying, laughing, and grimacing, and further determine the associated emotion. The confidence score indicates a relative confidence in the accuracy of the emotional label determined.
[0091] Utterance output 512 is the aggregate of the computational (e.g., mathematical, statistical) representation of features identified in the various data modalities (e.g., acoustic, video). In the example implementation, individual neural network based components (e.g., recurrent neural networks, convolutional neural networks) are used to generate feature data for each of the data modalities. Utterance output 512 are used as inputs to an additional machine learning system configured to generate conversation analysis indicators, as described in FIG. 6.
User 1 and User 2 are shown on separate devices in Fig. 5.
Fig. 5 - the designated scores for each client are output to concatenated speaker data 608 in Fig. 6, element 602 (first and second client devices).).
Regarding claim 7, see the rejection of claim 15 below.
Regarding claim 8, see the rejection of claim 16 below.
Regarding claim 9, see the rejection of claim 17 below.
Regarding claim 10, Reece in view of Kneller discloses all the limitations of claim 9.
Reece discloses the intent text is included in the first input, and the non-textual input is included in one or more non-textual inputs comprising the second input ([0090] the output (e.g., the computational representation of identified features) of video processing component 506, acoustic processing component 508, and textual processing component 510. Utterance output 512 are used as inputs to generate conversation analysis indicators, as described in FIG. 6, [0092]. User 1 in Fig. 5 provides textual input, and User 2, on another device in Fig. 5, provides video input.).
Regarding claim 11, see the rejection of claim 20 below.
Regarding claim 12, Reece in view of Kneller discloses all the limitations of claim 11.
Reece discloses to perform the steps of transmitting, by the server device, the [[composite]] prompt to a remote device executing the trained ML model (
[0080] - Fig. 5, the system processes the utterance and then transmits the data to the sequential machine learning system (the remote device) for machine learning processing, Fig. 6, [0093].
[0064] FIG. 4 illustrates a conversation analytics system with interfaces for human and/or machine learning data first level and synthesized conversation features for a multiparty conversation, conversation analysis using sequential machine learning, and interfaces and mappings to determine actions to take based on sequential machine learning results.).
Regarding claim 13, Reece in view of Kneller discloses all the limitations of claim 11.
Reece discloses wherein the digital content item comprises one of: a text, a computer-aided design (CAD) object, a geometry, an image, a sketch, a video, executable code, or an audio recording ([0090] the output (e.g., the computational representation of identified features) of video processing component 506, acoustic processing component 508, and textual processing component 510.).
Regarding claim 14, Reece in view of Kneller discloses all the limitations of claim 11.
Reece discloses applying a first weight value to the first input ([0085] - the video feature output may include a set of emotional labels each with a confidence score (e.g., happy 0.78, aggressive 0.32, sad 0.08). In other words, video processing component 506 may include a neural network trained on labeled images of faces. Video processing component 506 may identify facial expressions, such as smiling, crying, laughing, and grimacing, and further determine the associated emotion. The confidence score indicates a relative confidence in the accuracy of the emotional label determined, “first weight”.
[0091] Utterance output 512 is the aggregate of the computational (e.g., mathematical, statistical) representation of features identified in the various data modalities (e.g., acoustic, video). In the example implementation, individual neural network based components (e.g., recurrent neural networks, convolutional neural networks) are used to generate feature data for each of the data modalities. Utterance output 512 are used as inputs to an additional machine learning system configured to generate conversation analysis indicators, as described in FIG. 6.
Fig. 5 - the designated scores for each client are output to concatenated speaker data 608, Fig. 6, element 602); and
applying a second weight value to the second input ([0085] - the video feature output may include a set of emotional labels each with a confidence score (e.g., happy 0.78, aggressive 0.32, sad 0.08). In other words, video processing component 506 may include a neural network trained on labeled images of faces. Video processing component 506 may identify facial expressions, such as smiling, crying, laughing, and grimacing, and further determine the associated emotion. The confidence score indicates a relative confidence in the accuracy of the emotional label determined, “second weight”.
[0091] Utterance output 512 is the aggregate of the computational (e.g., mathematical, statistical) representation of features identified in the various data modalities (e.g., acoustic, video). In the example implementation, individual neural network based components (e.g., recurrent neural networks, convolutional neural networks) are used to generate feature data for each of the data modalities. Utterance output 512 are used as inputs to an additional machine learning system configured to generate conversation analysis indicators, as described in FIG. 6.
Fig. 5 - the designated scores for each client are output to concatenated speaker data 608, Fig. 6, element 602).
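Purely as an illustrative sketch of the claimed weighting step as mapped above, and not of any specific disclosure in Reece or Kneller, applying a first weight value to the first input and a second weight value to the second input could look like the following Python fragment; the weight values and names are hypothetical assumptions.

# Hypothetical sketch; weight values and names are assumptions, not from the cited references.
def weighted_combination(first_score: float, second_score: float,
                         first_weight: float = 0.6, second_weight: float = 0.4) -> float:
    # Apply the first weight to the first input's score and the second weight
    # to the second input's score, then combine them into one value.
    return first_weight * first_score + second_weight * second_score

combined = weighted_combination(0.78, 0.32)   # 0.6 * 0.78 + 0.4 * 0.32 = 0.596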
Regarding claim 15, Reece in view of Kneller discloses all the limitations of claim 14.
Kneller discloses receiving the first weight value for the first input via a graphical user interface (GUI) ([0008] – First user responses to one or more menu prompts from the series of interactions in the given customer journey, into a concatenated word string; calculate a similarity score between the concatenated word string and a category name of each category from the categories list; and map the given customer journey to the category whose category name produces the highest similarity score.
[0030] 3. Calculate a text-based similarity score for each category (e.g., each contact reason): [0031] i. Calculate a distance-based score for every word in the concatenated prompt and every word in the category (e.g., each word being represented), for example, via a dense vector (e.g., a pre-trained, domain specific word-embedding representation). The vector space may capture semantic similarities between words in the sense that similar words will be geometrically close to each other in the vector space. [0032] ii. For each one of the category's words, take the maximum similarity score. [0033] iii. The category score equals an average of those maximum scores.); and
receiving the second weight value for the second input via the GUI ([0008] – second user responses to one or more menu prompts from the series of interactions in the given customer journey, into a concatenated word string; calculate a similarity score between the concatenated word string and a category name of each category from the categories list; and map the given customer journey to the category whose category name produces the highest similarity score.
[0030] 3. Calculate a text-based similarity score for each category (e.g., each contact reason): [0031] i. Calculate a distance-based score for every word in the concatenated prompt and every word in the category (e.g., each word being represented), for example, via a dense vector (e.g., a pre-trained, domain specific word-embedding representation). The vector space may capture semantic similarities between words in the sense that similar words will be geometrically close to each other in the vector space. [0032] ii. For each one of the category's words, take the maximum similarity score. [0033] iii. The category score equals an average of those maximum scores.).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Reece to include receiving the first weight value for the first input via a graphical user interface (GUI) and receiving the second weight value for the second input via the GUI, as taught by Kneller. The motivation for doing so is to make the process more efficient, saving time compared to a process that would otherwise take weeks.
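For illustration only, the category-scoring procedure Kneller describes at [0030]-[0033] (per-word similarity via dense word vectors, maximum similarity per category word, averaged into a category score) could be sketched as follows in Python; the embedding table and all names are hypothetical assumptions, not Kneller's implementation.

# Illustrative sketch only; the embedding table and names are hypothetical.
import math
from typing import Dict, List

def cosine(a: List[float], b: List[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def category_score(prompt_words: List[str], category_words: List[str],
                   embed: Dict[str, List[float]]) -> float:
    # For each category word, take its maximum similarity to any word in the
    # concatenated prompt; the category score is the average of those maxima.
    maxima = []
    for cw in category_words:
        sims = [cosine(embed.get(cw, []), embed.get(pw, [])) for pw in prompt_words]
        maxima.append(max(sims) if sims else 0.0)
    return sum(maxima) / len(maxima) if maxima else 0.0

embed = {"billing": [1.0, 0.0], "invoice": [0.9, 0.1], "reset": [0.0, 1.0]}
print(category_score(["invoice", "reset"], ["billing"], embed))  # approximately 0.99

Under this reading, the journey would then be mapped to whichever category name yields the highest such score.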
Regarding claim 16, Reece in view of Kneller discloses all the limitations of claim 11.
Reece discloses executing a second trained ML model on the generative prompt to generate a second digital content item ([0179] - conversation analysis indicators 2006 include one or more of an openness score, an ownership score, an engagement score, a goal score, an interruptions score, a “time spent listening” score, emotion labels etc. More specifically, conversation analysis indicators 2006 may include the output of multiple versions of sequential machine learning systems (e.g., sequential machine learning system 610, shown in FIG. 6). For example, a first machine learning system may be trained to determine an ownership score, and a second may be trained to determine an engagement score. Conversation analysis indicators 2006 may include the output scores/indicators from multiple machine learning systems trained on conversation features, “second trained ML model”.
[0184] - coach computing device may subsequently review conversation analysis indicators and conversation video. interface and mapping system hosts an API (e.g., HTTP-based API) in communication with a mobile application. For example, mentee computing device may execute the mobile application and communicate with interface and mapping system using the API to review conversation analysis indicators and conversation video, review “display digital content after conversation analysis indicators, prompt”.
[0177] includes an interface and mapping system in communication with conversation analysis and multiple client computing devices. Interface and mapping system can include multiple user interfaces for users (e.g., coaches, mentees) to review their performance in both a particular conversation, and in the context of a coaching relationship (e.g., across multiple conversations). Interface and mapping system can map analytical data (e.g., conversation analysis indicators and corresponding scores) generated by a machine learning systems (as shown in FIG. 4) to fill in templates for user interfaces, to determine inferences about the conversation, and to identify actions to be taken);
displaying the second digital content item in the multiparty interface ([0184] - coach computing device may subsequently review conversation analysis indicators and conversation video. interface and mapping system hosts an API (e.g., HTTP-based API) in communication with a mobile application. For example, mentee computing device may execute the mobile application and communicate with interface and mapping system using the API to review conversation analysis indicators and conversation video, review “display digital content after conversation analysis indicators, prompt”.
[0177] includes an interface and mapping system in communication with conversation analysis and multiple client computing devices. Interface and mapping system can include multiple user interfaces for users (e.g., coaches, mentees) to review their performance in both a particular conversation, and in the context of a coaching relationship (e.g., across multiple conversations). Interface and mapping system can map analytical data (e.g., conversation analysis indicators and corresponding scores) generated by a machine learning systems (as shown in FIG. 4) to fill in templates for user interfaces, to determine inferences about the conversation, and to identify actions to be taken
[0064] FIG. 4 illustrates a conversation analytics system with interfaces for human and/or machine learning data first level and synthesized conversation features for a multiparty conversation, conversation analysis using sequential machine learning, and interfaces and mappings to determine actions to take based on sequential machine learning results.).
Regarding claim 17, Reece in view of Kneller discloses all the limitations of claim 11.
Reece discloses at least an intent text and a non-textual input, wherein the non-textual input comprises at least one of: a computer-aided design (CAD) object, a geometry, an image, a sketch, a video, an application state, or an audio recording ([0090] the output (e.g., the computational representation of identified features) of video processing component 506, acoustic processing component 508, and textual processing component 510. Utterance output 512 are used as inputs to generate conversation analysis indicators, as described in FIG. 6, [0092]. User 1 in Fig. 5 provides textual input, and User 2, on another device in Fig. 5, provides video input.).
Regarding claim 18, Reece in view of Kneller discloses all the limitations of claim 11.
Reece discloses wherein the trained ML model is trained using at least a combination of a first modality associated with text and at least one other modality associated with a non-textual input (
[0090] the output (e.g., the computational representation of identified features) of video processing component 506, acoustic processing component 508, and textual processing component 510. Utterance output 512 are used as inputs to generate conversation analysis indicators, as described in FIG. 6, [0092]. User 1 in Fig. 5 provides textual input, and User 2, on another device in Fig. 5, provides video input.
[0107], Fig. 6 - The inputs from user 1 and user 2 are concatenated (608) and used to train the sequential machine learning system.).
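As a rough sketch of the concatenation pattern relied on above (assumed shapes and names, not taken from Reece), per-modality feature vectors for text, video, and audio can be joined into a single utterance output, and the two speakers' outputs concatenated before being supplied to a downstream sequential model:

# Hypothetical sketch; array shapes and names are assumptions, not from Reece.
import numpy as np

def utterance_output(text_feat: np.ndarray, video_feat: np.ndarray,
                     audio_feat: np.ndarray) -> np.ndarray:
    # Join per-modality feature vectors into one utterance-output vector.
    return np.concatenate([text_feat, video_feat, audio_feat])

def concatenate_speakers(user1_out: np.ndarray, user2_out: np.ndarray) -> np.ndarray:
    # Concatenate both speakers' utterance outputs before the sequential model.
    return np.concatenate([user1_out, user2_out])

u1 = utterance_output(np.ones(4), np.zeros(3), np.ones(2))
u2 = utterance_output(np.zeros(4), np.ones(3), np.zeros(2))
seq_input = concatenate_speakers(u1, u2)   # shape (18,)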
Regarding claim 19, Reece in view of Kneller discloses all the limitations of claim 11.
Reece discloses executing a second trained ML model on the [[composite]] prompt to generate a second digital content item ([0179] - conversation analysis indicators 2006 include one or more of an openness score, an ownership score, an engagement score, a goal score, an interruptions score, a “time spent listening” score, emotion labels etc. More specifically, conversation analysis indicators 2006 may include the output of multiple versions of sequential machine learning systems (e.g., sequential machine learning system 610, shown in FIG. 6). For example, a first machine learning system may be trained to determine an ownership score, and a second may be trained to determine an engagement score. Conversation analysis indicators 2006 may include the output scores/indicators from multiple machine learning systems trained on conversation features, “second trained ML model”.
[0184] - coach computing device may subsequently review conversation analysis indicators and conversation video. interface and mapping system hosts an API (e.g., HTTP-based API) in communication with a mobile application. For example, mentee computing device may execute the mobile application and communicate with interface and mapping system using the API to review conversation analysis indicators and conversation video, review “display digital content after conversation analysis indicators, prompt”.
[0177] includes an interface and mapping system in communication with conversation analysis and multiple client computing devices. Interface and mapping system can include multiple user interfaces for users (e.g., coaches, mentees) to review their performance in both a particular conversation, and in the context of a coaching relationship (e.g., across multiple conversations). Interface and mapping system can map analytical data (e.g., conversation analysis indicators and corresponding scores) generated by a machine learning systems (as shown in FIG. 4) to fill in templates for user interfaces, to determine inferences about the conversation, and to identify actions to be taken);
displaying the second digital content item in the multiparty interface ([0184] - coach computing device may subsequently review conversation analysis indicators and conversation video. interface and mapping system hosts an API (e.g., HTTP-based API) in communication with a mobile application. For example, mentee computing device may execute the mobile application and communicate with interface and mapping system using the API to review conversation analysis indicators and conversation video, review “display digital content after conversation analysis indicators, prompt”.
[0177] includes an interface and mapping system in communication with conversation analysis and multiple client computing devices. Interface and mapping system can include multiple user interfaces for users (e.g., coaches, mentees) to review their performance in both a particular conversation, and in the context of a coaching relationship (e.g., across multiple conversations). Interface and mapping system can map analytical data (e.g., conversation analysis indicators and corresponding scores) generated by a machine learning systems (as shown in FIG. 4) to fill in templates for user interfaces, to determine inferences about the conversation, and to identify actions to be taken).
Regarding claim 20, Reece discloses a system comprising: one or more memories storing instructions; and one or more processors coupled to the one or more memories that, when executing the instructions, perform the steps of ([0045], [0049] - Fig. 1, Computing system 100 can include one or more input devices 120 that provide input to the processor(s) 110 (e.g., CPU(s), GPU(s), HPU(s), etc.), notifying it of actions. Instructions stored in the memory are executed by the processor(s) to perform the following:):
generating a multiparty interface that communicates with at least a trained machine learning (ML) model, a first client device, and a second client device (
[0043] The conversation analytics system provides a user interface for users to interrogate the conversation analysis indicators. In the example implementation, a coach/mentee user interface may be provided to track the effectiveness of a coaching relationship across multiple conversations. The conversation analytics system may further provide notification/alerts based on changes in the conversation analysis indicators. For example, a decline in the conversation analysis indicators may trigger an alert to the coach.
[0093], [0095] - Fig. 6, two user devices communicated with sequential machine learning system 610. Conversation analytics system 600 includes sequential machine learning system 610, and the associated input utterances (e.g., utterance output 512, described at FIG. 2). Sequential machine learning system 610 is configured to sequentially process utterances. In other words, a sequence of utterances are transformed into a sequence of conversation analysis indicators 612. The conversation analysis indicators include, for example, an emotional/behavioral state of the conversation as of the most recently processed utterance.
[0064] FIG. 4 illustrates a conversation analytics system with interfaces for human and/or machine learning data first level and synthesized conversation features for a multiparty conversation, conversation analysis using sequential machine learning, and interfaces and mappings to determine actions to take based on sequential machine learning results.);
combining at least a first input from the first client device and a second input from the second client device to generate a prompt (
[0036], Fig. 6 - a multiparty conversation can be segmented into utterances, and data for multiple modalities can be generated for each utterance. For each utterance, data for each modality of that utterance can be provided to a model trained for the modality (e.g., producing video-based output, acoustic-based output, etc.), which can be combined into utterance output. The utterance outputs can be input to a sequential model which can also receive its own output from processing one or more previous utterance outputs from the multiparty conversation, to generate conversation analysis indicators.
[0095] - Fig. 6 - The utterance outputs are concatenated together.
[0075] - wherein the conversation analysis indicators include a series of instant scores. More specifically, the system can prompt the coach and/or mentee.);
transmitting the prompt to the trained ML model for execution ([0036], Fig. 6 - a multiparty conversation can be segmented into utterances, and data for multiple modalities can be generated for each utterance. For each utterance, data for each modality of that utterance can be provided to a model trained for the modality (e.g., producing video-based output, acoustic-based output, etc.), which can be combined into utterance output. The utterance outputs can be input to a sequential model which can also receive its own output from processing one or more previous utterance outputs from the multiparty conversation, to generate conversation analysis indicators.
[0075] - wherein the conversation analysis indicators include a series of instant scores. More specifically, the system can prompt the coach and/or mentee to review the conversation (either alone or as a team), and provide with the prompt a subsegment of the recorded conversation with one or more corresponding inference labels. In some implementations, the training moments can be labeled as high points or low points in the conversation or a segment of the conversation. identifying a training moment as a high or low point can include applying yet another machine learning module trained “the trained ML model for execution” .);
receiving a digital content item from the trained ML model that was generated in response to the prompt (
[0075] - the training moments can be labeled as high points or low points in the conversation or a segment of the conversation. In some implementations, identifying a training moment as a high or low point can include applying yet another machine learning module trained to identify conversation high and low points based on human labeled data. the system 400 can operate as the conversation progresses live. The system can map a change in instant scores above a threshold to an action to provide an alert to one or both users, e.g., using notifications 410.
[0076], [0078] - Prompt is generated, Notification 410, Mentee Reports 412, Coaching Dashboard 414, Coach Match 416, “digital content” );
and displaying the digital content item in the multiparty interface ([0078] The interfaces of block 408 can generate various visualizations of the conversation analysis indicators. For example, a mentee may view, in mentee reports 412 via a web application, a conversation impact score, an overall composite score, an excitement score, an agreement score, a progress score, instant scores, coaching scores, match scores, etc. In some implementations, one or more of these scores can be provided in a mentee report with corresponding explanations and/or a baseline or comparison value so the mentee can interpret the values in terms of their progress, goals, or as a comparison to other mentees. Similarly, system 400 can include a coaching dashboard, e.g., as another web application, to show individual mentee scores, combinations of mentee scores, coaching or matching scores, coaching suggestion inferences or actions, etc.
[0064] FIG. 4 illustrates a conversation analytics system with interfaces for human and/or machine learning data first level and synthesized conversation features for a multiparty conversation, conversation analysis using sequential machine learning, and interfaces and mappings to determine actions to take based on sequential machine learning results.).
Reece does not explicitly disclose, but Kneller discloses,
generating a composite prompt ([0029] - concatenate or combine prompts (e.g., the text played by the system) of the remaining menus and corresponding customer responses.).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Reece to generate a composite prompt as taught by Kneller. The motivation for doing so is to make the process more efficient, saving time compared to a process that would otherwise take weeks.
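For orientation only, a minimal sketch of the claim-20 flow as mapped above (combine two client inputs into a single prompt, transmit it to a trained model, receive the generated item, and display it) follows; every name below is a hypothetical assumption and is not drawn from the claims or the cited references.

# Minimal hypothetical sketch of the mapped flow; all names are assumptions.
from dataclasses import dataclass
from typing import Callable

@dataclass
class ClientInput:
    device_id: str
    text: str

def combine_inputs(first: ClientInput, second: ClientInput) -> str:
    # Combine the two client inputs into a single prompt string.
    return f"[{first.device_id}] {first.text}\n[{second.device_id}] {second.text}"

def run_pipeline(first: ClientInput, second: ClientInput,
                 model: Callable[[str], str], display: Callable[[str], None]) -> None:
    prompt = combine_inputs(first, second)     # combining step
    content_item = model(prompt)               # transmit prompt to / execute the trained model
    display(content_item)                      # display in the multiparty interface

run_pipeline(ClientInput("device-1", "Summarize our goals"),
             ClientInput("device-2", "Focus on the timeline"),
             model=lambda p: "Generated summary for:\n" + p,
             display=print)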
Response to Arguments
The examiner suggests amending the claims to recite a specific element such that, when the claims are read in light of the invention, they are directed to the unique technology disclosed. The examiner can be reached at 571-270-0724 for further discussion.
Claim Rejection Under 35 U.S.C. 103
Applicant asserts: “As disclosed in Reece, the sequential ML model receives the utterance outputs from the multimodal ML models but does not receive the user utterances from the first and second devices associated with the first and second users. Based on these corrected claim mappings, to teach or suggest the above limitations of claim 1, Reece would have to disclose combining a first user utterance from the first device and a second user utterance from the second device to generate a prompt, transmitting the prompt to a multimodal ML model for execution, receiving an utterance output from the multimodal ML model, and displaying the utterance output in the interface. Importantly, Reece contains no such teachings. Instead, Reece discloses that each user utterance associated with a particular user is processed separately by at least one multimodal ML model to generate at least one utterance output. See Reece, paragraphs [0036], [0038], [0081], and [0095] and Figures 5-6. Thus, in Reece, each prompt to a multimodal ML model would include only a user utterance from a single device associated with a single user. Consequently, Reece cannot and does not teach or suggest combining a first user utterance from a first device and a second user utterance from a second device to generate a prompt. Reece is silent in this regard. In addition, Reece discloses only that the conversation analysis indicators are displayed in the interface at the devices for review by the users. See Reece, paragraph [0043]. However, Reece does not teach or suggest that the utterance outputs are displayed in the interface. Reece is also silent in this regard.”
Examiner disagrees.
During patent examination, the pending claims must be given their broadest reasonable interpretation consistent with the specification. See MPEP § 2111. Further, although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993). See also MPEP § 2145(VI).
In response to Applicant's arguments against the references individually, one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986).
It is the combination of Reece in view of Kneller that discloses combining a first user utterance from a first device and a second user utterance from a second device to generate a prompt.
Reece discloses [0036], Fig. 6 - a multiparty conversation can be segmented into utterances, and data for multiple modalities can be generated for each utterance. For each utterance, data for each modality of that utterance can be provided to a model trained for the modality (e.g., producing video-based output, acoustic-based output, etc.), which can be combined into utterance output. The utterance outputs can be input to a sequential model which can also receive its own output from processing one or more previous utterance outputs from the multiparty conversation, to generate conversation analysis indicators.
[0095] - Fig. 6 - The utterance outputs are concatenated together.
[0075] - wherein the conversation analysis indicators include a series of instant scores. More specifically, the system can prompt the coach and/or mentee.
Kneller discloses [0029] - concatenate or combine prompts (e.g., the text played by the system) of the remaining menus and corresponding customer responses.
Regarding claims 2-10 and 12-19, Applicant asserts that they are nonobvious based on their dependency from independent claims 1 and 11, respectively. The examiner respectfully cannot concur with Applicant, for the same reasons noted in the examiner's response to the arguments asserted for claims 1 and 11, respectively.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Ming Wu whose telephone number is (571) 270-0724. The examiner can normally be reached on Monday-Thursday and alternate Fridays (9:30am - 6:00pm) EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Devona Faulk can be reached on 571-272-7515. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Ming Wu/
Primary Examiner, Art Unit 2618