DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Status of the Claims
Claims 1-20, as presented in the Response filed on 27 November 2025, are pending in the present application.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The paragraphs below provide rationales for the rejection. The rationales are based on the multi-step subject matter eligibility test outlined in MPEP 2106.
Step 1 of the eligibility analysis involves determining whether a claim falls within one of the four enumerated categories of patentable subject matter recited in 35 USC 101. (See MPEP 2106.03(I).) That is, Step 1 asks whether a claim is to a process, machine, manufacture, or composition of matter. (See MPEP 2106.03(II).) Referring to the pending claims, the “method” of claims 1-10 constitutes a process under 35 USC 101, the “system” of claims 11-19 constitutes a machine under the statute, and the “medium” of claim 20 constitutes a “manufacture” under the statute. Accordingly, claims 1-20 meet the criteria of Step 1 of the eligibility analysis. The claims, however, fail to meet the criteria of subsequent steps of the eligibility analysis, as explained in the paragraphs below.
The next step of the eligibility analysis, Step 2A, involves determining whether a claim is directed to a judicial exception. (See MPEP 2106.04(II).) This step asks whether a claim is directed to a law of nature, a natural phenomenon (product of nature) or an abstract idea. (See id.) Step 2A is a two-prong inquiry. (See MPEP 2106.04(II)(A).) Prong One and Prong Two are addressed below.
In the context of Step 2A of the eligibility analysis, Prong One asks whether a claim recites an abstract idea, law of nature, or natural phenomenon. (See MPEP 2106.04(II)(A)(1).) Using claim 1 as an example, the claim recites the following abstract idea limitations:
“A method of ... a behavioral interview to identify behavioral attributes in a text passage, the method comprising: ...” - See below regarding MPEP 2106.04(a), certain methods of organizing human activity, and mental processes
“... developing a taxonomy of behaviors; ...” - See below regarding MPEP 2106.04(a), certain methods of organizing human activity, and mental processes
“... annotating a training data set of text passages to identify a classification and/or string index location of behaviors associated with the text passages; ...” - See below regarding MPEP 2106.04(a), certain methods of organizing human activity, and mental processes
“... predict one or more behaviors based on an input text passage; ...” - See below regarding MPEP 2106.04(a), certain methods of organizing human activity, and mental processes
“... identifying one or more behavioral attributes required for a job; ...” - See below regarding MPEP 2106.04(a), certain methods of organizing human activity, and mental processes
“... generating an assessment for prospective candidates, said assessment including one or more questions targeting evaluation of said one or more behavioral attributes, the one or more questions presented to the prospective candidates; ...” - See below regarding MPEP 2106.04(a), certain methods of organizing human activity, and mental processes
“... recording ... a response to said assessment from one or more prospective candidates, wherein said response includes at least one of audio and text data; ...” - See below regarding MPEP 2106.04(a), certain methods of organizing human activity, and mental processes
“... converting said response to a text passage; ...” - See below regarding MPEP 2106.04(a), certain methods of organizing human activity, and mental processes
“... identify one or more predicted behaviors; ...” - See below regarding MPEP 2106.04(a), certain methods of organizing human activity, and mental processes
“... weighting an importance for each of said one or more predicted behaviors; ...” - See below regarding MPEP 2106.04(a), certain methods of organizing human activity, and mental processes
“... determining scores for the behavioral attributes using at least one of said importance and said identified predicted behaviors, the scores used to generate a corpus of transcript data objects having word-level annotations indicative of an estimated probability of a word belonging to a particular behavioral class” - See below regarding MPEP 2106.04(a), certain methods of organizing human activity, and mental processes
“... rendering ... a visual rendering of an annotated transcript having visual graphical characteristics modified based at least on the word-level annotations corresponding to at least one of the one or more behavioral attributes required for the job; ...” - See below regarding MPEP 2106.04(a), certain methods of organizing human activity, and mental processes
“... sentence-level annotation during a first phase of annotating the training data set of text passages, and ... string index location information during a second phase of annotating the training data set of text passages; and ...” - See below regarding MPEP 2106.04(a), certain methods of organizing human activity, and mental processes
“... wherein annotating the training data set of text passages includes ... conduct the first phase at a sentence-by-sentence level to identify one or more behaviours from a pre-established list of behavioural clusters; and ...” - See below regarding MPEP 2106.04(a), certain methods of organizing human activity, and mental processes
“... conduct the second phase to identify one or more string index locations corresponding to each of the identified one or more behaviours, the one or more string index locations each corresponding to an exact sub-sentence sequence, the one or more string index locations used to generate the word-level annotations.” - See below regarding MPEP 2106.04(a), certain methods of organizing human activity, and mental processes
The above-listed limitations of independent claim 1, when applying their broadest reasonable interpretations in light of their context in the claim as a whole, fall under enumerated groupings of abstract ideas outlined in MPEP 2106.04(a). For example, limitations of the claim can be characterized as: commercial interactions, including marketing associated with matching job seekers with employers; and managing personal behavior or relationships or interactions between people, associated with matching job seekers with employers, which fall under the certain methods of organizing human activity grouping of abstract ideas (see MPEP 2106.04(a)). Limitations of the claim also can be characterized as: concepts performed in the human mind, including observation (e.g., the recited “recording” step), and evaluation, judgment, and/or opinion (e.g., the recited “developing,” “annotating,” “predict,” “identifying,” “generating,” “converting,” “applying,” “weighting,” “determining,” “rendering,” “annotating,” and “conduct” steps), which fall under the mental processes grouping of abstract ideas (see MPEP 2106.04(a)). Accordingly, for at least these reasons, claim 1 fails to meet the criteria of Step 2A, Prong One of the eligibility analysis.
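For purposes of illustration only, the recited two-phase annotation (a sentence-level first phase against a pre-established list of behavioural clusters, followed by a second phase identifying exact sub-sentence string index locations used to generate word-level annotations) could be reduced to a simple program. The sketch below is the examiner's own, is not drawn from applicant's specification, and uses entirely hypothetical names, cue lists, and heuristics:

```python
# Illustrative sketch only; behaviour clusters and cue phrases are hypothetical.
BEHAVIOUR_CLUSTERS = {
    "teamwork": ["worked together", "our team", "collaborated"],
    "adaptability": ["adjusted", "changed course", "adapted"],
}

def phase_one(sentence):
    """First phase: sentence-by-sentence, identify behaviours from the
    pre-established list of behavioural clusters."""
    found = []
    for behaviour, cues in BEHAVIOUR_CLUSTERS.items():
        if any(cue in sentence.lower() for cue in cues):
            found.append(behaviour)
    return found

def phase_two(sentence, behaviour):
    """Second phase: locate the string index range of the exact
    sub-sentence sequence conveying the identified behaviour."""
    lowered = sentence.lower()
    for cue in BEHAVIOUR_CLUSTERS[behaviour]:
        start = lowered.find(cue)
        if start != -1:
            return (start, start + len(cue))
    return None

def annotate(transcript):
    """Combine both phases into word-level annotations per sentence."""
    annotations = []
    for sentence in transcript.split(". "):
        for behaviour in phase_one(sentence):
            span = phase_two(sentence, behaviour)
            if span is not None:
                annotations.append(
                    {"sentence": sentence, "behaviour": behaviour,
                     "span": span, "text": sentence[span[0]:span[1]]})
    return annotations

example = "We collaborated on the rollout. I adjusted the plan overnight."
for a in annotate(example):
    print(a["behaviour"], a["span"], repr(a["text"]))
```

The string index pair produced by the second phase is what a word-level annotation would be keyed to in this toy rendering.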
In the context of Step 2A of the eligibility analysis, Prong Two asks if the claim recites additional elements that integrate the judicial exception into a practical application. (See MPEP 2106.04(II)(A)(2).) Continuing to use claim 1 as an example, the claim recites the following additional element limitations:
The claimed “method” involves “automating” - See below regarding MPEP 2106.05(a)-(c) and (f)-(h)
The claimed “predict” involves “training a machine learning model” - See below regarding MPEP 2106.05(a)-(c) and (f)-(h)
The claimed “presented” is “through graphical control elements rendered on a graphical user interface” - See below regarding MPEP 2106.05(a)-(c) and (f)-(h)
The claimed “recording” is performed “using a video or audio interface coupled to the graphical user interface” - See below regarding MPEP 2106.05(a)-(c) and (f)-(h)
The claimed “identify” involves “applying said machine learning model to said text passage” - See below regarding MPEP 2106.05(a)-(c) and (f)-(h)
The claimed “rendering” is “on a scoring dashboard graphical user interface” - See below regarding MPEP 2106.05(a)-(c) and (f)-(h)
“...wherein said machine learning model includes a first behavioural class machine learning model adapted for” performing the claimed “sentence-level annotation” - See below regarding MPEP 2106.05(a)-(c) and (f)-(h)
“... a second location-based machine learning model adapted for” performing the claimed “string index location information” - See below regarding MPEP 2106.05(a)-(c) and (f)-(h)
The claimed “conduct” is performed by “operating the machine learning model” - See below regarding MPEP 2106.05(a)-(c) and (f)-(h)
The above-listed additional element limitations of claim 1, when applying their broadest reasonable interpretations in light of their context in the claim as a whole, are analogous to: accelerating a process of analyzing audit log data when the increased speed comes solely from the capabilities of a general-purpose computer, mere automation of manual processes, instructions to display two sets of information on a computer display in a non-interfering manner, without any limitations specifying how to achieve the desired result, and arranging transactional information on a graphical user interface in a manner that assists traders in processing information more quickly, which courts have indicated may not be sufficient to show an improvement in computer-functionality (see MPEP 2106.05(a)(I)); a commonplace business method being applied on a general purpose computer, gathering and analyzing information using conventional techniques and displaying the result, and selecting a particular generic function for computer hardware to perform from within a range of fundamental or commonplace functions performed by the hardware, which courts have indicated may not be sufficient to show an improvement to technology (see MPEP 2106.05(a)(II)); a general purpose computer that applies a judicial exception, such as an abstract idea, by use of conventional computer functions, and merely adding a generic computer, generic computer components, or a programmed computer to perform generic computer functions, which do not qualify as a particular machine or use thereof (see MPEP 2106.05(b)(I)); a machine that is merely an object on which the method operates, which does not integrate the exception into a practical application (see MPEP 2106.05(b)(II)); use of a machine that contributes only nominally or insignificantly to the execution of the claimed method, which does not integrate a judicial exception (see MPEP 2106.05(b)(III)); transformation of an intangible concept such as a contractual obligation or mental judgment, which is not likely to provide significantly more (see MPEP 2106.05(c)); remotely accessing user-specific information through a mobile interface and pointers to retrieve the information without any description of how the mobile interface and pointers accomplish the result of retrieving previously inaccessible information, which courts have found to be mere instructions to apply an exception, because they recite no more than an idea of a solution or outcome (see MPEP 2106.05(f)); use of a computer or other machinery in its ordinary capacity for economic or other tasks (e.g., to receive, store, or transmit data) or simply adding a general purpose computer or computer components after the fact to an abstract idea, a commonplace business method or mathematical algorithm being applied on a general purpose computer, and requiring the use of software to tailor information and provide it to the user on a generic computer, which courts have found to be mere instructions to apply an exception, because they do no more than merely invoke computers or machinery as a tool to perform an existing process (see MPEP 2106.05(f)); mere data gathering in the form of obtaining information about transactions using the Internet to verify transactions and consulting and updating an activity log, and selecting a particular data source or type of data to be manipulated in the form of selecting information, based on types of information and availability of information in an environment, for collection, analysis, and display, which courts have found to be insignificant extra-solution activity (see MPEP 2106.05(g)); and specifying that the abstract idea of monitoring audit log data relates to transactions or activities that are executed in a computer environment, because this requirement merely limits the claims to the computer field, i.e., to execution on a generic computer, which courts have described as merely indicating a field of use or technological environment in which to apply a judicial exception (see MPEP 2106.05(h)). For at least these reasons, claim 1 fails to meet the criteria of Step 2A, Prong Two of the eligibility analysis.
The next step of the eligibility analysis, Step 2B, asks whether a claim recites additional elements that amount to significantly more than the judicial exception. (See MPEP 2106.05(II).) The step involves identifying whether there are any additional elements in the claim beyond the judicial exceptions, and evaluating those additional elements individually and in combination to determine whether they contribute an inventive concept. (See id.) The ineligibility rationales applied at Step 2A, Prong Two, also apply to Step 2B. (See id.) For all of the reasons covered in the analysis performed at Step 2A, Prong Two, claim 1 fails to meet the criteria of Step 2B. As a result, claim 1 is rejected under 35 USC 101 as ineligible for patenting.
Regarding pending claims 2-10, the claims depend from claim 1, and expand upon limitations introduced by claim 1. The dependent claims are rejected at least for the same reasons as claim 1. For example, the dependent claims recite abstract idea elements similar to the abstract idea elements of claim 1, that fall under the same abstract idea groupings as the abstract idea elements of claim 1 (e.g., the “wherein converting said response to a text passage comprises using ... speech recognition service on said audio data” of claim 2, the “wherein said calculating scores comprises applying at least one of a rubric and a benchmark” of claim 3, the “wherein said taxonomy of behaviors includes a binary classification of behaviors” of claim 4, the “wherein applying said ... model comprises applying said ... model to a subsection of said text passage” of claim 5, the “wherein said text passage is at least one of an entire input text passage, a paragraph, a sentence, or a subsection of said input text passage” of claim 6, the “wherein the exact sub-sentence sequence and corresponding one or more behaviours is rendered ... for display to one or more users” of claim 7, the “wherein the exact sub-sentence sequence and corresponding one or more behaviours is appended into a training data set, the training data set used to” of claim 8, and the “combining pre-annotated sentences back into an original transcript with boundaries of sentence-level annotations of the first phase, wherein words that convey a meaning of a classified behaviour are included in the boundaries” of claim 10). The dependent claims recite further additional elements that are similar to the additional elements of claim 1, that fail to warrant eligibility for the same reasons as the additional elements of claim 1 (e.g., the “automated” of claim 2, the “machine learning” of claim 5, the “on a graphical user interface ... of the graphical user interface” of claim 7, the “re-train the machine learning model based on whether a prospective candidate of the prospective candidates is selected or not selected” of claim 8, the “wherein the first behavioural class machine learning model is a shallow machine learning model” of claim 9, and the “wherein the second phase of operation of the machine learning model includes” of claim 10). Accordingly, claims 2-10 also are rejected as ineligible under 35 USC 101.
Regarding claims 11-19, while the claims are of different scope relative to claims 1-9, the claims recite limitations similar to the limitations of claims 1-9. As such, the rejection rationales applied to reject claims 1-9 also apply for purposes of rejecting claims 11-19. Limitations recited by claims 11-19 that do not have a counterpart in claims 1-9, such as the recited “computer system configured for automating a behavioral interview to identify behavioral attributes in a text passage, the system comprising a processor coupled to computer memory, the processor configured to” limitations of claim 11, fail to warrant a finding of eligibility, because such limitations amount to additional elements that fail to meet the criteria of Step 2A, Prong Two and Step 2B, for the same reasons as the additional elements of claims 1-9. Claims 11-19 are, therefore, also rejected as ineligible under 35 USC 101.
Claim 20, while of different scope relative to claims 1 and 11, recites limitations similar to those recited by claims 1 and 11. As such, the rationales applied for purposes of rejecting claims 1 and 11 also apply for purposes of rejecting claim 20. Claim 20 is, therefore, also rejected as ineligible under 35 USC 101.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-8, 10-18, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Pat. App. Pub. No. 2021/0233030 A1 to Preuss et al. (hereinafter referred to as “Preuss”), in view of U.S. Pat. App. Pub. No. 2015/0127567 A1 to Menon et al. (hereinafter referred to as “Menon”).
Regarding independent claim 1, Preuss discloses the following limitations:
“A method of automating a behavioral interview to identify behavioral attributes in a text passage, the method comprising: ...” - Preuss discloses, “systems [and] methods for automating recorded candidate assessments include receiving a submission for an available position including a question response recording for each of one or more interview questions. For each question response recording, a transcript can be generated by applying a speech-to-text algorithm to an audio portion of the recording. The systems and methods can detect, within the transcript, identifiers each associated with the personality aspects by applying a natural language classifier trained to detect words and phrases associated with the personality aspects of the personality model” (Abstract). A method for automating recorded candidate assessments to identify personality aspects in a transcript of a recording, in Preuss, reads on the recited limitation.
“... developing a taxonomy of behaviors; ...” - See the aspects of Preuss that have been cited above. A natural language classifier trained to detect words and phrases associated with personality aspects of a personality model, in Preuss, reads on the recited limitation.
“... annotating a training data set of text passages to identify a classification and string index location of behaviors associated with the text passages; ...” - See the aspects of Preuss that have been cited above. Preuss also discloses, “the speech-to-text algorithm is trained with a customized dictionary of terms associated with a plurality of personality aspects of a personality model that indicate an aptitude of the candidate for the available position” (Preuss, para. [0006]), and “the training data 124 of customized words, phrases, and synonyms make up a customized dictionary for the STT algorithm. In some examples, the entries in the customized dictionary are assigned a higher identification priority (weight) than other words, making the entries more resistant to missed detection. In one example, the customized dictionary used to train the STT algorithm includes over 16,000 words and phrases plus synonyms associated with each of the entries. In some examples, the STT algorithm can be further customized by training with one or more language model data sets and/or acoustic model data sets” (Preuss, para. [0069]). Customizing words and phrases in training data to make entries of a customized dictionary resistant to missing detection of words and phrases associated with personality aspects, in Preuss, reads on the recited limitation.
“.. training a machine learning model to predict one or more behaviors based on an input text passage; ...” - See the aspects of Preuss that have been cited above. Preuss also discloses, “the machine learning algorithm used by the STT conversion engine 134 can be trained by artificial intelligence (AI) training engine 142 to detect keywords, phrases, and synonyms associated with aspects in the personality model 500 (FIG. 5) with greater accuracy than other words, which in turn improves the performance of the natural language classifier that is trained to detect the personality aspects” (para. [0069]), “the video assessment system 108 can also include a language classification engine 138 that is trained by AI training engine 142 to detect personality aspect identifiers within interview question transcripts submitted by candidates 102. In some examples, the language classification engine 138 uses a commercial natural language classifier, such as IBM WATSON, Google Cloud Speech, or Amazon Polly, that has been specifically trained to detect personality aspects within interview question transcripts” (para. [0076]), and “FIG. 12 illustrates an interview question transcript 1200 that is provided as input to the natural language classifier. In the example, the transcript includes a response to a question asking a candidate to discuss a time when he or she worked in a team environment. FIG. 12 also shows highlighted positive and negative personality aspect identifiers 1202-1226 that are detected and output by the natural language classifier. In some embodiments, the natural language classifier outputs identifiers for personality aspects that are mapped to the respective interview question (e.g., question-aspect mapping 800 in FIG. 8). In other examples, the natural language classifier outputs all of the detected personality aspects whether they are associated with the respective question or not” (para. [0077]). 
Training a machine learning algorithm to facilitate outputting personality traits detected from a transcript, in Preuss, reads on the recited limitation.
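By way of illustration only, the kind of training relied upon above (learning to predict behaviours, or personality aspects, from an input text passage) can be sketched as a toy word-frequency classifier. Nothing below is taken from Preuss or from applicant's disclosure; the approach, labels, and training examples are all hypothetical:

```python
# Illustrative sketch only; a real system would use a trained natural
# language classifier, not raw word-frequency matching.
from collections import Counter, defaultdict

def tokenize(text):
    return [w.strip(".,").lower() for w in text.split()]

def train(annotated_passages):
    """Build per-behaviour word-frequency counts from an annotated
    training set of (text passage, behaviour label) pairs."""
    counts = defaultdict(Counter)
    for text, label in annotated_passages:
        counts[label].update(tokenize(text))
    return counts

def predict(model, passage):
    """Predict the behaviour whose training vocabulary best overlaps
    the input text passage."""
    tokens = tokenize(passage)
    scores = {label: sum(c[t] for t in tokens) for label, c in model.items()}
    return max(scores, key=scores.get)

training_set = [
    ("we collaborated closely as a team", "teamwork"),
    ("our team shared the workload", "teamwork"),
    ("I adapted quickly when priorities changed", "adaptability"),
    ("I changed my approach under pressure", "adaptability"),
]
model = train(training_set)
print(predict(model, "the team shared everything"))  # -> teamwork
```

The annotated training set plays the role of the labeled text passages; the counts learned from it are the (toy) model parameters.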
“... identifying one or more behavioral attributes required for a job; ...” - Preuss discloses, “FIG. 2 shows a competency-enabler mapping table 200 for an available position provided to the employer at a UI screen. For each competency 202 identified for an available position, the employer 104 can select one or more enabler attributes 204 from a dropdown menu provided in the UI screen. Examples of enabler attributes 204 include ‘acts with humility,’ ‘adapts to individual differences,’ ‘attends to critical detail,’ ‘behaves flexibly,’ ‘builds relationships,’ ‘champions change,’ and ‘coaches for performance.’ In one example, an identified competency of ‘self-awareness’ may be mapped to the enabler attribute of ‘acts with humility.’” (para. [0049]). Identifying enabler attributes, including behaving flexibly and acts with humility, among other attributes, indicative of competency in eyes of an employer, in Preuss, reads on the recited limitation.
“... generating an assessment for prospective candidates, said assessment including one or more questions targeting evaluation of said one or more behavioral attributes, the one or more questions presented to the prospective candidates through graphical control elements rendered on a graphical user interface; ...” - Preuss discloses, “in response to receiving competency-enabler attribute selections, the employer management engine 144 produces an interview question selection UI screen that allows employers 104, with or without the assistance of consultants 106, to select interview questions for the position that align with each of the employer-identified competencies. For example, FIG. 3 provides a set of interview questions or prompt 300 where each question/prompt is associated with a particular competency from the competency model for a position” (para. [0050]), “the interview question selection UI screen provides sets of questions for selection based on the identified enabler attributes. For example, FIG. 4 shows a question-enabler mapping table 400 that allows employers 104 to select interview questions 402 associated with each of the identified enabler attributes 404” (para. [0051]), “FIG. 9 illustrates a question summary UI screen 900 for ‘Position A,’ which has four corresponding interview questions 902-908 which the candidate can review prior to selecting one of the questions in order to record a response” (para. [0063]), and “FIG. 10 illustrates an example of a question input UI screen 1000 where a candidate 102 can record an interview questions response at an external computing device 158 such as a mobile device, tablet, wearable device, or laptop. In some implementations, the data acquisition engine 146 presents the selected question 1002 in the UI screen 1000 in addition to a recording display window 1004 where the candidate 102 can see herself as she records the video response. 
In some examples, the UI screen 1000 may also include a visual recording indicator 1006 that allows the candidate to view how much of a maximum amount of recording time they have used to answer the question. The UI screen 1000 can also include other controls such as a start/stop recording selector 1008 as well as a selector to finish and link the video file to the selected question” (para. [0064]). Providing a question summary UI screen to a candidate, wherein questions presented facilitate identifying enabler attributes, in Preuss, reads on the recited “generating an assessment for prospective candidates, said assessment including one or more questions targeting evaluation of said one or more behavioral attributes” limitation. Displaying the questions on the question input UI screen, from where the candidate selects questions to respond to, and displaying the question and controls on the question input UI screen, in Preuss, reads on the recited “the one or more questions presented to the prospective candidates through graphical control elements rendered on a graphical user interface” limitation.
“... recording, using a video or audio interface coupled to the graphical user interface, a response to said assessment from one or more prospective candidates, wherein said response includes at least one of audio and text data; ...” - See the aspects of Preuss that have been referenced above. The candidate recording the video response using the controls of the UI screen, wherein the response data can undergo processing using STT, in Preuss, reads on the recited limitation.
“... converting said response to a text passage; ...” - See the aspects of Preuss that have been cited above. Preuss also discloses, “the video assessment system 108 can also include a speech-to-text (STT) conversion engine 134 that converts the audio data of each captured video interview question into written text in real-time. In some implementations, the STT conversion engine 134 uses a Speech-To-Text Service to perform the STT conversion. In other embodiments, other STT services that can also transform audio data into a written transcript can also be used by language classification engine 138 to detect personality aspects used to assess suitability of a candidate 102 for a particular position” (para. [0068]). The STT conversion, in Preuss, reads on the recited limitation.
“... applying said machine learning model to said text passage to identify one or more predicted behaviors; ...” - See the aspects of Preuss that have been cited above. Applying artificial intelligence and machine learning to a written transcript to identify personality aspects, in Preuss, reads on the recited limitation.
“... weighting an importance for each of said one or more predicted behaviors; ...” - Preuss discloses, “In some implementations, each question can be mapped to multiple competencies and/or enabler attributes, which allows the system 108 to automatically detect multiple personality aspect identifiers within a single question response. Additionally, as discussed further below, the system 108 can weight each competency associated with a question differently based on the relevance of the competency to the question, which further allows for a customized, automated solution for accurately identifying the best candidates 102 for available positions in an unbiased manner” (para. [0050]). Weighting competencies based on relevance, wherein competencies are predicted based on interview question responses, in Preuss, reads on the recited limitation.
“... determining scores for the behavioral attributes using at least one of said importance and said identified behaviors, the scores used to generate a corpus of transcript data objects having word-level annotations indicative of an estimated probability of a word belonging to a particular behavioral class; and ...” - See the aspects of Preuss that have been cited above. Preuss also discloses, “FIG. 12 illustrates an interview question transcript 1200 that is provided as input to the natural language classifier. In the example, the transcript includes a response to a question asking a candidate to discuss a time when he or she worked in a team environment. FIG. 12 also shows highlighted positive and negative personality aspect identifiers 1202-1226 that are detected and output by the natural language classifier. In some embodiments, the natural language classifier outputs identifiers for personality aspects that are mapped to the respective interview question (e.g., question-aspect mapping 800 in FIG. 8). In other examples, the natural language classifier outputs all of the detected personality aspects whether they are associated with the respective question or not.” (para. [0077]), “The natural language classifier of the language classification engine 138, in some embodiments, assigns a positive or negative polarity to each detected identifier based on whether the respective identifier is associated with a positive or negative feature of the personality aspect” (para. [0078]), “Returning to FIG. 1, in some implementations, the video assessment system 108 can also include a candidate scoring engine 140 that calculates, for a candidate 102 submitting responses to a set of interview questions to the system 108, scores per aspect for each question and per interview. 
In some examples, the calculated scores can take into account relative numbers of positive and negative indicators for each aspect, confidence in the accuracy of the STT transcript for each question, amount of raw evidence for each personality aspect in the interview transcript, and relevance of each personality aspect to each interview question” (para. [0080]), and “The platform 1608, in some implementations, adds each of the detected positive and negative personality aspects into groups for each of the aspects, and using the aspect groupings (e.g., groupings 1302, 1304, 1306 in FIG. 13), computes scores for the candidate 1602 for each of the aspects, total scores per question, and/or total scores per interview (1638). In some examples, the calculated scores can take into account relative numbers of positive and negative indicators for each aspect, confidence in the accuracy of the STT transcript for each question, amount of raw evidence for each personality aspect in the interview transcript, and relevance of each personality aspect to each interview question” (para. [0142]). Calculating scores for personality aspects based on responses, the scores being related to generating the transcript with words identified that, at some level of confidence, indicate the personality aspects being in particular groups, in Preuss, reads on the recited limitation.
“... rendering, on a scoring dashboard graphical user interface, a visual rendering of an annotated transcript having visual graphical characteristics modified based at least on the word-level annotations corresponding to at least one of the one or more behavioral attributes required for the job; ...” - See the aspects of Preuss that have been referenced above. Preuss also discloses, “FIG. 15 shows a reporting and feedback UI screen 1500 that is presented to an external device 158 of an employer 104 in response to analyzing and scoring each of the submitted interview responses provided by a candidate 102. In some implementations, reporting and feedback UI screen 1500 can include a video replay window 1510 that allows an employer 104 to view a candidates' response to an interview question 1512” and “the UI screen 1500 can also include score summaries for each of the personality aspects 1502-1506 assessed by the interview question 1512” (para. [0100]). Rendering, on the reporting and feedback UI screen, video of candidate responses and score summaries of personality aspects, based on the annotated transcript, including the highlighted positive and negative personality aspect identifiers, in Preuss, reads on the recited limitation.
The combination of Preuss and Menon (hereinafter referred to as “Preuss/Menon”) teaches the following limitations of independent claim 1, which do not appear to be disclosed in their entirety by Preuss alone:
“... wherein said machine learning model includes a first behavioural class machine learning model adapted for sentence-level annotation during a first phase of annotating the training data set of text passages, and a second location-based machine learning model adapted for string index location information during a second phase of annotating the training data set of text passages; and ...” - See the aspects of Preuss that have been referenced above. Preuss also discloses, “the STT conversion engine 134 uses machine learning algorithms to combine knowledge of grammar, language structure, and the composition of audio and voice signals to accurately transcribe the human voice in received interview question files” (para. [0068]), “the machine learning algorithm used by the STT conversion engine 134 can be trained by artificial intelligence (AI) training engine 142 to detect keywords, phrases, and synonyms associated with aspects in the personality model 500 (FIG. 5) with greater accuracy than other words, which in turn improves the performance of the natural language classifier that is trained to detect the personality aspects” (para. [0069]), “FIG. 12 illustrates an interview question transcript 1200 that is provided as input to the natural language classifier. In the example, the transcript includes a response to a question asking a candidate to discuss a time when he or she worked in a team environment. FIG. 12 also shows highlighted positive and negative personality aspect identifiers 1202-1226 that are detected and output by the natural language classifier” (para. [0077]), “the natural language classifier nodes 1752 can communicate with a machine learning service 1758 that has been specifically trained to detect personality aspects within interview question transcripts. 
In some aspects, the machine learning service 1758 can be configured to perform more than one type of machine learning algorithm associated with conducting video assessments of job candidate interviews (e.g., natural language classification, speech-to-text conversion). In some examples, the natural language classifier nodes 1752 can apply classifier training data to the machine learning service 1758, provide interview question transcripts to the machine learning service 1758, and process received personality aspect detection results” (para. [0117]). The machine learning algorithm used by the STT to detect phrases of sentences associated with aspects in the personality model during the STT process, and the machine learning algorithm used by the natural language classifier to annotate specific strings of text in the transcript, in Preuss, reads on the recited “wherein said machine learning model ... adapted for sentence-level annotation during a first phase of annotating the training data set of text passages, and ... adapted for string index location information during a second phase of annotating the training data set of text passages” limitation. It is not entirely clear that there are multiple machine learning models in use in Preuss (although it appears so, per the “and/or” phrasing in para. [0106]). Nevertheless, Menon discloses, “A data extraction layer 612 is illustrated, one example embodiment of the data extraction layer is to use a variety of techniques--some machine automated and some driven by human beings, to take the documents to be processed and output competency statements, with as much auxiliary information (such as "level" of skill) as is necessary and possible. The automated extraction is accomplished by a pipeline of one or more different machine learning algorithms, each with a specific purpose to continue enriching the data from the acquisition layer” (para. [0079]), and “The training and extraction pipeline are quite similar. 
The given job description 1002 is first passed through a "sentence segmentation stage" of a sentence segmentor 1004 to extract sentences from a job description. The extracted sentences are then passed through a Part-of-Speech tagger to tag the tokens with their equivalent part-of-speech tags. This part of the pipeline is common for most natural language processing (NLP) tasks. The next stage in the pipeline (i.e., valid requirement classifier 1008) determines the probability that a given sentence could be a job requirement. This stage helps distinguish generic sentences in a job description from sentences that may indicate a requirement. Sentences that are potentially valid job requirements are then passed through a number of named entity recognizers (NER) 1010 and a word class annotator 1018 to understand the structure of the sentence. The output from this stage is then sent to a feature generator 1020, which massages the output from the NERs and the word class annotator into a format understood by the Sequence Tagging algorithm 1022. The Sequence Tagging algorithm 1022 uses the sentence structure as described by the feature generator to extract structured information from requirements. The extracted output 1034 is post-processed through the same NER processes 1024 to extract the relevant information from the extracted output” (para. [0128]). Use of the pipeline of different machine learning algorithms, in Menon, within the machine learning processing, of Preuss, reads on the recited “first behavioural class machine learning model” and “second location-based machine learning model” limitations.
“... wherein annotating the training data set of text passages includes operating the machine learning model to conduct the first phase at a sentence-by-sentence level to identify one or more behaviours from a pre-established list of behavioural clusters; and ...” - See the aspects of Preuss that have been referenced above. Use of the machine learning to perform STT, with particular detail being used on phrases of sentences associated with the personality model, in Preuss, reads on the recited limitation.
“... operating the machine learning model to conduct the second phase to identify one or more string index locations corresponding to each of the identified one or more behaviours, the one or more string index locations each corresponding to an exact sub-sentence sequence, the one or more string index locations used to generate the word-level annotations.” - See the aspects of Preuss that have been referenced above. Using the machine learning to isolate specific strings of words within phrases of sentences, and to annotate them with personality aspects and positive and negative indicators, in Preuss, reads on the recited limitation.
Menon discloses “processing natural language text provided about job candidates” (para. [0001]), similar to the claimed invention and to Preuss. It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the machine learning processes, in Preuss, to include use of pipelines of different machine learning algorithms, as in Menon, so that the machine learning is tailored to the tasks (or in other words, so that processing steps can be performed by machine learning algorithms “each with a specific purpose,” per Menon (para. [0079])).
Regarding claim 2, Preuss/Menon teaches the following limitations:
“The method of claim 1, wherein converting said response to a text passage comprises using automated speech recognition service on said audio data.” - See the aspects of Preuss that have been cited above. Converting an audio part of a video interview responses into a written transcript by using a STT conversion engine on the audio part, in Preuss, reads on the recited limitation.
Regarding claim 3, Preuss/Menon teaches the following limitations:
“The method of claim 1, wherein said calculating scores comprises applying at least one of a rubric and a benchmark.” - See the aspects of Preuss that have been referenced above. Using detected occurrences of personality aspects in interview question transcripts to compute scores, in Preuss, reads on the recited limitation.
Regarding claim 4, Preuss/Menon teaches the following limitations:
“The method of claim 1, wherein said taxonomy of behaviors includes a binary classification of behaviors.” - Preuss discloses, “Task style” involving “Drive” as “relaxed vs. focused” (FIG. 5). The “relaxed vs. focused” dichotomy, in Preuss, reads on the recited binary classification of behaviors.
Regarding claim 5, Preuss/Menon teaches the following limitations:
“The method of claim 1, wherein applying said machine learning model comprises applying said machine learning model to a subsection of said text passage.” - See the aspects of Preuss that have been referenced above. Using artificial intelligence and machine learning on words of a written transcript, in Preuss, reads on the recited limitation.
Regarding claim 6, Preuss/Menon teaches the following limitations:
“The method of claim 1, wherein said text passage is at least one of an entire input text passage, a paragraph, a sentence, or a subsection of said input text passage.” - See the aspects of Preuss that have been referenced above. Using artificial intelligence and machine learning on words of a written transcript, in Preuss, reads on the recited limitation. See also the disclosure of full sentences in para. [0092] of Preuss.
Regarding claim 7, Preuss/Menon teaches the following limitations:
“The method of claim 1, wherein the exact sub-sentence sequence and corresponding one or more behaviours is rendered on a graphical user interface for display to one or more users of the graphical user interface.” - See the aspects of Preuss that have been referenced above. The rendering, on the display shown in FIG. 15 of Preuss, of video of the responses giving rise to the identification of personality aspects reads on the recited limitation.
Regarding claim 8, Preuss/Menon teaches the following limitations:
“The method of claim 1, wherein the exact sub-sentence sequence and corresponding one or more behaviours is appended into a training data set, the training data set used to re-train the machine learning model based on whether a prospective candidate of the prospective candidates is selected or not selected.” - Preuss discloses, “Additionally, the UI screens generated by the reporting and feedback engine 152 can provide a candidate 102 with post-interview and selection feedback regarding why the candidate was or was not selected for the position” (para. [0105]), and “the transcripts for each interview question, question mapping data, calculated scores, and received feedback on the scores (e.g., adjusted scores at score fields 1518, comments at comment input fields 1520, and no rating inputs 1522) can be added to the training data sets 124 used by the AI training engine 142 in training the natural language classifier for the language classification engine 138 and/or STT algorithm for the STT conversion engine 134” (para. [0106]). Adding such data, including feedback regarding why a candidate was or was not selected, to the training data sets used to re-train the classifiers, in Preuss, reads on the recited limitation.
Regarding claim 10, Preuss/Menon teaches the following limitations:
“The method of claim 1, wherein the second phase of operation of the machine learning model includes combining pre-annotated sentences back into an original transcript with boundaries of sentence-level annotations of the first phase, wherein words that convey a meaning of a classified behaviour are included in the boundaries.” - See the aspects of Preuss that have been referenced above. The combining of unannotated text with annotated text to form the transcript, with shading of text to show boundaries of annotations of phrases of sentences that relate to personality aspects of the personality model, in Preuss, reads on the recited limitation.
Regarding claim 11, while the claim is of different scope relative to claim 1, the claim recites limitations similar to those recited by claim 1. As such, the rationales applied to reject claim 1 also apply for purposes of rejecting claim 11. Claim 11 is, therefore, also rejected under 35 USC 103 as obvious in view of Preuss/Menon. It should be noted that any limitations recited by claim 11 that do not appear to have a counterpart in claim 1, such as the recited “computer system configured for automating a behavioral interview to identify behavioral attributes in a text passage, the system comprising a processor coupled to computer memory, the processor configured to” limitations, are taught by Preuss/Menon. See, for example, FIG. 21 of Preuss.
Regarding claim 12, while the claim is of different scope relative to claim 2, the claim recites limitations similar to those recited by claim 2. As such, the rationales applied in the rejection of claim 2 also apply for purposes of rejecting claim 12. Claim 12 is, therefore, also rejected under 35 USC 103 as obvious in view of Preuss/Menon.
Regarding claim 13, while the claim is of different scope relative to claim 3, the claim recites limitations similar to those recited by claim 3. As such, the rationales applied in the rejection of claim 3 also apply for purposes of rejecting claim 13. Claim 13 is, therefore, also rejected under 35 USC 103 as obvious in view of Preuss/Menon.
Regarding claim 14, while the claim is of different scope relative to claim 4, the claim recites limitations similar to those recited by claim 4. As such, the rationales applied in the rejection of claim 4 also apply for purposes of rejecting claim 14. Claim 14 is, therefore, also rejected under 35 USC 103 as obvious in view of Preuss/Menon.
Regarding claim 15, while the claim is of different scope relative to claim 5, the claim recites limitations similar to those recited by claim 5. As such, the rationales applied in the rejection of claim 5 also apply for purposes of rejecting claim 15. Claim 15 is, therefore, also rejected under 35 USC 103 as obvious in view of Preuss/Menon.
Regarding claim 16, while the claim is of different scope relative to claim 6, the claim recites limitations similar to those recited by claim 6. As such, the rationales applied in the rejection of claim 6 also apply for purposes of rejecting claim 16. Claim 16 is, therefore, also rejected under 35 USC 103 as obvious in view of Preuss/Menon.
Regarding claim 17, while the claim is of different scope relative to claim 7, the claim recites limitations similar to those recited by claim 7. As such, the rationales applied in the rejection of claim 7 also apply for purposes of rejecting claim 17. Claim 17 is, therefore, also rejected under 35 USC 103 as obvious in view of Preuss/Menon.
Regarding claim 18, while the claim is of different scope relative to claim 8, the claim recites limitations similar to those recited by claim 8. As such, the rationales applied in the rejection of claim 8 also apply for purposes of rejecting claim 18. Claim 18 is, therefore, also rejected under 35 USC 103 as obvious in view of Preuss/Menon.
Regarding claim 20, while the claim is of different scope relative to independent claims 1 and 11, the claim recites limitations similar to those recited by claims 1 and 11. As such, the rationales applied in the rejection of claims 1 and 11 also apply for purposes of rejecting claim 20. Claim 20 is, therefore, also rejected under 35 USC 103 as obvious in view of Preuss/Menon.
Claims 9 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Preuss, in view of Menon, and further in view of U.S. Pat. App. Pub. No. 2022/0350970 A1 to Rizk et al. (hereinafter referred to as “Rizk”).
Regarding claim 9, the combination of Preuss, Menon, and Rizk (hereinafter referred to as “Preuss/Menon/Rizk”) teaches the following limitations that do not appear to be taught in their entirety by Preuss/Menon:
“The method of claim 1, wherein the first behavioural class machine learning model is a shallow machine learning model.” - Rizk discloses, “Embodiments described herein may include various types of machine learning models and techniques for training the machine learning models are used in improving intent determination in the messaging dialog manager system. For example, supervised learning techniques may be used on shallow models (e.g., SVM, k-nearest neighbor (kNN), random forest, decision tree, naïve Bayes) to serve as a baseline for comparison with deep learning models” (para. [0031]), “While the foregoing describes implementation of a machine learning model, the present disclosure is not limited thereto. In at least some embodiments, a machine learning model may implement a trained component or trained model configured to perform the processes described above. The trained component may include one or more machine learning models, including but not limited to, one or more classifiers, one or more neural networks, one or more probabilistic graphs, one or more decision trees, and others. In other embodiments, the trained component may include a rules-based engine, one or more statistical-based algorithms, one or more mapping functions or other types of functions/algorithms to determine whether a natural language input is a complex or non-complex natural language input. In some embodiments, the trained component may be configured to perform binary classification, where the natural language input may be classified into one of two classes/categories. In some embodiments, the trained component may be configured to perform multiclass or multinomial classification, where the natural language input may be classified into one of three or more classes/categories. In some embodiments, the trained component may be configured to perform multi-label classification, where the natural language input may be associated with more than one class/category” (para. [0050]), and “the first machine learning model may include a shallow model, as described above herein, wherein the shallow model is trained on various features (e.g., sentence embeddings, syntactic features) configured to generate model output data in response to receiving and processing NL text data” (para. [0054]). The use of the shallow models, in Rizk, reads on the recited limitation.
Rizk discloses “Focusing on natural language understanding” (para. [0020]), similar to the claimed invention and to Preuss/Menon. It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the machine learning and related classifiers of Preuss/Menon (see, e.g., para. [0039] of Menon) to include use of shallow models, as in Rizk, as this amounts to substituting equivalents known for the same purpose (see MPEP 2144.06), and for the improvements in performance associated therewith, per Rizk (see para. [0031]).
Regarding claim 19, while the claim is of different scope relative to claim 9, the claim recites limitations similar to those recited by claim 9. As such, the rationales applied in the rejection of claim 9 also apply for purposes of rejecting claim 19. Claim 19 is, therefore, also rejected under 35 USC 103 as obvious in view of Preuss/Menon/Rizk.
Response to Arguments
On pp. 6-8 of the Response, the applicant requests reconsideration and withdrawal of the claim rejection under 35 USC 101. The applicant contends that the claims have been amended with additional technical features, and to integrate the technical features into a practical solution. (Response, p. 6.) The applicant also contends that reasoning from Ex Parte Desjardins is applicable to the claims in the present application. (Response, p. 6.) According to the applicant, “the claims recite a specific, technical implementation of a pipeline that improves machine learning-based transcript analysis, yields concrete, machine-generated data structures, and produces a specialized GUI-based visualization integrated with those structures.” (Response, p. 6.) The applicant contends that this reflects technological improvements that the USPTO and courts have recognized as patent-eligible under the 2019 PEG. (Response, p. 6.) The applicant further contends that the claims now describe a two-phase modeling architecture with first and second machine learning models (Response, p. 6), and that generation of transcript data objects having word-level annotations indicative of an estimated probability of a word belonging to a behavioral class is directed to a machine-produced data structure (Response, p. 6), all of which, in the applicant’s view, weighs heavily toward integration into a practical application (Response, p. 6).
The examiner finds the arguments above unpersuasive. The examiner views Ex Parte Desjardins as applicable in instances involving claims directed to improving technology, that is, improving ML model training. The applicant’s claimed invention, on the other hand, is directed to using trained ML models for a purpose (analyzing behavioral interview data). Neither the ML models nor their training appears to be improved as part of the applicant’s claimed invention. The improvement, in the applicant’s claimed invention, appears to be to the analysis of behavioral interview data, which is not an improvement to technology, and is instead the use of technology to improve something performed manually (by, for example, annotating a transcript with notes using pen and paper). For at least these reasons, the applicant’s claims do not establish integration into a practical application.
The applicant also contends that the claimed visual rendering of the annotated transcript having visual graphical characteristics modified based on word-level annotations, its relationship to per-token probabilities and string indices, and related features, is a type of specific GUI rendering control behavior recognized as a technological improvement when it directly reflects a new data structure or pipeline that solves a technical presentation/interpretability problem. (Response, p. 7.)
The examiner finds the arguments above unpersuasive. The applicant’s claims only establish use of a generic, conventional GUI and control elements to display content. While the content may be specific and useful, the GUI is not improved. Any improvement is to the subject matter of the content displayed by the GUI, which is more an improvement to an abstract idea, not an improvement to any technology. Improvements to abstract ideas do not warrant eligibility. (MPEP 2106.05(a)(II).)
The applicant also contends that claim limitations relating to capture and interface (via graphical control elements on a GUI, and recording using a video or audio interface) indicate an end-to-end computerized pipeline and a technically implemented system. (Response, p. 7.) The applicant also contends that the machine learning aspects are not disembodied math, and are instead implemented, trained, and deployed in an end-to-end apparatus producing structured outputs and specific UI behavior. (Response, p. 7.)
The examiner finds the arguments unpersuasive. The technology recited in the applicant’s claims (the GUI, control elements of the GUI, the recording interface, the ML pipeline, and the like) does not consist of abstract idea elements. These are additional elements, however, and ones that do not establish an improvement to computers, technology, or technical fields. Rather, the technology (the additional elements) appears to consist of generic, conventional elements operating in their usual manner. Such additional elements do not warrant a finding of eligibility.
The applicant also contends that the amended claims integrate the claimed features into a practical application and claim a concrete and technology-centered improvement in the form of improved detection, localization, explainability, and interpretability of behavioral signals in noisy interview transcripts, through a specific, dual-phase modeling pipeline and a GUI that renders token-level inferences. (Response, p. 7.)
The examiner finds the arguments above unpersuasive. The applicant’s claimed invention might recite a concrete and technology-centered improvement, but only so far as using a combination of generic, conventional technological elements to perform something that would have previously been performed mentally and manually with pen and paper, that something being analyzing behavioral interview transcripts and presenting outputs or reports. The use of technology in this context may be faster at generating outputs or reports, or may be more accurate or efficient than performance by the human mind and hand, but none of this establishes an eligibility-warranting improvement under MPEP 2106.05(a).
Regarding Step 2A, Prong One of the eligibility analysis under the 2019 PEG, the applicant contends that the claims recite far more than methods of organizing human activity or mental steps, in that they recite a specific machine learning pipeline with two ordered models, creation of token-level probability annotations at string indices, and a specialized GUI. (Response, p. 7.)
The examiner finds the arguments above unpersuasive. The claims do indeed recite more than certain methods of organizing human activity or mental processes, but the “more” amounts to additional elements that are forms of generic, conventional computer and software technology. This does not warrant a finding of eligibility at Step 2A, Prong One, as additional elements in a claim are not indicative of eligibility at Step 2A, Prong One. Rather, additional elements signal that Step 2A, Prong Two and Step 2B must be performed.
Regarding Step 2A, Prong Two under the 2019 PEG, the applicant contends that the claims integrate the claimed features into a practical application by specifying the model architecture and ordered processing of transcripts, producing a non-generic, computer-generated corpus of transcript data objects with token-level probability annotations, and a specialized scoring dashboard to improve interpretability and reliability of machine inferences. (Response, p. 7.) The applicant also contends that the claims establish a solution to a technical problem in NLP inference and model transparency for unstructured text, that is, how to locate behavior phrases accurately within long transcripts and expose those inferences in a GUI so reviewers can understand and rely on them, which is a computer-technology improvement. (Response, p. 8.)
The examiner finds the arguments above unpersuasive. The only “computer-technology improvement” is that forms of generic, conventional computer technology are used to perform transcript data analysis, annotation, and display steps that could have been performed mentally and manually with pen and paper. This is not an eligibility-warranting improvement under MPEP 2106.05(a).
Regarding Step 2B under the 2019 PEG, the applicant contends that the claims recite concrete, non-conventional elements in combination, in the form of the two-phase model pipeline, specific word-level probability annotations, and a GUI, that are not well-understood, routine, or conventional. (Response, p. 8.)
The examiner finds the arguments above unpersuasive. The rejection is not relying on the well-understood, routine, conventional activity rationale as a ground for ineligibility. The applicant’s claimed invention is viewed as ineligible based on multiple other ineligibility rationales in MPEP 2106.05.
On pp. 8 and 9 of the Response, the applicant requests reconsideration and withdrawal of the claim rejection under 35 USC 102. The applicant contends that Preuss does not disclose the two-phase ML architecture as claimed. (Response, p. 8.) The applicant also contends that Preuss does not disclose generation of a corpus of transcript data objects having word-level annotations indicative of an estimated probability of a word belonging to a particular behavioral class, where the one or more string index locations are used to generate the word-level annotations. (Response, p. 9.) The applicant also contends that Preuss does not disclose per-token probability annotations tied to explicit string index spans produced by a dedicated location model, nor creation of a corpus of transcript data objects storing those word-level probabilities. (Response, p. 9.) The applicant also contends that Preuss does not describe the specific claimed GUI control steps coupled to the word-level annotations. (Response, p. 9.)
The examiner finds the arguments above unpersuasive. The claim rejection under 35 USC 102 has been withdrawn. Preuss/Menon teaches the two-phase ML architecture allegedly missing from Preuss alone. Further, Preuss does disclose generation of a corpus of transcript data objects having word-level annotations, as shown in FIG. 12. The annotations in Preuss are essentially guesses or predictions, some with associated confidence scores (see, e.g., para. [0066]); thus, they are indicative of estimated probabilities that words and phrases in the transcript should be associated with specific annotations about personality aspects. And the locations of words and phrases in transcripts are string index locations. Finally, Preuss discloses GUI features that read on those claimed by the applicant, as explained in more detail in the 35 USC 103 section above.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Such prior art includes the following:
U.S. Pat. No. 10,366,160 B2 to Castelli et al. discloses, “A method and system are provided for assisting users in a conversation. The method includes identifying concepts in the conversation. The method further includes linking identified concepts in the conversation by matching the identified concepts in the conversation to concepts in a knowledge base. The method also includes generating and displaying on the display device, one or more context dependent suggestions for the conversation based on attributes and values associated with the linked concepts in the knowledge base.” (Abstract.)
U.S. Pat. App. Pub. No. 2022/0129784 A1 to Parsons et al. discloses, “A facility for determining sentiments expressed by a natural-language text string for each of one or more topics is described. In the natural-language text string, the facility identifies one or more topics. For each identified topic, the facility replaces the topic in the natural-language text string with a masking tag that occupies the same position in the natural-language text string as the topic. After the replacing, the facility applies a machine learning model to the natural-language text string to obtain a predicted sentiment for each of the identified topics.” (Abstract.)
U.S. Pat. App. Pub. No. 2022/0366901 A1 to Rathaur et al. discloses, “Systems for performing intelligent interactive voice recognition functions are provided. In some aspects, natural language data may be received from a plurality of users. The natural language data may be used to train a machine learning model. After training the machine learning model, additional or subsequent natural language input data may be received. The natural language data may include a user query, such as a request to obtain information from the system, to process a transaction, or the like. The natural language data may be processed to remove noise associated with the audio data. The data may then be further processed using the machine learning model to interpret the query of the user and generate an output. The output may be transmitted to the user and feedback data may be received from the user. The user-specific machine learning dataset may then be validated and/or updated based on the feedback data.” (Abstract.)
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to THOMAS Y. HO, whose telephone number is (571)270-7918. The examiner can normally be reached Monday through Friday, 9:30 AM to 5:30 PM Eastern.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jerry O'Connor, can be reached at 571-272-6787. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/THOMAS YIH HO/Primary Examiner, Art Unit 3624