Prosecution Insights
Last updated: April 19, 2026
Application No. 17/962,917

USER-CENTRIC CONVERSION OF NATURAL LANGUAGE RESPONSES TO POTENTIAL MULTIPLE CHOICE STATEMENTS

Final Rejection · §101, §103
Filed
Oct 10, 2022
Examiner
FLANDERS, ANDREW C
Art Unit
2655
Tech Center
2600 — Communications
Assignee
BTS USA Inc.
OA Round
2 (Final)
74%
Grant Probability
Favorable
3-4
OA Rounds
3y 3m
To Grant
88%
With Interview

Examiner Intelligence

Grants 74% — above average
74%
Career Allow Rate
574 granted / 775 resolved
+12.1% vs TC avg
Moderate +14% lift
Without
With
+14.0%
Interview Lift
resolved cases with interview
Typical timeline
3y 3m
Avg Prosecution
9 currently pending
Career history
784
Total Applications
across all art units

Statute-Specific Performance

§101
10.3%
-29.7% vs TC avg
§103
38.7%
-1.3% vs TC avg
§102
31.6%
-8.4% vs TC avg
§112
8.3%
-31.7% vs TC avg
Black line = Tech Center average estimate • Based on career data from 775 resolved cases

Office Action

§101 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant’s arguments, filed 14 July 2025, with respect to the objection to the specification have been fully considered and are persuasive. The objection to the specification has been withdrawn. Applicant’s arguments, filed 14 July 2025, with respect to the objections to claims 1, 2, and 15 have been fully considered and are persuasive. The objections to claims 1, 2, and 15 have been withdrawn. Applicant's arguments filed 17 July 2025 with respect to the rejections under 35 U.S.C. 103(a) have been fully considered but they are not persuasive.

In section VI.A, applicant alleges:

A. Kim does not teach or suggest “functionality ... for [a] user to provide a natural language response to [a] selected question” that is “associated with a plurality of potential responses” as recited in the claims. Because “[s]tudies have shown that student-created questions can improve learning”, Kim describes a platform for students to collaboratively write, edit, and answer closed-ended questions with a finite selection of potential answer choices, or open-ended questions without preselected potential answer choices. Additionally, Kim also describes functionality for users to convert open-ended questions into closed-ended questions by writing potential answer choices: ... Because all of the claims recite that “each question [is] associated with a plurality of potential responses”, however, the Applicant respectfully submits that the claimed “functionality ... for the user to provide a natural language response to the selected question” requires functionality to provide a natural language response to a question associated with a plurality of potential responses. Meanwhile, Kim does not disclose that functionality in the cited paragraphs or elsewhere.
Instead, paragraphs [0015] and [0018] simply describe receiving an open-ended question from a user of a first client device and converting it into a “finalized closed-ended question” by “receiving input of answer choices on the user interface of [a] second client device.”

Examiner respectfully disagrees. First, applicant construes the limitations noted above in a narrow manner. The claim as presented does not require “functionality to provide a natural language response to a question associated with a plurality of potential responses.” As currently presented, the claim requires functionality to provide a natural language response to the selected question, but that response need not be associated with a plurality of potential responses. There is no explicit language in the claim tying these elements together in such a manner that requires the alleged interpretation set forth above. While each question may be required to be associated with a plurality of potential responses, the [provided] natural language response need not be associated with these potential responses. Second, the claim only requires “providing the functionality ... for the user to provide a natural language response to the selected question.” Examiner submits that the ability of Kim’s computer system to receive input, particularly a natural language input, can be said to “provide” the functionality at issue, or, simply put, to provide the “ability” to receive natural language input questions/answers. Since the system allows for these inputs, it effectively provides this functionality. Third, even if applicant’s allegations are true, with which the examiner does not necessarily agree, the examiner submits that simply inputting a question effectively results in a response/answer provided by the system through a number of intermediary steps.

In section VI.A,
applicant further alleges:

More generally, Kim describes functionality for users to collaboratively write questions that are either closed-ended questions that “limit users to respond with a list of answer choices from which they must choose” or open-ended questions that “may be answered without such a limit to predetermined answer choices”. At no point does Kim describe any functionality to answer a question with “a list of answer choices” by providing a natural language response. Even when describing functionality to convert an open-ended question into a closed-ended one, Kim does not describe functionality to (for whatever reason) answer that closed-ended question in the original, open-ended manner. Instead, Kim states that “[q]uestions having associated answer choices may then be considered closed-ended questions and the question metadata may be modified to indicate this status.”

Examiner respectfully disagrees. In cited paragraph [0015] Kim explicitly discloses “displaying the finalized closed-ended question and answer choices on a user interface of the third client device,” which allows for “receiving input of the selection of one or more of the answer choices on the user interface of the third client device,” or “functionality to answer a question with ‘a list of answer choices’ by providing a natural language response.”

In section VI.B, applicant alleges:

B.
It would not have been obvious in view of Tomkins to modify the teachings of Kim “for the purpose of adding the capacity for unsupervised or semi-supervised approaches to compare closed-ended and open-ended answers” as argued by the Patent Office because neither Kim nor Tomkins makes any suggestion to “compare closed-ended and open-ended answers”. As Kim does not even disclose receiving a natural language response to a question having “a plurality of potential responses” as described above, Kim also does not teach or suggest selecting some of those potential responses to provide as a set of multiple-choice statements by using semantic similarity to identify the potential responses that are most similar to the natural language response provided by the user, as recited in the claims. Instead, the Patent Office argues that the claimed functionality would be taught by the combination of Kim and Tomkins, which describes using similarity scores to determine an association between a message trail and a task entry: ... Because Tomkins does not describe “compar[ing] closed-ended and open-ended answers”, the argument put forth by the Patent Office (as far as the Applicant can tell) appears to be that Kim describes comparing closed-ended and open-ended answers and it would have been obvious in view of Tomkins to do so using semantic similarity. However, at no point does Kim even suggest comparing open-ended answers to a set of closed-ended answers. In fact, the only questions described in Kim that receive both open-ended answers and closed-ended answers are those questions that were initially constructed as open-ended and later converted to closed-ended questions by users that “input (622) answer choices” on “the user interface of the second client device”. Meanwhile, at no point does Kim describe comparing any of those “answer choices” to any answer provided when the question was open-ended.
However, the Kim platform does not have open-ended and closed-ended answers to the same questions with which to compare, as described above. Meanwhile, the “scoring” process described in the cited paragraph of Kim is based on the quality of the questions and answer choices: ... Singh, which is cited merely for describing identifying synonyms for extracted key terms, does not cure any of those deficiencies. Accordingly, the Applicant respectfully requests reconsideration and withdrawal of the rejections under § 103.

Examiner respectfully disagrees. Applicant has misconstrued the nature of the reasoned combination of Kim, Tomkins and Singh. Particularly, Applicant asserts that because Tomkins does not describe “compar[ing] closed-ended and open-ended answers”, and Kim does not “suggest comparing open-ended answers to a set of closed-ended answers,” the reasoned combination would not have been obvious to one of ordinary skill in the art. In response to applicant's arguments against the references individually, one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986). Specifically, it is the comparison elements of Tomkins' data/information, now adapted to compare the responses/information/data in Kim, that make obvious the claimed limitations in the reasoned combination.

In sections VII.A and B, applicant alleges:

A.
“[U]sing a semantic similarity algorithm to calculate similarity scores indicative of the semantic similarity between [a] natural language response [to a question] provided by [a] user and each of [a plurality of] potential responses” associated with the question cannot “practically” be done by a human. In the pending Office Action, the Patent Office argues that “Using a semantic similarity algorithm to calculate similarity scores indicative of the semantic similarity between the natural language response provided by the user and each of the potential responses” as recited in the claims is “A mathematical calculation achievable in the human mind with pen and paper.” However, the standard set forth for whether a claim is directed to an abstract “mental process” under MPEP § 2106.04(a)(2)(III)(A) is whether the claim recites “limitations that can practically be performed in the human mind” or whether “the human mind is not equipped to perform the claim limitations.” Meanwhile, the Patent Office provides no example of even a single “semantic similarity algorithm” that can “practically” be performed in a human mind.

B. More generally, the entire process recited in the claims can hardly be described as one of “the building blocks of human ingenuity” that the Supreme Court has excluded from patentability. The determination of whether a claim recites a judicial exception under Prong One of Step 2A is, at its core, a determination as to whether one of “the basic tools of scientific and technological work” — one of “the building blocks of human ingenuity” — is “set forth or described” in the claim. Here, the Patent Office argues that the entire claimed process is a judicial exception under § 101: ... At a high level, the pending claims can be characterized as a method and system for receiving a natural language response to a question having potential responses and then determining which of the potential responses to provide to the user as a set of multiple-choice answers to the same question.
Even when characterized at such a high level, the Applicant respectfully submits that that concept is not one of “the basic tools of scientific and technological work” that the Supreme Court sought to exclude from patentability. Furthermore, it can hardly be said that the entire, specific process recited in the claim and set forth in the pending Office Action is one of “the building blocks of human ingenuity”. Instead, even if the claims are said to involve some broad concept excluded from patentability under § 101, the Applicant respectfully submits that the specific processing steps recited in the claims go “beyond generally linking the use of [that broad concept] to a particular technological environment” and, instead, limit the claim to the specific recited process “such that the claim as a whole is more than a drafting effort designed to monopolize” that broad concept.

Examiner respectfully disagrees. Applicant’s argument is moot and/or unpersuasive on its face. Applicant has not provided any reasoning, evidence, or analysis as to why the reasoning set forth in the prior action rejecting the claims under 35 U.S.C. 101 is in error; instead, the allegations simply conclude that the limitations “cannot ‘practically’ be done by a human,” that “the entire process recited in the claims can hardly be described as one of ‘the building blocks of human ingenuity,’” and that “that concept is not one of ‘the basic tools of scientific and technological work’ that the Supreme Court sought to exclude from patentability.” Further, the examiner submits that humans mentally perform semantic similarity nearly every time they read to ascertain the context and meaning of words, phrases, and sentences, both known and unknown (e.g., asking themselves “does this word/phrase mean ___?”).

Claim Rejections – 35 USC § 101

35 U.S.C.
101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. All of the claims are method claims (1 and 8) or apparatus/machine claims (15) under Step 1 of the subject matter eligibility test (MPEP 2106.03), but under Step 2A Prong 1 (MPEP 2106.04) all of these claims recite abstract ideas and specifically mental processes—concepts performed in the human mind. These are recited in independent claims 1, 8 and 15 as:

Stor(es/ing) a plurality of questions, each question associated with a plurality of potential responses, each potential response being associated with a score used to evaluate selection of the potential response in response to the question;

Select(ing) one of the questions;

Display(s/ing) the selected question to a user via a graphical user interface;

Provid(es/ing) functionality, via the graphical user interface, for the user to provide a natural language response to the selected question;

Us(es/ing) a semantic similarity algorithm to calculate similarity scores indicative of the semantic similarity between the natural language response provided by the user and each of the potential responses by:

(Claim 1 only) for a predetermined value of n, identifying each n-gram in the natural language response provided by the user;

(Claim 1 only) using the semantic similarity algorithm to calculate similarity scores indicative of the semantic similarity between each identified n-gram and the natural language response provided by the user;

(Claim 1 only) selecting one or more identified n-grams having similarity scores that are greater than or equal to a predetermined threshold;

(Claim 1 only) identifying synonyms for each
of the words in the selected n-grams;

(Claim 1 only) identifying synonymous phrases that include each combination of the words in the one or more selected n-grams and the identified synonyms;

(Claim 1 only) and using the semantic similarity algorithm to calculate similarity scores indicative of the semantic similarity between the potential response associated with the question and the synonymous phrases identified using the natural language response provided by the user;

Output(s/ting), via the graphical user interface, a set of multiple-choice statements that include the potential responses associated with the selected question having the highest similarity scores indicative of the semantic similarity between the potential responses and the natural language response provided by the user; and

Provid(es/ing) functionality for the user to respond to the selected question by selecting one of the multiple-choice statements output via the graphical user interface.

These limitations, under their broadest reasonable interpretation, cover performance of the limitations in the mind, with the use of a physical aid such as a pen and paper, but for the recitation of generic computer components.
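For orientation only, the n-gram and threshold limitations recited above can be sketched in code. This is an illustrative sketch and not anything disclosed in the application or references: the claims recite only "a semantic similarity algorithm", so a simple Jaccard word-overlap measure stands in for it here, and the 0.25 threshold is arbitrary.

```python
def ngrams(text, n):
    """For a predetermined value of n, identify each n-gram
    (contiguous run of n words) in the text."""
    words = text.lower().split()
    return [" ".join(words[i:i + n]) for i in range(len(words) - n + 1)]

def similarity(a, b):
    """Stand-in for the unspecified semantic similarity algorithm:
    Jaccard overlap of the two word sets (an assumption, not the
    application's actual measure)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

response = "the mitochondria is the powerhouse of the cell"
grams = ngrams(response, 2)                        # identify each 2-gram
scored = [(g, similarity(g, response)) for g in grams]
selected = [g for g, s in scored if s >= 0.25]     # keep n-grams above a threshold
```

Each claimed step (identify n-grams, score each against the full response, select those at or above a predetermined threshold) maps to one line of the sketch; any real implementation would substitute an actual semantic measure for the word-overlap placeholder.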
Storing a plurality of questions, each question associated with a plurality of potential responses, each potential response being associated with a score used to evaluate selection of the potential response in response to the question: this can be easily done on pen and paper by a person.

Selecting one of the questions: akin to a person looking through the paper records of the plurality of questions and choosing one.

Displaying the selected question to a user: handing said paper record to another person.

Providing functionality for the user to provide a natural language response to the selected question: the user could respond verbally or write their response on the paper record.

Using a semantic similarity algorithm to calculate similarity scores indicative of the semantic similarity between the natural language response provided by the user and each of the potential responses: a mathematical calculation achievable in the human mind with pen and paper.

For a predetermined value of n, identifying each n-gram in the natural language response provided by the user: determining n-grams is achievable in the human mind.

Using the semantic similarity algorithm to calculate similarity scores indicative of the semantic similarity between each identified n-gram and the natural language response provided by the user: a mathematical calculation achievable in the human mind with pen and paper.

Selecting one or more identified n-grams having similarity scores that are greater than or equal to a predetermined threshold: a judgment a person may easily perform.

Identifying synonyms for each of the words in the selected n-grams: a native speaker would have little difficulty.

Identifying synonymous phrases that include each combination of the words in the one or more selected n-grams and the identified synonyms: more challenging, but it is something that a native speaker could do.
Using the semantic similarity algorithm to calculate similarity scores indicative of the semantic similarity between the potential response associated with the question and the synonymous phrases identified using the natural language response provided by the user: a mathematical calculation achievable in the human mind with pen and paper.

Outputting a set of multiple-choice statements that include the potential responses associated with the selected question having the highest similarity scores indicative of the semantic similarity between the potential responses and the natural language response provided by the user: akin to writing the set down or verbally telling the user.

Providing functionality for the user to respond to the selected question by selecting one of the multiple-choice statements: listening to, or reading a written response from, the user.

It is noted that the above analysis is according to the 2019 Revised Patent Subject Matter Eligibility Guidance published in the Federal Register (84 FR 50) on January 7, 2019, and MPEP 2106.04(a)(2)(III). Consider also that “[i]f a claim recites a limitation that can practically be performed in the human mind, with or without the use of a physical aid such as pen and paper, the limitation falls within the mental processes grouping, and the claim recites an abstract idea” as per MPEP 2106.04(a)(2)(III)(B). See also footnotes 14 and 15 of the Federal Register Notice. As detailed above, the steps of stor(es/ing), select(ing), displaying, provid(es/ing), output(s/ting), identifying and calculating may be practically performed in the human mind with the use of a physical aid such as a pen and paper. Similarly, dependent claims 2-7, 9-14 and 16-20 include additional steps that are considered “insignificant extra-solution activity to the judicial exception” because they fail to provide meaningful limitations that go beyond generally linking the use of an abstract idea to a particular technological environment.
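Among the dependent-claim additions characterized here, claim 5's "highest cosine similarity" is the one standard, named measure. For reference, a minimal sketch over bag-of-words count vectors (an assumption for illustration; the claims do not fix the text representation):

```python
from collections import Counter
from math import sqrt

def cosine_similarity(a, b):
    """Cosine similarity between two texts, using raw word-count
    vectors as the (assumed) representation."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = sqrt(sum(c * c for c in va.values())) * sqrt(sum(c * c for c in vb.values()))
    return dot / norm if norm else 0.0

# Selecting the potential response with the highest cosine similarity,
# as claim 5 is characterized above (hypothetical example data).
responses = ["paris is the capital", "berlin is the capital", "a large river"]
best = max(responses, key=lambda r: cosine_similarity("the capital is paris", r))
```

Identical word sets score 1.0 and disjoint ones 0.0; in practice the count vectors would typically be replaced by embeddings, but the selection step is the same `max` over scores.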
For example, claims 2 and 3 add performing the similarity calculation at the sentence level. Claim 4 adds a comparison greater than a predetermined threshold. Claim 5 adds selecting the highest cosine similarity. Claim 6 adds the ability for revision. Claim 7 merely sums the scores of the user’s responses. Claims 9-14 and 16-20 have similar limitations, all of which are insignificant extra-solution activity to the judicial exception.

Under Step 2A Prong 2, this judicial exception is not integrated into a practical application because claims 1-20 do not recite additional elements that integrate the exception into a practical application. The only additional elements {a non-transitory computer readable storage media (claim 15), a hardware computer processing unit (claim 15) and a graphical user interface (claims 1, 8 and 15)} are recited at a high level of generality and merely equate to “apply it,” or otherwise merely use a generic computer as a tool to perform an abstract idea, which is not indicative of integration into a practical application as per MPEP 2106.05(f). See also MPEP 2106.04(a)(2)(III) with respect to Mental Processes: “Nor do the courts distinguish between claims that recite mental processes performed by humans and claims that recite mental processes performed on a computer”. See also MPEP 2106.04(a)(2)(III)(C)(3), Using a computer as a tool to perform a mental process, and MPEP 2106.04(a)(2)(III)(D), as well as the case law cited therein.

Finally, under Step 2B, the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception.
As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of using processing circuitry and memory to perform the steps of stor(es/ing), select(ing), displaying, provid(es/ing), output(s/ting), identifying and calculating amount to no more than mere instructions to apply the exception using generic computer components. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. Claims 1-20 are not patent eligible.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4.
Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-7, 9-12, and 16-20 are rejected under 35 U.S.C. 103 as obvious over US Patent Publication US20200357297A1 (Hong Suk Kim), hereinafter Kim, in view of US Patent US9424247B1 (Tomkins et al.), hereinafter Tomkins, and further in view of US Patent US11386897B2 (Singh et al.), hereinafter Singh.

Regarding Claim 1, Kim discloses:

A method of converting natural language responses (Kim: “open-ended questions”) to potential multiple-choice statements (Kim: “close-ended questions”), the method comprising: (Kim [0070]: “A process for generating open-ended questions and converting open-ended questions into closed-ended questions”; an open-ended question reads on a natural language response, and closed-ended questions have selectable answers, i.e., multiple choice.)

storing a plurality of questions, (Kim [0015]: “storing question data”; the question data is stored in a question repository.)

each question associated with a plurality of potential responses, (Kim [0015]: “generating answer data in response to receiving input of answer choices”; the potential responses are the answer choices.)
each potential response being associated with a score used to evaluate selection of the potential response in response to the question; (Kim [0062]: “a question may be retrieved by any of a number of question criteria including, but not limited to, by topic, subject, grade level, quality rating”; that quality rating may be associated with the potential responses (i.e., answers). Kim [0077]: “data indicating a score assigned to the question and/or various components of the question can be provided”; and answer data is a component of the question. Kim [0076]: “captured answer choices can be saved as answer data, sent to the question repository, and associated with the corresponding question data”.)

selecting one of the questions; (Kim [0015]: “sending the question data from the question repository to a second client device”)

displaying the selected question to a user via a graphical user interface; (Kim [0015]: “displaying the open-ended question on a user interface”)

providing functionality, via the graphical user interface, for the user to provide a natural language response to the selected question; (Kim [0018]: “receiving input of an open-ended question on a user interface”)

Kim does not explicitly disclose: using a semantic similarity algorithm to calculate similarity scores indicative of the semantic similarity between the natural language response provided by the user and each of the potential responses.
In the same field of natural language queries, however, Tomkins teaches: using a semantic similarity algorithm to calculate similarity scores indicative of the semantic similarity between the natural language response (Tomkins: “message”) provided by the user and each of the potential responses (Tomkins: “task entry”) by: (Tomkins [Column 11, Lines 53-55]: “Message engine 120 may determine one or more similarity scores for each n-gram for an associated task entry.”; Tomkins is comparing the natural language message with the associated task entry, a static piece of text.)

It would have been obvious to one of ordinary skill in the art at the time of the effective filing to modify the teachings of Kim to combine the teaching of Tomkins for the purpose of adding the capacity for unsupervised or semi-supervised approaches to compare closed-ended and open-ended answers. Kim relies on an administrator for scoring (Kim [0054]); there is no mechanism for unsupervised acceptance based on a similarity threshold, nor pre-ranking based on similarity score to aid the administrator. Tomkins' approach is a means of predicting administrator approval. (Tomkins [Column 1, Lines 31-34]: “A similarity score between the n-gram and one or more aspects of the associated task entry may be determined that is indicative of a likelihood that the user has interest in associating the n-gram with the aspects of the task entry.”)

Kim does not explicitly disclose: for a predetermined value of n, identifying each n-gram in the natural language response provided by the user.

In the same field of natural language queries, however, Tomkins teaches: for a predetermined value of n, identifying each n-gram in the natural language response (“message trail”) provided by the user; (Tomkins [Column 11, Lines 52-53]: “multiple n-grams in a message trail may be identified by message engine”).
It would have been obvious to one of ordinary skill in the art at the time of the effective filing to modify the teachings of Kim to combine the teaching of Tomkins for the purpose of using n-grams to improve the similarity algorithm. (Tomkins [Column 1, Lines 31-34].)

Kim does not explicitly disclose: using the semantic similarity algorithm to calculate similarity scores indicative of the semantic similarity between each identified n-gram and the natural language response provided by the user. However, Tomkins in the combination discloses: using the semantic similarity algorithm to calculate similarity scores indicative of the semantic similarity between each identified n-gram and the natural language response provided by the user; (Tomkins [Column 11, Lines 53-55]: “Message engine 120 may determine one or more similarity scores for each n-gram for an associated task entry.”; Tomkins compares a list of n-grams to static text.)

Kim does not explicitly disclose: selecting one or more identified n-grams having similarity scores that are greater than or equal to a predetermined threshold. However, Tomkins in the combination discloses: selecting one or more identified n-grams having similarity scores that are greater than or equal to a predetermined threshold; (Tomkins [Column 12, Lines 24-27]: “message engine 120 may determine a similarity score between an n-gram and a task entry, and provide the n-gram to task engine 115 only if the similarity score satisfies a threshold”)

Kim, in view of Tomkins, does not disclose: identifying synonyms for each ... In the same field of natural language processing, however, Singh teaches: identifying synonyms for each (“identified key-terms”); (Singh [Column 5, Lines 14-16]: “a set of synonyms are determined for the identified key-terms based on a language-based approach or domain language specific approach.”)

It would have been obvious to one of ordinary skill in the art at the time of the effective filing to modify the teachings of Kim and
Tomkins to combine the teaching of Singh for the purpose of including synonyms and synonymous phrases as a means of improving the semantic similarity algorithm. (Singh [Column 2, Lines 8-10].)

Kim, in view of Tomkins, does not disclose: identifying synonymous phrases that include each combination of the words in the one or more selected n-grams and the identified synonyms. However, Singh in the combination discloses: identifying synonymous phrases that include each combination of the words in the one or more selected n-grams and the identified synonyms; (Singh [Column 15, Line 66-67 – Column 17, Lines 1-2]: “the extracted basic set of key-terms includes a plurality of keywords and a plurality of key-phrases comprising of a plurality of an n-gram terms”).

Tomkins in the combination discloses: using the semantic similarity algorithm to calculate similarity scores indicative of the semantic similarity between the potential response associated with the question (information fields of task) and the synonymous phrases identified using the natural language response (n-gram) provided by the user; (Tomkins [Column 12, Lines 17-20]: “a similarity score may be determined based on similarity between information associated with one or more of the information fields of task and the n-gram.”)

Kim in the combination discloses: outputting, via the graphical user interface, a set of multiple-choice statements; (Kim [0015]: “displaying the finalized closed-ended question and answer choices on a user interface of the third client device”)

Tomkins in the combination discloses: that include the potential responses associated with the selected question (closed-ended) having the highest similarity scores indicative of the semantic similarity between the potential responses and the natural language response (open-ended) provided by the user; (Tomkins [Column 14, Lines 25-28]: “task engine 115 may provide a confirmation message to the user only when the similarity score between the
n-gram and the information field that is to be updated with the n-gram is below a threshold value.”, The required action is being performed based on a threshold, the only difference being below instead of above said threshold.) Kim in the combination discloses: and providing functionality for the user to respond to the selected question by selecting one of the multiple-choice statements output via the graphical user interface. (Kim [0015]: “receiving input of the selection of one or more of the answer choices on the user interface”) Regarding Claim 2, in addition to the elements stated above regarding claim 1, the combination of Kim in view of Tomkins in further view of Singh further discloses: The method of claim 1, wherein calculating the similarity score indicative of the semantic similarity between the natural language response and each of the potential responses comprises: Kim, in view of Tomkins does not disclose: identifying a plurality of sentences in the natural language response; and In the same field of natural language processing, however, Singh teaches: identifying a plurality of sentences in the natural language response; and (Singh [Column 11, Line 47-49]: “the method (400) includes extracting a set of sentences containing the final set of key-terms from the input document.”) It would have been obvious to one of ordinary skill in the art at the time of the effective filing to modify the teachings of Kim and Tomkins to combine the teaching of Singh for the purpose of breaking up the response into semantically coherent pieces, i.e., sentences. 
Singh in the combination discloses: for each of the plurality of sentences (Singh [Column 17, Lines 12-14]: "the method (400) includes determining the set of synonyms for the based on Conditional Frequency Distribution (CFD) techniques using the plurality of the extracted set of sentences").

Tomkins in the combination discloses: identifying each n-gram in the sentence (Tomkins [Column 11, Lines 52-53]: "multiple n-grams in a message trail may be identified by message engine").

Singh in the combination discloses: using the semantic similarity algorithm to calculate similarity scores indicative of the semantic similarity between each of the identified n-grams and the sentence (Singh [Column 15, Lines 66-67 – Column 17, Lines 1-2]: "the extracted basic set of key-terms includes a plurality of keywords and a plurality of key-phrases comprising of a plurality of an n-gram terms").

Tomkins in the combination discloses: selecting one or more identified n-grams having a similarity score that is greater than or equal to the predetermined threshold (Tomkins [Column 12, Lines 24-27]: "message engine 120 may determine a similarity score between an n-gram and a task entry, and provide the n-gram to task engine 115 only if the similarity score satisfies a threshold").

Singh in the combination discloses: identifying synonyms for each (Singh [Column 5, Lines 14-16]: "a set of synonyms are determined for the identified key-terms based on a language-based approach or domain language specific approach."); and identifying synonymous phrases that include each combination of the words in the one or more selected n-grams and the identified synonyms (Singh [Column 15, Lines 66-67 – Column 17, Lines 1-2]: "the extracted basic set of key-terms includes a plurality of keywords and a plurality of key-phrases comprising of a plurality of an n-gram terms").

Tomkins in the combination discloses: using the semantic similarity algorithm to calculate similarity scores indicative of the semantic similarity between the potential responses associated with the question and the synonymous phrases (Tomkins applied with the synonymous aspects of Singh; Tomkins [Column 12, Lines 17-20]: "a similarity score may be determined based on similarity between information associated with one or more of the information fields of task and the n-gram.").

Regarding Claim 3, in addition to the elements stated above regarding claim 2, the combination of Kim in view of Tomkins in further view of Singh further discloses: The method of claim 2, wherein outputting the set of multiple-choice statements comprises:

Kim in the combination discloses: outputting a set of multiple-choice statements (Kim [0015]: "displaying the finalized closed-ended question and answer choices on a user interface of the third client device,").

Singh in the combination discloses: for each sentence (Singh [Column 11, Lines 47-49]: "the method (400) includes extracting a set of sentences containing the final set of key-terms from the input document.").

Tomkins in the combination discloses: that include the potential responses associated with the selected question having the highest similarity scores indicative of the semantic similarity between the potential responses and the sentence (Tomkins [Column 14, Lines 25-28]: "task engine 115 may provide a confirmation message to the user only when the similarity score between the n-gram and the information field that is to be updated with the n-gram is below a threshold value." The required action is performed based on a threshold, the only difference being below rather than above that threshold).
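The claim-1 and claim-2 mappings above trace a single pipeline: extract n-grams from the user's free-text answer, keep the ones scoring above a threshold, expand them into synonymous phrases, and rank the question's preset answer choices against those phrases. A minimal sketch of that pipeline follows; the function names, the Jaccard stand-in for the claimed "semantic similarity algorithm", and the synonym table are all illustrative assumptions, not drawn from Kim, Tomkins, or Singh.

```python
from itertools import product

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def jaccard(a, b):
    """Toy similarity score: Jaccard overlap of token sets (a stand-in
    for whatever semantic similarity algorithm is actually claimed)."""
    sa, sb = set(a), set(b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def expand_ngram(gram, synonyms):
    """Synonymous phrases: every combination of each word with its
    synonyms, per a hypothetical synonym lookup table."""
    choices = [[w] + synonyms.get(w, []) for w in gram]
    return [tuple(p) for p in product(*choices)]

def rank_responses(response_tokens, potential_responses, synonyms,
                   n=2, threshold=0.3, top_k=3):
    # 1. Identify each n-gram in the natural language response.
    grams = ngrams(response_tokens, n)
    # 2. Keep n-grams whose similarity to the response meets the threshold.
    selected = [g for g in grams if jaccard(g, response_tokens) >= threshold]
    # 3. Expand selected n-grams into synonymous phrases.
    phrases = [p for g in selected for p in expand_ngram(g, synonyms)]
    # 4. Score each potential response against the phrases and return
    #    the top-scoring ones as the multiple-choice statements.
    scored = [(max((jaccard(p, r.split()) for p in phrases), default=0.0), r)
              for r in potential_responses]
    return [r for _, r in sorted(scored, reverse=True)[:top_k]]
```

The threshold and top-k values here are placeholders; the claims leave both as "predetermined" quantities.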
Regarding Claim 4, in addition to the elements stated above regarding claim 1, the combination of Kim in view of Tomkins in further view of Singh further discloses: The method of claim 1, wherein the semantic similarity algorithm selects the one or more identified n-grams having cosine similarity scores that are greater than or equal to the predetermined threshold. Singh in the combination discloses this element (Singh [Column 11, Lines 34-37]: "a pre-defined value of '0.5' (estimated based on cosine similarity techniques) is used as threshold value for identifying a set of similar terms from the domain language model.").

Regarding Claim 5, in addition to the elements stated above regarding claim 1, the combination of Kim in view of Tomkins in further view of Singh further discloses: The method of claim 1, wherein the semantic similarity algorithm identifies the set of multiple-choice statements by selecting the potential responses associated with the selected question having the highest cosine similarity scores. Singh in the combination discloses this element (Singh [Column 11, Lines 31-33]: "the method (300) includes identifying a set of similar terms from the domain language model based on cosine similarity techniques.").

Regarding Claim 6, in addition to the elements stated above regarding claim 1, the combination of Kim in view of Tomkins in further view of Singh further discloses: The method of claim 1, wherein the graphical user interface provides functionality for the user to respond to the selected question by selecting one of the multiple-choice statements output via the graphical user interface or revising the natural language response to the selected question. Kim in the combination discloses this element (Kim [0076]: "Question data representing the open-ended question(s) can be sent to the client devices and the question(s) displayed on the user interfaces of the client devices to solicit the creation of answer choices.").

Regarding Claim 7, in addition to the elements stated above regarding claim 1, the combination of Kim in view of Tomkins in further view of Singh further discloses: The method of claim 1, further comprising:

Kim in the combination discloses: selecting a plurality of questions (Kim [0075]: "One or more open-ended questions are selected and presented (620) to users.");

providing functionality for the user to answer each of the selected questions by providing natural language responses (Kim [0076]: "Question data representing the open-ended question(s) can be sent to the client devices and the question(s) displayed on the user interfaces of the client devices to solicit the creation of answer choices.");

selecting one of the identified potential responses (Kim [0077]: "The data indicating the selected answer choices can be received (626) from the client devices and a determination made whether the answer(s) are correct."); and

evaluating the user by summing the scores associated with each of the potential responses selected by the user (Kim [0056]: "scores of users including an indication of the individual or group with the highest score of correctly answered questions").

Regarding Claim 9, in addition to the elements stated above regarding claim 8, the combination of Kim in view of Tomkins in further view of Singh further discloses: The method of claim 8, wherein using the semantic similarity algorithm to calculate similarity scores indicative of the semantic similarity between the natural language response ("open-ended answer") provided by the user and each of the potential responses ("closed-ended answer set") comprises:

Kim does not disclose: for a predetermined value of n, identifying each n-gram in the natural language response provided by the user. In the same field of natural language queries, however, Tomkins teaches this element (Tomkins [Column 11, Lines 52-53]: "multiple n-grams in a message trail may be identified by message engine"). It would have been obvious to one of ordinary skill in the art at the time of the effective filing to modify the teachings of Kim to combine the teaching of Tomkins for the purpose of using n-grams to improve the similarity algorithm (Tomkins [Column 1, Lines 31-34]).

Tomkins in the combination discloses: using the semantic similarity algorithm to calculate similarity scores indicative of the semantic similarity between each identified n-gram and the natural language response provided by the user (Tomkins [Column 11, Lines 53-55]: "Message engine 120 may determine one or more similarity scores for each n-gram for an associated task entry." Tomkins compares a list of n-grams to static text).
Tomkins in the combination also discloses: selecting one or more identified n-grams having similarity scores that are greater than or equal to a predetermined threshold (Tomkins [Column 12, Lines 24-27]: "message engine 120 may determine a similarity score between an n-gram and a task entry, and provide the n-gram to task engine 115 only if the similarity score satisfies a threshold").

Kim, in view of Tomkins, does not disclose: identifying synonyms for each ("identified key-terms"). In the same field of natural language processing, however, Singh teaches this element (Singh [Column 5, Lines 14-16]: "a set of synonyms are determined for the identified key-terms based on a language-based approach or domain language specific approach."). It would have been obvious to one of ordinary skill in the art at the time of the effective filing to modify the teachings of Kim and Tomkins to combine the teaching of Singh for the purpose of including synonyms and synonymous phrases as a means of improving the semantic similarity algorithm (Singh [Column 2, Lines 8-10]).

Singh in the combination discloses: identifying synonymous phrases that include each combination of the words in the one or more selected n-grams and the identified synonyms (Singh [Column 15, Lines 66-67 – Column 17, Lines 1-2]: "the extracted basic set of key-terms includes a plurality of keywords and a plurality of key-phrases comprising of a plurality of an n-gram terms").

Tomkins in the combination discloses: using the semantic similarity algorithm to calculate similarity scores indicative of the semantic similarity between the potential response associated with the question and the synonymous phrases identified using the natural language response provided by the user (Tomkins [Column 11, Lines 53-55]: "Message engine 120 may determine one or more similarity scores for each n-gram for an associated task entry." Tomkins compares a list of n-grams to static text).
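Claims 4, 5, and 12 tie the threshold and ranking steps specifically to cosine similarity, with Singh's cited pre-defined cutoff of 0.5. A minimal sketch of cosine scoring over bag-of-words count vectors is below; the vectorization choice is an assumption on my part, since none of the quoted passages says how the text is actually embedded.

```python
import math
from collections import Counter

def cosine_similarity(text_a: str, text_b: str) -> float:
    """Cosine similarity between bag-of-words count vectors."""
    va, vb = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(va[t] * vb[t] for t in va)
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

# Singh's cited pre-defined cutoff for identifying similar terms.
THRESHOLD = 0.5

def select_ngrams(candidates, reference):
    """Keep candidate n-grams whose cosine score against the reference
    text meets or exceeds the threshold (the claim 4 / claim 12 step)."""
    return [c for c in candidates
            if cosine_similarity(c, reference) >= THRESHOLD]
```

With count vectors, identical texts score 1.0 and texts sharing no tokens score 0.0, which is why a mid-range cutoff like 0.5 can act as the claimed "predetermined threshold".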
Regarding Claim 10, in addition to the elements stated above regarding claim 8, the combination of Kim in view of Tomkins in further view of Singh further discloses: The method of claim 8, wherein using the semantic similarity algorithm to calculate similarity scores indicative of the semantic similarity between the natural language response provided by the user and each of the potential responses comprises:

Kim, in view of Tomkins, does not disclose: identifying a plurality of sentences in the natural language response. In the same field of natural language processing, however, Singh teaches this element (Singh [Column 11, Lines 47-49]: "the method (400) includes extracting a set of sentences containing the final set of key-terms from the input document."). It would have been obvious to one of ordinary skill in the art at the time of the effective filing to modify the teachings of Kim and Tomkins to combine the teaching of Singh for the purpose of identifying sentences as a logical means of splitting text into semantically coherent portions during preprocessing (Singh [Column 8, Lines 31-32]).

Singh in the combination teaches: for each of the plurality of sentences (Singh [Column 17, Lines 12-14]: "the method (400) includes determining the set of synonyms for the based on Conditional Frequency Distribution (CFD) techniques using the plurality of the extracted set of sentences").

Tomkins in the combination teaches: identifying each n-gram in the sentence (Tomkins [Column 11, Lines 52-53]: "multiple n-grams in a message trail may be identified by message engine"); and using the semantic similarity algorithm to calculate similarity scores indicative of the semantic similarity between each of the identified n-grams and the sentence (Tomkins [Column 11, Lines 53-55]: "Message engine 120 may determine one or more similarity scores for each n-gram for an associated task entry." Tomkins compares a list of n-grams to static text).

Kim, in view of Tomkins, does not disclose: selecting one or more identified n-grams having similarity scores that are greater than or equal to a predetermined threshold. However, Singh in the combination discloses this element (Singh [Column 11, Lines 34-37]: "a pre-defined value of '0.5' (estimated based on cosine similarity techniques) is used as threshold value for identifying a set of similar terms from the domain language model.").

Singh in the combination discloses: identifying synonyms for each of the words in the selected n-grams (Singh [Column 5, Lines 14-16]: "a set of synonyms are determined for the identified key-terms based on a language-based approach or domain language specific approach."); and identifying synonymous phrases that include each combination of the words in the one or more selected n-grams and the identified synonyms (Singh [Column 15, Lines 66-67 – Column 17, Lines 1-2]: "the extracted basic set of key-terms includes a plurality of keywords and a plurality of key-phrases comprising of a plurality of an n-gram terms").

Tomkins in the combination discloses: using the semantic similarity algorithm to calculate similarity scores indicative of the semantic similarity between the potential responses associated with the question and the synonymous phrases (Tomkins applied with the synonymous aspects of Singh; Tomkins [Column 12, Lines 17-20]: "a similarity score may be determined based on similarity between information associated with one or more of the information fields of task and the n-gram.").

Regarding Claim 11, in addition to the elements stated above regarding claim 10, the combination of Kim in view of Tomkins in further view of Singh further discloses: The method of claim 10, wherein outputting the set of multiple-choice statements comprises:

Kim in the combination discloses: outputting a set of multiple-choice statements for each sentence (Kim [0015]: "displaying the finalized closed-ended question and answer choices on a user interface of the third client device,").

Tomkins in the combination teaches: that include the potential responses associated with the selected question having the highest similarity scores indicative of the semantic similarity between the potential responses and the sentence (Tomkins [Column 14, Lines 25-28]: "task engine 115 may provide a confirmation message to the user only when the similarity score between the n-gram and the information field that is to be updated with the n-gram is below a threshold value." The required action is performed based on a threshold, the only difference being below rather than above that threshold).
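Claims 10 and 11 differ from claims 8 and 9 mainly in granularity: the response is first split into sentences, and a separate multiple-choice set is produced per sentence. A small sketch of that outer loop follows; the regex-based sentence splitter and the caller-supplied `score` function are assumptions for illustration, not the references' actual preprocessing.

```python
import re

def split_sentences(text: str) -> list[str]:
    """Naive sentence splitter on terminal punctuation (a stand-in for
    whatever preprocessing the cited references actually use)."""
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text.strip())
            if s.strip()]

def per_sentence_choices(response, potential_responses, score, top_k=3):
    """For each sentence of the response, rank the question's potential
    responses by a caller-supplied similarity function and return one
    multiple-choice set per sentence, as claims 3 and 11 describe."""
    sets = []
    for sentence in split_sentences(response):
        ranked = sorted(potential_responses,
                        key=lambda r: score(sentence, r), reverse=True)
        sets.append(ranked[:top_k])
    return sets
```

Any of the similarity measures discussed earlier (cosine over count vectors, or another semantic score) could be passed in as `score`.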
Regarding Claim 12, in addition to the elements stated above regarding claim 8, the combination of Kim in view of Tomkins in further view of Singh further discloses: The method of claim 8, wherein the semantic similarity algorithm:

Kim, in view of Tomkins, does not disclose: selects the one or more identified n-grams having cosine similarity scores that are greater than or equal to the predetermined threshold. In the same field of natural language processing, however, Singh teaches this element (Singh [Column 11, Lines 34-37]: "a pre-defined value of '0.5' (estimated based on cosine similarity techniques) is used as threshold value for identifying a set of similar terms from the domain language model.").

It would have been obvious to one of ordinary skil

Prosecution Timeline

Oct 10, 2022
Application Filed
Jan 03, 2025
Non-Final Rejection — §101, §103
Jul 14, 2025
Response Filed
Nov 04, 2025
Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12562160
ARBITRATION BETWEEN AUTOMATED ASSISTANT DEVICES BASED ON INTERACTION CUES
2y 5m to grant Granted Feb 24, 2026
Patent 12547835
AUTOMATIC EXTRACTION OF SEMANTICALLY SIMILAR QUESTION TOPICS
2y 5m to grant Granted Feb 10, 2026
Patent 12512089
TESTING CASCADED DEEP LEARNING PIPELINES COMPRISING A SPEECH-TO-TEXT MODEL AND A TEXT INTENT CLASSIFIER
2y 5m to grant Granted Dec 30, 2025
Patent 12394416
DETECTING NEAR MATCHES TO A HOTWORD OR PHRASE
2y 5m to grant Granted Aug 19, 2025
Patent 11328007
GENERATING A DOMAIN-SPECIFIC PHRASAL DICTIONARY
2y 5m to grant Granted May 10, 2022
Based on this examiner's 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
74%
Grant Probability
88%
With Interview (+14.0%)
3y 3m
Median Time to Grant
Moderate
PTA Risk
Based on 775 resolved cases by this examiner. Grant probability derived from career allow rate.
