Prosecution Insights
Last updated: April 19, 2026
Application No. 17/766,616

RE-RANKING RESULTS FROM SEMANTIC NATURAL LANGUAGE PROCESSING MACHINE LEARNING ALGORITHMS FOR IMPLEMENTATION IN VIDEO GAMES

Non-Final OA (§101, §103)
Filed: Apr 05, 2022
Examiner: LEE, MICHAEL CHRISTOPHER
Art Unit: 2128
Tech Center: 2100 (Computer Architecture & Software)
Assignee: Google LLC
OA Round: 3 (Non-Final)
Grant Probability: 59% (Moderate)
Predicted OA Rounds: 3-4
Predicted Time to Grant: 3y 2m
Grant Probability with Interview: 86%

Examiner Intelligence

Career Allow Rate: 59% of resolved cases (80 granted / 136 resolved; +3.8% vs TC avg)
Interview Lift: +27.1% (strong; allowance rate with vs. without interview, among resolved cases)
Typical Timeline: 3y 2m avg prosecution; 54 currently pending
Career History: 190 total applications across all art units

Statute-Specific Performance

§101: 29.1% (-10.9% vs TC avg)
§103: 45.0% (+5.0% vs TC avg)
§102: 11.5% (-28.5% vs TC avg)
§112: 12.3% (-27.7% vs TC avg)
Compared against Tech Center average estimates • Based on career data from 136 resolved cases

Office Action

Rejections under §101 and §103
DETAILED ACTION

Notice of AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/11/2025 has been entered.

Response to Amendment

Applicant's Amendment and remarks dated 12/11/2025 have been considered. Claims 1-23 are pending.

Response to Arguments

On page 7 of Applicant's 12/11/2025 Amendment and remarks, Applicant identifies portions of the instant specification and asserts that such portions provide sufficient written description support for the claim amendments. The examiner agrees that the portions of the specification identified by Applicant provide sufficient written description support for the claim amendments.

On pages 8-9 of Applicant's 12/11/2025 Amendment and remarks, with respect to the rejections under 35 U.S.C. 101, with respect to step 2A, prong 2, Applicant argues that claims 1 and 12 “provide a technical solution to a technical problem and improve the functioning of a computer.” [claim-excerpt images from Applicant's remarks omitted] The examiner respectfully disagrees. The “re-ranking mechanism” is a mental step, and therefore any such improvement with respect to “re-ranking” is an improvement to the judicial exception and not to any actual technology.
As explained in further detail in the detailed rejections, associating a “first phrase with a second phrase in a manner contrary to an expected usage based on pre-training of the NLP ML algorithm of the first phrase or the second phrase” is a mental step because if the NLP ML algorithm cannot detect irony or sarcasm, a human can mentally associate a different second phrase in order to account for such irony or sarcasm (e.g., the manner contrary to an expected usage).

On page 9 of Applicant's 12/11/2025 Amendment and remarks, with respect to the rejections under 35 U.S.C. 101, with respect to step 2A, prong 2, Applicant argues that the “modifying execution of the program code based on the re-ranked set of candidate responses” improves computer functionality. [claim-excerpt image from Applicant's remarks omitted] The examiner respectfully disagrees. The “modifying execution of the program code based on the re-ranked set of candidate responses” simply applies the mental steps using a generic computer, where no improvement is being made to the underlying computer technology.

On pages 9-11 of Applicant's 12/11/2025 Amendment and remarks, with respect to the rejections under 35 U.S.C. 103, Applicant argues that the prior art of record does not teach the newly-added “in a manner contrary to an expected usage” limitation. The examiner agrees that such newly-added limitation is not taught by the previous prior art of record. After further search and consideration, this limitation is taught by the SINGH reference as explained below, and updated rejections to the independent claims are provided below with respect to the URBANEK, KIM, SABIR, and SINGH references.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-23 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Regarding Step 1 of the Alice/Mayo framework, Claims 1-11 are directed to a method (a process), Claims 12-22 are directed to an apparatus (a machine), and Claim 23 is directed to a non-transitory computer readable medium (a machine), which each fall within one of the four statutory categories of inventions.

Regarding Claim 1

Step 2A, Prong 1 (Is the claim directed to a law of nature, a natural phenomenon, or an abstract idea?)

Claim 1 recites the following mental processes that, in each case under the broadest reasonable interpretation, cover performance of the limitation in the mind (including an observation, evaluation, judgment, opinion) or with the aid of pencil and paper but for the recitation of generic computer components (e.g., “semantic natural language processing (NLP) machine learning (ML) algorithm”, “program code”):

generating, ... a ranked set of candidate responses based on initial scores that represent a degree of matching between the set of candidate responses and an input phrase provided by a user ...; (under the broadest reasonable interpretation, a human can mentally perform this limitation; for example, a human can mentally review an input phrase provided by a user, then mentally assign scores to each candidate response based on the human's perception of how well the candidate response matches the input phrase, and then rank such scores)

post-processing results of the pre-trained NLP algorithm without retraining the pre-trained NLP algorithm by modifying at least one of the initial scores using at least one rule that associates a first phrase with a second phrase in a manner contrary to an expected usage based on pre-training of the NLP ML algorithm of the first phrase or the second phrase, (under the broadest reasonable interpretation, a human can mentally perform this limitation; for example, a human can mentally review the results output by the pre-trained NLP algorithm and mentally post-process such results, e.g., mentally using such results to apply a rule modifying the scores mentally, such as a rule that associates a sample input with a sample output response, where a human can detect sarcasm or irony (which uses a phrase in a manner contrary to an expected usage) in the first phrase which is used to determine a second phrase)

wherein the at least one rule is selected to modify the at least one of the initial scores based on semantic similarity of the input phrase and the first phrase determined ... and the semantic similarity of the second phrase with a corresponding candidate response (under the broadest reasonable interpretation, a human can mentally perform this limitation; for example, a human can mentally modify scores based on semantic similarity, e.g., based on the human's understanding of the meaning of the input phrase and first phrase, and the meaning of the response phrase and the corresponding candidate response)

re-ranking the set of candidate responses based on the modified scores to increase the likelihood of particular results relative to the ranked set of candidate responses ... (under the broadest reasonable interpretation, a human can mentally perform this limitation; for example, a human can mentally re-rank (e.g., by re-sorting) candidate responses based on modified scores, where such re-ranking will naturally rank some results higher than the original ranked set)

Step 2A, Prong 2 (Does the claim recite additional elements that integrate the judicial exception into a practical application?)

The judicial exception is not integrated into a practical application.
In particular, the claim recites the additional elements (e.g., “semantic natural language processing (NLP) machine learning (ML) algorithm”, “program code”), which are recited at a high level of generality such that they amount to no more than mere instructions to apply the exception using a generic computer component (See MPEP 2106.05(f)).

Regarding the “using a pre-trained semantic natural language processing (NLP) machine learning (ML) algorithm ... during execution of program code” limitation, such limitation is recited at a high level of generality and amounts to no more than adding the words “apply it” (or an equivalent) with the judicial exception, because the limitation merely provides instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea. Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. (See MPEP 2106.05(f)). In particular, the recited “semantic natural language processing (NLP) machine learning (ML) algorithm” is recited at a high level, and is a generic computer component because it is merely recited to perform the mental processes explained above with respect to Step 2A, Prong 1, and the claims do not recite any particular structure for how such “semantic natural language processing (NLP) machine learning (ML) algorithm” is implemented.

Regarding the “by the semantic NLP ML algorithm” limitation, such limitation is recited at a high level of generality and amounts to no more than adding the words “apply it” (or an equivalent) with the judicial exception, because the limitation merely provides instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea. Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. (See MPEP 2106.05(f)).

Regarding the “by the pre-trained NLP ML algorithm” limitation, such limitation is recited at a high level of generality and amounts to no more than adding the words “apply it” (or an equivalent) with the judicial exception, because the limitation merely provides instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea. Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. (See MPEP 2106.05(f)).

Regarding the “modifying execution of the program code based on the re-ranked set of candidate responses” limitation, such limitation is recited at a high level of generality and amounts to no more than adding the words “apply it” (or an equivalent) with the judicial exception, because the limitation merely provides instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea. Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. (See MPEP 2106.05(f)).

Step 2B (Does the claim recite additional elements that amount to significantly more than the judicial exception?)

In accordance with Step 2B, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above, the additional elements (e.g., “semantic natural language processing (NLP) machine learning (ML) algorithm”, “program code”) are recited at a high level of generality such that they amount to no more than mere instructions to apply the exception using a generic computer component (See MPEP 2106.05(f)).

Regarding the “using a pre-trained semantic natural language processing (NLP) machine learning (ML) algorithm ... during execution of program code” limitation, such limitation is recited at a high level of generality and amounts to no more than adding the words “apply it” (or an equivalent) with the judicial exception, because the limitation merely provides instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea. Accordingly, this additional element does not add significantly more than the judicial exception. (See MPEP 2106.05(f)).

Regarding the “by the semantic NLP ML algorithm” limitation, such limitation is recited at a high level of generality and amounts to no more than adding the words “apply it” (or an equivalent) with the judicial exception, because the limitation merely provides instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea. Accordingly, this additional element does not add significantly more than the judicial exception. (See MPEP 2106.05(f)).

Regarding the “by the pre-trained NLP ML algorithm” limitation, such limitation is recited at a high level of generality and amounts to no more than adding the words “apply it” (or an equivalent) with the judicial exception, because the limitation merely provides instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea. Accordingly, this additional element does not add significantly more than the judicial exception. (See MPEP 2106.05(f)).
Regarding the “modifying execution of the program code based on the re-ranked set of candidate responses” limitation, such limitation is recited at a high level of generality and amounts to no more than adding the words “apply it” (or an equivalent) with the judicial exception, because the limitation merely provides instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea. Accordingly, this additional element does not add significantly more than the judicial exception. (See MPEP 2106.05(f)).

Regarding Claim 2

Step 2A, Prong 1

wherein modifying the at least one of the initial scores comprises generating, ... a first score that represents semantic similarity of the first phrase and the input phrase (under the broadest reasonable interpretation, a human can mentally perform this limitation; for example, a human can mentally generate a score based on semantic similarity, e.g., based on the human's understanding of the meaning of the input phrase and first phrase)

Step 2A, Prong 2

Regarding the “using the semantic NLP ML algorithm” limitation, such limitation is recited at a high level of generality and amounts to no more than adding the words “apply it” (or an equivalent) with the judicial exception, because the limitation merely provides instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea. Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. (See MPEP 2106.05(f)).

Step 2B

Regarding the “using the semantic NLP ML algorithm” limitation, such limitation is recited at a high level of generality and amounts to no more than adding the words “apply it” (or an equivalent) with the judicial exception, because the limitation merely provides instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea. Accordingly, this additional element does not add significantly more than the judicial exception. (See MPEP 2106.05(f)).

Regarding Claim 3

Step 2A, Prong 1

wherein modifying the at least one of the initial scores comprises generating, ... second scores that represent semantic similarities of the set of candidate responses and the second phrase. (under the broadest reasonable interpretation, a human can mentally perform this limitation; for example, a human can mentally generate a score based on semantic similarity, e.g., based on the human's understanding of the meaning of each of the set of candidate responses and second phrase)

Step 2A, Prong 2

Regarding the “using the semantic NLP ML algorithm” limitation, such limitation is recited at a high level of generality and amounts to no more than adding the words “apply it” (or an equivalent) with the judicial exception, because the limitation merely provides instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea. Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. (See MPEP 2106.05(f)).
Step 2B

Regarding the “using the semantic NLP ML algorithm” limitation, such limitation is recited at a high level of generality and amounts to no more than adding the words “apply it” (or an equivalent) with the judicial exception, because the limitation merely provides instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea. Accordingly, this additional element does not add significantly more than the judicial exception. (See MPEP 2106.05(f)).

Regarding Claim 4

Step 2A, Prong 1

wherein the at least one rule indicates an input threshold for the first phrase and a response threshold for the second phrase (under the broadest reasonable interpretation, a human can mentally perform this limitation; for example, a human can mentally create and select a rule having input and response thresholds)

wherein modifying the at least one of the initial scores comprises converting the first score to an input weight using a first functional relationship between the first score and the input threshold (under the broadest reasonable interpretation, a human can mentally perform this limitation; for example, a human can mentally convert the first score to an input weight by subtracting the input threshold from the first score; the examiner notes that this is also a mathematical relationship, which is another type of abstract idea)

wherein modifying the at least one of the initial scores comprises converting the second scores to response weights using a second functional relationship between the second scores and the response threshold (under the broadest reasonable interpretation, a human can mentally perform this limitation; for example, a human can mentally convert the second score to each of the response weights by subtracting the response threshold from the second score; the examiner notes that this is also a mathematical relationship, which is another type of abstract idea)

Regarding Step 2A, Prong 2, the claim does not include any additional elements that integrate the judicial exception into a practical application, and regarding Step 2B, there are no additional elements recited that amount to significantly more than the judicial exception.

Regarding Claim 5

Step 2A, Prong 1

wherein the first functional relationship sets the input weight to zero in response to the first score being below the input threshold and increases the input weight linearly from zero to one in response to the first score being above the input threshold and below a maximum score (under the broadest reasonable interpretation, this is a mathematical relationship, where anything below the threshold is a zero score, and linearly scaling the range between the input threshold and maximum possible first score from 0 to 1 merely determines weights according to a particular function depicted in Fig. 5 of the instant disclosure, which is a mathematical relationship; the examiner further notes that a human can mentally chart this functional relationship)

wherein the second functional relationship sets each of the response weights to zero in response to the corresponding second score being below the response threshold and increases each of the response weights linearly from zero to one in response to the corresponding second score being above the response threshold and below a maximum score (under the broadest reasonable interpretation, this is a mathematical relationship, where anything below the threshold is a zero score, and linearly scaling the range between the response threshold and maximum possible second score from 0 to 1 merely determines weights according to a particular function depicted in Fig. 6 of the instant disclosure, which is a mathematical relationship; the examiner further notes that a human can mentally chart this functional relationship)

Regarding Step 2A, Prong 2, the claim does not include any additional elements that integrate the judicial exception into a practical application, and regarding Step 2B, there are no additional elements recited that amount to significantly more than the judicial exception.

Regarding Claim 6

Step 2A, Prong 1

wherein the at least one rule specifies a bias (under the broadest reasonable interpretation, a human can mentally perform this limitation; for example, a human can mentally associate a “bias” value with the rule)

wherein a total bias for each of the candidate responses is equal to a product of the input weight, the corresponding response weight, and the bias specified by the rule. (under the broadest reasonable interpretation, this is a mathematical relationship, where total bias = sum(input weight_n * corresponding response weight_n * bias_n), where n is an index for each corresponding response weight; the examiner further notes that a human can mentally chart this functional relationship)

Regarding Step 2A, Prong 2, the claim does not include any additional elements that integrate the judicial exception into a practical application, and regarding Step 2B, there are no additional elements recited that amount to significantly more than the judicial exception.
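The piecewise-linear weighting and the per-rule bias product that the examiner characterizes for claims 4-6 reduce to simple arithmetic, which is the basis of the mathematical-relationship characterization above. The following is a minimal sketch of that characterization only, not Applicant's implementation; the function names, the rule dictionary, and the normalization of scores to a 0-1 range with a maximum score of 1.0 are illustrative assumptions:

```python
def to_weight(score, threshold, max_score=1.0):
    """Piecewise-linear map described for claims 4-5: zero below the
    threshold, rising linearly from 0 to 1 between the threshold and
    the maximum possible score (assumed here to be 1.0)."""
    if score < threshold:
        return 0.0
    if score >= max_score:
        return 1.0
    return (score - threshold) / (max_score - threshold)

def rule_bias(first_score, second_score, rule):
    """Claim 6 as characterized above: a rule's contribution is the
    product of the input weight, the response weight, and the bias
    value specified by the rule."""
    input_weight = to_weight(first_score, rule["input_threshold"])
    response_weight = to_weight(second_score, rule["response_threshold"])
    return input_weight * response_weight * rule["bias"]

# Hypothetical rule and scores for illustration.
rule = {"input_threshold": 0.6, "response_threshold": 0.5, "bias": 0.3}
print(rule_bias(0.8, 0.75, rule))  # both scores above their thresholds
print(rule_bias(0.4, 0.75, rule))  # first score below threshold: bias is zero
```

Because either weight collapses to zero below its threshold, a rule whose input or response score falls under the corresponding threshold contributes no bias at all, which is the zero-total-bias condition the rejection addresses next.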
Regarding Claim 7

Step 2A, Prong 1

wherein the at least one rule is not used to modify each of the candidate responses that have a total bias of zero due to at least one of the first score being below the input threshold and the corresponding second score being below the response threshold (under the broadest reasonable interpretation, a human can mentally perform this limitation; for example, a human can mentally determine not to apply a rule in the circumstances claimed)

Regarding Step 2A, Prong 2, the claim does not include any additional elements that integrate the judicial exception into a practical application, and regarding Step 2B, there are no additional elements recited that amount to significantly more than the judicial exception.

Regarding Claim 8

Step 2A, Prong 1

wherein modifying the at least one of the initial scores comprises adding the total bias to the initial scores for the set of candidate responses to generate final scores for the set of candidate responses based on the at least one rule. (under the broadest reasonable interpretation, a human can mentally perform this limitation; for example, a human can mentally perform the addition operation cited in this claim; the examiner notes that this is also a mathematical relationship, which is another type of abstract idea)

Regarding Step 2A, Prong 2, the claim does not include any additional elements that integrate the judicial exception into a practical application, and regarding Step 2B, there are no additional elements recited that amount to significantly more than the judicial exception.
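The addition operation characterized for claim 8, adding each candidate's total bias to its initial score to yield a final score, and the re-sorting recited in claim 1 can be sketched in a few lines. All names and score values below are hypothetical illustrations of the examiner's characterization, not Applicant's code:

```python
# Hypothetical initial scores from the ranked set of candidate responses,
# and the total bias computed for each candidate by the rule(s).
initial_scores = {"resp_a": 0.71, "resp_b": 0.64, "resp_c": 0.55}
total_biases = {"resp_a": 0.0, "resp_b": 0.12, "resp_c": 0.0}

# Claim 8 as characterized: final score = initial score + total bias.
final_scores = {r: s + total_biases[r] for r, s in initial_scores.items()}

# Re-ranking (claim 1): re-sort the candidates by the modified scores.
re_ranked = sorted(final_scores, key=final_scores.get, reverse=True)
print(re_ranked)  # resp_b (0.76) now outranks resp_a (0.71): ['resp_b', 'resp_a', 'resp_c']
```

The example shows the effect the claims describe: a nonzero bias can promote a candidate above the originally top-ranked response without retraining the underlying model.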
Regarding Claim 9

Step 2A, Prong 1

re-ranking the set of candidate responses based on the final scores (under the broadest reasonable interpretation, a human can mentally perform this limitation; for example, a human can mentally re-rank the set of candidate responses in ascending or descending order based on the final scores)

Regarding Step 2A, Prong 2, the claim does not include any additional elements that integrate the judicial exception into a practical application, and regarding Step 2B, there are no additional elements recited that amount to significantly more than the judicial exception.

Regarding Claim 10

Step 2A, Prong 2

Regarding the “applying the re-ranked set of candidate responses to influence player experience during execution of a video game” limitation, such limitation amounts to no more than generally linking the use of a judicial exception to a particular technological environment or field of use (video games). As explained by the Supreme Court, a claim directed to a judicial exception cannot be made eligible "simply by having the applicant acquiesce to limiting the reach of the patent for the formula to a particular technological use." Diamond v. Diehr, 450 U.S. 175, 192 n.14, 209 USPQ 1, 10 n.14 (1981). Thus, limitations that amount to merely indicating a field of use or technological environment in which to apply a judicial exception do not integrate a judicial exception into a practical application.

The examiner notes that at p. 2, lines 14-30 of the instant disclosure, Applicant provides a particular example explaining how the invention is used to “enhance player experience” with respect to video games, where the video game world redefines concepts in a way that contrasts with real-world interpretations (using a raccoon suit to endow a character with the ability to fly).
The examiner respectfully suggests that amending each claim so that the “claim itself reflects the disclosed improvement” to video game technology would progress the claims towards subject matter eligibility. See MPEP 2106.04(d).

Step 2B

Regarding the “applying the re-ranked set of candidate responses to influence player experience during execution of a video game” limitation, such limitation amounts to no more than generally linking the use of a judicial exception to a particular technological environment or field of use as explained above, which does not amount to significantly more than the judicial exception. MPEP 2106.05(h).

Regarding Claim 11

Step 2A, Prong 1

adding an additional rule (under the broadest reasonable interpretation, a human can mentally perform this limitation; for example, a human can mentally add an additional rule to a set of rules)

modifying the at least one rule or the additional rule (under the broadest reasonable interpretation, a human can mentally perform this limitation; for example, a human can mentally modify one of the rules)

removing the at least one rule or the additional rule (under the broadest reasonable interpretation, a human can mentally perform this limitation; for example, a human can mentally remove one of the rules)

Step 2A, Prong 2

Regarding the “to the video game at runtime; ... in the video game at runtime; and ... from the video game at runtime” limitations, such limitations amount to no more than generally linking the use of a judicial exception to a particular technological environment or field of use (video games). As explained by the Supreme Court, a claim directed to a judicial exception cannot be made eligible "simply by having the applicant acquiesce to limiting the reach of the patent for the formula to a particular technological use." Diamond v. Diehr, 450 U.S. 175, 192 n.14, 209 USPQ 1, 10 n.14 (1981).
Thus, limitations that amount to merely indicating a field of use or technological environment in which to apply a judicial exception do not integrate a judicial exception into a practical application.

Step 2B

Regarding the “to the video game at runtime; ... in the video game at runtime; and ... from the video game at runtime” limitations, such limitations amount to no more than generally linking the use of a judicial exception to a particular technological environment or field of use as explained above, which does not amount to significantly more than the judicial exception. MPEP 2106.05(h).

Regarding Claim 12

Step 2A, Prong 1 (Is the claim directed to a law of nature, a natural phenomenon, or an abstract idea?)

Claim 12 corresponds to the method of claim 1, and the analysis under Step 2A, Prong 1 explained with respect to claim 1 also applies to claim 12.

Step 2A, Prong 2 (Does the claim recite additional elements that integrate the judicial exception into a practical application?)

Claim 12 corresponds to the method of claim 1, and the analysis under Step 2A, Prong 2 explained with respect to claim 1 also applies to claim 12. The additional computer components recited in claim 12 (“memory” and “processor”) are recited at a high level of generality such that they amount to no more than mere instructions to apply the exception using a generic computer component (See MPEP 2106.05(f)).

Step 2B (Does the claim recite additional elements that amount to significantly more than the judicial exception?)

Claim 12 corresponds to the method of claim 1, and the analysis under Step 2B explained with respect to claim 1 also applies to claim 12. The additional computer components recited in claim 12 (“memory” and “processor”) are recited at a high level of generality such that they amount to no more than mere instructions to apply the exception using a generic computer component (See MPEP 2106.05(f)).
Claims 13-20 depend from claim 12 and correspond to the methods claimed in claims 2-9, respectively, and therefore claims 13-20 are rejected for the same reasons explained above with respect to claim 12 and claims 2-9, respectively.

Regarding Claim 21

Step 2A, Prong 1

apply the re-ranked set of candidate responses to influence player experience in a game, ... or to modify an association between the first phrase and the second phrase in a manner contrary to conventional usage of the first phrase or the second phrase. (under the broadest reasonable interpretation, a human can mentally perform this limitation; for example, a human can take the highest ranked candidate response and use it in a game (the broadest reasonable interpretation of “game” includes mental games and is not limited to video games), and a human can mentally modify associations between the first and second phrase in a manner contrary to conventional usage of the first or second phrase)

Step 2A, Prong 2

Regarding the “or to choose non-player character responses to character statements or actions in the game” limitation, such limitation amounts to no more than generally linking the use of a judicial exception to a particular technological environment or field of use (video games; the examiner notes that “non-player character” is a known term of art with respect to video games). As explained by the Supreme Court, a claim directed to a judicial exception cannot be made eligible "simply by having the applicant acquiesce to limiting the reach of the patent for the formula to a particular technological use." Diamond v. Diehr, 450 U.S. 175, 192 n.14, 209 USPQ 1, 10 n.14 (1981). Thus, limitations that amount to merely indicating a field of use or technological environment in which to apply a judicial exception do not integrate a judicial exception into a practical application.
Step 2B

Regarding the “or to choose non-player character responses to character statements or actions in the game” limitation, such limitation amounts to no more than generally linking the use of a judicial exception to a particular technological environment or field of use as explained above, which does not amount to significantly more than the judicial exception. MPEP 2106.05(h).

Claim 22 depends from claim 21 and corresponds to the method claimed in claim 11, and therefore claim 22 is rejected for the same reasons explained above with respect to claims 11 and 21.

Claim 23 depends from claim 1 and recites a “non-transitory computer readable medium embodying a set of executable instructions.” The analyses under Step 2A, Prong 1, Step 2A, Prong 2, and Step 2B with respect to claim 1 are also applicable to claim 23. Moreover, a “non-transitory computer readable medium embodying a set of executable instructions” is recited at a high level of generality such that it amounts to no more than mere instructions to apply the exception using a generic computer component (See MPEP 2106.05(f)).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-3, 12-14, and 23 are rejected under 35 U.S.C. 103 as being unpatentable over Urbanek, Jack, et al., "Learning to speak and act in a fantasy text adventure game," arXiv preprint arXiv:1903.03094 (2019), hereinafter referenced as URBANEK (disclosed by Applicant's 4/5/2022 IDS), in view of US 20190392066 A1, hereinafter referenced as KIM, and further in view of Sabir, Ahmed, et al., "Semantic relatedness based re-ranker for text spotting," arXiv preprint arXiv:1909.07950 (2019), hereinafter referenced as SABIR, and further in view of US 20160042359 A1, hereinafter referenced as SINGH.

Regarding Claim 1

URBANEK discloses:

A method comprising: (URBANEK, p. 1, section 1: “In particular, we adapt the BERT contextual language model (Devlin et al., 2018) to the task of dialogue in two ways: as a bi-ranker, which is fast and practical as a retrieval model, and as a cross-ranker which is slower at inference time but allows more feature cross-correlation between context and response. Both models outperform existing methods.”)

generating, using a pre-trained semantic natural language processing (NLP) machine learning (ML) algorithm, a ranked set of candidate responses based on initial scores that represent a degree of matching between the set of candidate responses and an input phrase provided by a user during execution of program code; (URBANEK, p. 1, section 1: “LIGHT is a multi-player fantasy text adventure world designed for studying situated dialogue, and allows interactions between humans, models as embodied agents, and the world itself.” URBANEK, p. 4, section 4: “For all models, we represent context as a large text sequence with a special token preceding each input type (persona, setting, self emote, partner emote, etc.).
We work with two model classes: ranking models that output the maximal scoring response from a set of potential candidate responses and generative models that decode word by word.” URBANEK, p. 5, section 4: “We adapt the BERT pretrained language model (Devlin et al., 2018) to the tasks of dialogue and action prediction. We explore two architectures for leveraging BERT. First, we use the BERT-based Bi-Ranker to produce a vector representation for the context and a separate representation for each candidate utterance. This representation is obtained by passing the first output of BERT’s 12 layers through an additional linear layer, resulting in an embedding of dimension 768. It then scores candidates via the dot product between these embeddings and is trained using a ranking loss.”; Examiner’s Note (EN): the BERT-based Bi-Ranker language model corresponds to the recited “pre-trained semantic natural language processing (NLP) machine learning (ML) algorithm”, and the scores calculated using the dot product of the candidate embeddings (corresponding to candidate responses) and the context embedding correspond to the recited “initial scores” which are ranked according to the BERT Bi-Ranker; the context includes an input phrase (which can come from a player of the fantasy text adventure game, corresponding to recited “input phrase provided by a user during execution of program code”), and candidate responses for the context are evaluated using the Bi-Ranker so that the response semantically makes sense (corresponding to recited “semantic natural language processing (NLP)”)

However, URBANEK fails to explicitly teach: post-processing results of the pre-trained NLP ML algorithm without retraining the pre-trained NLP ML algorithm by modifying at least one of the initial scores using at least one rule that associates a first phrase with a second phrase in a manner contrary to an expected usage based on pre-training of the NLP ML algorithm of the first phrase or the second
phrase, wherein the at least one rule is selected to modify the at least one of the initial scores based on semantic similarity of the input phrase and the first phrase determined by the semantic NLP ML algorithm and the semantic similarity of the second phrase with a corresponding candidate response; re-ranking the set of candidate responses based on the modified scores to increase the likelihood of particular results relative to the ranked set of candidate responses generated by the pre-trained NLP ML algorithm; modifying execution of the program code based on the re-ranked set of candidate responses. However, in a related field of endeavor (“semantic analysis-based query result retrieval for natural language queries”, see para. 0001), KIM teaches: modifying at least one of the initial scores using at least one rule that associates a first phrase with a second phrase, ... wherein the at least one rule is selected to modify the at least one of the initial scores based on semantic similarity of the input phrase and the first phrase determined by the semantic NLP ML algorithm and the semantic similarity of the second phrase with a corresponding candidate response; and (KIM, para. 0048: “Paraphrase mining engine 370 can identify paraphrases based on query-to-query similarity, generate semantic representations for the paraphrases, and generate paraphrasing rules 380, which include semantic representations of paraphrases and similarity scores (e.g., between −1 and 1 or between 0 and 1) determined based on query-to-query similarity scores.” KIM, para. 0053: “a query result scoring engine (e.g., alignment-based scoring engine 330) obtains paraphrasing rules indicating similarity scores between semantic representations of paraphrases. For example, the query result scoring engine can obtain predetermined paraphrasing rules stored in a list, a table, or a database. As described above with respect to FIG. 
3, the paraphrasing rules can be determined in advance and stored in a list, a table, or a database by, for example, paraphrase mining engine 370 using user interaction data 360 associated with a website or a webpage, such as the question-and-answer page or the help or technical support site of a commercial product. The paraphrasing rules include semantic representations of paraphrases and the associated similarity score (e.g., between −1 and 1 or between 0 and 1).” KIM, para. 0055: “In some embodiments, the semantic representation includes one or more semantic structures, such as triples in the form of (a, r, v), where a represents an action, r represents a role, and v represents a value, as described above. The semantic analysis engine uses a rule-based technique, a machine-learning-based technique, or a combination of rule-based technique and machine-learning-based technique to perform the semantic analysis.” Examiner’s Note: KIM teaches rules-based query-to-query similarity scores for paraphrases based on a semantic representation (action, role, value) for queries and query results; a paraphrased query corresponds to the recited “first phrase” and any of the (a,r,v) values corresponding to such paraphrased query corresponds to the recited “second phrase”, where the rules use query-to-query similarity on the input phrase and the paraphrased query (corresponding to recited “semantic similarity of the input phrase and the first phrase determined by the semantic NLP ML algorithm”); the URBANEK-KIM combination now has the fantasy adventure game engine of URBANEK, utilizing the BERT Bi-Ranker, use the query-to-query similarity scores of KIM, to apply a rule having first and second phrases (paraphrased query and result (a, r, v), which can be a paired Q&A) determine the semantic similarity between the user’s input phrase and the paraphrased query (“first phrase”) and additionally determine the semantic similarity between a corresponding candidate phrase and the result 
(a, r, v) (corresponding to recited “second phrase”), and if the paraphrased query and corresponding result (a, r, v) are selected, the dot product of the context of URBANEK and the embedding of the corresponding result (a, r, v) are used to modify and update the initial score using the new result (a, r, v)). Before the effective filing date of the present application, it would have been obvious to one of ordinary skill in the art to combine the multi-player fantasy text adventure world platform (LIGHT) of URBANEK with KIM as explained above. As disclosed by KIM, one of ordinary skill would have been motivated to do so because “paraphrasing rules can be used to more accurately align the semantic structures and determine the similarity between user queries and candidate query results, thereby further improving the accuracy of natural language query result retrieval, in particular, in cases where the user query and the query result have a same meaning but are expressed in different ways.” (para. 0026). One of ordinary skill would understand that using a paraphrase for a user input, as applied to a known set of questions-and-answers (as disclosed by KIM), would more easily enable retrieval of corresponding answers to questions that are worded differently. However, URBANEK and KIM fail to explicitly teach: post-processing results of the pre-trained NLP ML algorithm without retraining the pre-trained NLP ML algorithm by ... in a manner contrary to an expected usage based on pre-training of the NLP ML algorithm of the first phrase or the second phrase re-ranking the set of candidate responses based on the modified scores to increase the likelihood of particular results relative to the ranked set of candidate responses generated by the pre-trained NLP ML algorithm; modifying execution of the program code based on the re-ranked set of candidate responses. However, in a related field of endeavor (re-ranking candidate texts based on context, see p. 
1, section 1), SABIR teaches: post-processing results of the pre-trained NLP ML algorithm without retraining the pre-trained NLP ML algorithm by ... (SABIR, p. 1, section 1: “We use existing pretrained architectures for Text Recognition, and add a shallow deep-network that performs a postprocessing operation to re-rank the proposed candidate texts.”; (EN): Fig. 1 of SABIR shows the detailed architecture for a post-process scoring of semantic relatedness between a candidate text and context; the URBANEK-KIM-SABIR combination now has the fantasy adventure game engine of URBANEK, utilizing the BERT Bi-Ranker, modify the initial scores of URBANEK using a post-processing network of SABIR, where such post-processing network is separate from the BERT Bi-Ranker and therefore the BERT Bi-Ranker is not re-trained) re-ranking the set of candidate responses based on the modified scores to increase the likelihood of particular results relative to the ranked set of candidate responses generated by the pre-trained NLP ML algorithm; (SABIR, p. 1, section 1: “We use existing pretrained architectures for Text Recognition, and add a shallow deep-network that performs a postprocessing operation to re-rank the proposed candidate texts. In particular, we re-rank the candidates using their semantic relatedness score with other visual information extracted from the image (e.g. objects, scenario, image caption). 
Extensive evaluation shows that our approach consistently improves other semantic similarity methods.”; (EN): SABIR teaches re-ranking candidate texts based on semantic relatedness scores based on context, where the re-ranked scores rank some candidate responses higher than the original rankings, which increases the likelihood of those results relative to the ranked set; the URBANEK-KIM-SABIR combination now has the fantasy adventure game engine of URBANEK, utilizing the BERT Bi-Ranker, modify the initial scores of URBANEK (as in KIM) using a post-processing network of SABIR, and then re-rank the candidate responses of URBANEK based on the modified scores as in SABIR) modifying execution of the program code based on the re-ranked set of candidate responses. (Examiner’s Note: As explained above, the URBANEK-KIM-SABIR combination provides modified scores that are re-ranked, and now can use such re-ranked candidate responses to update the game dialog, as shown in Table 7 on p. 8 of URBANEK) Before the effective filing date of the present application, it would have been obvious to one of ordinary skill in the art to combine the multi-player fantasy text adventure world platform (LIGHT) of URBANEK with the teachings of KIM and SABIR as explained above. As disclosed by SABIR, one of ordinary skill would have been motivated to do so because SABIR teaches the importance of context with respect to “scene understanding” and one would understand that such “scene understanding” would also apply to the video game disclosures of URBANEK. (SABIR, p. 3, section 5).

However, URBANEK, KIM, and SABIR fail to explicitly teach: in a manner contrary to an expected usage based on pre-training of the NLP ML algorithm of the first phrase or the second phrase However, in a related field of endeavor (scoring text, see para.
0008), SINGH teaches and makes obvious: modifying at least one of the initial scores using at least one rule that associates a first phrase with a second phrase in a manner contrary to an expected usage based on pre-training of the NLP ML algorithm of the first phrase or the second phrase (SINGH, para. 0040: “In at least one example embodiment, the apparatus 200 may determine an emotion score for one or more segmented text portions, such that at least one emotion score is determined for text corresponding to each language. In at least one embodiment, the apparatus 200 may be caused to perform a correction of each emotion score based on at least one of a semantic analysis, a sarcasm analysis, an experiential analysis and an engagement-based analysis of the text. The determination of emotion scores and their subsequent correction is explained later with reference to FIG. 3.”; SINGH, para. 0060: “In an example embodiment, the sarcasm analyzer 322 is configured to detect presence of sarcasm in segmented text portions corresponding to each language and correct opinion from a negative/positive opinion to a positive/negative opinion. For example, some segmented text portions may include lines/text strings such as “This device is just awesome” or “My phone has conked out again. Isn't this just great?”, which are meant to be said in a sarcastic sense.
In such a case, the sarcasm analyzer 322 detects the presence of sarcasm and provides a corrected emotion score to those text strings or segmented text portions.”; Examiner’s Note: SINGH teaches a sarcasm analyzer that detects the presence of sarcasm in a phrase and then corrects a corresponding score, where sarcasm provides a meaning to a phrase that is “contrary to an expected usage”; the URBANEK-KIM-SABIR-SINGH combination now has the fantasy adventure game engine of URBANEK, utilizing the BERT Bi-Ranker, modify the initial scores of URBANEK, using the rules-based query-to-query similarity scores for paraphrases based on a semantic representation (action, role, value) for queries and query results of KIM, where such rules now utilize the sarcasm analyzer of SINGH to modify scores to account for sarcasm, which is contrary to the customary meaning of terms) Before the effective filing date of the present application, it would have been obvious to one of ordinary skill in the art to combine the multi-player fantasy text adventure world platform (LIGHT) of URBANEK with the teachings of KIM, SABIR, and SINGH as explained above. As disclosed by SINGH, one of ordinary skill would have been motivated to do so because SINGH teaches that a range of emotions, including sarcasm, should be taken into consideration when scoring a natural language response. (para. 0006).

Regarding Claim 2

URBANEK, KIM, SABIR, and SINGH disclose the method of claim 1 as explained above. However, URBANEK fails to explicitly teach: wherein modifying the at least one of the initial scores comprises generating, using the semantic NLP ML algorithm, a first score that represents semantic similarity of the first phrase and the input phrase. However, in a related field of endeavor (“semantic analysis-based query result retrieval for natural language queries”, see para.
0001), KIM teaches: wherein modifying the at least one of the initial scores comprises generating, using the semantic NLP ML algorithm, a first score that represents semantic similarity of the first phrase and the input phrase. (KIM, para. 0048: “Paraphrase mining engine 370 can identify paraphrases based on query-to-query similarity, generate semantic representations for the paraphrases, and generate paraphrasing rules 380, which include semantic representations of paraphrases and similarity scores (e.g., between −1 and 1 or between 0 and 1) determined based on query-to-query similarity scores.”; Examiner’s Note: the URBANEK-KIM-SABIR-SINGH combination now has the fantasy adventure game engine of URBANEK, utilizing the BERT Bi-Ranker, use the query-to-query similarity scores of KIM, to compare the input phrase and a paraphrased query (corresponding to the “first phrase”), resulting in a score corresponding to the recited “first score”) Before the effective filing date of the present application, it would have been obvious to one of ordinary skill in the art to combine the multi-player fantasy text adventure world platform (LIGHT) of URBANEK with the teachings of KIM, SABIR, and SINGH as explained above. As disclosed by KIM, one of ordinary skill would have been motivated to do so because “paraphrasing rules can be used to more accurately align the semantic structures and determine the similarity between user queries and candidate query results, thereby further improving the accuracy of natural language query result retrieval, in particular, in cases where the user query and the query result have a same meaning but are expressed in different ways.” (para. 0026). One of ordinary skill would understand that using a paraphrase for a user input phrase could enable one of ordinary skill to efficiently search for corresponding answers that correspond to paraphrased queries. 
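For illustration, the claim 1 pipeline as the rejection assembles it (initial dot-product scores from a pre-trained bi-encoder per URBANEK's Bi-Ranker, modification of those scores by a rule associating a first phrase with a second phrase per KIM and SINGH, and re-ranking on the modified scores per SABIR) can be sketched in a few lines. This is a hedged sketch only: the toy three-dimensional embeddings (standing in for URBANEK's 768-dimensional BERT embeddings), the Jaccard word-overlap similarity (standing in for the semantic NLP ML similarity), the example phrases, and the 0.5 rule threshold are all hypothetical and are not drawn from the cited references.

```python
# Sketch of the claim 1 mapping: (1) bi-encoder dot-product initial scores,
# (2) rule-based score modification without retraining, (3) re-ranking.
# All vectors, phrases, and thresholds below are illustrative only.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def similarity(a, b):
    """Toy stand-in for semantic similarity: Jaccard overlap of word sets."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

def initial_scores(context_emb, candidates):
    """candidates: list of (response_text, embedding) pairs."""
    return [(text, dot(context_emb, emb)) for text, emb in candidates]

def apply_rule_and_rerank(input_phrase, scored, rule, threshold=0.5):
    """rule: (first_phrase, second_phrase, bias). The rule fires when the
    input resembles the first phrase AND a candidate resembles the second."""
    first, second, bias = rule
    modified = []
    for text, score in scored:
        if (similarity(input_phrase, first) >= threshold
                and similarity(text, second) >= threshold):
            score += bias
        modified.append((text, score))
    return sorted(modified, key=lambda p: p[1], reverse=True)

context_emb = [0.9, 0.1, 0.3]          # embeds the dialogue context + input phrase
candidates = [
    ("nice weather today", [0.8, 0.2, 0.4]),
    ("the blade is cursed", [0.1, 0.9, 0.0]),
]
scored = initial_scores(context_emb, candidates)   # pre-trained ranking
reranked = apply_rule_and_rerank(
    "is this sword any good",
    scored,
    rule=("is this sword any good", "the blade is cursed", 1.0),
)
```

Note how the rule inverts the pre-trained ranking: the candidate that scored lower under the bi-encoder ends up first after re-ranking, loosely mirroring the claimed behavior of associating phrases "contrary to an expected usage" (e.g., the sarcasm correction attributed to SINGH).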
Regarding Claim 3 URBANEK, KIM, SABIR, and SINGH disclose the method of claim 1 as explained above. However, URBANEK fails to explicitly teach: wherein modifying the at least one of the initial scores comprises generating, using the semantic NLP ML algorithm, second scores that represent semantic similarities of the set of candidate responses and the second phrase. However, in a related field of endeavor (“semantic analysis-based query result retrieval for natural language queries”, see para. 0001), KIM teaches: wherein modifying the at least one of the initial scores comprises generating, using the semantic NLP ML algorithm, second scores that represent semantic similarities of the set of candidate responses and the second phrase. (KIM, para. 0048: “Paraphrase mining engine 370 can identify paraphrases based on query-to-query similarity, generate semantic representations for the paraphrases, and generate paraphrasing rules 380, which include semantic representations of paraphrases and similarity scores (e.g., between −1 and 1 or between 0 and 1) determined based on query-to-query similarity scores.”; Examiner’s Note: the URBANEK-KIM-SABIR-SINGH combination now has the fantasy adventure game engine of URBANEK, utilizing the BERT Bi-Ranker, use the query-to-query similarity scores of KIM, to compare a candidate response with the corresponding result of KIM, for example, to ensure that the candidate response and corresponding result are also sufficiently aligned) Before the effective filing date of the present application, it would have been obvious to one of ordinary skill in the art to combine the multi-player fantasy text adventure world platform (LIGHT) of URBANEK with the teachings of KIM, SABIR, and SINGH as explained above. 
As disclosed by KIM, one of ordinary skill would have been motivated to do so because “paraphrasing rules can be used to more accurately align the semantic structures and determine the similarity between user queries and candidate query results, thereby further improving the accuracy of natural language query result retrieval, in particular, in cases where the user query and the query result have a same meaning but are expressed in different ways.” (para. 0026). One of ordinary skill would understand that aligning the candidate responses would be beneficial to make sure that the relationship between the input phrase and paraphrased query (of KIM) sufficiently extends to the corresponding result (of KIM). Regarding Claim 12 URBANEK teaches: An apparatus, comprising: a memory configured to store a program code representative of a semantic natural language processing (NLP) machine learning (ML) algorithm; and a processor configured to execute the semantic NLP ML algorithm to (URBANEK, p.6, section 4.1: “We use the BERT (Devlin et al., 2018) implementation provided by Hugging Face with pre-trained weights, then adapted to our Bi-Ranker and Cross-Ranker setups.”; Examiner’s Note: As shown in footnote 2 on page 6 of URBANEK, the computer code is available at Github, which means that the BERT model is going to be implemented on a computer having at least memory and a processor for implementing the BERT language model and the improvements of the LIGHT platform of URBANEK) The remaining limitations correspond to the method of claim 1, and therefore this claim is rejected for the same reasons explained above with respect to claim 1 under 35 U.S.C. 103 in view of the URBANEK, KIM, SABIR, and SINGH references. Claim 13 depends from claim 12 and claims an apparatus that corresponds to the method of claim 2 and is therefore rejected for the same reasons explained with respect to claims 2 and 12. 
Claim 14 depends from claim 12 and claims an apparatus that corresponds to the method of claim 3 and is therefore rejected for the same reasons explained with respect to claims 3 and 12. Regarding Claim 23 URBANEK teaches: A non-transitory computer readable medium embodying a set of executable instructions, the set of executable instructions to manipulate at least one processor (URBANEK, p.6, section 4.1: “We use the BERT (Devlin et al., 2018) implementation provided by Hugging Face with pre-trained weights, then adapted to our Bi-Ranker and Cross-Ranker setups.”; Examiner’s Note: As shown in footnote 2 on page 6 of URBANEK, the computer code is available at Github, which means that the BERT model is going to be implemented on a computer having at least non-transitory memory (such as a hard disk drive or solid state drive) and a processor for implementing the BERT language model and the improvements of the LIGHT platform of URBANEK) to perform the method of claim 1 (see rejection of claim 1 above) Claims 4 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over URBANEK in view of KIM, SABIR, and SINGH and further in view of US 20130262465 A1, hereinafter referenced as GALLE. Regarding Claim 4 URBANEK, KIM, SABIR, and SINGH disclose the method of claim 1 as explained above. However, URBANEK, by itself, fails to explicitly teach: wherein the at least one rule indicates an input threshold for the first phrase and a response threshold for the second phrase, wherein modifying the at least one of the initial scores comprises converting the first score to an input weight using a first functional relationship between the first score and the input threshold, and wherein modifying the at least one of the initial scores comprises converting the second scores to response weights using a second functional relationship between the second scores and the response threshold. 
However, in a related field of endeavor (“semantic analysis-based query result retrieval for natural language queries”, see para. 0001), KIM teaches: wherein the at least one rule indicates an input threshold for the first phrase and a response threshold for the second phrase, (KIM, para. 0042: “If the selected query result has a match score greater than a threshold value, the selected query result is provided to the user”; KIM, para. 0048: “Paraphrase mining engine 370 can identify paraphrases based on query-to-query similarity”; Examiner’s Note: the URBANEK-KIM-SABIR-SINGH combination now has the rules-based comparison of KIM utilize a threshold of KIM to determine if the query-to-query similarity (between input phrase and query paraphrase) exceeds the threshold, and further determines if the candidate result to the candidate response (a, r, v) of KIM exceeds the threshold) Before the effective filing date of the present application, it would have been obvious to one of ordinary skill in the art to combine the multi-player fantasy text adventure world platform (LIGHT) of URBANEK with the teachings of KIM, SABIR, and SINGH as explained above. As disclosed by KIM, one of ordinary skill would have been motivated to do so because “paraphrasing rules can be used to more accurately align the semantic structures and determine the similarity between user queries and candidate query results, thereby further improving the accuracy of natural language query result retrieval, in particular, in cases where the user query and the query result have a same meaning but are expressed in different ways.” (para. 0026). One of ordinary skill would understand that aligning the queries and the candidate responses would be beneficial to make sure that the relationship between the input phrase and paraphrased query (of KIM) sufficiently extends to the corresponding result (of KIM). 
However, URBANEK, KIM, SABIR, and SINGH fail to explicitly teach: wherein modifying the at least one of the initial scores comprises converting the first score to an input weight using a first functional relationship between the first score and the input threshold, and wherein modifying the at least one of the initial scores comprises converting the second scores to response weights using a second functional relationship between the second scores and the response threshold. However, in a related field of endeavor (“threshold-based clustering algorithm, suited to clustering news articles”, see para. 0001, which corresponds to natural language processing), GALLE teaches: wherein modifying the at least one of the initial scores comprises converting the first score to an input weight using a first functional relationship between the first score and the input threshold, and wherein modifying the at least one of the initial scores comprises converting the second scores to response weights using a second functional relationship between the second scores and the response threshold. (GALLE, para. 0071: “In Equation 1, the score is defined as the sum over all clusters and over all points assigned to that cluster of the similarity between the data point and the representative point minus the threshold. 
A point whose similarity to its cluster representative is less than the comparison measure threshold τ will obviously have a negative contribution to the total score.”; Examiner’s Note: the URBANEK-KIM-SABIR-SINGH-GALLE combination now has the fantasy adventure game engine of URBANEK, utilizing the BERT Bi-Ranker, use the query-to-query similarity score and threshold of KIM, and subtracting the threshold from the similarity score as in GALLE (corresponding to recited “converting the first score to an input weight using a first functional relationship between the first score and the input threshold”) and further using the candidate response – corresponding result score and threshold described above and subtracting the threshold from such score (corresponding to recited “converting the second scores to response weights using a second functional relationship between the second scores and the response threshold”)) Before the effective filing date of the present application, it would have been obvious to one of ordinary skill in the art to combine the multi-player fantasy text adventure world platform (LIGHT) of URBANEK with the teachings of KIM, SABIR, SINGH, and GALLE as explained above. One of ordinary skill would be motivated to do so because GALLE discloses several well-known threshold-based clustering algorithms used for document clustering (see para. 0004), and one of ordinary skill would understand the benefit of using such known and peer-reviewed algorithms that have already been evaluated. Moreover, one of ordinary skill would further be motivated to do so in order to normalize a score by subtracting a threshold, to more easily compare scores to one another (just the portion of the original score that exceeds the threshold).

Claim 15 depends from claim 12 and claims an apparatus that corresponds to the method of claim 4 and is therefore rejected for the same reasons explained with respect to claims 4 and 12.
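The score-to-weight conversion the rejection draws from GALLE (subtracting a threshold so that only the portion of a score exceeding the threshold contributes, with sub-threshold scores contributing negatively) can be sketched directly; the numeric scores and thresholds below are hypothetical, not values from any cited reference.

```python
# Sketch of GALLE-style normalization as applied in the claim 4 mapping:
# a weight is the similarity score minus its threshold, so a score below
# the threshold contributes negatively (cf. GALLE, para. 0071).
# The score and threshold values are illustrative only.

def score_to_weight(score, threshold):
    return score - threshold

input_weight = score_to_weight(0.8, 0.5)      # first score vs. input threshold
response_weight = score_to_weight(0.4, 0.5)   # second score vs. response threshold
```

A 0.8 score against a 0.5 threshold yields a positive weight of 0.3, while a 0.4 score yields -0.1, the "negative contribution" GALLE describes for sub-threshold points.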
Allowable Subject Matter Claims 5-11 and 16-22 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. The following is a statement of reasons for the indication of allowable subject matter: Claim 5 would be allowable, if rewritten in independent form including all of the limitations of the base claim and any intervening claims and if the rejections under 35 U.S.C. 101 are overcome, since none of the references of record either alone or in combination fairly disclose or suggest the combination of limitations specified in claim 5, including at least: wherein the first functional relationship sets the input weight to zero in response to the first score being below the input threshold and increases the input weight linearly from zero to one in response to the first score being above the input threshold and below a maximum score, and wherein the second functional relationship sets each of the response weights to zero in response to the corresponding second score being below the response threshold and increases each of the response weights linearly from zero to one in response to the corresponding second score being above the response threshold and below a maximum score. The closest prior art of record discloses: The URBANEK, KIM, SABIR, SINGH, and GALLE references disclose the method of claim 4, from which claim 5 depends. However, the examiner has found that the distinct feature of the Applicant's claimed invention over the prior art is the explicit claiming of the aforementioned limitations in combination with all the other limitations as specified in claim 5, which would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims and if the rejections under 35 U.S.C. 101 are overcome. 
To the extent that these features are not found in the prior art cited by the examiner, the present case is held allowable over the art of record. The examiner further finds that one of ordinary skill would not have been motivated to specifically design first and second functional relationships in the precise manner claimed without the hindsight aid of Applicant’s disclosure.

Claim 6 would be allowable, if rewritten in independent form including all of the limitations of the base claim and any intervening claims and if the rejections under 35 U.S.C. 101 are overcome, since none of the references of record either alone or in combination fairly disclose or suggest the combination of limitations specified in claim 6, including at least: wherein the at least one rule specifies a bias, and wherein a total bias for each of the candidate responses is equal to a product of the input weight, the corresponding response weight, and the bias specified by the rule. The closest prior art of record discloses: The URBANEK, KIM, SABIR, SINGH, and GALLE references disclose the method of claim 4, from which claim 6 depends. However, the examiner has found that the distinct feature of the Applicant's claimed invention over the prior art is the explicit claiming of the aforementioned limitations in combination with all the other limitations as specified in claim 6, which would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims and if the rejections under 35 U.S.C. 101 are overcome. To the extent that these features are not found in the prior art cited by the examiner, the present case is held allowable over the art of record. The examiner further finds that one of ordinary skill would not have been motivated to specifically design a rule in the precise manner claimed without the hindsight aid of Applicant’s disclosure.
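The indicated-allowable limitations are concrete enough to express directly: the claim 5 functional relationship sets a weight to zero below its threshold and raises it linearly from zero to one between the threshold and a maximum score, and the claim 6 total bias is the product of the input weight, the response weight, and the bias specified by the rule. A minimal sketch follows; the threshold, maximum-score, and bias values are illustrative assumptions, not values from the claims.

```python
# Sketch of the indicated-allowable claim 5 / claim 6 limitations.
# linear_weight: 0 below the threshold; rises linearly from 0 to 1 between
# the threshold and the maximum score (capped at 1 above it, as implied).
# total_bias: product of input weight, response weight, and rule bias.
# Threshold, maximum-score, and bias values below are illustrative only.

def linear_weight(score, threshold, max_score):
    if score < threshold:
        return 0.0
    if score >= max_score:
        return 1.0
    return (score - threshold) / (max_score - threshold)

def total_bias(input_weight, response_weight, rule_bias):
    return input_weight * response_weight * rule_bias

w_in = linear_weight(0.75, threshold=0.5, max_score=1.0)   # 0.5
w_resp = linear_weight(0.9, threshold=0.5, max_score=1.0)  # about 0.8
bias = total_bias(w_in, w_resp, rule_bias=2.0)
```

Unlike the simple threshold subtraction attributed to GALLE for claim 4, this relationship clamps sub-threshold scores to zero rather than letting them contribute negatively, which is the distinction the statement of allowable subject matter turns on.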
Claims 7-11 depend from claim 6 and would similarly be allowable, for the same reasons explained with respect to claim 6, if rewritten in independent form including all of the limitations of the base claim and any intervening claims and if the rejections under 35 U.S.C. 101 are overcome.

Claim 16 depends from claim 12, claims an apparatus that corresponds to the method of claim 5, and would similarly be allowable, for the same reasons explained with respect to claim 5, if rewritten in independent form including all of the limitations of the base claim and any intervening claims and if the rejections under 35 U.S.C. 101 are overcome.

Claims 17-22 depend from claim 16 and would similarly be allowable, for the same reasons explained with respect to claims 6-11, if rewritten in independent form including all of the limitations of the base claim and any intervening claims and if the rejections under 35 U.S.C. 101 are overcome.

Conclusion

The prior art made of record and not relied upon is considered pertinent to Applicant’s disclosure: US 11132993 B1 (McDaniel). “Humans use many other aspects apart from words during conversation to convey information. For example, a speaker may use intonation, emotion, loudness, etc. to convey the intended meaning of a spoken phrase. In some instances, aspects apart from words can even turn a spoken phrase into the opposite meaning, known as sarcasm or irony. Thus, a person listening to another person speaking will generally take all these facets into consideration in processing a spoken phrase to form a meaning of the phrase.” (col. 1, lines 9-19).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL C. LEE, whose telephone number is (571) 272-4933. The examiner can normally be reached M-F, 12:00 pm - 8:00 pm ET. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool.
To schedule an interview, Applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Omar Fernandez Rivas, can be reached at 571-272-2589. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MICHAEL C. LEE/
Examiner, Art Unit 2128

Prosecution Timeline

Apr 05, 2022
Application Filed
Apr 11, 2025
Non-Final Rejection — §101, §103
Jul 22, 2025
Response Filed
Aug 05, 2025
Final Rejection — §101, §103
Oct 10, 2025
Response after Non-Final Action
Dec 11, 2025
Request for Continued Examination
Dec 20, 2025
Response after Non-Final Action
Jan 25, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603081
METHOD AND SERVER FOR A TEXT-TO-SPEECH PROCESSING
2y 5m to grant Granted Apr 14, 2026
Patent 12602605
QUANTUM COMPUTER ARCHITECTURE BASED ON MULTI-QUBIT GATES
2y 5m to grant Granted Apr 14, 2026
Patent 12591915
METHODS AND SYSTEMS FOR DETERMINING RECOMMENDATIONS BASED ON REAL-TIME OPTIMIZATION OF MACHINE LEARNING MODELS
2y 5m to grant Granted Mar 31, 2026
Patent 12585743
INTERFACE ACCESS PROCESSING METHOD, COMPUTER DEVICE AND STORAGE MEDIUM
2y 5m to grant Granted Mar 24, 2026
Patent 12568935
AI-BASED LIVESTOCK MANAGEMENT SYSTEM AND LIVESTOCK MANAGEMENT METHOD THEREOF
2y 5m to grant Granted Mar 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

3-4
Expected OA Rounds
59%
Grant Probability
86%
With Interview (+27.1%)
3y 2m
Median Time to Grant
High
PTA Risk
Based on 136 resolved cases by this examiner. Grant probability derived from career allow rate.
