Prosecution Insights
Last updated: April 19, 2026
Application No. 18/960,490

SYSTEMS AND METHODS FOR AUTOMATIC AUDIT INFORMATION LABELLING

Non-Final OA: §101, §103

Filed: Nov 26, 2024
Examiner: SCHEUNEMANN, RICHARD N
Art Unit: 3624
Tech Center: 3600 (Transportation & Electronic Commerce)
Assignee: Royal Bank Of Canada
OA Round: 1 (Non-Final)

Grant Probability: 6% (At Risk)
Expected OA Rounds: 1-2
Expected Time to Grant: 4y 7m
Grant Probability With Interview: 15%
Examiner Intelligence

Career Allow Rate: 6% (35 granted / 551 resolved; -45.6% vs TC avg)
Interview Lift: +8.4% across resolved cases with an interview (moderate)
Avg Prosecution: 4y 7m (typical timeline)
Career History: 607 total applications across all art units (56 currently pending)
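The headline figures above can be cross-checked from the raw counts. A minimal sketch (the reading of "15% with interview" as the interview-adjusted grant probability for this case is an assumption, not stated by the dashboard):

```python
# Reproduce the examiner's headline stats from the raw counts shown above.
granted = 35
resolved = 551
total_applications = 607
pending = 56

career_allow_rate = granted / resolved                 # fraction of resolved cases allowed
print(f"Career allow rate: {career_allow_rate:.1%}")   # ~6.4%, shown rounded as 6%

# Interview lift: allowance probability with an interview minus without one.
# The dashboard reports +8.4 points career-wide; with a 6% baseline that is
# roughly consistent with the 15% "with interview" estimate for this case.
baseline, with_interview = 0.06, 0.15
lift_pts = (with_interview - baseline) * 100
print(f"Estimated lift for this application: +{lift_pts:.0f} pts")

# Sanity check: resolved plus pending should account for all filings.
assert resolved + pending <= total_applications
```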

Statute-Specific Performance

§101: 37.4% (-2.6% vs TC avg)
§103: 37.6% (-2.4% vs TC avg)
§102: 9.3% (-30.7% vs TC avg)
§112: 15.1% (-24.9% vs TC avg)

Tech Center averages are estimates. Based on career data from 551 resolved cases.

Office Action

Rejections: §101, §103
DETAILED ACTION

Introduction

This Non-Final Office Action is in response to the application with serial number 18/960,490, filed on November 26, 2024. Claims 1-22 are pending.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

The Manual of Patent Examining Procedure (MPEP) provides detailed rules for determining subject matter eligibility for claims in §2106. Those rules provide the basis for the analysis and finding of ineligibility that follows.

Claims 1-22 are rejected under 35 U.S.C. 101. The claimed invention is directed to non-statutory subject matter because it recites a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.

Under Step 1 of the subject matter eligibility analysis, claims 1-22 are all directed to one of the four statutory categories of invention. However, under Step 2A, prong one, the claims recite a judicial exception: labeling issues from an audit (as evidenced by the preamble of exemplary independent claim 1), an abstract idea. Certain methods of organizing human activity are ineligible abstract ideas, including managing personal behavior or relationships or interactions between people. See MPEP §2106.04(a). The limitations of exemplary claim 1 include: "receiving an issue description comprising a text description;" "combining the text description with a plurality of hypotheses texts to generate a plurality of description:hypothesis pairs . . . associated with a sub-risk description;" "applying each of the description:hypothesis pairs;" "determining relevance of each sub-risk in the taxonomy to the issue description;" and "outputting a plurality of relevant sub-risks." These steps, considered alone and in combination, are steps for managing personal behavior that form part of the abstract idea of labeling issues from an audit. The dependent claims further recite steps for managing personal behavior that are part of the same abstract idea. These claim elements are considered abstract ideas because they are directed to a method of organizing human activity: classifying audit risks based on descriptions of financial and accounting issues.

Under Step 2A, prong two, of the subject matter eligibility analysis, a claim that recites a judicial exception must be evaluated to determine whether the claim provides a practical application of the judicial exception. The additional elements of the independent claims amount to generic computer hardware that does not provide a practical application (no hardware is recited in independent claim 1; a computer readable medium in independent claim 14; and a computing system in dependent claim 22). See MPEP §2106.04(d)[I]. The claims do not recite an improvement to another technology or technical field, nor an improvement to the functioning of the computer itself. See MPEP §2106.05(a). The claims do recite the use of a zero-shot classification model, but the abstract idea of labeling issues from an audit is merely generally linked to a computing environment with a zero-shot classification model for its implementation. The zero-shot classification model therefore does not provide a practical application of, or significantly more than, the recited abstract idea. See MPEP §2106.05(h).
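The claim elements discussed above describe an NLI-style zero-shot labelling pipeline: each sub-risk description is turned into a hypothesis sentence, paired with the issue text, and scored. A minimal sketch, with a toy token-overlap scorer standing in for the zero-shot classification model (all names and the hypothesis template are hypothetical, not from the application):

```python
from typing import Callable

def label_issue(
    issue_text: str,
    taxonomy: dict[str, str],                  # sub-risk name -> sub-risk description
    score_pair: Callable[[str, str], float],   # (premise, hypothesis) -> entailment-style score
    top_n: int = 3,
) -> list[tuple[str, float]]:
    """Pair the issue description with one hypothesis per sub-risk,
    score each description:hypothesis pair, and return the top-n sub-risks."""
    scores = []
    for sub_risk, description in taxonomy.items():
        hypothesis = f"This issue is about {description}."
        scores.append((sub_risk, score_pair(issue_text, hypothesis)))
    scores.sort(key=lambda pair: pair[1], reverse=True)
    return scores[:top_n]

# Toy scorer: token overlap as a stand-in for an NLI entailment probability.
def overlap_score(premise: str, hypothesis: str) -> float:
    p, h = set(premise.lower().split()), set(hypothesis.lower().split())
    return len(p & h) / len(h)

taxonomy = {
    "access-control": "unauthorized access to financial systems",
    "data-retention": "records retained beyond the required period",
}
result = label_issue("Auditors found unauthorized access to financial systems.",
                     taxonomy, overlap_score, top_n=1)
print(result[0][0])  # the most relevant sub-risk
```

In practice the scorer would be an entailment model scoring the premise/hypothesis pair; the toy overlap function only illustrates the pairing-and-ranking shape of the claimed method.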
Because the claims only recite use of a generic computer, they do not apply the judicial exception with a particular machine. See MPEP §2106.05(b).

Under Step 2B of the subject matter eligibility analysis, the additional elements do not provide an inventive concept. Referring to the additional elements identified in the Step 2A analysis above, the generic computer hardware does not provide significantly more than the recited abstract idea. See MPEP §2106.05(f). For these reasons, the claims do not provide a practical application of the abstract idea, nor do they amount to significantly more than an abstract idea under Step 2B. Using a generic computer to implement an abstract idea does not provide an inventive concept. Therefore, the claims recite ineligible subject matter under 35 U.S.C. §101.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1, 5, 6, 14, 18, and 22 is/are rejected under 35 U.S.C. 103 as being unpatentable over US 20210073920 A1 to Wang et al. (hereinafter 'WANG '920') in view of US 20220414137 A1 to Sewak et al. (hereinafter 'SEWAK').

Claim 1

WANG '920 discloses a method of automatically labelling issues from an internal audit (see abstract and ¶[0039]; real-time expense auditing to compute audit risk scores as a function of expense descriptions; the output includes a label), the method comprising: receiving an issue description comprising a text description of an internal audit issue (see again abstract; compute audit risk scores as a function of expense descriptions); combining the text description with a plurality of hypotheses texts to generate a plurality of description:hypothesis pairs (see ¶[0040], [0091], and [0102]; labeled training data includes input/output pairs in which each input is labeled with a desired output; an example training dataset may include one or more labels; receive a query including a description of a new hypothetical or actual expense), each of the plurality of hypotheses texts associated with a sub-risk description for a sub-risk in a risk taxonomy (see ¶[0093]; the system generates a set of feature vectors for labeled examples; example features include categorical information about what type of expense was incurred, and rules about the types of expenses that are permissible and conditions where expenses are reimbursable).

WANG '920 does not specifically disclose, but SEWAK discloses, applying each of the description:hypothesis pairs to a zero-shot classification model to determine a label score for the sub-risk associated with the hypothesis (see abstract and ¶[0002]-[0004], [0039], & [0050]; a zero-shot generative mode is generally a mode of a generative NLP model capable of generating text without fine-tuning with a specific type of data; a generative NLP model generally receives an input text string and produces a generative result that is text, generated at the prompting of the input text string; estimate a likelihood that the label applies to candidate text; score the relevance of documents).

WANG '920 further discloses determining relevance of each sub-risk in the risk taxonomy to the issue description (see ¶[0026]-[0027]; the expense auditing system may automatically learn what patterns are predictive of the likelihood that a pattern of activities will trigger an audit and/or contravene expense policies); and outputting a plurality of relevant sub-risks associated with the issue description (see ¶[0027] and [0039]-[0049]; compute an output that may be a label, classification, or categorization; predict whether an expense is an audit risk and formulate a natural language response).

WANG '920 discloses a real-time expense auditing and machine learning system that labels audit risks based on expense descriptions. SEWAK discloses automatic labeling of text data using a zero-shot generative model. It would have been obvious for one of ordinary skill in the art at the time of invention to include the zero-shot generative model as taught by SEWAK in the system executing the method of WANG '920 with the motivation to label audit risk based on expense descriptions.

Claim 5

The combination of WANG '920 and SEWAK discloses the method as set forth in claim 1. WANG '920 does not specifically disclose, but SEWAK discloses, wherein determining the relevance of each sub-risk in the risk taxonomy to the issue description comprises: filtering each of the label scores to identify a top n labels for the issue description, where n is a whole number greater than 1 (see ¶[0175]; rank positive example results based on cosine similarity; see also ¶[0067]; probability of a label class is above an acceptable threshold level). WANG '920 discloses a real-time expense auditing and machine learning system that labels audit risks based on expense descriptions. SEWAK discloses automatic labeling of text data using a zero-shot generative model, where a ranked list of candidate results is created based on a threshold probability of accuracy.
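The filtering recited in claims 5 and 6 is an aggregate-then-select step: scores for multiple hypotheses that map to the same sub-risk are combined, and only the top n aggregated labels survive. A hypothetical sketch (mean aggregation is one assumed choice; the claims do not fix the aggregation function):

```python
from collections import defaultdict

def filter_top_n(label_scores: list[tuple[str, float]], n: int = 3) -> list[str]:
    """Aggregate label scores per sub-risk (claim 6), then keep the
    top-n sub-risks by aggregated score (claim 5)."""
    assert n > 1, "claim 5 recites n as a whole number greater than 1"
    totals: dict[str, float] = defaultdict(float)
    counts: dict[str, int] = defaultdict(int)
    for sub_risk, score in label_scores:   # several hypotheses may share a sub-risk
        totals[sub_risk] += score
        counts[sub_risk] += 1
    # Mean per sub-risk; max or sum would also fit the claim language.
    means = {k: totals[k] / counts[k] for k in totals}
    return sorted(means, key=means.get, reverse=True)[:n]

scores = [("fraud", 0.9), ("fraud", 0.7), ("retention", 0.6), ("access", 0.3)]
print(filter_top_n(scores, n=2))  # ['fraud', 'retention']
```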
It would have been obvious for one of ordinary skill in the art at the time of invention to include the ranking as taught by SEWAK in the system executing the method of WANG '920 with the motivation to label audit risk based on expense descriptions.

Claim 6

The combination of WANG '920 and SEWAK discloses the method as set forth in claim 5. WANG '920 does not specifically disclose, but SEWAK discloses, wherein the filtering comprises: aggregating a plurality of label scores for hypotheses associated with the same sub-risk; and filtering on the aggregated label scores (see ¶[0067]; probability of a label class is above an acceptable threshold level). WANG '920 discloses a real-time expense auditing and machine learning system that labels audit risks based on expense descriptions. SEWAK discloses automatic labeling of text data using a zero-shot generative model, where a ranked list of candidate results is created based on a threshold probability of accuracy. It would have been obvious for one of ordinary skill in the art at the time of invention to include the ranking as taught by SEWAK in the system executing the method of WANG '920 with the motivation to label audit risk based on expense descriptions.

Claim 14

WANG '920 discloses a non-transitory computer readable medium storing instructions, which when executed by a processor of a computing device configure the computing device to perform a method (see ¶[0119]-[0120]; a storage medium with a program) comprising: receiving an issue description comprising a text description of an internal audit issue (see again abstract; compute audit risk scores as a function of expense descriptions); combining the text description with a plurality of hypotheses texts to generate a plurality of description:hypothesis pairs (see ¶[0040], [0091], and [0102]; labeled training data includes input/output pairs in which each input is labeled with a desired output; an example training dataset may include one or more labels; receive a query including a description of a new hypothetical or actual expense), each of the plurality of hypotheses texts associated with a sub-risk description for a sub-risk in a risk taxonomy (see ¶[0093]; the system generates a set of feature vectors for labeled examples; example features include categorical information about what type of expense was incurred, and rules about the types of expenses that are permissible and conditions where expenses are reimbursable).

WANG '920 does not specifically disclose, but SEWAK discloses, applying each of the description:hypothesis pairs to a zero-shot classification model to determine a label score for the sub-risk associated with the hypothesis (see abstract and ¶[0002]-[0004], [0039], & [0050]; a zero-shot generative mode is generally a mode of a generative NLP model capable of generating text without fine-tuning with a specific type of data; a generative NLP model generally receives an input text string and produces a generative result that is text, generated at the prompting of the input text string; estimate a likelihood that the label applies to candidate text; score the relevance of documents).

WANG '920 further discloses determining relevance of each sub-risk in the risk taxonomy to the issue description (see ¶[0026]-[0027]; the expense auditing system may automatically learn what patterns are predictive of the likelihood that a pattern of activities will trigger an audit and/or contravene expense policies); and outputting a plurality of relevant sub-risks associated with the issue description (see ¶[0027] and [0039]-[0049]; compute an output that may be a label, classification, or categorization; predict whether an expense is an audit risk and formulate a natural language response).

WANG '920 discloses a real-time expense auditing and machine learning system that labels audit risks based on expense descriptions. SEWAK discloses automatic labeling of text data using a zero-shot generative model. It would have been obvious for one of ordinary skill in the art at the time of invention to include the zero-shot generative model as taught by SEWAK in the system executing the method of WANG '920 with the motivation to label audit risk based on expense descriptions.

Claim 18

The combination of WANG '920 and SEWAK discloses the computer readable medium as set forth in claim 14. WANG '920 does not specifically disclose, but SEWAK discloses, wherein determining the relevance of each sub-risk in the risk taxonomy to the issue description comprises: filtering each of the label scores to identify a top n labels for the issue description, where n is a whole number greater than 1 (see ¶[0175]; rank positive example results based on cosine similarity; see also ¶[0067]; probability of a label class is above an acceptable threshold level). WANG '920 discloses a real-time expense auditing and machine learning system that labels audit risks based on expense descriptions. SEWAK discloses automatic labeling of text data using a zero-shot generative model, where a ranked list of candidate results is created based on a threshold probability of accuracy. It would have been obvious for one of ordinary skill in the art at the time of invention to include the ranking as taught by SEWAK in the system executing the method of WANG '920 with the motivation to label audit risk based on expense descriptions.

Claim 22

WANG '920 discloses a computing system comprising: a processor for executing instructions; and a memory storing instructions (see ¶[0029] and Fig. 1; a computer network), which when executed by the processor configure the computing system to perform a method according to claim 1 (see claim 1 rejection).

Claim(s) 2, 3, 15, and 16 is/are rejected under 35 U.S.C. 103 as being unpatentable over US 20210073920 A1 to WANG '920 in view of US 20220414137 A1 to SEWAK as applied to claim 1 above, and further in view of US 20230196105 A1 to Wang et al. (hereinafter 'WANG '105').

Claim 2

The combination of WANG '920 and SEWAK discloses the method as set forth in claim 1. The combination of WANG '920 and SEWAK does not specifically disclose, but WANG '105 discloses, wherein determining the relevance of each sub-risk in the risk taxonomy to the issue description comprises: applying a generative large-language model (LLM) to the issue description and the hypothesis texts to determine if the issue description is relevant to the hypothesis text (see ¶[0019] and [0034]; text classification into categories given a premise sequence and a hypothesis sequence based on similarity; a language model neural network may include large language models). WANG '920 discloses a real-time expense auditing and machine learning system that labels audit risks based on descriptions. WANG '105 discloses classification tasks using hypothesis sequences that are used to compute a similarity between input sequences to classify the input. It would have been obvious for one of ordinary skill in the art to use the large language model using hypothesis sequences as taught by WANG '105 in the system executing the method of WANG '920 with the motivation to label audit risks based on descriptions.

Claim 3

The combination of WANG '920, SEWAK, and WANG '105 discloses the method as set forth in claim 2. WANG '920 does not specifically disclose, but WANG '105 discloses, wherein only issue descriptions with a label score above a threshold are applied to the generative LLM (see ¶[0079]-[0080]; if both (i) the highest probability in the probability distribution exceeds the threshold probability for the training step and (ii) the highest probability is for a category that is different than the target category identified in the sampled auto-labeled training example, the system refrains from training the task neural network on the sampled auto-labeled training example at the training step (step 506)).
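Claims 2 and 3, as characterized above, layer a generative LLM relevance check on top of the zero-shot scores, invoking the LLM only when a label score clears a threshold. A sketch with a stubbed LLM call (all names hypothetical; the stub stands in for a real generative model):

```python
from typing import Callable

def llm_relevance_pass(
    issue_text: str,
    scored_hypotheses: list[tuple[str, float]],   # (hypothesis text, zero-shot label score)
    ask_llm: Callable[[str, str], bool],          # generative LLM: is the issue relevant?
    threshold: float = 0.5,
) -> list[str]:
    """Apply the generative LLM only to hypotheses whose zero-shot label
    score exceeds the threshold (claim 3), keeping those the LLM confirms
    as relevant (claim 2)."""
    relevant = []
    for hypothesis, score in scored_hypotheses:
        if score <= threshold:
            continue                   # below threshold: skip the expensive LLM call
        if ask_llm(issue_text, hypothesis):
            relevant.append(hypothesis)
    return relevant

# Stub LLM: "relevant" iff the hypothesis shares any non-trivial word with the issue.
def stub_llm(issue: str, hypothesis: str) -> bool:
    common = set(issue.lower().split()) & set(hypothesis.lower().split())
    return bool(common - {"the", "a", "of", "is"})

hyps = [("unauthorized access occurred", 0.9), ("vendor overbilling occurred", 0.2)]
print(llm_relevance_pass("Review found unauthorized access.", hyps, stub_llm))
```

The threshold gate is the point of the design: the cheap zero-shot scores prune the candidate set so the costlier generative check runs on few pairs.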
WANG '920 discloses a real-time expense auditing and machine learning system that labels audit risks based on descriptions. WANG '105 discloses classification tasks using hypothesis sequences that are used to compute a similarity between input sequences to classify the input, where only categories with a threshold probability for classification are used as training examples. It would have been obvious for one of ordinary skill in the art to use the thresholds as taught by WANG '105 in the system executing the method of WANG '920 with the motivation to label audit risks based on descriptions using a large language model.

Claim 15

The combination of WANG '920 and SEWAK discloses the computer readable medium as set forth in claim 14. The combination of WANG '920 and SEWAK does not specifically disclose, but WANG '105 discloses, wherein determining the relevance of each sub-risk in the risk taxonomy to the issue description comprises: applying a generative large-language model (LLM) to the issue description and the hypothesis texts to determine if the issue description is relevant to the hypothesis text (see ¶[0019] and [0034]; text classification into categories given a premise sequence and a hypothesis sequence based on similarity; a language model neural network may include large language models). WANG '920 discloses a real-time expense auditing and machine learning system that labels audit risks based on descriptions. WANG '105 discloses classification tasks using hypothesis sequences that are used to compute a similarity between input sequences to classify the input. It would have been obvious for one of ordinary skill in the art to use the large language model using hypothesis sequences as taught by WANG '105 in the system executing the method of WANG '920 with the motivation to label audit risks based on descriptions.

Claim 16

The combination of WANG '920, SEWAK, and WANG '105 discloses the computer readable medium as set forth in claim 15. WANG '920 does not specifically disclose, but WANG '105 discloses, wherein only issue descriptions with a label score above a threshold are applied to the generative LLM (see ¶[0079]-[0080]; if both (i) the highest probability in the probability distribution exceeds the threshold probability for the training step and (ii) the highest probability is for a category that is different than the target category identified in the sampled auto-labeled training example, the system refrains from training the task neural network on the sampled auto-labeled training example at the training step (step 506)). WANG '920 discloses a real-time expense auditing and machine learning system that labels audit risks based on descriptions. WANG '105 discloses classification tasks using hypothesis sequences that are used to compute a similarity between input sequences to classify the input, where only categories with a threshold probability for classification are used as training examples. It would have been obvious for one of ordinary skill in the art to use the thresholds as taught by WANG '105 in the system executing the method of WANG '920 with the motivation to label audit risks based on descriptions using a large language model.

Claim(s) 7 and 19 is/are rejected under 35 U.S.C. 103 as being unpatentable over US 20210073920 A1 to WANG '920 in view of US 20220414137 A1 to SEWAK as applied to claims 1 and 6 above, and further in view of US 20230113956 A1 to Lal et al. (hereinafter 'LAL').

Claim 7

The combination of WANG '920 and SEWAK discloses the method as set forth in claim 6. The combination of WANG '920 and SEWAK does not specifically disclose, but LAL discloses, wherein the filtering further comprises: for all hypotheses associated with sub-risks grouped by a common risk, filtering to a top m sub-risks for the risk grouping, where m is a whole number less than n (see ¶[0149]; identify one or more relevant tags 1460 by filtering a listing of available tags 1460 by category 1450 and/or subcategory). WANG '920 discloses a real-time expense auditing and machine learning system that labels audit risks based on expense descriptions. SEWAK discloses automatic labeling of text data using a zero-shot generative model, where a ranked list of candidate results is created based on a threshold probability of accuracy. LAL discloses tags for sub-categories of categories. It would have been obvious to include the tags for sub-categories as taught by LAL in the system executing the method of WANG '920 with the motivation to label audit risks.

Claim 19

The combination of WANG '920 and SEWAK discloses the computer readable medium as set forth in claim 18. WANG '920 does not specifically disclose, but SEWAK discloses, wherein the filtering comprises: aggregating a plurality of label scores for hypotheses associated with the same sub-risk; and filtering on the aggregated label scores (see ¶[0067]; probability of a label class is above an acceptable threshold level). WANG '920 discloses a real-time expense auditing and machine learning system that labels audit risks based on expense descriptions. SEWAK discloses automatic labeling of text data using a zero-shot generative model, where a ranked list of candidate results is created based on a threshold probability of accuracy. It would have been obvious for one of ordinary skill in the art at the time of invention to include the ranking as taught by SEWAK in the system executing the method of WANG '920 with the motivation to label audit risk based on expense descriptions. The combination of WANG '920 and SEWAK does not specifically disclose, but LAL discloses, and for all hypotheses associated with sub-risks grouped by a common risk, filtering to a top m sub-risks for the risk grouping, where m is a whole number less than n (see ¶[0149]; identify one or more relevant tags 1460 by filtering a listing of available tags 1460 by category 1450 and/or subcategory). LAL discloses tags for sub-categories of categories. It would have been obvious to include the tags for sub-categories as taught by LAL in the system executing the method of WANG '920 with the motivation to label audit risks.

Claim(s) 8 is/are rejected under 35 U.S.C. 103 as being unpatentable over US 20210073920 A1 to WANG '920 in view of US 20220414137 A1 to SEWAK as applied to claim 1 above, and further in view of US 20200065387 A1 to Matthews et al. (hereinafter 'MATTHEWS').

Claim 8

The combination of WANG '920 and SEWAK discloses the method as set forth in claim 1. The combination of WANG '920 and SEWAK does not specifically disclose, but MATTHEWS discloses, further comprising cleaning the issue description to normalize the issue description (see ¶[0058]-[0059]; the control platform 100 can clean and process the input data: normalize data, remove spaces, make text the same case, and so on; this cleaning and processing can refer to a bag-of-words model, for example; natural language processing to process large natural language corpora). WANG '920 discloses a real-time expense auditing and machine learning system that labels audit risks based on expense descriptions. MATTHEWS discloses report processing through natural language processing techniques that includes cleaning and processing input data and normalizing the data. It would have been obvious to clean and normalize text input as taught by MATTHEWS in the system executing the method of WANG '920 with the motivation to label audit risks based on expense descriptions.

Claim(s) 9 and 10 is/are rejected under 35 U.S.C. 103 as being unpatentable over US 20210073920 A1 to WANG '920 in view of US 20220414137 A1 to SEWAK as applied to claim 1 above, and further in view of US 20150243285 A1 to Lane et al. (hereinafter 'LANE').

Claim 9

The combination of WANG '920 and SEWAK discloses the method as set forth in claim 1. The combination of WANG '920 and SEWAK does not specifically disclose, but LANE discloses, wherein each of one or more of the sub-risks in the risk taxonomy are associated with a plurality of hypotheses (see ¶[0050]; assign an n-best hypotheses list to each arc and state). WANG '920 discloses a real-time expense auditing and machine learning system that labels audit risks based on descriptions. LANE discloses speech recognition that may include a large language model with multiple hypotheses for an arc and state. It would have been obvious to include multiple hypotheses as taught by LANE in the system executing the method of WANG '920 with the motivation to apply and efficiently prune a large language model (see LANE ¶[0009]).

Claim 10

The combination of WANG '920, SEWAK, and LANE discloses the method as set forth in claim 9. WANG '920 does not specifically disclose, but LANE discloses, wherein the plurality of hypotheses are based on different portions of the sub-risk description in the risk taxonomy (see ¶[0009]; large language models contain millions of unique entries and billions of n-gram contexts). WANG '920 discloses a real-time expense auditing and machine learning system that labels audit risks based on descriptions. LANE discloses speech recognition that may include a large language model with multiple hypotheses for an arc and state derived from a large language model with millions of entries. It would have been obvious to include multiple hypotheses derived from millions of entries as taught by LANE in the system executing the method of WANG '920 with the motivation to apply and efficiently prune a large language model (see LANE ¶[0009]).

Claim(s) 12 is/are rejected under 35 U.S.C. 103 as being unpatentable over US 20210073920 A1 to WANG '920 in view of US 20220414137 A1 to SEWAK as applied to claim 1 above, and further in view of US 20220101115 A1 to Zhou et al. (hereinafter 'ZHOU').

Claim 12

The combination of WANG '920 and SEWAK discloses the method as set forth in claim 1. The combination of WANG '920 and SEWAK does not specifically disclose, but ZHOU discloses, further comprising: receiving a hypothesis; determining relevant portions of the issue description to the selected hypothesis; and highlighting the relevant portions of the issue description in a user interface display (see ¶[0024]; keywords relevant to an issue in a labeled sentence can be highlighted). WANG '920 discloses a real-time expense auditing and machine learning system that labels audit risks based on descriptions. ZHOU discloses highlighting relevant keywords in descriptive natural language text that are relevant to a label.
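Claim 12's user-interface step (highlighting portions of the issue description relevant to a selected hypothesis) can be sketched as a simple keyword-marking pass, with shared terms standing in for whatever relevance model the application actually uses (the marker syntax and matching rule are illustrative assumptions):

```python
def highlight_relevant(issue_text: str, hypothesis: str, marker: str = "**") -> str:
    """Wrap words of the issue description that also appear in the
    hypothesis, mimicking the highlighted-keyword display of claim 12."""
    hyp_words = {w.strip(".,").lower() for w in hypothesis.split()}
    out = []
    for word in issue_text.split():
        if word.strip(".,").lower() in hyp_words:
            out.append(f"{marker}{word}{marker}")
        else:
            out.append(word)
    return " ".join(out)

print(highlight_relevant("Duplicate payments were approved.",
                         "payments approved without review"))
# Duplicate **payments** were **approved.**
```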
It would have been obvious to highlight words in a description that are relevant to a label as taught by ZHOU in the system executing the method of WANG with the motivation to provide information regarding relevant keywords when labeling text. Claim(s) 4 and 17 is/are rejected under 35 U.S.C. 103 as being unpatentable over US 20210073920 A1 to WANG ‘920 et al. in view of US 20220414137 A1 to SEWAK et al. and US 20230196105 A1 to WANG ‘105 et al. as applied to claims 1 and 2 above, and further in view of US 20220138572 A1 to Song et al. (hereinafter ‘SONG’). Claim 4 The combination of WANG ‘920, SEWAK, and WANG ‘105 discloses the method as set forth in claim 3. The combination of WANG ‘920, SEWAK, and WANG ‘105 does not specifically disclose, but SONG discloses, wherein the hypothesis text applied to the generative LLM is a simplified version of the hypothesis text applied to the zero-shot classification model (see ¶[0038], [0065]-[0068], [0085], and [0098]; predict the values of unseen tokens over a large scale and general domain language corpa. Learn nuances ahead of time rather than performing them on downstream tasks. Concatenate a list of tokenized sentences up to a 512 token limit in pre-training. Some labels only occur in the test set. When pre-training, sentences with less than five words were removed). WANG ‘920 discloses a real-time expense auditing and machine learning system that labels audit risks based on descriptions. SONG discloses zero-shot pre-training that includes learning nuances ahead of time and limiting token phrases in pre-training. It would have been obvious to include the pre-training as taught by SONG in the system executing the method of WANG ‘920 with the motivation to conduct efficient pretraining of a language model (see SONG ¶[0096]). Claim 17 The combination of WANG ‘920, SEWAK, and WANG ‘105 discloses the computer readable medium as set forth in claim 16. 
The combination of WANG ‘920, SEWAK, and WANG ‘105 does not specifically disclose, but SONG discloses, wherein the hypothesis text applied to the generative LLM is a simplified version of the hypothesis text applied to the zero-shot classification model (see ¶[0038], [0065]-[0068], [0085], and [0098]; predict the values of unseen tokens over a large scale and general domain language corpa. Learn nuances ahead of time rather than performing them on downstream tasks. Concatenate a list of tokenized sentences up to a 512 token limit in pre-training. Some labels only occur in the test set. When pre-training, sentences with less than five words were removed). WANG ‘920 discloses a real-time expense auditing and machine learning system that labels audit risks based on descriptions. SONG discloses zero-shot pre-training that includes learning nuances ahead of time and limiting token phrases in pre-training. It would have been obvious to include the pre-training as taught by SONG in the system executing the method of WANG ‘920 with the motivation to conduct efficient pretraining of a language model (see SONG ¶[0096]). Claim(s) 11 and 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over 35 U.S.C. 103 as being unpatentable over US 20210073920 A1 to WANG ‘920 et al. in view of US 20220414137 A1 to SEWAK et al. as applied to claim 1 above, and further in view of US 20150243285 A1 to Lane et al. (hereinafter ‘LANE’) as applied to claims 1 and 9 above, and further in view of US 20220366135 A1 to Patel et al. (hereinafter ‘PATEL’). Claim 11 The combination of WANG ‘920, SEWAK, and LANE discloses the method as set forth in claim 9. 
The combination of WANG ‘920, SEWAK, and LANE does not explicitly disclose, but PATEL discloses, wherein the plurality of hypotheses are based on different phrasings of the same portion of the same sub-risk description in the risk taxonomy (see ¶[0015]; certain embodiments may require zero training data given a pre-trained model for accessing syntax dependency trees, part-of-speech tags, and tokens, and can easily extend to languages other than English since the heuristics built on recognizing patterns in the syntax dependency tree operate on a universal dependencies framework which is defined in over 100 languages).

WANG ‘920 discloses a real-time expense auditing and machine learning system that labels audit risks based on descriptions. PATEL discloses a pre-trained model that extends into languages other than English. It would have been obvious to include multiple languages as taught by PATEL in the system executing the method of WANG ‘920 with the motivation to provide a pre-trained model for labeling audit risks.

Claim 20

The combination of WANG ‘920 and SEWAK discloses the computer readable medium as set forth in claim 14. The combination of WANG ‘920 and SEWAK does not specifically disclose, but LANE discloses, wherein each of one or more of the sub-risks in the risk taxonomy are associated with a plurality of hypotheses (see ¶[0050]; assign an n-best hypotheses list to each arc and state), wherein the plurality of hypotheses are based on one or more of: different portions of the sub-risk description in the risk taxonomy (see ¶[0009]; large language models contain millions of unique entries and billions of n-gram contexts).
The combination of WANG ‘920, SEWAK, and LANE does not explicitly disclose, but PATEL discloses, and different phrasings of the same portion of the same sub-risk description in the risk taxonomy (see ¶[0015]; certain embodiments may require zero training data given a pre-trained model for accessing syntax dependency trees, part-of-speech tags, and tokens, and can easily extend to languages other than English since the heuristics built on recognizing patterns in the syntax dependency tree operate on a universal dependencies framework which is defined in over 100 languages).

WANG ‘920 discloses a real-time expense auditing and machine learning system that labels audit risks based on descriptions. PATEL discloses a pre-trained model that extends into languages other than English. It would have been obvious to include multiple languages as taught by PATEL in the system executing the method of WANG ‘920 with the motivation to provide a pre-trained model for labeling audit risks.

WANG ‘920 discloses a real-time expense auditing and machine learning system that labels audit risks based on descriptions. LANE discloses speech recognition that may include a large language model with multiple hypotheses for an arc and state. It would have been obvious to include multiple hypotheses as taught by LANE in the system executing the method of WANG ‘920 with the motivation to apply and efficiently prune a large language model (see LANE ¶[0009]).

Claim(s) 13 and 21 is/are rejected under 35 U.S.C. 103 as being unpatentable over US 20210073920 A1 to WANG ‘920 et al.
in view of US 20220414137 A1 to SEWAK et al. and US 20220101115 A1 to ZHOU et al. as applied to claims 1 and 12 above, and further in view of US 20230368773 A1 to Mishra (hereinafter ‘MISHRA’).

Claim 13

The combination of WANG ‘920, SEWAK, and ZHOU discloses the method as set forth in claim 12. The combination of WANG ‘920, SEWAK, and ZHOU does not specifically disclose, but MISHRA discloses, wherein determining the relevant portions of the issue description comprises: generating a plurality of text groupings based on pairings of sentences in the issue description (see ¶[0091]-[0092]; parse alphanumeric characters into tokens. Assign semantic meaning to tokens with relevancy above a threshold); applying each of the text groupings, combined with the hypothesis, to the zero-shot classifier to provide a text group scoring for the hypothesis (see ¶[0091]-[0092]; parse alphanumeric characters into tokens. Assign semantic meaning to tokens with relevancy above a threshold).

MISHRA does not specifically disclose, but ZHOU discloses, selecting the text grouping with the highest text group scoring for highlighting (see ¶[0024]; keywords relevant to an issue in a labeled sentence can be highlighted. See also ¶[0046]; rank results in order of confidence level, from the highest to a threshold, for determinations in downstream processing).

WANG ‘920 discloses a real-time expense auditing and machine learning system that labels audit risks based on descriptions. MISHRA discloses a virtual agent using a large language model that assigns a semantic meaning to a token when a relevancy is above a threshold. It would have been obvious for one of ordinary skill in the art at the time of invention to apply a threshold to tokenized alphanumeric characters as taught by MISHRA in the system executing the method of WANG ‘920 with the motivation to interpret language using a large language model.
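As a technical aside on the claim 13 limitations recited above (sentence-pair text groupings scored against a hypothesis by a zero-shot classifier, with the top-scoring grouping selected for highlighting), the flow can be sketched as follows. This is a minimal illustrative sketch, not the applicant's implementation: a toy word-overlap scorer stands in for a real zero-shot classification model, and all strings and names are hypothetical.

```python
# Illustrative sketch of the claim 13 flow: pair consecutive sentences of an
# issue description into text groupings, score each grouping against a
# hypothesis, and pick the highest-scoring grouping for highlighting.
# The word-overlap score() below is a stand-in for a real zero-shot
# classifier (e.g., an NLI entailment model); it is not the claimed model.

def score(grouping: str, hypothesis: str) -> float:
    """Stand-in for a zero-shot classifier's entailment score."""
    g = {w.strip(".,;:") for w in grouping.lower().split()}
    h = {w.strip(".,;:") for w in hypothesis.lower().split()}
    return len(g & h) / max(len(h), 1)

def best_grouping(sentences: list[str], hypothesis: str) -> str:
    """Generate sentence-pair groupings and select the best-scoring one."""
    # Text groupings from pairings of consecutive sentences; fall back to
    # the bare sentences if there is only one.
    groupings = [
        f"{sentences[i]} {sentences[i + 1]}"
        for i in range(len(sentences) - 1)
    ] or sentences
    # Select the grouping with the highest text group scoring for highlighting.
    return max(groupings, key=lambda g: score(g, hypothesis))

sentences = [
    "Vendor invoices were approved without review.",
    "Several expense reports lacked receipts.",
    "Access logs were not retained.",
]
top = best_grouping(sentences, "receipts missing from expense reports and access logs")
# top pairs the second and third sentences, which best match the hypothesis.
```

In a real system the scorer would be the application's zero-shot classification model, and the selected grouping would be highlighted in the user interface.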
WANG ‘920 discloses a real-time expense auditing and machine learning system that labels audit risks based on descriptions. ZHOU discloses highlighting relevant keywords in descriptive natural language text that are relevant to a label. It would have been obvious to highlight words in a description that are relevant to a label as taught by ZHOU in the system executing the method of WANG ‘920 with the motivation to provide information regarding relevant keywords when labeling text.

Claim 21

The combination of WANG ‘920 and SEWAK discloses the computer readable medium as set forth in claim 14. The combination of WANG ‘920 and SEWAK does not specifically disclose, but ZHOU discloses, further comprising: receiving a hypothesis; determining relevant portions of the issue description to the selected hypothesis (see ¶[0024]; keywords relevant to an issue in a labeled sentence can be highlighted).

The combination of WANG ‘920, SEWAK, and ZHOU does not specifically disclose, but MISHRA discloses, highlighting the relevant portions of the issue description in a user interface display, wherein determining the relevant portions of the issue description comprises: generating a plurality of text groupings based on pairings of sentences in the issue description (see ¶[0091]-[0092]; parse alphanumeric characters into tokens. Assign semantic meaning to tokens with relevancy above a threshold); applying each of the text groupings, combined with the hypothesis, to the zero-shot classifier to provide a text group scoring for the hypothesis (see ¶[0091]-[0092]; parse alphanumeric characters into tokens. Assign semantic meaning to tokens with relevancy above a threshold).

MISHRA does not specifically disclose, but ZHOU discloses, selecting the text grouping with the highest text group scoring for highlighting (see ¶[0024]; keywords relevant to an issue in a labeled sentence can be highlighted.
See also ¶[0046]; rank results in order of confidence level, from the highest to a threshold, for determinations in downstream processing).

WANG ‘920 discloses a real-time expense auditing and machine learning system that labels audit risks based on descriptions. MISHRA discloses a virtual agent using a large language model that assigns a semantic meaning to a token when a relevancy is above a threshold. It would have been obvious for one of ordinary skill in the art at the time of invention to apply a threshold to tokenized alphanumeric characters as taught by MISHRA in the system executing the method of WANG ‘920 with the motivation to interpret language using a large language model.

WANG ‘920 discloses a real-time expense auditing and machine learning system that labels audit risks based on descriptions. ZHOU discloses highlighting relevant keywords in descriptive natural language text that are relevant to a label. It would have been obvious to highlight words in a description that are relevant to a label as taught by ZHOU in the system executing the method of WANG ‘920 with the motivation to provide information regarding relevant keywords when labeling text.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to RICHARD N SCHEUNEMANN, whose telephone number is (571) 270-7947. The examiner can normally be reached M-F 9am-5pm EST.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Patricia Munson, can be reached at 571-270-5396. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).

If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/RICHARD N SCHEUNEMANN/
Primary Examiner, Art Unit 3624
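For readers mapping the rejection back to the claims, the claim 1 step quoted in this Office Action's introduction (combining the text description with a plurality of hypothesis texts to generate description:hypothesis pairs) follows the premise/hypothesis input convention of NLI-based zero-shot classifiers. A minimal sketch, with all names and strings hypothetical:

```python
# Illustrative sketch of the claimed pairing step: combine one issue
# description (the premise) with each hypothesis text drawn from a risk
# taxonomy, producing description:hypothesis pairs that an NLI-style
# zero-shot classifier would score for entailment. Not the applicant's
# implementation; names and example strings are invented for illustration.

def make_pairs(description: str, hypotheses: list[str]) -> list[tuple[str, str]]:
    """Return one (premise, hypothesis) pair per candidate risk label."""
    return [(description, h) for h in hypotheses]

hypotheses = [
    "This issue is about expense fraud.",
    "This issue is about data retention.",
]
pairs = make_pairs("Receipts were missing from several reports.", hypotheses)
# Each pair would then be scored by the zero-shot classifier to label the issue.
```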

Prosecution Timeline

Nov 26, 2024
Application Filed
Feb 20, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12579549
PLATFORM FOR FACILITATING AN AUTOMATED IT AUDIT
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12535999
A METHOD FOR EXECUTION OF A MACHINE LEARNING MODEL ON MEMORY RESTRICTED INDUSTRIAL DEVICE
Granted Jan 27, 2026 (2y 5m to grant)
Patent 12033094
AUTOMATIC GENERATION OF TASKS AND RETRAINING MACHINE LEARNING MODULES TO GENERATE TASKS BASED ON FEEDBACK FOR THE GENERATED TASKS
Granted Jul 09, 2024 (2y 5m to grant)
Patent 12026624
System and Method For Loss Function Metalearning For Faster, More Accurate Training, and Smaller Datasets
Granted Jul 02, 2024 (2y 5m to grant)
Patent 11836746
AUTO-ENCODER ENHANCED SELF-DIAGNOSTIC COMPONENTS FOR MODEL MONITORING
Granted Dec 05, 2023 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 6%
With Interview: 15% (+8.4%)
Median Time to Grant: 4y 7m
PTA Risk: Low
Based on 551 resolved cases by this examiner. Grant probability derived from career allow rate.
