Prosecution Insights
Last updated: April 19, 2026
Application No. 17/309,419

Predictive System for Request Approval

Final Rejection (§103)

Filed: May 26, 2021
Examiner: SPOONER, LAMONT M
Art Unit: 2657
Tech Center: 2600 — Communications
Assignee: Solventum Intellectual Properties Company
OA Round: 6 (Final)

Grant Probability: 74% (Favorable)
OA Rounds: 7-8
To Grant: 3y 4m
With Interview: 86%

Examiner Intelligence

Career Allow Rate: 74% (above average; 445 granted / 603 resolved; +11.8% vs TC avg)
Interview Lift: +11.8% (moderate; among resolved cases with interview)
Avg Prosecution: 3y 4m (typical timeline; 22 currently pending)
Total Applications: 625 (across all art units)
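The figures above are internally consistent. A quick cross-check, assuming the with-interview number is simply the career allowance rate plus the reported interview lift (which is how the dashboard appears to present it):

```python
# Cross-check of the examiner statistics shown above.
granted, resolved = 445, 603

career_allow_rate = granted / resolved   # fraction of resolved cases allowed
interview_lift = 0.118                   # reported +11.8% interview lift

# Assumed additive model: with-interview rate = career rate + lift.
rate_with_interview = career_allow_rate + interview_lift

print(f"career: {career_allow_rate:.1%}")           # rounds to the displayed 74%
print(f"with interview: {rate_with_interview:.1%}") # rounds to the displayed 86%
```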

Statute-Specific Performance

§101: 9.8% (-30.2% vs TC avg)
§103: 50.1% (+10.1% vs TC avg)
§102: 19.7% (-20.3% vs TC avg)
§112: 12.6% (-27.4% vs TC avg)

Tech Center averages are estimates • Based on career data from 603 resolved cases

Office Action

§103
DETAILED ACTION

Introduction

This office action is in response to applicant’s amendment filed 1/21/2026. Claims 1-7, 9-13, 16, 18-20 and 23-26 are currently pending and have been examined. Applicant’s IDS have been considered. There is no claim to foreign priority.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant’s arguments, see remarks, filed 10/8/25, with respect to the rejection(s) of claim(s) as previously rejected under 35 USC 103 have been fully considered and are persuasive, based on applicant’s amendments to the independent claims. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of at least the previously cited prior art and Pujun et al. (Pujun, Demystifying the workings of Lending Club).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains.
Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1-7, 9, 11-13, 16, 18-20, 23, 25 and 26 is/are rejected under 35 U.S.C. 103 as being unpatentable over Verstraete et al. (Verstraete, US 2021/0286833) in view of Priestas et al. (Priestas, US 2020/0364404) in view of Dey et al. (Dey, Gate-Variants of Gated Recurrent Unit (GRU) Neural Networks) and further in view of Gilmore et al. (Gilmore, US 2018/0254101), and further in view of Gharat et al. (Gharat, US 2020/0210868) and further in view of Pujun et al. (Pujun, Demystifying the workings of Lending Club).

As per claim 1, Verstraete teaches a computer implemented method comprising: receiving a text-based request from a first entity for approval by a second entity based at least in part on compliance with a set of rules (paragraphs [0011, 0044-0055]-his natural language text request from a policyholder as the first entity, for approval by the insurer, as the second entity, based on his policy, as the compliance rules); converting the text-based request to create a machine-compatible converted input having multiple features (paragraphs [0036-0039]-his word to vector, vector representation, based on corresponding text string and components/features); providing the converted input to a trained machine learning model that has been trained based at least in part on a training set of historical converted requests by the first entity (ibid, see also paragraph [0038]-his “machine learning model”, and historical records used for training, the converted input passed to the trained model), [wherein the trained machine learning model comprises a recurrent neural network, including gated recurrent units]; [receiving a plurality of approval predictions from the trained machine learning model, each prediction associated with a probability indicating a likelihood of approval by the second entity; clustering the plurality of approval predictions to identify groups of similar request outcomes, wherein
clustering comprises analyzing approval likelihoods associated with different feature subsets of the converted input; and [providing a visual representation of the clustered predictions, the visual representation including indicators of identifying key features that distinguish the groups, and wherein the indicators are automatically determined based on features contributing to cluster separation.]

Verstraete lacks explicitly teaching that which Priestas teaches, wherein the trained machine learning model comprises a recurrent neural network, [including gated recurrent units] (paragraph [0030]-his RNN, as the trained ML model for classification and text processing).

Thus, it would have been obvious to one of ordinary skill in the linguistics art, before the effective filing date of the invention, as all the claimed elements were known in the prior art and one skilled in the art could have combined the elements as claimed by known methods (computer implemented techniques and algorithms combining processes and steps in natural language processing), in view of the teachings of Verstraete and Priestas to combine the prior art element of predicting a label for data, as approved as taught by Verstraete with using an RNN to implement the classification of text as each element performs the same function as it does separately, as the combination would yield predictable results, KSR International Co. v. Teleflex Inc., 550 U.S. --, 82 USPQ2d 1385 (2007), wherein the predictable result would be improving the efficacy of the labeling process, using a special category of RNN to understand the context when labeling text (ibid Priestas).

The above combination lacks teaching that which Dey teaches wherein the trained machine learning model comprises a recurrent neural network, including gated recurrent units (page 1598, sections 2.1 and 2.2-his trained RNN, and corresponding GRU RNN, which is similar to the LSTM, but “with less external gating signal”).
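For context on the Dey reference, a gated recurrent unit maintains its state through an update gate, a reset gate, and a candidate state. A minimal single-unit step in pure Python (scalar weights chosen arbitrarily for illustration; this is a generic GRU sketch, not the cited paper's implementation):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_step(x, h_prev, w):
    """One GRU step on a scalar input and scalar hidden state.

    w holds the input and recurrent weights for the update gate z,
    reset gate r, and candidate state (biases omitted for brevity).
    """
    z = sigmoid(w["wz"] * x + w["uz"] * h_prev)        # update gate
    r = sigmoid(w["wr"] * x + w["ur"] * h_prev)        # reset gate
    h_cand = math.tanh(w["wh"] * x + w["uh"] * (r * h_prev))
    return (1 - z) * h_prev + z * h_cand               # interpolate old/new state

weights = {"wz": 0.5, "uz": 0.5, "wr": 0.5, "ur": 0.5, "wh": 1.0, "uh": 1.0}
h = 0.0
for token_value in [0.2, -0.1, 0.4]:   # e.g. embedded tokens of a request
    h = gru_step(token_value, h, weights)
print(h)
```

The fewer gates (versus the LSTM's input/forget/output gating) are what Dey characterizes as "less external gating signal."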
Thus, it would have been obvious to one of ordinary skill in the linguistics art, before the effective filing date of the invention, as all the claimed elements were known in the prior art and one skilled in the art could have combined the elements as claimed by known methods (computer implemented techniques and algorithms combining processes and steps in natural language processing), in view of the teachings of Verstraete and Priestas and Dey to combine the prior art element of predicting a label for data, as approved as taught by Verstraete with using a trained RNN to implement the classification of text as taught by Priestas with providing information to a trained GRU RNN model (instead of an LSTM RNN model) as taught by Dey, as each element performs the same function as it does separately, as the combination would yield predictable results, KSR International Co. v. Teleflex Inc., 550 U.S. --, 82 USPQ2d 1385 (2007), wherein the predictable result would be improving the efficacy of the labeling process, using a special category of RNN, such as a GRU RNN which is “comparable to, or even outperforms, the LSTM in most cases”, to understand the context when labeling text (ibid Priestas, see Dey, above cited section 2.2-his outperforms discussion).

The above combination lacks teaching that which Gilmore teaches, receiving a plurality of approval predictions from the trained machine learning model, each prediction associated with a probability indicating a likelihood of approval by the second entity (paragraphs [0479, 0480, 0477, 0596-0598, 0627, 0026]-his machine intelligence, and predictions, including “likelihood” of denial or acceptance, and his visual acceptance/denials, as displayed, see Fig. 30, paragraphs [0609, 0610, 0582-0598]); clustering the plurality of approval predictions to identify groups of similar request outcomes (ibid-his color-coded grouped approval and predictions, see Fig.
30, including his clustered predictions, identifying groups of similar outcomes, see cited sections, describing denials/acceptance); and providing a visual representation of the clustered predictions, the visual representation including indicators of identifying key features that distinguish the groups (ibid-the color-coded groupings as the indicators).

Thus, it would have been obvious to one of ordinary skill in the linguistics art, before the effective filing date of the invention, as all the claimed elements were known in the prior art and one skilled in the art could have combined the elements as claimed by known methods (computer implemented techniques and algorithms combining processes and steps in natural language processing), in view of the teachings of Verstraete and Priestas and Dey and Gilmore to combine the prior art element of predicting a label for data, as approved as taught by Verstraete with using a trained RNN to implement the classification of text as taught by Priestas with providing information to a trained GRU RNN model (instead of an LSTM RNN model) as taught by Dey with the visualization for approval/denial predictions as taught by Gilmore, as each element performs the same function as it does separately, as the combination would yield predictable results, KSR International Co. v. Teleflex Inc., 550 U.S. --, 82 USPQ2d 1385 (2007), wherein the predictable result would be improving the efficacy of the labeling process, using a special category of RNN, such as a GRU RNN which is “comparable to, or even outperforms, the LSTM in most cases”, to understand the context when labeling text, and visualizing the results, thus presenting the possibility of comparison of accepted claims with denied claims (ibid-Gilmore, paragraph [0596], ibid Priestas, see Dey, above cited section 2.2-his outperforms discussion).
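The claimed clustering of approval predictions into groups of similar outcomes can be illustrated with a plain one-dimensional k-means pass over predicted probabilities. This is a generic sketch with made-up numbers; the references describe the grouping only at a high level:

```python
import random

def kmeans_1d(values, k=2, iters=20, seed=0):
    """Cluster scalar approval probabilities into k groups (plain k-means)."""
    random.seed(seed)
    centers = random.sample(values, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda i: abs(v - centers[i]))
            groups[nearest].append(v)
        # Recompute each center as its group's mean (keep old center if empty).
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers, groups

# Predicted approval probabilities for a batch of requests (made-up data).
probs = [0.91, 0.88, 0.12, 0.95, 0.08, 0.15, 0.90]
centers, groups = kmeans_1d(probs, k=2)
```

On this data the loop separates a likely-approved cluster (high probabilities) from a likely-denied cluster (low probabilities); the cluster a request falls into is what a color-coded visualization would encode.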
The above combination lacks explicitly teaching that which Gharat teaches, receiving a plurality of approval predictions from the trained machine learning model, each prediction associated with a probability indicating a likelihood of approval by the second entity; clustering the plurality of approval predictions to identify groups of similar request outcomes, wherein clustering comprises analyzing approval likelihoods associated with different feature subsets of the converted input (paragraphs [0006, 0007, 0058-0061]-his likelihood of acceptance, as the approval likelihoods, each approval associated with specific features as a subset of features specific to each entity, and clustering, based on each feature set, with respect to each entity); and providing a visual representation of the clustered predictions, the visual representation including indicators of identifying key features that distinguish the groups, and wherein the indicators are automatically determined based on features contributing to cluster separation (ibid, see also paragraphs [0047-0049]-his visual indicators, corresponding bar, chart, wheel, indicating likelihood of acceptance, wherein the actual indicators are presented based on distinguishing features contributing to the separation, such as thumbs up, exclamation marks, etc. pertaining to input features, including capacity, distance, etc.).
Thus, it would have been obvious to one of ordinary skill in the linguistics art, before the effective filing date of the invention, as all the claimed elements were known in the prior art and one skilled in the art could have combined the elements as claimed by known methods (computer implemented techniques and algorithms combining processes and steps in natural language processing), in view of the teachings of Verstraete and Priestas and Dey and Gilmore and Gharat to combine the prior art element of predicting a label for data, as approved as taught by Verstraete with using a trained RNN to implement the classification of text as taught by Priestas with providing information to a trained GRU RNN model (instead of an LSTM RNN model) as taught by Dey with the visualization for approval/denial predictions as taught by Gilmore with clustering the approval predictions, and providing a visual representation of the indicators automatically determined based on features contributing to the separation as taught by Gharat, as each element performs the same function as it does separately, as the combination would yield predictable results, KSR International Co. v. Teleflex Inc., 550 U.S. --, 82 USPQ2d 1385 (2007), wherein the predictable result would be improving the efficacy of the labeling process, using a special category of RNN, such as a GRU RNN which is “comparable to, or even outperforms, the LSTM in most cases”, to understand the context when labeling text, and visualizing the results, thus presenting the possibility of comparison of accepted claims with denied claims, the clustering comprising approval likelihoods associated with different feature subsets of converted input, and automatic visualization comprising features contributing to cluster separation (ibid-Gilmore, paragraph [0596], ibid Priestas, see Dey, above cited section 2.2-his outperforms discussion, ibid-Gharat, see also abstract).
The above combination lacks explicitly teaching that which Pujun teaches, providing a visual representation of the clustered predictions, the visual representation including indicators of identifying key features that distinguish the groups, wherein the key features are identified via an unsupervised classifier from the multiple features (page 2, sections 2.1 and 3.3-his clustering task, using an unsupervised classifier, to classify the dataset into approved/denied, page 1 abstract, his visualizer presenting the classified key features, based on k-means clustering, and the clustering having distinguishable features).

Thus, it would have been obvious to one of ordinary skill in the linguistics art, before the effective filing date of the invention, as all the claimed elements were known in the prior art and one skilled in the art could have combined the elements as claimed by known methods (computer implemented techniques and algorithms combining processes and steps in natural language processing), in view of the teachings of Verstraete and Priestas and Dey and Gilmore and Gharat and Pujun to combine the prior art element of predicting a label for data, as approved as taught by Verstraete with using a trained RNN to implement the classification of text as taught by Priestas with providing information to a trained GRU RNN model (instead of an LSTM RNN model) as taught by Dey with the visualization for approval/denial predictions as taught by Gilmore with clustering the approval predictions, and providing a visual representation of the indicators automatically determined based on features contributing to the separation as taught by Gharat, with k-means clustering for generating a visualized representation of approval/denial features of a claim as taught by Pujun, as each element performs the same function as it does separately, as the combination would yield predictable results, KSR International Co. v. Teleflex Inc., 550 U.S.
--, 82 USPQ2d 1385 (2007), wherein the predictable result would be using an unsupervised classification technique to classify approval/denial data and generate a visualized representation thereof (ibid-Pujun).

As per claim 2, Verstraete further makes obvious the method of claim 1, Priestas teaches that which Verstraete lacks, wherein converting the text-based request comprises separating punctuation marks from text in the text-based request and treating individual entities as tokens (paragraph [0033]-his tokenizer, and NER, each entity as the token, as supported in his priority).

Thus, it would have been obvious to one of ordinary skill in the linguistics art, before the effective filing date of the invention, as all the claimed elements were known in the prior art and one skilled in the art could have combined the elements as claimed by known methods (computer implemented techniques and algorithms combining processes and steps in natural language processing), in view of the teachings of Verstraete and Priestas to combine the prior art element of predicting a label for a text based request as taught by Verstraete with tokenizing the request data as taught by Priestas, as each element performs the same function as it does separately, as the combination would yield predictable results, KSR International Co. v. Teleflex Inc., 550 U.S. --, 82 USPQ2d 1385 (2007), wherein the predictable result would be tokenizing the text data to identify textual items relevant to a request (ibid-Priestas).

As per claim 3, Verstraete further makes obvious the method of claim 2, wherein converting the text-based request is performed by a natural language processing machine (paragraphs [0029, 0036, 0050]-his natural language processing of the text, entity extraction from the text string).
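The claim 2 conversion step (separating punctuation marks from text and treating individual entities as tokens) can be sketched with a single regular expression. This is one illustrative reading of the limitation, not the cited tokenizer:

```python
import re

def tokenize(request_text):
    """Split a request into word tokens, peeling punctuation marks
    off into their own tokens (illustrative sketch)."""
    return re.findall(r"\w+|[^\w\s]", request_text)

# A hypothetical text-based request; punctuation becomes separate tokens.
tokens = tokenize("Approve MRI, left knee: patient #4521?")
print(tokens)
```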
As per claim 4, Verstraete further makes obvious the method of claim 1, but lacks teaching that which Priestas teaches, wherein converting the text-based request comprises tokenizing the text-based request to create tokens (paragraph [0033]-his tokenizer, and NER, each entity as the token, as supported in his priority).

Thus, it would have been obvious to one of ordinary skill in the linguistics art, before the effective filing date of the invention, as all the claimed elements were known in the prior art and one skilled in the art could have combined the elements as claimed by known methods (computer implemented techniques and algorithms combining processes and steps in natural language processing), in view of the teachings of Verstraete and Priestas to combine the prior art element of predicting a label for a text based request as taught by Verstraete with tokenizing the request data as taught by Priestas, as each element performs the same function as it does separately, as the combination would yield predictable results, KSR International Co. v. Teleflex Inc., 550 U.S. --, 82 USPQ2d 1385 (2007), wherein the predictable result would be tokenizing the text data to identify textual items relevant to a request (ibid-Priestas).

As per claim 5, Verstraete with Priestas further makes obvious the method of claim 4, wherein tokenizing the text-based request includes using inverse document frequency to form a vectorized representation of the tokens (ibid-Verstraete, paragraph [0104]-his inverse document frequency, and corresponding words in the vocabulary of terms, via natural language processing and extraction, as transformed into vectors).
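The term frequency-inverse document frequency vectorization recited in claims 5 and 11 weights each token by how rare it is across the corpus of historical requests. A generic sketch using raw-count tf and the common log(N/df) idf; the cited reference does not pin down an exact formulation, and the corpus below is made up:

```python
import math
from collections import Counter

def tfidf_vector(doc_tokens, corpus):
    """TF-IDF weights for one tokenized document against a token corpus.

    tf is the raw in-document count; idf is log(N / df), where df is
    the number of corpus documents containing the term.
    """
    n_docs = len(corpus)
    tf = Counter(doc_tokens)
    return {
        term: count * math.log(n_docs / sum(term in d for d in corpus))
        for term, count in tf.items()
    }

corpus = [
    ["approve", "mri", "knee"],
    ["deny", "mri", "shoulder"],
    ["approve", "xray", "hip"],
]
vec = tfidf_vector(corpus[0], corpus)
```

Note that "knee", which appears in only one document, gets a higher weight than "mri" or "approve", which appear in two: rarer tokens carry more signal.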
As per claim 6, Verstraete further makes obvious the method of claim 4, wherein tokenizing the text-based request includes using neural word embeddings to form a dense word vector embedding of the tokens (ibid-Verstraete, paragraph [0104]- his corresponding words in the vocabulary of terms, via natural language processing and extraction, as transformed into vectors, and dense representation as numeric vectors).

As per claim 7, Verstraete further makes obvious the method of claim 1, wherein the trained machine learning model comprises a classification model (ibid-see claim 1, machine learning discussion and paragraph [0036]-classification models).

As per claims 9, 16 and 20, Verstraete further makes obvious the method of claim 1, further comprising: iteratively providing different subsets of the multiple features (“set of features”-with respect to claim 16) to the trained machine learning model (Verstraete, Fig. 5, paragraphs [0036-0039]-his historical claims, as different subsets of multiple features, the subsets as the set of features, provided to his AI, trained machine learning model, see claim 1, “trained” model discussion); receiving predictions and probabilities for each of the provided different subsets (ibid-his predictions for accepted or refused claims, paragraphs [0031, 0036-0039]-his confidence values as the probabilities applied to each claim); and identifying at least one subset correlated with approval of the text-based request (paragraph [0091]-his identified claim, as a subset of data as approved, Figs. 3, 5).
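Claims 9, 16 and 20 recite iterating different feature subsets through the trained model and identifying a subset correlated with approval. With a stand-in scoring function in place of the model (the feature names and weights below are hypothetical; the claims leave the model itself as the trained network), the ablation loop looks like:

```python
from itertools import combinations

def predict_approval(features):
    """Stand-in for the trained model: returns an approval probability.
    (Hypothetical additive scoring, purely for illustration.)"""
    weights = {"diagnosis_code": 0.4, "procedure_code": 0.3,
               "physician_notes": 0.2, "patient_age": 0.1}
    return sum(weights[f] for f in features)

all_features = ["diagnosis_code", "procedure_code",
                "physician_notes", "patient_age"]

# Iteratively score every two-feature subset and keep the subset most
# correlated with approval (here: the highest predicted probability).
scored = {subset: predict_approval(subset)
          for subset in combinations(all_features, 2)}
best_subset = max(scored, key=scored.get)
```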
As per claim 11, claim 11 sets forth limitations similar to claim 1 and is thus rejected under similar reasons and rationale, wherein the machine-readable storage device is deemed to embody the method, such that Verstraete with Priestas with Dey with Gilmore with Gharat with Pujun make obvious a machine-readable storage device having instructions for execution by a processor of a machine to cause the processor to perform a set of operations to perform a method of predicting a disposition of requests, the set of operations comprising (Verstraete, paragraphs [0027, 0044-0048]): receiving a text-based request from a first entity for approval by a second entity based at least in part on compliance with a set of rules (ibid-see claim 1, corresponding and similar limitation); converting the text-based request to create a machine-compatible converted input having a set of features (ibid, see claim 1, input having multiple features discussion-the multiple features as a set of features), wherein converting the text-based request comprises utilizing term frequency-inverse document frequency to form a vectorized representation of a set of tokens (ibid, Verstraete, paragraph [0104]-see his term-frequency-inverse document frequency numeric vector transformation discussion); providing the converted input to a trained machine learning model (ibid), wherein the trained machine learning model comprises a recurrent neural network including gated recurrent units (ibid-see claim 1, corresponding and similar limitation); receiving, from the trained machine learning model, a plurality of predictions indicating whether the second entity would likely approve the request, each prediction associated with a probability (ibid); clustering the plurality of predictions to identify groups of similar request outcomes, wherein clustering comprises analyzing approval likelihoods associated with different feature subsets of the converted input (ibid); and generating a visual representation of the
clustered predictions, including key features that distinguish the groups, wherein the key features are identified via an unsupervised classifier from the set of features (ibid), and wherein the indicators are automatically determined based on the features contributing to cluster separation (ibid).

As per claim 12, Verstraete further makes obvious the device of claim 11, Priestas teaches that which Verstraete lacks, wherein converting the text-based request comprises separating punctuation marks from text in the text-based request is performed by a natural language processing machine (paragraph [0033]-his tokenizer, and NER, each entity as the token, as supported in his priority).

Thus, it would have been obvious to one of ordinary skill in the linguistics art, before the effective filing date of the invention, as all the claimed elements were known in the prior art and one skilled in the art could have combined the elements as claimed by known methods (computer implemented techniques and algorithms combining processes and steps in natural language processing), in view of the teachings of Verstraete and Priestas to combine the prior art element of predicting a label for a text based request as taught by Verstraete with tokenizing the request data as taught by Priestas, as each element performs the same function as it does separately, as the combination would yield predictable results, KSR International Co. v. Teleflex Inc., 550 U.S. --, 82 USPQ2d 1385 (2007), wherein the predictable result would be tokenizing the text data to identify textual items relevant to a request (ibid-Priestas).
As per claim 13, Verstraete with Priestas with Dey further makes obvious the device of claim 11, wherein converting the text-based request includes using neural word embeddings to form a dense word vector embedding of the set of tokens (ibid-Verstraete, paragraph [0104]-his Word2Vec, Doc2Vec, GloVe transformation of the text, as using neural word embeddings to form a dense word vector embedding on the set of tokens/vocabulary).

As per claim 18, claim 18 sets forth limitations similar to claim 1 and is thus rejected under similar reasons and rationale, wherein the device is deemed to embody the method, such that Verstraete with Priestas with Dey with Gilmore with Gharat make obvious a device comprising (paragraphs [0027, 0044-0048]-see his device, processor, memory and instructions discussion): a processor (ibid); and a memory device coupled to the processor and having a program stored thereon for execution by the processor to perform a set of operations to perform a method (ibid) of predicting a disposition of requests, the set of operations comprising (ibid): receiving a text-based request from a first entity for approval by a second entity based at least in part on compliance with a set of rules (ibid-see claim 1, corresponding and similar limitation); converting the text-based request to create a machine-compatible converted input having multiple features (ibid), wherein converting the text-based request comprises tokenization and feature extraction (ibid-Verstraete, paragraphs [0103, 0104]-his preprocessing of the text, before provided into the machine learning model, his input broken down into component words, as his tokenization of the input, the words as extracted features, passed to the vectorization model, see also Priestas, paragraphs [0017, 0018]-his “request is preprocessed by parsing, tokenizing” and data for the tokens as generated discussion, the extracted data responsive to the requirements, as also comprising feature extraction); providing the
converted input to a trained machine learning model that has been trained based at least in part on a training set of historical converted requests by the first entity (ibid-see claim 1, corresponding and similar limitation), wherein the trained machine learning model comprises a recurrent neural network including gated recurrent units (ibid); and receiving a set of predictions of approval from the trained machine learning model (ibid), each prediction of the set of predictions indicating a likelihood of approval by the second entity and associated with a probability (ibid); clustering the set of predictions to identify groups of similar request outcomes, wherein clustering comprises analyzing approval likelihoods associated with different feature subsets of the converted input (ibid); and providing a visual representation of the clustered predictions, wherein the visual representation includes indicators identifying key features that distinguish the groups (ibid), wherein the key features are identified via an unsupervised classifier from the multiple features (ibid), and wherein the indicators are automatically determined based on features contributing to cluster separation (ibid).

As per claim 19, claim 19 sets forth limitations similar to claims 2-6, and is thus rejected under similar reasons and rationale, wherein Verstraete with Priestas with Dey with Gilmore with Gharat make obvious the device of claim 18 wherein converting the text-based request further comprises: separating punctuation marks from text in the text-based request; treating individual entities as tokens; performing the converting of the text-based request by a natural language processing machine; using inverse document frequency to form a vectorized representation of the tokens or using neural word embeddings to form a dense word vector embedding of the tokens (see claims 2-6, corresponding and similar limitations).
As per claim 23, Verstraete with Priestas with Dey with Gilmore with Gharat further makes obvious the device of claim 18, Priestas further teaching that which Verstraete lacks, wherein the set of operations further comprises: identifying a set of features from the text-based request that contribute to a likelihood of approval or denial (Priestas, paragraphs [0046-0047, 0034-0036]-his claim analysis, and corresponding approval, denial threshold, based on the features identified); and providing feedback to the first entity regarding the set of features for potential modification of the text-based request (ibid-his human edit information, as feedback, for modification of the request, the text-based request modified in the appeal process, for resubmission from the submitting/first entity, based on the features, his age feature as an example for likelihood of approval or denial, the age feedback, and modification applied to the text-based request appeal, and re-submission).

Thus, it would have been obvious to one of ordinary skill in the linguistics art, before the effective filing date of the invention, as all the claimed elements were known in the prior art and one skilled in the art could have combined the elements as claimed by known methods (computer implemented techniques and algorithms combining processes and steps in natural language processing), in view of the teachings of Verstraete and Priestas to combine the prior art element of predicting a label for data, as approved as taught by Verstraete with using feedback pertaining to features of approval or denial and generating a modification of a text-based request as taught by Priestas, as each element performs the same function as it does separately, as the combination would yield predictable results, KSR International Co. v. Teleflex Inc., 550 U.S.
--, 82 USPQ2d 1385 (2007), wherein the predictable result would be improving the efficacy of the labeling process, using a special category of RNN to understand the context when labeling text, and having feedback for modification of a request for re-submission (ibid Priestas).

As per claim 25, Verstraete further makes obvious the method of claim 1, but lacks teaching that which Gilmore teaches, wherein the visual representation of the clustered predictions includes grouping based on contextual features comprising one or more of: a hospital wing, an attending physician, or a procedure code (paragraphs [0587, 0595, 0447], Fig. 30-his groupings based on procedure code, “procedure codes” on HCPCS, and many other contextual features used to generate the visual representation, clusters and grouping, see paragraphs [0583-0596]).

Thus, it would have been obvious to one of ordinary skill in the linguistics art, before the effective filing date of the invention, as all the claimed elements were known in the prior art and one skilled in the art could have combined the elements as claimed by known methods (computer implemented techniques and algorithms combining processes and steps in natural language processing), in view of the teachings of Verstraete and Priestas and Dey and Gilmore to combine the prior art element of predicting a label for data, as approved as taught by Verstraete with using a trained RNN to implement the classification of text as taught by Priestas with providing information to a trained GRU RNN model (instead of an LSTM RNN model) as taught by Dey with the visualization for approval/denial predictions, including clusters based on multiple contextual features, as taught by Gilmore, as each element performs the same function as it does separately, as the combination would yield predictable results, KSR International Co. v. Teleflex Inc., 550 U.S.
-- 82 USPQ2nd 1385 (2007), wherein the predictable result would be improving the efficacy of the labeling process, using a special category of RNN, such as a GRU RNN which is “comparable to, or even outperforms, the LSTM in most cases”, to understand the context when labeling text, and visualizing the results, thus presenting the possibility of comparison of accepted claims with denied claims (ibid-Gilmore, paragraph [0596], ibid Priestas, see Dey, above cited section 2.2-his outperforms discussion). As per claim 26, Verstraete with Priestas further makes obvious the method of claim 1, wherein the key features include at least learned document embeddings from the recurrent neural network (Priestas, paragraph [0030]-his RNN, as the trained ML model for classification and text processing, which includes the embeddings in his neural network, the training based on his document details, and corresponding regression analysis). Thus, it would have been obvious to one of ordinary skill in the linguistics art, before the effective filing date of the invention, as all the claimed elements were known in the prior art and one skilled in the art could have combined the elements as claimed by known methods (computer implemented techniques and algorithms combining processes and steps in natural language processing), in view of the teachings of Verstraete and Priestas to combine the prior art element of predicting a label for data, as approved as taught by Verstraete with using an RNN to implement the classification of text, using embeddings learned from the document as input, through the training, as each element performs the same function as it does separately, as the combination would yield predictable results, KSR International Co. v. Teleflex Inc., 550 US. -- 82 USPQ2nd 1385 (2007), wherein the predictable result would be improving the efficacy of the labeling process, using a special category of RNN, as trained, to understand the context when labeling text (ibid Priestas). 
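For readers unfamiliar with the GRU and embedding terminology recited above (Dey's gated recurrent unit; Priestas's learned document embeddings), here is a minimal illustrative sketch, not the claimed system: a single GRU cell with a scalar hidden state and hypothetical fixed weights, folding a token sequence into a fixed-size state that could serve as a learned document embedding for a downstream classifier.

```python
import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def gru_step(h: float, x: float) -> float:
    """One GRU step with scalar state and fixed toy weights.
    (All weights here are hypothetical; real models use learned matrices.)"""
    z = sigmoid(0.5 * x + 0.5 * h)                # update gate
    r = sigmoid(0.5 * x + 0.5 * h)                # reset gate
    h_cand = math.tanh(x + r * h)                 # candidate state
    return (1.0 - z) * h + z * h_cand             # gated interpolation

def embed(token_values):
    """Run the GRU over a sequence; the final hidden state acts as a
    fixed-size 'document embedding' a classifier head could score."""
    h = 0.0
    for x in token_values:
        h = gru_step(h, x)
    return h

doc_a = embed([0.2, -0.5, 0.9])
doc_b = embed([0.9, -0.5, 0.2])  # same tokens, reversed order
# Order-sensitive: the GRU retains sequence context, unlike a bag of words.
```

The gating is what the cited Dey reference varies across its GRU flavors; the point of the sketch is only that the recurrent state is bounded and order-dependent.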
Claim(s) 24 is/are rejected under 35 U.S.C. 103 as being unpatentable over Verstraete et al. (Verstraete, US 2021/0286833) in view of Priestas et al. (Priestas, US 2020/0364404) in view of Dey et al. (Dey, Gate-Variants of Gated Recurrent Unit (GRU) Neural Networks) in view of Gilmore in view of Gharat, as applied to claim 18 above, and further in view of Priestas et al. (referred to as Priestas '395, US 2019/0087395) and further in view of Love et al. (Love, US 9,836,183).

As per claim 24, Verstraete with Priestas with Dey with Gilmore with Gharat further makes obvious the device of claim 18, Priestas teaching that which Verstraete lacks, wherein the set of operations further comprises: receiving a text-based response of the second entity based at least in part on the text-based request (Priestas, paragraphs [0024-0027, 0046-0047, 0034-0036]-his request and corresponding response, used in training); extracting features from the text-based request and the text-based response (ibid-his extracted features from both the request and response for training the ML models, classifiers); and providing the extracted features to an [unsupervised] classifier to identify key features corresponding to denials or approval by the second entity (ibid-the features are provided to the ML model, to train the model on the features for approval or denial). Priestas does not explicitly teach the classifier as unsupervised; however, Priestas '395 teaches unsupervised learning, for classification of request features, used in approval or denial of a claim (see paragraphs [0032, 0060]-his supervised or unsupervised learning, for learning features from a request). Thus, it would have been obvious to one of ordinary skill in the linguistics art, before the effective filing date of the invention, as all the claimed elements were known in the prior art and one skilled in the art could have combined the elements as claimed by known methods (computer-implemented techniques and algorithms combining processes and steps in natural language processing), in view of the teachings of Verstraete and Priestas, to combine the prior art element of predicting a label for data, as approved, as taught by Verstraete, with a text-based response from a second entity, and a text-based request, having extracted features used in training, as taught by Priestas, with unsupervised training (instead of supervised training) as taught by Priestas '395, as each element performs the same function as it does separately and the combination would yield predictable results, KSR International Co. v. Teleflex Inc., 550 U.S. 398, 82 USPQ2d 1385 (2007), wherein the predictable result would be improving the efficacy of the labeling process, using a special category of RNN to understand the context when labeling text, and using unsupervised (thus less expensive and more broadly applicable) training in the approval/denial claim process (ibid Priestas '395; Love, C.13 line 64-C.14 line 3-see unsupervised machine learning technique, less expensive discussion).

Claim(s) 10 is/are rejected under 35 U.S.C. 103 as being unpatentable over Verstraete et al. (Verstraete, US 2021/0286833) in view of Priestas in view of Dey in view of Gilmore in view of Gharat, as applied to claim 9, and further in view of Williams, Jr. et al. (Williams, US 2015/0254555).
As per claim 10, Verstraete further makes obvious the method of claim 9, but lacks teaching that which Williams teaches, wherein iteratively providing different subsets of the multiple features ("set of features", with respect to claim 17; see the subset-of-features discussion for that claim) is performed using n-gram analysis (paragraphs [0089, 0198]-his learning system, based on utilizing an n-gram representation, and number of samples, as different subsets of features). Thus, it would have been obvious to one of ordinary skill in the linguistics art, before the effective filing date of the invention, as all the claimed elements were known in the prior art and one skilled in the art could have combined the elements as claimed by known methods (computer-implemented techniques and algorithms combining processes and steps in natural language processing), in view of the teachings of Verstraete and Williams, to combine the prior art element of predicting a label for a text-based request as taught by Verstraete with using n-gram analysis for providing features to a learning model as taught by Williams, as each element performs the same function as it does separately and the combination would yield predictable results, KSR International Co. v. Teleflex Inc., 550 U.S. 398, 82 USPQ2d 1385 (2007), wherein the predictable result would be to process claim information based on trained and sampled subsets of data comprising features, presented to a model to determine claim decision information (ibid-Williams).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure (see PTO-892). Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to LAMONT M SPOONER, whose telephone number is (571) 272-7613. The examiner can normally be reached 8:00 AM - 5:00 PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Daniel Washburn, can be reached at (571) 272-5551. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/LAMONT M SPOONER/
Primary Examiner, Art Unit 2657
3/7/2026
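As an editorial aside on the n-gram analysis the examiner cites against claim 10 (Williams): a minimal sketch, assuming only that "n-gram analysis" means sliding fixed-length token windows over the request text so that different n values yield different feature subsets. The function name and sample text are hypothetical, for illustration only.

```python
def ngrams(tokens, n):
    """All contiguous n-token windows of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

# Hypothetical request text, tokenized on whitespace.
request = "prior authorization request for knee mri".split()

# Different n values yield different subsets of features that could be
# presented to a learning model iteratively.
feature_subsets = {n: ngrams(request, n) for n in (1, 2, 3)}
```

Each subset can then be vectorized (e.g., by counting) and fed to a classifier; iterating over n is one simple way to provide "different subsets of the multiple features" to a model.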

Prosecution Timeline

May 26, 2021: Application Filed
Jun 01, 2024: Non-Final Rejection — §103
Aug 12, 2024: Interview Requested
Aug 27, 2024: Applicant Interview (Telephonic)
Aug 27, 2024: Examiner Interview Summary
Sep 03, 2024: Response Filed
Nov 16, 2024: Final Rejection — §103
Dec 26, 2024: Interview Requested
Jan 08, 2025: Examiner Interview Summary
Jan 08, 2025: Applicant Interview (Telephonic)
Jan 16, 2025: Request for Continued Examination
Jan 17, 2025: Response after Non-Final Action
Feb 07, 2025: Non-Final Rejection — §103
Apr 11, 2025: Interview Requested
Apr 23, 2025: Examiner Interview Summary
Apr 23, 2025: Applicant Interview (Telephonic)
May 09, 2025: Response Filed
Jul 04, 2025: Final Rejection — §103
Sep 22, 2025: Interview Requested
Sep 30, 2025: Applicant Interview (Telephonic)
Sep 30, 2025: Examiner Interview Summary
Oct 08, 2025: Request for Continued Examination
Oct 12, 2025: Response after Non-Final Action
Oct 18, 2025: Non-Final Rejection — §103
Dec 31, 2025: Interview Requested
Jan 12, 2026: Applicant Interview (Telephonic)
Jan 12, 2026: Examiner Interview Summary
Jan 21, 2026: Response Filed
Mar 07, 2026: Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602542 — Text Analysis System, and Characteristic Evaluation System for Message Exchange Using the Same (granted Apr 14, 2026; 2y 5m to grant)
Patent 12596881 — COMPUTER IMPLEMENTED METHOD FOR THE AUTOMATED ANALYSIS OR USE OF DATA (granted Apr 07, 2026; 2y 5m to grant)
Patent 12591737 — Systems and Methods for Word Offensiveness Detection and Processing Using Weighted Dictionaries and Normalization (granted Mar 31, 2026; 2y 5m to grant)
Patent 12572744 — Generative Systems and Methods of Feature Extraction for Enhancing Entity Resolution for Watchlist Screening (granted Mar 10, 2026; 2y 5m to grant)
Patent 12518107 — COMPUTER IMPLEMENTED METHOD FOR THE AUTOMATED ANALYSIS OR USE OF DATA (granted Jan 06, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 7-8
Grant Probability: 74%
With Interview: 86% (+11.8%)
Median Time to Grant: 3y 4m
PTA Risk: High
Based on 603 resolved cases by this examiner. Grant probability derived from career allow rate.
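The headline figures follow directly from the career data stated above (445 granted of 603 resolved, +11.8-point interview lift). A quick sketch, assuming the dashboard simply divides grants by resolved cases and adds the stated lift:

```python
granted, resolved = 445, 603          # examiner's career data, per the panel
interview_lift = 0.118                # stated interview lift (+11.8 points)

allow_rate = granted / resolved       # career allow rate
with_interview = allow_rate + interview_lift

print(round(allow_rate * 100))        # 74
print(round(with_interview * 100))    # 86
```

This matches the displayed 74% grant probability and 86% with-interview figure; how the dashboard actually weights the lift is not stated, so the additive model here is an assumption.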
