Prosecution Insights
Last updated: April 19, 2026
Application No. 17/513,647

HYBRID MODEL FOR CASE COMPLEXITY CLASSIFICATION

Non-Final OA: §101, §103, §112
Filed
Oct 28, 2021
Examiner
PHUNG, QUOC LY PHU
Art Unit
2143
Tech Center
2100 — Computer Architecture & Software
Assignee
Hewlett Packard Enterprise Development LP
OA Round
3 (Non-Final)
32%
Grant Probability
At Risk
3-4
OA Rounds
3y 3m
To Grant
99%
With Interview

Examiner Intelligence

Grants only 32% of cases
32%
Career Allow Rate
6 granted / 19 resolved
-23.4% vs TC avg
Strong +100% interview lift
+100.0%
Interview Lift
allowance rate with vs. without interview, across resolved cases
Typical timeline
3y 3m
Avg Prosecution
25 currently pending
Career history
44
Total Applications
across all art units

Statute-Specific Performance

§101
31.5%
-8.5% vs TC avg
§103
41.2%
+1.2% vs TC avg
§102
5.4%
-34.6% vs TC avg
§112
20.5%
-19.5% vs TC avg
Black line = Tech Center average estimate • Based on career data from 19 resolved cases
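As a sanity check, the headline figures above follow from the raw career counts. The sketch below treats the "-23.4% vs TC avg" delta as a percentage-point gap, which is an assumption about how the dashboard computes it:

```python
# Recompute the dashboard's headline allowance figures from the raw counts.
granted, resolved = 6, 19            # "6 granted / 19 resolved"
allow_rate = granted / resolved      # career allowance rate

# Assumed reading: the delta is a percentage-point gap below the TC average.
implied_tc_avg = allow_rate + 0.234

print(f"{allow_rate:.1%}")       # 31.6% (displayed rounded as 32%)
print(f"{implied_tc_avg:.1%}")   # 55.0% implied Tech Center average
```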

Office Action

§101 §103 §112
DETAILED ACTION

Remarks

Claims 1-20 have been examined and rejected. This Office Action is responsive to the amendment filed on 03/11/2025, which has been entered in the above-identified application.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-20 are presented for examination.

Response to Amendment

Applicant's amendment filed on 03/11/2025 has been entered. Claims 1, 5, 12, 14, 16 and 19 are amended. Claims 1-20 are pending in the application.

Specification

The disclosure is objected to because of the following informalities: Applicant should review the term "L2 regularization" in paragraphs 0015, 0054, 0059, 0066. The term appears as "12 regularization" with the numeral "1", not "l2 regularization" with the lowercase letter "l". Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 6, 15 and 20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. With respect to claims 6, 15 and 20, it is unclear what "the predictive model comprises a support vector machine (SVM) with linear kernel and 12 regularization" means in each of these claims.
Claim 6 depends from claim 1, claim 15 depends from claim 12, and claim 20 depends from claim 16. However, neither these independent claims nor the specification provides a definition for "12 regularization". For purposes of examination, the examiner will interpret the limitation in each of these claims as "the predictive model comprises a support vector machine (SVM) with linear kernel and L2 regularization".

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Independent claims

Step 1

Claim 1 is drawn to a method, claim 12 is drawn to a non-transitory machine-readable storage medium storing instructions, and claim 16 is drawn to an apparatus comprising a processor, the instructions, when executed by the processor, causing the processor to perform the method of claim 1. Therefore, each of these claim groups falls under one of the four categories of statutory subject matter (processes/methods, machines/apparatuses, manufactures, and compositions of matter).

Step 2A – Prong 1

Claims 1, 12 and 16 are directed to a judicially recognized exception, an abstract idea, without significantly more. Claims 1, 12 and 16 recite receiving a set of fields corresponding to a case, which under its broadest reasonable interpretation encompasses a mental concept. A human can perform this step mentally, with a physical aid such as pen and paper, by receiving a set of fields that corresponds to a case. Therefore, the step of receiving a set of fields corresponding to a case is nothing more than a mental concept (MPEP 2106.04(a)(2)(III)).
Claims 1, 12 and 16 recite inputting the set of fields to a hybrid model comprising a set of rules and a predictive model, which under its broadest reasonable interpretation encompasses a mental concept. A human can perform this step mentally, with a physical aid such as pen and paper, by inputting the set of fields into a model comprising rules and a predictive model. Therefore, the step of inputting the set of fields to a hybrid model comprising a set of rules and a predictive model is nothing more than a mental concept (MPEP 2106.04(a)(2)(III)).

Claims 1, 12 and 16 recite determining, via the hybrid model, a case complexity classification for the case based on analysis of the set of fields initially using the set of rules and subsequently using the predictive model upon failure of the analysis using the set of rules, which under its broadest reasonable interpretation encompasses a mental concept. A human can perform this step mentally, with a physical aid such as pen and paper, by determining a case complexity classification for the case. Therefore, the step of determining, via the hybrid model, a case complexity classification for the case is nothing more than a mental concept (MPEP 2106.04(a)(2)(III)).

Step 2A – Prong 2

Claims 1, 12 and 16 further recite wherein the predictive model comprises a predictive machine learning model trained using a historical set of cases that are specific to a technical domain of the case, the set of fields having been vectorized, and wherein a score for the set of fields is calculated, the score comprising a ratio of a frequency of one or more terms making up the set of fields and a log of a ratio of a total number of cases and a number of cases in which the one or more terms are found, which fails to integrate the abstract idea into a practical application.
The step of using a historical set of cases to train a predictive machine learning model is a form of insignificant extra-solution activity, where using a historical set of cases that are specific to a technical domain of the case is necessary for all uses of the judicial exception. This additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea (MPEP 2106.05(g)).

Claims 1, 12 and 16 further recite utilizing the case complexity classification, including the calculated score, to route the case for resolution processing, which fails to integrate the abstract idea into a practical application. The step of utilizing the case complexity classification is a form of insignificant extra-solution activity, where utilizing the case complexity classification to route the case is necessary for all uses of the judicial exception. This additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea (MPEP 2106.05(g)).

Step 2B

The additional elements identified in Step 2A – Prong 2, which are forms of insignificant extra-solution activity, do not amount to significantly more than the abstract idea because court decisions have determined that these additional elements (wherein the predictive model comprises a predictive machine learning model trained using a historical set of cases that are specific to a technical domain of the case, and utilizing the case complexity classification to route the case for processing) are well-understood, routine, and conventional when claimed in a merely generic manner (MPEP 2106.05(d)(II)). As such, claims 1, 12 and 16 are not patent eligible.

Dependent claims

Claims 2-11, 13-15 and 17-20 merely narrow the previously recited abstract idea limitations.
For the reasons described above with respect to claims 1, 12 and 16, this judicial exception is not meaningfully integrated into a practical application, nor is it significantly more than the abstract idea. The dependent claims recite limitations similar to those described for the independent claims above and do not provide anything more than mental processes that are practically capable of being performed in the human mind with the assistance of pen and paper, and mathematical concepts that are achievable through mathematical computation. Therefore, claims 2-11, 13-15 and 17-20 also recite abstract ideas that are not integrated into a practical application and do not amount to significantly more than the judicial exception, and are rejected under 35 U.S.C. 101.

Step 1

Claims 2-11 are drawn to a method, claims 13-15 are drawn to a non-transitory machine-readable storage medium storing instructions, and claims 17-20 are drawn to an apparatus comprising a processor, the instructions, when executed by the processor, causing the processor to perform the methods of claims 2-11. Therefore, each of these claim groups falls under one of the four categories of statutory subject matter (processes/methods, machines/apparatuses, manufactures, and compositions of matter).

Step 2A – Prong 1

Dependent claims 2, 13 and 17 further recite the mental process in that the analysis of the set of fields by the hybrid model comprises identifying a keyword match between the set of fields and the set of rules, and, responsive to identifying the keyword match, applying an associated rule of the set of rules to generate the case complexity classification for the case, which is based on one or more features of the ML project (MPEP 2106.04(a)(2)(III)).

Dependent claims 4 and 18 further recite the mental process in that, responsive to failing to identify the keyword match, the predictive model is applied to the set of fields, which is based on one or more features of the ML project (MPEP 2106.04(a)(2)(III)).
Dependent claims 5 and 19 further recite the mental process in that applying the predictive model further comprises: providing the vectorized set of fields as input for the predictive model; and outputting, in response to the input of the vectorized set of fields, the case complexity classification for the case, wherein the vectorized set of fields comprises term frequency and inverse document frequency (TF-IDF) for a term t of the one or more terms in a document d ∈ document set D, such that TF-IDF = tf-idf(t, d, D) = tf(t, d) · idf(t, D), where TF = tf(t, d) = log(1 + freq(t, d)), and where IDF = idf(t, D) = log((1 + N) / (1 + count(d ∈ D : t ∈ d))), which is based on one or more features of the ML project (MPEP 2106.04(a)(2)(III)).

Dependent claim 14 further recites the mental process in that instructions respond to failing to identify the keyword match by applying the predictive model to the set of fields, and wherein applying the predictive model comprises: vectorizing the set of fields; providing the vectorized set of fields as input for the predictive model; and outputting, in response to the input of the vectorized set of fields, the case complexity classification for the case, which is based on one or more features of the ML project (MPEP 2106.04(a)(2)(III)).

Step 2A – Prong 2

Dependent claim 3 further recites insignificant extra-solution activity in that the keyword match comprises a match of at least one of an error code, an event signature, or a text string. This additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea (MPEP 2106.05(g)).

Dependent claims 6, 15 and 20 further recite insignificant extra-solution activity in that the predictive model comprises a support vector machine (SVM) with linear kernel and 12 regularization.
This additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea (MPEP 2106.05(g)).

Dependent claim 7 further recites insignificant extra-solution activity in that the historical set of cases are each labelled with a case complexity by a domain expert. This additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea (MPEP 2106.05(g)).

Dependent claim 8 further recites insignificant extra-solution activity in that the set of fields comprises at least a subject field, an issue text field, or a severity field. This additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea (MPEP 2106.05(g)).

Dependent claim 9 further recites insignificant extra-solution activity in that the case is managed by a case management system in a technical support environment. This additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea (MPEP 2106.05(g)).

Dependent claim 10 further recites insignificant extra-solution activity in that the case complexity classification is to indicate a level of difficulty in resolving a problem of the case. This additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea (MPEP 2106.05(g)).

Dependent claim 11 further recites insignificant extra-solution activity in that the set of rules and the predictive model are based on data provided in one or more languages.
This additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea (MPEP 2106.05(g)). As such, dependent claims 2-11, 13-15 and 17-20 are not patent eligible.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office Action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Liu et al. (US 20090019171 A1), hereafter Liu, in view of Quijada et al. (US 12039596 B1), hereafter Quijada, and further in view of Fontecilla et al. (US 11321538 B1), hereafter Fontecilla.

With respect to claim 1, Liu teaches a method (a method for determining a mail class is provided that reads a mail head of an email with an unknown class [par. 0008]) comprising: receiving a set of fields corresponding to a case (some fields are included in the mail head of an email, such as From, To, Reply-To, Delivered-To, etc. Part or all of the fields may be selected using the preset condition [par. 0020-0025]); determining, via the hybrid model, a case complexity classification for the case based on analysis of the set of fields using the predictive model (a behavior model may be used to classify the fields, which are vectorized, obtained and stored as parameters in the behavior model. The behavior model may use the mail head to establish a behavior required for determination of a mail class [par. 0026-0030]); and utilizing the case complexity classification to route the case for resolution processing (the behavior model may determine the mails to be classified into two classes: junk mails or non-junk mails. The calculation process of the Support Vector Machine (SVM) may classify the mails into two classes labeled with 0 and 1 [par. 0036-0040]).

However, Liu does not disclose inputting the set of fields to a hybrid model comprising a set of rules and a predictive model, wherein the predictive model comprises a predictive machine learning model trained using a historical set of cases that are specific to a technical domain of the case, the set of fields having been vectorized, and wherein a score for the set of fields is calculated, the score comprising a ratio of a frequency of one or more terms making up the set of fields and a log of a ratio of a total number of cases and a number of cases in which the one or more terms are found; determining a case complexity classification for the case based on analysis of the set of fields initially using the set of rules and subsequently using the predictive model upon failure of the analysis using the set of rules; and utilizing the case complexity classification using the calculated score.

In the same field of endeavor, Quijada teaches inputting the set of fields to a hybrid model comprising a set of rules and a predictive model, wherein the predictive model comprises a predictive machine learning model trained using a historical set of cases that are specific to a technical domain of the case (the method is provided to receive transaction data that is associated with a financial account of a borrower. The transaction data includes one or more data elements representing income transactions, analyzing one or more characteristics, where a characteristic includes a transaction description, an income type label and an income source label. Based on a predetermined set of rules when training one or more computer models, the processor may develop the models using transformed features as inputs. The transaction analysis is used to estimate a net income and to estimate a relationship between net income and gross income, which may be based on the historical relationship of the two types of incomes. The processor may apply a term weighting to account for term frequencies and/or inverse document frequencies to account for term sequences [col. 3, line 65 – col. 4, line 30; col. 3, lines 5-25; col. 10, line 64 – col. 11, line 30]); and determining a case complexity classification for the case based on analysis of the set of fields initially using the set of rules and subsequently using the predictive model upon failure of the analysis using the set of rules (the method may include some classification techniques along with the predetermined rules to classify and recognize characteristics in the transaction data. Based on these techniques, the processor may classify first income and second income sources according to a classification. The process includes parsing transaction data for textual pattern matching purposes. The processor may generate n-grams of the transaction data information, wherein the n-gram levels may be 2 or 3 [col. 9, line 50 – col. 12, line 20 and col. 10, lines 17-43]).
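The score recited in claim 1 (a term-frequency component multiplied by a log of a ratio of case counts) is spelled out as the TF-IDF weighting in claim 5: TF = log(1 + freq(t, d)) and IDF = log((1 + N) / (1 + count(d ∈ D : t ∈ d))). A minimal sketch computed directly from that claimed definition, over a toy corpus (the corpus and terms are illustrative only, not from the application):

```python
import math

def tf(term, doc):
    # TF = tf(t, d) = log(1 + freq(t, d))
    return math.log(1 + doc.count(term))

def idf(term, docs):
    # IDF = idf(t, D) = log((1 + N) / (1 + count(d in D : t in d)))
    n = len(docs)
    containing = sum(1 for d in docs if term in d)
    return math.log((1 + n) / (1 + containing))

def tf_idf(term, doc, docs):
    # TF-IDF(t, d, D) = tf(t, d) * idf(t, D)
    return tf(term, doc) * idf(term, docs)

# Toy corpus of tokenized case fields (illustrative only).
docs = [["disk", "failure"], ["disk", "full"], ["login", "error"]]
print(round(tf_idf("disk", docs[0], docs), 3))  # log(2) * log(4/3) ≈ 0.199
```

Note the "+1" smoothing in both numerator and denominator of the IDF term, which keeps the log finite even for terms found in every case.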
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have incorporated the concept of classifying transaction data based on a certain set of predetermined rules, as suggested by Quijada, into the concept of classifying mails based on the input fields of the mails, as suggested by Liu, because both of these systems address the process of training one or more algorithm models to classify the inputs and to determine the level of complexity of the inputs. Doing so would be desirable because the system of Liu would be more efficient by incorporating a set of predetermined transaction rules, associated with the types of income streams, to classify the transactions into correct types of income and to generate a gross income distribution (Quijada, [col. 3, line 65 – col. 4, line 30]).

However, the combination of Liu and Quijada does not particularly disclose the set of fields having been vectorized, and wherein a score for the set of fields is calculated; and utilizing the case complexity classification using the calculated score. In the same field of endeavor, Fontecilla teaches the set of fields having been vectorized, and wherein a score for the set of fields is calculated (a feature vector represents each of the first text tokens of at least one sentence, from a document, including the predefined keyword to obtain a first set of textual feature vectors. A similar feature vector represents each of the second text tokens. A text token similarity score between text tokens is determined using a third natural language processing (NLP) model [col. 1, line 30 – col. 2, line 15; col. 7, lines 10-60]); and utilizing the case complexity classification using the calculated score (some documents may have low-quality responses based on whether the text token score satisfies a certain threshold condition, wherein a low-quality response may include a specific detail such as a regularity constraint, safety, health data, or other information. Such a response may be classified as non-compliant. A response document may be scored to identify a level of compliance with the requirements [col. 3, line 60 – col. 4, line 5; col. 4, lines 28-50]).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have incorporated the concept of determining how well a document addresses requirements set forth in a requirement-specifying document, as suggested by Fontecilla, into the combination of Liu and Quijada because all of these systems address the process of training one or more algorithm models to classify the inputs and to determine the level of complexity of the inputs. Doing so would be desirable because the combination of Liu and Quijada would be more efficient by generating a feature vector for each of the text tokens of the first and second documents, and generating a text token similarity score to classify a document as compliant or non-compliant using the NLP model (Fontecilla, [col. 1, line 30 – col. 2, line 25]).

With respect to claim 2, the combination of Liu, Quijada and Fontecilla teaches wherein the analysis of the set of fields by the hybrid model comprises identifying a keyword match between the set of fields and the set of rules (Quijada, the processor may identify separate words or groups of characters based on the spaces in transaction data and compare them to a predefined set of words and characters such as "deposit" or "payroll", and the processor may apply a set of predefined rules to parse the information [col. 9, line 60 – col. 10, line 42]), and responsive to identifying the keyword match, applying an associated rule of the set of rules to generate the case complexity classification for the case (Quijada, the processor may identify a match between words or groups of characters based on the textual pattern matching. The processor may generate a list of words to process the matching through the n-gram levels and generate matching words for each level [col. 9, line 60 – col. 10, line 42]).

With respect to claim 3, the combination of Liu, Quijada and Fontecilla teaches wherein the keyword match comprises a match of at least one of an error code, an event signature, or a text string (Quijada, the processor parses the information from the input transaction data into a list of words (or text strings) to identify a match with a predefined set of words, wherein the parsed words need to go through a predetermined number of n-gram levels [col. 10, lines 15-42]).

With respect to claim 4, the combination of Liu, Quijada and Fontecilla teaches wherein responsive to failing to identify the keyword match, the predictive model is applied to the set of fields (Quijada, the verification of a borrower may be determined based on the borrower and employment recognizer (BER), including applying a predetermined set of transaction rules. BER may verify the data elements based on detecting a match between one or more characteristics of the data elements. A failed attempt may be based on a determination that a certain text string does not match the text string of a borrower in the database [col. 13, lines 20-65]).

With respect to claim 5, the combination of Liu, Quijada and Fontecilla teaches wherein applying the predictive model comprises: providing the vectorized set of fields as input for the predictive model (Liu, a first field extracting unit configured to extract a first field, and a first vectorizing unit configured to vectorize the first field into a first preset number of first feature vectors. A calculating unit is configured to input the first feature vectors to a preset predictive algorithm with data stored for a behavior model [par. 0008-0010]); and outputting, in response to the input of the vectorized set of fields, the case complexity classification for the case (Liu, the classification process of the mails may use the preset condition of the field extracting unit to output the mail class of each of the mails [par. 0021]), wherein the vectorized set of fields comprises term frequency and inverse document frequency (TF-IDF) for a term t of the one or more terms in a document d ∈ document set D, such that TF-IDF = tf-idf(t, d, D) = tf(t, d) · idf(t, D), where TF = tf(t, d) = log(1 + freq(t, d)), and where IDF = idf(t, D) = log((1 + N) / (1 + count(d ∈ D : t ∈ d))) (Quijada, the processor may apply a term weighting to account for term frequencies and/or inverse document frequencies to account for term sequences not likely to provide valuable information related to the income transaction. The processor may develop and train the computer models comprising a predetermined rule set using transformed features as inputs [col. 10, line 64 – col. 11, line 30]).

With respect to claim 6, the combination of Liu, Quijada and Fontecilla teaches wherein the predictive model comprises a support vector machine (SVM) with linear kernel and 12 regularization (Liu, the behavior model may be established by a Support Vector Machine (SVM). The SVM method may generate a tradeoff between the complexity of the input mails and the behavior model. One of the algorithms of the SVM method may construct a linear decision function in the high-dimensional feature space [par. 0029]).

With respect to claim 7, the combination of Liu, Quijada and Fontecilla teaches wherein the historical set of cases are each labelled with a case complexity by a domain expert (Quijada, the one or more characteristics of the one or more data elements may be extracted to generate an income type label and an income source label. The method may identify a cluster of data elements which is associated with a first income source label, wherein the identified cluster may include a plurality of data elements associated with a first income type label [col. 3, line 65 – col. 4, line 30]).

With respect to claim 8, the combination of Liu, Quijada and Fontecilla teaches wherein the set of fields comprises at least a subject field, an issue text field, or a severity field (Liu, some of the fields may be common in the mail head of each of the mails: From field, To field, Reply-To field, Delivered-To field, Return-Path field, Date field, etc. [par. 0021]).

With respect to claim 9, the combination of Liu, Quijada and Fontecilla teaches wherein the case is managed by a case management system in a technical support environment (Quijada, electronic transaction data may be received, processed and managed by a technical support organization, such as a bank [col. 2, line 40 – col. 3, line 5]).

With respect to claim 10, the combination of Liu, Quijada and Fontecilla teaches wherein the case complexity classification is to indicate a level of difficulty in resolving a problem of the case (Quijada, the transaction data may be parsed into different lengths or different n-gram levels to classify into a group of associated words to find the identity of the borrower. For example, if the transaction data information is "John Doe payroll", the processor may generate a list of words of length three each: "Joh," "ohn," "hn_," "n_D," "_De," "Doe," "e_p," "_pa," "pay," "ayr," "yro," "rol," [col. 10, lines 15-42]).

With respect to claim 11, the combination of Liu, Quijada and Fontecilla teaches wherein the set of rules and the predictive model are based on data provided in one or more languages (Liu, the behavior model established from the mail head may be useful in determining the mail class regardless of the specific language of the mail body, and that would set a specific set of rules for the determination [par. 0028, 0048, 0060, 0068]).

With respect to claim 12, it is a non-transitory machine-readable storage medium claim that corresponds to the method of claim 1. Therefore, it is rejected for the same reasons as claim 1 above. With respect to claim 13, it is a non-transitory machine-readable storage medium claim that corresponds to the method of claim 2. Therefore, it is rejected for the same reasons as claim 2 above. With respect to claim 14, it is a non-transitory machine-readable storage medium claim that corresponds to the method of claim 5. Therefore, it is rejected for the same reasons as claim 5 above. With respect to claim 15, it is a non-transitory machine-readable storage medium claim that corresponds to the method of claim 6. Therefore, it is rejected for the same reasons as claim 6 above.

With respect to claim 16, it is an apparatus claim that corresponds to the method of claim 1. Therefore, it is rejected for the same reasons as claim 1 above. With respect to claim 17, it is an apparatus claim that corresponds to the method of claim 2. Therefore, it is rejected for the same reasons as claim 2 above. With respect to claim 18, it is an apparatus claim that corresponds to the method of claim 4. Therefore, it is rejected for the same reasons as claim 4 above. With respect to claim 19, it is an apparatus claim that corresponds to the method of claim 5. Therefore, it is rejected for the same reasons as claim 5 above. With respect to claim 20, it is an apparatus claim that corresponds to the method of claim 6. Therefore, it is rejected for the same reasons as claim 6 above.

Response to Arguments

The examiner respectfully acknowledges Applicant's amendments to claims 1-20. Applicant's arguments filed on 03/11/2025 regarding claims 1-20 under 35 USC 101 have been fully considered but are not persuasive.
Applicant argued that "First, Applicant submits that the context in which examples of the disclosed technology is applied is, in-effect, a real-time context. Indeed, [0001] of the present application notes a typical customer service scenario, the handling of which, may be improved by implementing examples of the disclosed technology. That is, to support/handle customer issues, examples of the disclosed technology address real-time inquiries, e.g., via phone or electronic messaging. There would be no practical way for the recited limitations to be performed mentally, with pen/paper, in this context. This is especially true because as noted at, e.g., [0009] of the present application, examples of the disclosed technology seek to "reduce a resolution time." Interpreting the recited limitations as mere mental concepts would vitiate the intent of the disclosed technology … For example, application of the disclosed technology, as described at, e.g., [0012] of the present application, allows for early case complexity determinations that improve efficiency and save time."

Examiner respectfully disagrees. First, when the examiner classified these limitations as mental concepts, that does not mean the examiner concluded that interpreting the recited limitations as mere mental concepts would vitiate the intent of the disclosed technology.
Based on what is recited in amended claim 1, the broadest reasonable interpretation (BRI), in view of the Specification, of the claim limitations “wherein the predictive model comprises a predictive machine learning model trained using a historical set of cases that are specific to a technical domain of the case, the set of fields having been vectorized, and wherein a score for the set of fields is calculated, the score comprising a ratio of a frequency of one or more terms making up the set of fields and a log of a ratio of a total number of cases and a number of cases in which the one or more terms are found” and “utilizing the case complexity classification, including the calculated score, to route the case for resolution processing” indicates that these limitations are well-understood, routine, and conventional when claimed in a merely generic manner.

The Court has provided some examples that may not be sufficient to show an improvement in computer functionality [MPEP 2106.05(a)(I)], such as accelerating a process of analyzing audit log data when the increased speed comes solely from the capabilities of a general-purpose computer, FairWarning IP, LLC v. Iatric Sys., 839 F.3d 1089, 1095, 120 USPQ2d 1293, 1296 (Fed. Cir. 2016); or providing historical usage information to users while they are inputting data, in order to improve the quality and organization of information added to a database, because "an improvement to the information stored by a database is not equivalent to an improvement in the database’s functionality," BSG Tech LLC v. Buyseasons, Inc., 899 F.3d 1281, 1287-88, 127 USPQ2d 1688, 1693-94 (Fed. Cir. 2018).

Even though the Specification describes the technical effect of reducing the resolution time of a case in paragraphs 0001, 0009 and 0012, a human with a pen and paper can mentally perform the steps of using historical cases, calculating the score, or utilizing the case complexity.
A human can mentally analyze the case complexity based on the set of fields corresponding to the case, including a subject field, an issue text field and/or a severity field, and apply a set of rules to the fields to identify whether there is a match. Therefore, amended claim 1 is not directed to an improvement in computer functionality, as the above limitation alone cannot provide an improvement to computer functionality. Amended claim 1 is not patentable for at least the reasons above, as are the corresponding independent claims 12 and 16. Dependent claims 2-11, 13-15 and 17-20, which depend directly or indirectly from claims 1, 12 and 16, are not patentable for the same reasons.

Applicant’s amendments filed on 03/11/2025 regarding claims 6, 15 and 20 under 35 USC 112(b) have been considered but are not persuasive. Applicant argued that “12 regularization (lower case "L", not the numeral "1") refers to a known machine learning technique that reduces overfitting in models. This regularization is not related to any claim/claim dependency. Accordingly, Applicant requests that the rejection be withdrawn.” In the Specification, however, the term appears as “12 regularization” with the number “1,” not the lowercase letter “l” [par. 0015, 0054, 0059, 0066]. Examiner recommends that Applicant review the term “l2 regularization” in both the Specification and the claims. If possible, Applicant may want to write it as “L2 regularization” to make it less confusing.

Applicant’s arguments filed on 03/11/2025 regarding claims 1-20 under 35 USC 103 have been fully considered but are moot in view of the new ground of rejection (see rejection above).

Conclusion

Applicant’s amendment necessitated the new grounds of rejection presented in this Office Action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Quoc Phung, whose telephone number is (703) 756-1330. The examiner can normally be reached Monday through Friday from 9am to 5pm PT.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jennifer Welch, can be reached at 571-272-7212. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Q.L.P./
Examiner, Art Unit 2143

/JENNIFER N WELCH/
Supervisory Patent Examiner, Art Unit 2143
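For technical context on the §101 dispute above: the claimed scoring limitation, a term's frequency combined with the log of the ratio of total cases to cases containing the term, tracks the standard TF-IDF weighting used to vectorize text fields. A minimal sketch of that computation (function and variable names are illustrative, not drawn from the application):

```python
import math

def tf_idf(term_count: int, field_length: int,
           total_cases: int, cases_with_term: int) -> float:
    """TF-IDF score as the claim limitation appears to describe it:
    term frequency weighted by log(total cases / cases containing the term)."""
    tf = term_count / field_length              # frequency of the term in the field
    idf = math.log(total_cases / cases_with_term)  # rarer terms weigh more
    return tf * idf

# e.g. a term appearing 3 times in a 100-token issue-text field,
# found in 10 of 1,000 historical cases:
score = tf_idf(3, 100, 1000, 10)
```

Under this reading, terms that are rare across the historical case corpus contribute higher scores, which is what lets a complexity model weight distinctive issue-text terms more heavily than common ones.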
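On the disputed "l2/12 regularization" term in claims 6, 15 and 20: in machine-learning usage, "an SVM with linear kernel and L2 regularization" names a standard model, a linear classifier trained on the hinge loss plus a squared-weight (ℓ2) penalty, not a claim dependency. A self-contained sketch of that configuration using subgradient descent (pure Python; illustrative only, not the application's disclosed implementation):

```python
def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=200):
    """Minimal linear-kernel SVM via subgradient descent on the L2-regularized
    hinge loss:  lam*||w||^2 + mean(max(0, 1 - y*(w.x + b))).
    Labels y must be +1/-1. Illustrative sketch only."""
    n_features = len(X[0])
    w = [0.0] * n_features
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            margin = yi * (sum(wj * xj for wj, xj in zip(w, xi)) + b)
            if margin < 1:   # inside the margin: hinge subgradient + L2 shrinkage
                w = [wj - lr * (2 * lam * wj - yi * xj)
                     for wj, xj in zip(w, xi)]
                b += lr * yi
            else:            # correctly classified: only the L2 penalty shrinks w
                w = [wj - lr * 2 * lam * wj for wj in w]
    return w, b

def predict(w, b, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0 else -1

# Toy separable data: "simple" (-1) vs "complex" (+1) cases on two features.
X = [[0.0, 0.0], [0.2, 0.1], [1.0, 1.0], [0.9, 1.1]]
y = [-1, -1, 1, 1]
w, b = train_linear_svm(X, y)
```

In practice this would be fit with a library such as scikit-learn's LinearSVC (penalty='l2'); the point is only that "L2 regularization" refers to the ||w||² penalty term, supporting the Applicant's position that the term is a known technique rather than an indefinite claim reference.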

Prosecution Timeline

Oct 28, 2021
Application Filed
Nov 06, 2024
Non-Final Rejection — §101, §103, §112
Jan 31, 2025
Interview Requested
Mar 11, 2025
Response Filed
Jun 12, 2025
Final Rejection — §101, §103, §112
Sep 18, 2025
Request for Continued Examination
Sep 24, 2025
Response after Non-Final Action
Dec 17, 2025
Non-Final Rejection — §101, §103, §112
Apr 01, 2026
Examiner Interview Summary
Apr 01, 2026
Applicant Interview (Telephonic)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12554998
DATA ANALYTICS FOR MORE-INFORMED REPAIR OF A MECHANICAL OR ELECTROMECHANICAL SYSTEM
2y 5m to grant Granted Feb 17, 2026
Patent 12415528
COMPLEX NETWORK COGNITION-BASED FEDERATED REINFORCEMENT LEARNING END-TO-END AUTONOMOUS DRIVING CONTROL SYSTEM, METHOD, AND VEHICULAR DEVICE
2y 5m to grant Granted Sep 16, 2025
Patent 12353983
AN INFERENCE DEVICE AND METHOD FOR REDUCING THE MEMORY USAGE IN A WEIGHT MATRIX
2y 5m to grant Granted Jul 08, 2025
Study what changed to get past this examiner. Based on 3 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
32%
Grant Probability
99%
With Interview (+100.0%)
3y 3m
Median Time to Grant
High
PTA Risk
Based on 19 resolved cases by this examiner. Grant probability derived from career allow rate.
