Prosecution Insights
Last updated: April 19, 2026
Application No. 17/722,075

REGULATORY OBLIGATION IDENTIFIER

Office Action: Non-Final, §101
Filed: Apr 15, 2022
Examiner: BAHL, SANGEETA
Art Unit: 3626
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: PricewaterhouseCoopers LLP
OA Round: 3 (Non-Final)
Grant Probability: 21% (At Risk); 40% with an examiner interview
Estimated OA Rounds: 3-4
Estimated Time to Grant: 4y 8m

Examiner Intelligence

Career Allow Rate: 21% (93 granted / 452 resolved; -31.4% vs Tech Center average)
Interview Lift: +19.3% higher allowance rate among resolved cases with an examiner interview
Average Prosecution: 4y 8m
Currently Pending: 40 applications
Total Applications: 492 (across all art units)

Statute-Specific Performance

§101: 37.6% (-2.4% vs TC avg)
§103: 40.4% (+0.4% vs TC avg)
§102: 5.4% (-34.6% vs TC avg)
§112: 11.8% (-28.2% vs TC avg)

Tech Center averages are estimates; figures are based on career data from 452 resolved cases.

Office Action

§101
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

This communication is a Non-Final Office Action in response to communications received on 1/22/26. Claims 5 and 19 have been previously cancelled. Claims 10-11 have been cancelled. Claims 1 and 20 have been amended. Therefore, claims 1-4, 6-9, 12-18, and 20 are now pending and are addressed below.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 1/22/26 has been entered.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: "Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title."

Claims 1-4, 6-9, 12-18, and 20 are rejected under 35 U.S.C. § 101 because the claimed invention is directed to a judicial exception (an abstract idea) without significantly more.

Step 1: Identifying Statutory Categories

In the instant case, claims 1-4, 6-9, and 12-18 are directed to a method and claim 20 is directed to a non-transitory medium. Thus, the claims fall within one of the four statutory categories. Nevertheless, the claims fall within the judicial exception of an abstract idea.

Step 2A, Prong 1: Identifying a Judicial Exception

Under Step 2A, Prong 1, claims 1-4, 6-9, 12-18, and 20 are rejected under 35 U.S.C. 101 because the claimed invention recites an abstract idea without significantly more.
Independent claims 1 and 20 recite determining one or more predictions for meeting one or more regulatory requirements, including: applying the regulatory text against the information about the organization; and generating the one or more predictions, wherein the one or more predictions include one or more categories and a classification, wherein the one or more categories include regulator action, exception/exemption, definition, background, example, regulatory requirement, conditionally permitted, calculations, and prohibition, and wherein the classification is an indicator of whether the organization is obligated to comply with the one or more regulatory requirements; comparing one or more categories having a highest probability in the one or more predictions with one or more correct categories in a correct prediction in a test dataset; in response to the comparison indicating that the one or more categories having the highest probability do not match the one or more correct categories in the correct prediction, determining a modification to the training dataset; implementing the modification to the training dataset, including identifying additional data corresponding to the one or more categories having the highest probability that did not match the one or more correct categories, and adding the identified additional data to the training dataset; and, in accordance with determining that one or more metrics indicating a performance of the model exceed a performance threshold or that a validation loss does not decrease for a predefined number of training epochs, terminating the re-training. These limitations, as drafted, recite a process that, under its broadest reasonable interpretation, covers methods of organizing human activity (including commercial interactions such as business relations, and managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions)), but for the
recitation of generic computer components. That is, other than reciting the structural elements (such as training/retraining a machine-learning model, a CNN, using a softmax layer of the CNN, and a non-transitory medium), the claims are directed to determining one or more predictions for meeting one or more regulatory requirements. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation as a method of organizing human activity but for the recitation of generic computer components, the claim recites an abstract idea.

Step 2A, Prong 2: This judicial exception is not integrated into a practical application because the claims merely describe how to generally "apply" the concept of receiving regulatory data, analyzing it, and providing regulatory requirements. In particular, the claims recite only the additional elements of using a machine-learning model and a non-transitory medium. These additional elements are recited at a high level of generality such that they amount to no more than mere instructions to apply the exception using generic computer components. Simply implementing the abstract idea on generic components is not a practical application of the abstract idea. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.

a) The limitations of training/retraining a machine-learning model, a CNN, using a softmax layer of the CNN, and a non-transitory medium merely add the words "apply it" (or an equivalent) to the judicial exception, amount to mere instructions to implement an abstract idea on a computer, or merely use a computer as a tool to perform an abstract idea, as discussed in MPEP 2106.05(f).
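Read as an algorithm, the comparison-and-modification limitations of the independent claims amount to a simple error-driven data augmentation loop. A minimal sketch, assuming hypothetical function names, category labels, and toy data (nothing below is taken from the application itself):

```python
# Illustrative sketch of the recited training-set modification; every
# name, text snippet, and category below is hypothetical.

def top_category(prediction):
    """Return the category carrying the highest predicted probability."""
    return max(prediction, key=prediction.get)

def update_training_set(training_set, prediction, correct_category, pool):
    """On a mismatch between the top predicted category and the correct
    one, add pool examples of the wrongly predicted category."""
    predicted = top_category(prediction)
    if predicted != correct_category:
        training_set = training_set + [
            ex for ex in pool if ex["category"] == predicted
        ]
    return training_set

training = [{"text": "permits shall be filed", "category": "regulatory requirement"}]
pool = [
    {"text": "a regulator may audit records", "category": "regulator action"},
    {"text": "small firms are exempt", "category": "exception/exemption"},
]
pred = {"regulator action": 0.6, "regulatory requirement": 0.4}
training = update_training_set(training, pred, "regulatory requirement", pool)
print(len(training))  # → 2 (one "regulator action" example was added)
```

A real system would then retrain the model on the updated set and repeat until the recited termination condition is met.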
Further, the limitation of "train/retrain the machine learning system and in accordance with determining that on one or more metrics indicating a performance of the machine-learning model exceeds a performance threshold or that a validation loss does not decrease for a predefined number of training epochs, terminating the re-training of the machine-learning model" is simply the application of a computer model, itself an abstract idea. Furthermore, such training and applying of a model is no more than putting data into a black-box machine learning operation, devoid of technological implementation and application details. Each step requires a generic computer to perform generic computer functions. The requirements that the machine learning model be "iteratively trained" or dynamically adjusted in the Machine Learning Training patents do not represent a technological improvement (Recentive Analytics, Inc. v. Fox Corp. (Fed. Cir. 2025)). In addition, limitations reciting data gathering, such as "receiving regulatory text of one or more regulatory documents; receiving information about an organization," are insignificant pre-solution activity that merely gathers data and, therefore, do not integrate the exception into a practical application for that additional reason. See In re Bilski, 545 F.3d 943, 963 (Fed. Cir. 2008) (en banc), aff'd on other grounds sub nom. Bilski v. Kappos, 561 U.S. 593 (2010) (characterizing data gathering steps as insignificant extra-solution activity); see also CyberSource, 654 F.3d at 1371-72 (noting that even if some physical steps are required to obtain information from a database (e.g., entering a query via a keyboard, clicking a mouse), such data-gathering steps cannot alone confer patentability); OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363 (Fed. Cir. 2015) (presenting offers and gathering statistics amounted to mere data gathering). Accord Guidance, 84 Fed. Reg. at 55 (citing MPEP § 2106.05(g)). The claims are directed to an abstract idea.
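The quoted termination limitation (stop when a metric exceeds a threshold, or when validation loss has not decreased for a predefined number of epochs) describes conventional early stopping. A minimal sketch with assumed names; `should_stop` and `patience` are illustrative, not claim terms:

```python
# Minimal early-stopping sketch of the recited termination condition:
# stop when a performance metric exceeds a threshold OR the validation
# loss has not decreased for `patience` consecutive epochs.

def should_stop(metric, threshold, val_losses, patience):
    """Return True when re-training should terminate."""
    if metric > threshold:
        return True  # performance threshold exceeded
    if len(val_losses) > patience:
        recent = val_losses[-patience:]
        best_before = min(val_losses[:-patience])
        # no epoch in the last `patience` improved on the earlier best
        return min(recent) >= best_before
    return False

assert should_stop(0.95, 0.90, [], patience=3)                         # metric case
assert should_stop(0.50, 0.90, [1.0, 0.9, 0.9, 0.9, 0.9], patience=3)  # plateau case
assert not should_stop(0.50, 0.90, [1.0, 0.9, 0.8], patience=3)        # still improving
```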
When considered in combination, the claims do not amount to improvements to the functioning of a computer, or to any other technology or technical field, as discussed in MPEP 2106.05(a); applying the judicial exception with, or by use of, a particular machine, as discussed in MPEP 2106.05(b); effecting a transformation or reduction of a particular article to a different state or thing, as discussed in MPEP 2106.05(c); or applying or using the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception, as discussed in MPEP 2106.05(e). Accordingly, the additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Therefore, the claims are directed to an abstract idea.

Step 2B: Considering Additional Elements

The claimed invention is directed to an abstract idea without significantly more. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the claims describe how to generally "apply" the concept of determining regulatory requirements. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. Even when viewed as a whole, nothing in the claims adds significantly more (i.e., an inventive concept) to the abstract idea. The claims are not patent eligible.
The dependent claims, when analyzed as a whole, are held to be patent ineligible under 35 U.S.C. 101 because the additional recited limitations fail to establish that the claims are not directed to an abstract idea. The dependent claims are not significantly more because they are part of the identified judicial exception. See MPEP 2106.05(g). The claims are not patent eligible. With respect to training/retraining a machine-learning model, a CNN, using a softmax layer of the CNN, and a non-transitory medium, these limitations are described in Applicant's own specification as generic and conventional elements. See Applicant's specification, Paragraph [0055] ("computer 802 includes a processor 904 (e.g., a central processing unit (CPU), a graphics processing unit (GPU), or both), a main memory 906 (e.g., read-only memory (ROM), flash memory, dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM) or Rambus DRAM (RDRAM), etc.), and a static memory 908 (e.g., flash memory, static random access memory (SRAM), etc.), which can communicate with each other via a bus") and Paragraph [0012] ("machine-learning model"). These are basic computer elements applied merely to carry out data processing such as, as discussed above, receiving, analyzing, transmitting, and displaying data. As discussed in Step 2A, Prong 2, above, the recitations of "receiving steps" amount to receiving data over a network and are well-understood, routine, conventional activity. See MPEP 2106.05(d), subsection II. Furthermore, the use of such generic computers to receive or transmit data over a network has been identified as a well-understood, routine, and conventional activity by the courts. See Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); TLI Communications LLC v. AV Automotive, LLC, 823 F.3d 607, 610, 118 USPQ2d 1744, 1745 (Fed. Cir.
2016) (using a telephone for image transmission); OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1362-63, 115 USPQ2d 1090, 1092-93 (Fed. Cir. 2015) (presenting offers and gathering statistics); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network); but see DDR Holdings, LLC v. Hotels.com, L.P., 773 F.3d 1245, 1258, 113 USPQ2d 1097, 1106 (Fed. Cir. 2014) ("Unlike the claims in Ultramercial, the claims at issue here specify how interactions with the Internet are manipulated to yield a desired result-a result that overrides the routine and conventional sequence of events ordinarily triggered by the click of a hyperlink." (emphasis added)). Further, the limitation of "in accordance with determining that on one or more metrics indicating a performance of the machine-learning model exceeds a performance threshold or that a validation loss does not decrease for a predefined number of training epochs, terminating the re-training of the machine-learning model" is similar to Nair (US 11650968), which discloses training neural networks (NNs) and determining when to stop training so as not to waste computing or other resources when improvement is no longer likely (Abstract). See also MPEP 2106.05(d). Lastly, the additional elements provide only a result-oriented solution that lacks details as to how the computer performs the claimed abstract idea. Therefore, the additional elements amount to mere instructions to apply the exception. See MPEP 2106.05(f). Furthermore, these steps/components are not explicitly recited and therefore must be construed at the highest level of generality; they are well-understood, routine, and conventional limitations that amount to mere instructions to implement the abstract idea on a computer.
Therefore, the claimed invention does not demonstrate a technologically rooted solution to a computer-centric problem or recite an improvement to another technology or technical field, an improvement to the functioning of any computer itself, application of the exception with, or by use of, a particular machine, a transformation or reduction of a particular article to a different state or thing, a specific limitation other than what is well-understood, routine, and conventional in the field, unconventional steps that confine the claim to a particular useful application, or meaningful limitations beyond generally linking an abstract idea to a particular technological environment such as computing. Viewing the limitations as an ordered combination does not add anything further than looking at the limitations individually. Taking the additional claimed elements individually and in combination, the computer components at each step of the process perform purely generic computer functions. Viewed as a whole, the claims do not purport to improve the functioning of the computer itself, or to improve any other technology or technical field. Use of an unspecified, generic computer does not transform an abstract idea into a patent-eligible invention. Thus, the claims do not amount to significantly more than the abstract idea itself. Dependent claims 2-4, 6-9, and 12-18 add additional limitations, but these only serve to further limit the abstract idea and hence are nonetheless directed to fundamentally the same abstract idea as representative claim 1. Claims 2-4 recite outputting the one or more predictions, wherein the output comprises a probability for each of the one or more categories; outputting one of the one or more categories having a highest probability; and wherein the classification is a binary indicator. These limitations further limit the abstract idea and recite outputting a result at a high level of generality.
Claims 6-9 and 12-13 recite training the machine-learning model using a training dataset; testing the trained machine-learning model using a test dataset; determining whether the trained machine-learning model meets a target performance; and, when the trained machine-learning model does not meet the target performance, changing the training dataset and repeating the training and testing using the changed training dataset; wherein the training the machine-learning model comprises determining one or more relationships between annotations and data in an annotated dataset; wherein the one or more relationships are determined by associating words in segments of the regulatory text to the annotations; wherein the annotations include a citation identifier, a website link, a regulator, a data source, a name of one of the one or more regulatory documents, a topic, a corresponding category, a corresponding classification, or machine-learning information; wherein the test dataset comprises text from multiple regulations and multiple topics; wherein the changing the training dataset includes adding additional data to the training dataset; wherein the additional data includes data belonging to the same category as data in the training dataset from an incorrect prediction; wherein the changing the training dataset includes modifying existing data of the training dataset; and wherein the modifying the existing data includes modifying data in the training dataset from an incorrect prediction. These limitations further limit the abstract idea. Further, the limitation of "training the machine learning system" is simply the application of a computer model, itself an abstract idea. Furthermore, such training and applying of a model is no more than putting data into a black-box machine learning operation, devoid of technological implementation and application details. Each step requires a generic computer to perform generic computer functions.
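The recited step of "associating words in segments of the regulatory text to the annotations" can be pictured as counting word/annotation co-occurrences. An illustrative sketch; the segments and labels below are invented for the example, not drawn from the application:

```python
from collections import Counter, defaultdict

# Hypothetical annotated dataset: (regulatory text segment, annotation).
annotated = [
    ("operators shall report emissions", "regulatory requirement"),
    ("dumping of waste is prohibited", "prohibition"),
    ("shall not discharge without a permit", "prohibition"),
]

# Associate each word in a segment with that segment's annotation.
word_to_labels = defaultdict(Counter)
for segment, label in annotated:
    for word in segment.split():
        word_to_labels[word][label] += 1

# "shall" co-occurs with both labels; "prohibited" with only one.
print(word_to_labels["shall"]["prohibition"])  # → 1
```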
Claims 14-18 recite wherein the training the machine-learning model comprises: generating pre-trained embeddings; batching a training dataset into a configurable batch set, wherein the training dataset is included in the training dataset; using integer identifiers to look up word embeddings on input text of the training dataset; forming a convolution neural network; and performing iterative optimizations; wherein the generating the pre-trained embeddings comprises: tokenizing the input text; designating the words as vocabulary words; generating a vector using the vocabulary words; and providing the vector to the convolution neural network; wherein the vocabulary words are regulation-based words; wherein the training dataset includes a validation dataset; wherein the training the machine-learning model comprises: tuning parameters using the training dataset; tuning hyperparameters using the validation dataset; and terminating the training of the machine-learning model based on one or more metrics; developing the machine-learning model using different configurations; comparing performances of the different configurations to determine a configuration with a highest performance, wherein the training of the machine-learning model includes using the configuration with the highest performance; and wherein the generating the one or more predictions includes determining one or more probabilities using a softmax layer of a convolution neural network used by the machine-learning model. In this case, the claims employ generic elements, such as "pre-trained embeddings," a "trained model," "tokenizing," a "vector," and a "CNN," to implement the abstract idea without any improvement to the computer system itself. See, e.g., Enfish, LLC v. Microsoft Corp., 822 F.3d 1327, 1335-1336 (Fed. Cir. 2016) (distinguishing between "claims . . .
directed to an improvement to computer functionality versus being directed to an abstract idea" or whether "the focus of the claims is on the specific asserted improvement in computer capabilities . . . or, instead, on a process that qualifies as an 'abstract idea' for which computers are invoked merely as a tool."). As such, the claims are directed to an abstract idea. The additional limitations of a machine learning system and a softmax layer merely add the words "apply it" (or an equivalent) to the judicial exception, amount to mere instructions to implement an abstract idea on a computer, or merely use a computer as a tool to perform an abstract idea, as discussed in MPEP 2106.05(f). Further, the limitation of "training the machine learning system" is simply the application of a computer model, itself an abstract idea. Furthermore, such training and applying of a model is no more than putting data into a black-box machine learning operation, devoid of technological implementation and application details. Each step requires a generic computer to perform generic computer functions. The receiving function is similar to a data gathering function. The dependent claims do not integrate the exception into a practical application. As such, the additional elements, individually or in combination, do not integrate the exception into a practical application; rather, the recitation of any additional element amounts to merely reciting the words "apply it" (or an equivalent) with the judicial exception, mere instructions to implement an abstract idea on a computer, or mere use of a computer as a tool to perform an abstract idea (see MPEP 2106.05(f)).
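The claim 14-18 pipeline discussed above (tokenize, integer-identifier embedding lookup, a convolutional network, softmax probabilities per category) can be illustrated end to end without a real CNN. In this sketch mean-pooling stands in for the convolution and pooling layers, and every vocabulary word, embedding value, and weight is invented for the example:

```python
import math

def tokenize(text):
    """Lowercase whitespace tokenization (stand-in for a real tokenizer)."""
    return text.lower().split()

def softmax(scores):
    """Convert raw per-category scores into probabilities summing to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Toy vocabulary of "regulation-based words", one 2-d embedding per word.
vocab = {"operators": 0, "shall": 1, "report": 2, "emissions": 3}
embeddings = [[0.1, 0.2], [0.4, 0.1], [0.3, 0.3], [0.2, 0.5]]

# Integer identifiers look up word embeddings for the input text.
ids = [vocab[t] for t in tokenize("Operators shall report emissions")]
vectors = [embeddings[i] for i in ids]

# Mean-pooling stands in for the convolution and pooling layers of a CNN.
pooled = [sum(v[d] for v in vectors) / len(vectors) for d in range(2)]

# One weight row per category; the softmax yields one probability each.
categories = ["regulatory requirement", "prohibition", "definition"]
weights = [[2.0, 1.0], [0.5, 0.5], [0.1, 0.2]]
probs = softmax([sum(w[d] * pooled[d] for d in range(2)) for w in weights])
print(categories[probs.index(max(probs))])  # → regulatory requirement
```

In a genuine CNN the pooled vector would come from learned convolution filters rather than a mean, but the softmax classification layer behaves exactly as shown.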
The dependent claims also do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of a computing system is merely being used to apply the abstract idea to a technological environment. These limitations do not include an improvement to another technology or technical field, an improvement to the functioning of the computer itself, or meaningful limitations beyond generally linking the use of the abstract idea to a particular technological environment. See MPEP 2106.05(d). Thus, the claims do not add significantly more to an abstract idea. The claims are ineligible. Therefore, since there are no limitations in the claims that transform the exception into a patent-eligible application such that the claims amount to significantly more than the exception itself, the claims are rejected under 35 U.S.C. 101 as being directed to non-statutory subject matter. See Alice Corp. Pty. Ltd. v. CLS Bank Int'l, 573 U.S. 208 (2014).
Examiner Note: Subject Matter Free of Prior Art

Regarding claims 1 and 20, Ramezani discloses the method/medium for determining one or more predictions for meeting one or more regulatory requirements (Abstract lines 1-4: generating regulatory content requirement descriptions is disclosed and involves receiving requirement data including a plurality of requirements including hierarchical information extracted from regulatory content), the method comprising: Ramezani teaches training a machine-learning model using a training dataset (Fig 4 # 410 machine learning service; Col 3 lines 38-45: in a training exercise, feeding a training set of requirement pairs through the conjunction classifier, each requirement pair in the training set having a label indicating whether the pair is an NC, CSR, or CMR requirement pair, and, based on the classification output by the conjunction classifier neural network for requirement pairs in the training set, optimizing the plurality of weights and biases to successively train the neural network for generation of the classification output). Ramezani discloses receiving regulatory text of one or more regulatory documents (Fig 1A # 104 requirement data; Col 6 lines 25-35: The system 100 includes a parent/child relationship identifier 102, which receives a requirement data input defining a plurality of requirements 104 extracted from regulatory content.
Generally, regulatory content documents include significant regulatory text that defines requirements, but may also include redundant or superfluous text such as cover pages, a table of contents, a table of figures, page headers, page footers, page numbering, etc.); an organization (Col 1 lines 33-36: Standards bodies, companies, and other organizations may also generate documents setting out conditions for product and process compliance); and Ramezani discloses using the machine-learning model (Fig 4 # 410 machine learning services; Col 10 lines 45-50: training system such as a machine learning computing platform or cloud-based computing system) to: apply the regulatory text against the information (Col 1 lines 27-30: Governments implement regulations, permits, plans, court ordered decrees, and bylaws to regulate commercial, industrial, and other activities considered to be in the public's interest. Standards bodies, companies, and other organizations may also generate documents setting out conditions for product and process compliance. Col 9 lines 59-67: the pre-trained language model 302 may be fine-tuned on a regulatory content training corpus to specifically configure the language model 302 to act as a regulatory content language model.
The term “corpus” is generally used to refer to a collection of written texts on a particular subject and in this context more specifically refers to a collection of regulatory content including regulations, permits, plans, court ordered decrees, bylaws, standards, and other such documents); and Ramezani discloses generating the one or more predictions using a softmax layer of the plurality of layers of the CNN (Col 10 lines 20-22: a classification layer, such as a softmax layer, that generates the classification output 108 as a set of probabilities), wherein the one or more predictions include a probability of the regulatory text falling into one or more categories and a classification (Fig 6 # 124 (category requirements such as equipment Z; equipment Y), 602 (classification); Col 7 lines 36-45: The conjunction classifier 106 may be implemented using a neural network that is trained to generate a classification output 108. In this embodiment, the classification output 108 is indicative of the requirement pair being not a conjunction (NC), a single requirement conjunction (CSR), or a multiple requirement conjunction (CMR). In one embodiment the conjunction classifier 106 may generate a classification output having three probability classes corresponding to the classifications NC, CSR, and CMR. Col 11 lines 45-55: When a satisfactory performance of the conjunction classifier 106 has been reached during training, the determined weights and biases 314 may be written to the location 252 of the data storage memory 206 of the inference processor circuit 200. The conjunction classifier 106 may then be configured and implemented on the inference processor circuit 200 for generating conjunction classifications NC, CSR, and CMR for unlabeled requirement pair inputs 304 associated with regulatory content being processed. Col 12 lines 34-40: the parent requirement citation d., the conjunction classifier 106 would assign the following two classifications for the pairs (A.2.d., A.2.d.i.)
and (A.2.d, A.2.d.ii.): (A.2.d, A.2.d.i.): (For Equipment Y greater than 500 hp, Record fuel consumption daily, or) → CSR; (A.2.d, A.2.d.ii.): (For Equipment Y greater than 500 hp, Install a recording fuel meter) → CMR)), wherein the one or more categories include regulator action, exception/exemption, definition, background, example, regulatory requirement, conditionally permitted, calculations, and prohibition (Col 8 lines 1-20: The output 150 further includes classification column 156 and a requirement description column 158. The requirement description column 158 includes complete descriptions of requirements extracted from the requirement data input 104. The requirement description generator 110 outputs single, unique requirements in the requirement description column 158 by including text from sections and subsections of the regulatory content. Each requirement is generated to convey a complete thought or definition of the requirement, without the reader having to reference other requirements for full understanding. In this embodiment, each requirement description also has a corresponding classification tag “REQ” in the classification column 156.). Ramezani discloses wherein the classification is an indicator of whether the organization is obligated to comply with the one or more regulatory requirements (Fig 1A # 108 classification indicator; Col 7 lines 32-40: The system 100 also includes a conjunction classifier 106 configured to receive each of the requirement pairs from the parent/child relationship identifier 102. The conjunction classifier 106 may be implemented using a neural network that is trained to generate a classification output 108. In this embodiment, the classification output 108 is indicative of the requirement pair being not a conjunction (NC), a single requirement conjunction (CSR), or a multiple requirement conjunction (CMR).
Col 10 lines 10-18: generate the classification output 108 based on the vector W representing the conjunction between the requirement text of the parent requirement and the child requirement of the requirement pair). Ramezani does not specifically teach receiving information about an organization; using a machine-learning model to apply the regulatory text against the information about the organization; comparing one or more categories having a highest probability in the one or more predictions with one or more correct categories in a correct prediction in a test dataset; in response to the comparison indicating that the one or more categories having the highest probability do not match the one or more correct categories in the correct prediction, determining a modification to the training dataset; implementing the modification to the training dataset; and re-training the machine-learning model using the training dataset with the modification. Ramezani teaches changing the training dataset and repeating the training and testing using the changed training dataset (Col 9 lines 59-67: the pre-trained language model 302 may be fine-tuned on a regulatory content training corpus to specifically configure the language model 302 to act as a regulatory content language model. The term “corpus” is generally used to refer to a collection of written texts on a particular subject and in this context more specifically to a collection of regulatory content including regulations, permits, plans, court ordered decrees, bylaws, standards, and other such documents. Col 10 lines 3-8: The language model may be further fine-tuned to improve performance on specific content, such as regulatory content.
This involves performing additional training of the language model using a reduced learning rate to make small changes to the weights and biases based on a set of regulatory content data. Col 11 lines 24-30: During the training exercise, the operator may make changes to the training parameters and the configuration). Bayyapu teaches receiving information about an organization (Fig 4 # 402 wealth management organization/industry; Col 4 lines 58-67: The server system enables users to search and discover all regulations related to an industry or an organization in one place. The server system allows the user to distill the organization or the industry down to its most granular parts, and then link each part to the most granular clauses of a regulation. The linking of clauses of applicable regulations to functional constituents (for example, businesses, products, etc.) of an organization or an industry is performed online and saved on the organization's cloud); using a machine-learning model (Col 9 lines 60-64: machine learning) to: apply the regulatory text against the information about the organization (Col 6 lines 18-30: The server system 150 is configured to facilitate implementation of regulations by organizations. The term ‘facilitate implementation of regulations’ as used herein implies enabling an end-user entity to (1) discover all applicable regulations; (2) understand the applicable regulations either through context-based enrichment or through collaboration with other users, and (3) connect the granular clauses of the regulations to the concerned aspects of the businesses, so as to enable the end-user entity to fully estimate the impact of all applicable regulations and thereafter take appropriate action to implement the regulations to achieve regulatory compliance.
The term ‘organization’ as used herein may relate to any private enterprise, public enterprise, private-public partnership (PPP) enterprise, non-governmental organization, non-profit organization, and the like. For example, the organization may be a banking enterprise, an educational institution, a financial trading enterprise, an aviation company, a consumer goods enterprise or any such public or private sector enterprise., Fig 9 # 906-902 industry specific regulations, Col 13 lines 47-52 experts may use generic industry level models to build an organization specific construct, such as the organization level construct 550. This may be achieved by mapping the relationships amongst the organizational structure, processes and assets. The organization level construct 550 includes functional constituents represented as nodes 552, 554, 556, 558, 560, 562, 564 and 566 corresponding to ‘Legal Entities’, ‘Processes’, ‘IT Systems’, ‘Money Movement’, ‘Compliance’, ‘Teams and Roles’, ‘Functional Areas’ and ‘Products’, respectively.) wherein the classification is an indicator of whether the organization is obligated to comply with the one or more regulatory requirements (Col 10 lines 3-16 parsing and identifying links related to regulatory repositories across states and countries (such as, but not limited to, U.S. based repositories related to the Federal Register (FR), Code of Federal Regulations (CFR), United States Code (USC), and the like), and also across unions (such as for example, the European Union); and (4) parsing and identifying action verbs, calculations, compliance words, actors, timeline and triggers for those timelines, reporting requirements, Col 15 lines 18-25 The UI 700 further depicts a section 720 including requirements within the regulations, or in other words, clauses within regulations that are applicable to the products and business units of an organization (for example, Organization XYZ). 
Accordingly, the section 720 shows applicable requirements within regulation B/ECOA in form of clauses 722, 724, 726, 728 and 730.)

Bhinde (US 2021/0295204) teaches one or more predictions with one or more correct categories in a correct prediction in a test dataset ([0024] The prediction accuracy model 106 can include prediction training data 118, a prediction classifier 120, and prediction classifications 122. The prediction training data 118 can be a combination of the training data 112 and the predictions 116. The prediction classifier 120 can be a binary classifier that analyzes the predictions 116 of the client ML model 104 and determines whether the predictions 116 are accurate. The prediction training data 118 can include scored versions of the predictions 116. The scores of the prediction training data 118 can include probabilities for each of the potential classes (categories). The probabilities can indicate how likely the classifier 114 has determined it is that the associated class is the correct label. Thus, the probability of the selected label may be greater than the probability of the label(s) not selected., [0028]); in response to the correct/incorrect prediction ([0052] the prediction classifier 120 can indicate that a particular prediction 116 is correct or incorrect with a specific amount of confidence. [0053] Confidence can be represented as a percentage ranging from zero to 100%, wherein zero represents no confidence, and 100% represents 100% confidence. Accordingly, for scenarios where the model output manager 108 cannot find a probability set from the nearest training data points, the model output manager 108 can select the probability set of the nearest data point where a proxy model is least confident that the test prediction is wrong); re-training the machine-learning model ([0028] the model output manager 108 can combine the scoring with the training data 112 to create the prediction training data 118.
The prediction training data 118 can include all the features used to train the client ML model 104, the label selection, probability scoring, and the difference in the comparatively top two probabilities of all the labels. [0016] classifies predictions made by a client machine-learning model as correct or incorrect with a degree of confidence. Additionally, some embodiments can automatically correct the client ML model's predictions, thus improving the accuracy of the output provided for clients of the machine-learning model. [0022] training and retraining to help improve accuracy of predictions. Creating new training data by inspecting and assigning correct labels. [0052] the prediction classifier 120 can indicate that a particular prediction 116 is correct or incorrect with a specific amount of confidence.)

Gasperecz et al. (US 2020/0279271 A1) discloses methods for extracting requirements from regulatory content data, the method including: receiving the regulatory content data; and classifying an associated type for each citation in the regulatory content data using a trained classifier machine learning model, the classifier machine learning model trained using regulatory content data including expert labelled annotations.

McCourt (US 2020/0019884 A1) discloses that training data entries that are determined to be incorrect may be removed from the set of training data, relabeled, or compared to other data to determine an accuracy score or performance score of such a classifier.

Iyer (US 2022/0301031) discloses that the machine learning engine 204 may provide curated text or categorical inputs, provide hints and patterns associated with standardized code prediction, provide model negators, perform training, testing, approve, and publish model versions for consumption, perform scoring model parameter tuning, or create scoring accuracy thresholds for generating a model 226.
AU2018255335B2 discloses an improved artificially intelligent system for employing modularized and taxonomy-based classifications to generate compliance-related content. In one embodiment, the system comprises monitoring circuitry that receives regulatory compliance data from one or more regulatory institutions, as well as a taxonomy engine that processes the regulatory compliance data to generate taxonomy-based classifications of the regulatory compliance data comprising a plurality of modules and compliance requirements within each module.

Nair (US 11650968) discloses training neural networks (NNs) and determining when to stop training so as not to waste computing or other resources when improvement is no longer likely (abstract). The model may be updated using data gained from training the particular or target NN. The comparison of expected and actual loss may be modified by a threshold or range such that if the difference is less than the threshold or within the range it can be deemed that stopping should occur. After a training period (e.g., an epoch) for a NN, a model trained using training data from other NNs may return a probability of improvement in the loss of the NN or a probability that the likely best loss of the NN is lower than the best loss of the other NNs for which hyperparameters have been chosen. Training may be stopped if the probability is less than a threshold, or a wait value is greater than a wait threshold. (Col 3 lines 22-35)

Lahann, “Utilizing machine learning to reveal VAT compliance violations in accounting data” (2019), discusses verification of VAT regulations within an ERP system.
However, the prior art of record, taken alone or in combination, neither anticipates nor renders obvious the combination of claim limitations, in particular at least “generate the one or more predictions using a softmax layer of the plurality of layers of the CNN, wherein the one or more predictions include a probability of the regulatory text falling into one or more categories, and a classification; comparing one or more categories having a highest probability in the one or more predictions with one or more correct categories in a correct prediction in a test dataset; in response to the comparison indicating that the one or more categories having the highest probability do not match the one or more correct categories in the correct prediction, determining a modification to the training dataset; implementing the modification to the training dataset; and re-training the machine-learning model using the training dataset with the modification; in accordance with determining that one or more metrics indicating a performance of the machine-learning model exceeds a performance threshold or that a validation loss does not decrease for a predefined number of training epochs, terminating the re-training of the machine-learning model”. Prior art is not applied to the dependent claims due to their dependence from claims 1 and 20.

Response to Arguments

Applicant's arguments filed 1/22/26 have been fully considered but they are not persuasive. Regarding the §101 rejection, applicant states that the claims are not directed to an abstract idea, integrate the exception into a practical application, and provide significantly more than the abstract idea. The new limitations have been considered in the rejection above. All arguments have been considered and they are not persuasive. The additional elements are recited at a high level of generality such that they amount to no more than mere instructions to apply the exception using a generic computer component.
Simply implementing the abstract idea on generic components is not a practical application of the abstract idea. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.

a) The limitations of training/re-training a machine-learning model (a CNN), using a softmax layer of the CNN, a non-transitory medium, and, in accordance with determining that one or more metrics indicating a performance of the machine-learning model exceeds a performance threshold or that a validation loss does not decrease for a predefined number of training epochs, terminating the re-training of the machine-learning model, merely add the words “apply it” (or an equivalent) to the judicial exception, or are mere instructions to implement an abstract idea on a computer, or merely use a computer as a tool to perform an abstract idea, as discussed in MPEP 2106.05(f). Further, the limitation of “train/retrain the machine learning system” is simply the application of a computer model, itself an abstract idea. Furthermore, such training and applying of a model is no more than putting data into a black-box machine learning operation, devoid of technological implementation and application details. Each step requires a generic computer to perform generic computer functions. The requirements that the machine learning model be “iteratively trained” or dynamically adjusted in the Machine Learning Training patents do not represent a technological improvement. (RECENTIVE ANALYTICS, INC. v. FOX CORP.)
Further, the limitation of, in accordance with determining that one or more metrics indicating a performance of the machine-learning model exceeds a performance threshold or that a validation loss does not decrease for a predefined number of training epochs, terminating the re-training of the machine-learning model, is similar to Nair (US 11650968), which discloses training neural networks (NNs) and determining when to stop training so as not to waste computing or other resources when improvement is no longer likely (abstract). Also see MPEP 2106.05(d). Lastly, the additional elements provide only a result-oriented solution that lacks details as to how the computer performs the claimed abstract idea. Therefore, the additional elements amount to mere instructions to apply the exception. See MPEP 2106.05(f).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Gasperecz et al. (US 2020/0279271 A1) discloses methods for extracting requirements from regulatory content data, the method including: receiving the regulatory content data; and classifying an associated type for each citation in the regulatory content data using a trained classifier machine learning model, the classifier machine learning model trained using regulatory content data including expert labelled annotations. McCourt (US 2020/0019884 A1) discloses that training data entries that are determined to be incorrect may be removed from the set of training data, relabeled, or compared to other data to determine an accuracy score or performance score of such a classifier. Iyer (US 2022/0301031) discloses that the machine learning engine 204 may provide curated text or categorical inputs, provide hints and patterns associated with standardized code prediction, provide model negators, perform training, testing, approve, and publish model versions for consumption, perform scoring model parameter tuning, or create scoring accuracy thresholds for generating a model 226.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SANGEETA BAHL whose telephone number is (571)270-7779. The examiner can normally be reached 7:30 - 4PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jessica Lemieux can be reached on 571-270-3445. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /SANGEETA BAHL/Primary Examiner, Art Unit 3626
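For context, the allowability-indicating limitation the examiner quotes describes a conventional evaluate/modify/retrain loop with two termination conditions: a performance metric exceeding a threshold, or validation loss failing to decrease for a set number of epochs (patience-based early stopping). A minimal sketch of that decision logic, with every function name, parameter, and threshold purely hypothetical and not drawn from the application itself, might look like:

```python
# Hypothetical sketch of the logic in the quoted claim limitation.
# All names and values are illustrative, not from the application.

def flag_mismatches(predictions, correct_labels):
    """Compare the highest-probability category of each softmax prediction
    with the correct category from the test dataset; return the mismatches
    that would drive a modification to the training dataset."""
    mismatches = []
    for probs, correct in zip(predictions, correct_labels):
        predicted = max(range(len(probs)), key=probs.__getitem__)  # argmax
        if predicted != correct:
            mismatches.append((probs, correct))
    return mismatches

def should_stop(val_losses, metric, threshold=0.95, patience=3):
    """Terminate re-training when (1) a performance metric exceeds a
    threshold, or (2) validation loss has not decreased for `patience`
    consecutive epochs. `val_losses` holds per-epoch losses, newest last."""
    if metric > threshold:
        return True
    if len(val_losses) > patience:
        best_before = min(val_losses[:-patience])
        if min(val_losses[-patience:]) >= best_before:  # no improvement
            return True
    return False
```

Under these assumptions, `flag_mismatches([[0.7, 0.3]], [1])` flags the example (argmax category 0 versus correct category 1), and `should_stop([1.0, 0.9, 0.8, 0.8, 0.8, 0.8], metric=0.5)` returns True on the patience condition.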

Prosecution Timeline

Apr 15, 2022
Application Filed
Feb 21, 2025
Non-Final Rejection — §101
May 28, 2025
Interview Requested
Jun 09, 2025
Applicant Interview (Telephonic)
Jun 10, 2025
Examiner Interview Summary
Jul 28, 2025
Response Filed
Oct 18, 2025
Final Rejection — §101
Jan 22, 2026
Request for Continued Examination
Feb 02, 2026
Response after Non-Final Action
Feb 07, 2026
Non-Final Rejection — §101 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591914
REAL-TIME COLLATERAL RECOMMENDATION
2y 5m to grant Granted Mar 31, 2026
Patent 12548099
SYSTEMS AND METHODS FOR PRIORITIZED FIRE SUPPRESSION
2y 5m to grant Granted Feb 10, 2026
Patent 12524739
CREATING AND USING TRIPLET REPRESENTATIONS TO ASSESS SIMILARITY BETWEEN JOB DESCRIPTION DOCUMENTS
2y 5m to grant Granted Jan 13, 2026
Patent 12482304
SYSTEM AND A METHOD FOR AUTHENTICATING INFORMATION DURING A POLICE INQUIRY
2y 5m to grant Granted Nov 25, 2025
Patent 12450617
LEARNING FOR INDIVIDUAL DETECTION IN BRICK AND MORTAR STORE BASED ON SENSOR DATA AND FEEDBACK
2y 5m to grant Granted Oct 21, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

3-4
Expected OA Rounds
21%
Grant Probability
40%
With Interview (+19.3%)
4y 8m
Median Time to Grant
High
PTA Risk
Based on 452 resolved cases by this examiner. Grant probability derived from career allow rate.
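The headline projections appear to follow directly from the examiner's stated career counts. A quick sanity check of that arithmetic, assuming the dashboard simply rounds the raw allow rate and adds the stated +19.3-point interview lift, is:

```python
# Reproducing the dashboard figures from the stated career counts.
# Assumes grant probability = granted / resolved, and that the
# with-interview figure adds the quoted +19.3-point interview lift.
granted, resolved = 93, 452

allow_rate = granted / resolved        # 0.2057... -> displayed as 21%
with_interview = allow_rate + 0.193    # 0.3987... -> displayed as 40%

print(round(allow_rate * 100))       # 21
print(round(with_interview * 100))   # 40
```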
