Prosecution Insights
Last updated: April 19, 2026
Application No. 19/366,395

REGENERATIVE MODEL-CONTINUOUS EVOLUTION SYSTEM

Non-Final OA: §101, §103, Double Patenting
Filed: Oct 22, 2025
Examiner: HINCKLEY, CHASE PAUL
Art Unit: 2124
Tech Center: 2100 — Computer Architecture & Software
Assignee: Trete Inc.
OA Round: 1 (Non-Final)
Grant Probability: 68% (Favorable)
Estimated OA Rounds: 1-2
Estimated Time to Grant: 3y 11m
Grant Probability with Interview: 78%

Examiner Intelligence

Career Allow Rate: 68% (134 granted / 196 resolved), +13.4% vs TC avg (above average)
Interview Lift: +9.3% (moderate) among resolved cases with interview
Typical Timeline: 3y 11m average prosecution; 19 applications currently pending
Career History: 215 total applications across all art units

Statute-Specific Performance

§101: 23.0% (-17.0% vs TC avg)
§103: 44.5% (+4.5% vs TC avg)
§102: 7.5% (-32.5% vs TC avg)
§112: 15.4% (-24.6% vs TC avg)
Note: Tech Center averages are estimates. Based on career data from 196 resolved cases.
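The figures above are simple ratios over the examiner's resolved cases. As a reproducibility aid, a minimal sketch of the arithmetic in Python follows; the raw counts (134 grants out of 196 resolved cases) come from the panel above, while the Tech Center baseline is back-computed from the stated +13.4% delta and is an illustrative assumption, not an official figure.

# Minimal sketch of the examiner metrics shown above (illustrative only).
granted = 134
resolved = 196

career_allow_rate = granted / resolved                 # ~0.684, displayed as "68%"
delta_vs_tc = 0.134                                    # stated gap vs. Tech Center average
implied_tc_average = career_allow_rate - delta_vs_tc   # ~55%, an inferred baseline

# The "+9.3% Interview Lift" is the difference in allowance rate between resolved
# cases with and without an examiner interview; the underlying with/without rates
# are not shown in the panel, so only the stated delta is echoed here.
interview_lift = 0.093

print(f"Career allow rate: {career_allow_rate:.1%}")
print(f"vs TC average:     {delta_vs_tc:+.1%} (implied TC average {implied_tc_average:.1%})")
print(f"Interview lift:    {interview_lift:+.1%}")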

Office Action

§101 §103 §DP
DETAILED ACTION

This non-final Office action is responsive to application 19/366,395 as submitted on October 22, 2025. Claims 1-14 are pending and under examination; claims 1 and 8 are independent. Track One status was granted on 11/19/2025.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

As required by MPEP 609(c), the applicant's submission of the Information Disclosure Statement dated 10/22/2025 is acknowledged by the examiner, and the cited references have been considered in the examination of the claims now pending. As required by MPEP 609(C)(2), a copy of the PTOL-1449, initialed and dated by the examiner, is attached to the instant Office action.

Priority

5. Priority is claimed to a 2-page list of related domestic applications. After review, the relevant filing appears to be Provisional 63/615,136 with a filing date of 12/27/2023. Prior to this date, no support is identified for the presented claims. Therefore, the effective filing date is 12/27/2023, not the earlier 03/24/2023 filing date of Provisional 63/454,622. Should the applicant desire earlier priority, the applicant must specifically point out where support exists and in which specific filing.

Double Patenting – Three Instances

Patents: US 12,406,305 and US 12,307,525; Application: 19/286,270

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA.
A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b). The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 1-2, 4-6, 8-9 and 11-13 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-10 of U.S. Patent No. 12,406,305 in view of Karlsson et al., PCT WO2023/183494A1. This is an obviousness-type, non-provisional double patenting rejection. Although the claims at issue are not identical, they are not patentably distinct from each other because they recite substantively similar limitations with an obvious substitution of asset type. In particular, the instant application is of slightly broader scope and specifies assets as private credit or private debt. Karlsson discloses, at [0109] and Fig. 4, "onboarding loan asset to a blockchain" encrypted with a private key. A person having ordinary skill in the art would have considered it obvious, prior to the effective filing date, to onboard debt/loan assets per Karlsson in combination, as a simple substitution of one known asset type for another to obtain predictable results. Financial assets in a capitalist market are customarily private. See the following comparison table:

Instant Application: 19/366,395 – Issued Patent: 12,406,305

Claim 1.
A computer-implemented method comprising: onboarding asset data defining an asset selected from among: a private credit asset or a private debt asset, comprising: utilizing appropriate models, from among a plurality of different models, based on document type to extract a portion of the asset data from each document in a plurality of documents; and formulating a checklist reflecting a summary of the asset data collectively including data for listing the asset for trading; utilizing a further model annotating the plurality of documents forming a plurality of annotated documents indicating correct examples and incorrect examples; and automatically, as part of a continuous training cycle, and concurrently with onboarding the asset data, training the plurality of different models based on the plurality of annotated documents, wherein the plurality of different models includes at least (1) a regulatory model that identifies modifications to the formulated checklist and asset data locations within the plurality of submitted documents based on changing laws and regulations associated with the asset and (2) an anti-fraud security measure model that identifies manipulative actions or irregularities within the plurality of submitted documents, the training evolving and improving performance of the plurality of different models, including: receiving a selection of a first model from among the plurality of different models; iteratively training the first model utilizing at least a subset of the plurality of annotated documents and at least one previously annotated document until a first user feedback score is above a first predetermined threshold indicative of model improvement for at least one of: recall or precision, including: running the first model producing first responses; receiving the first user feedback score indicating correctness of the produced first responses; and checking the first feedback score relative to the first predetermined threshold; and publishing the first model responsive to the user feedback score being above the first predetermined threshold; receiving a selection of a second model from among the plurality of different models; iteratively training the second model in parallel with the first model utilizing at least another subset of the plurality of annotated documents and at least one other previously annotated document until a second user feedback score is above a second predetermined threshold indicative of model improvement for at least one of: recall or precision, including: running the second model producing second responses; receiving the second user feedback score indicating correctness of the second produced responses; and checking the second user feedback score relative to the second predetermined threshold; and publishing the second model side by side with the first model responsive to the user feedback score being above the second predetermined threshold. Claim 1. 
A computer-implemented method comprising: onboarding asset data defining an asset to be listed for trading at an Alternative Trading System (ATS), comprising: utilizing appropriate models, from among a plurality of different models, based on document type to extract a portion of the asset data from each document in a plurality of documents; and formulating a checklist reflecting a summary of the asset data collectively including data for listing the asset for trading; utilizing a further model annotating the plurality of documents forming a plurality of annotated documents indicating correct examples and incorrect examples; and automatically, as part of a continuous training cycle, and concurrently with onboarding the asset data, training the plurality of different models based on the plurality of annotated documents, wherein the plurality of different models includes at least a regulatory model and an anti-fraud security measure model, the training evolving and improving performance of the plurality of different models, including: receiving a selection of a regulatory model from among the plurality of different models, wherein the regulatory model identifies modifications to the formulated checklist and asset data locations within the plurality of submitted documents based on changing laws and regulations associated with the asset; iteratively training the regulatory model utilizing at least a subset of the plurality of annotated documents and at least one previously annotated document until a first user feedback score is above a first predetermined threshold indicative of model improvement for at least one of: recall or precision, including: running the regulatory model producing first responses; receiving the first user feedback score indicating correctness of the produced first responses; and checking the first feedback score relative to the first predetermined threshold; and publishing the regulatory model responsive to the user feedback score being above the first predetermined threshold; receiving a selection of a first anti-fraud security measure model from among the plurality of different models, wherein the first anti-fraud security measure model identifies manipulative actions or irregularities within the plurality of submitted documents; iteratively training the first anti-fraud security measure model in parallel with the regulatory model utilizing at least another subset of the plurality of annotated documents and at least one other previously annotated document until a second user feedback score is above a second predetermined threshold indicative of model improvement for at least one of: recall or precision, including: running the first anti-fraud security measure model producing second responses; receiving the second user feedback score indicating correctness of the second produced responses; and checking the second user feedback score relative to the second predetermined threshold; and publishing the first anti-fraud security measure model responsive to the user feedback score being above the second predetermined threshold; receiving a selection of a second anti-fraud security measure model from among the plurality of different models, wherein the second anti-fraud security measure model identifies manipulative actions or irregularities within the plurality of submitted documents; iteratively training the second anti-fraud security measure model in parallel with the regulatory model and first anti-fraud security measure model utilizing at least a further subset of the 
plurality of annotated documents and at least one further previously annotated document until a third user feedback score is above a third predetermined threshold indicative of model improvement for at least one of: recall or precision, including: running the second anti-fraud security measure model producing third responses; receiving the third user feedback score indicating correctness of the third produced responses; and checking the third user feedback score relative to the third predetermined threshold; and publishing the second anti-fraud security measure model side-by-side with the first anti-fraud security measure model responsive to the third user feedback score being above the third predetermined threshold. In the above, limitations in bold are differences and limitations underlined are similarities. The underlined similar limitations are relocated within the claim, and the bold difference limitation is an obvious substitution of asset type as noted in header. Claim 2. The method of claim 1, wherein iteratively training the first model comprises iteratively training the anti-fraud security measure model to learn true positives, false positives, true negatives, and false negatives. Claim 2. The method of claim 1, wherein iteratively training the first anti-fraud security measure model comprises iteratively training the anti-fraud security measure model to learn true positives, false positives, true negatives, and false negatives. Claim 4. The method of claim 1, wherein publishing the first model comprises replacing an incumbent anti-fraud security measure model with another anti-fraud security measure model. Claim 3. The method of claim 1, wherein publishing the first anti-fraud security measure model comprises replacing an incumbent anti-fraud security measure model with the first anti- fraud security measure model. Claim 5. The method of claim 1, wherein publishing the first model comprises running a first anti-fraud security measure model in parallel with an incumbent anti-fraud security measure model. Claim 4. The method of claim 1, wherein publishing the first anti-fraud security measure model comprises running the first anti-fraud security measure model in parallel with an incumbent anti-fraud security measure model. Claim 6. The method of claim 1, wherein the plurality of different models includes at least one of: a text model, an image model, and a language model. Claim 5. The method of claim 1, wherein the plurality of different models includes at least one of: a text model, an image model, and a language model. Claim 8. 
A system comprising: a processor; system memory coupled to the processor and storing instructions, which, when executed, cause the processor to: onboard asset data defining an asset selected from among: a private credit asset or a private debt asset, comprising: utilize appropriate models, from among a plurality of different models, based on document type to extract a portion of the asset data from each document in a plurality of documents; and formulate a checklist reflecting a summary of the asset data collectively including data for listing the asset for trading; utilize a further model annotating the plurality of documents forming a plurality of annotated documents indicating correct examples and incorrect examples; and automatically, as part of a continuous training cycle, and concurrently with onboarding the asset data, train the plurality of different models based on the plurality of annotated documents, wherein the plurality of different models includes at least (1) a regulatory model that identifies modifications to the formulated checklist and asset data locations within the plurality of submitted documents based on changing laws and regulations associated with the asset and (2) an anti-fraud security measure model that identifies manipulative actions or irregularities within the plurality of submitted documents, the training evolving and improving performance of the plurality of different models, including: receive a selection of a first model from among the plurality of different models; iteratively train the first model utilizing at least a subset of the plurality of annotated documents and at least one previously annotated document until a first user feedback score is above a first predetermined threshold indicative of model improvement for at least one of: recall or precision, including: run the first model producing first responses; receive the first user feedback score indicating correctness of the produced first responses; and check the first feedback score relative to the first predetermined threshold; and publish the first model responsive to the user feedback score being above the first predetermined threshold; receive a selection of a second model from among the plurality of different models; iteratively train the second model in parallel with the first model utilizing at least another subset of the plurality of annotated documents and at least one other previously annotated document until a second user feedback score is above a second predetermined threshold indicative of model improvement for at least one of: recall or precision, including: running the second model producing second responses; receive the second user feedback score indicating correctness of the second produced responses; and check the second user feedback score relative to the second predetermined threshold; and publish the second model side by side with the first model responsive to the user feedback score being above the second predetermined threshold. Claim 6. 
A system comprising: a processor; system memory coupled to the processor and storing instructions, which, when executed, cause the processor to: onboarding asset data defining an asset to be listed for trading at an Alternative Trading System (ATS), comprising: utilize appropriate models, from among a plurality of different models, based on document type to extract a portion of the asset data from each document in a plurality of documents; and formulate a checklist reflecting a summary of the asset data collectively including data for listing the asset for trading; utilize a further model annotating the plurality of documents forming a plurality of annotated documents indicating correct examples and incorrect examples; and automatically, as part of a continuous training cycle, and concurrently with onboarding the asset data, train the plurality of different models based on the plurality of annotated documents, wherein the plurality of different models includes at least a regulatory model and an anti-fraud security measure model, the training evolving and improving performance of the plurality of different models, including: receive a selection of a regulatory model from among the plurality of different models, wherein the regulatory model identifies modifications to the formulated checklist and asset data locations within the plurality of submitted documents based on changing laws and regulations associated with the asset; iteratively train the regulatory model utilizing at least a subset of the plurality of annotated documents and at least one previously annotated document until a first user feedback score is above a first predetermined threshold indicative of model improvement for at least one of: recall or precision, including: run the regulatory model producing first responses; receive the first user feedback score indicating correctness of the produced first responses; and check the first feedback score relative to the first predetermined threshold; and publish the regulatory model responsive to the user feedback score being above the first predetermined threshold; receive a selection of a first anti-fraud security measure model from among the plurality of different models, wherein the first anti-fraud security measure model identifies manipulative actions or irregularities within the plurality of submitted documents; iteratively train the first anti-fraud security measure model in parallel with the regulatory model utilizing at least another subset of the plurality of annotated documents and at least one other previously annotated document until a second user feedback score is above a second predetermined threshold indicative of model improvement for at least one of: recall or precision, including: run the first anti-fraud security measure model producing second responses; receive the second user feedback score indicating correctness of the second produced responses; and check the second user feedback score relative to the second predetermined threshold; and publish the first anti-fraud security measure model responsive to the user feedback score being above the second predetermined threshold; receive a selection of a second anti-fraud security measure model from among the plurality of different models, wherein the second anti-fraud security measure model identifies manipulative actions or irregularities within the plurality of submitted documents; iteratively train the second anti-fraud security measure model in parallel with the regulatory model and first anti-fraud security 
measure model utilizing at least a further subset of the plurality of annotated documents and at least one further previously annotated document until a third user feedback score is above a third predetermined threshold indicative of model improvement for at least one of: recall or precision, including: run the second anti-fraud security measure model producing third responses; receive the third user feedback score indicating correctness of the third produced responses; and check the third user feedback score relative to the third predetermined threshold; and publish the second anti-fraud security measure model side-by-side with the first anti-fraud security measure model responsive to the third user feedback score being above the third predetermined threshold. In the above, limitations in bold are differences and limitations underlined are similarities. The underlined similar limitations are relocated within the claim, and the bold difference limitation is an obvious substitution of asset types as noted in header. Claim 9. The system of claim 8, wherein instructions, which, when executed, cause the processor to iteratively train the first model comprise instructions, which, when executed, cause the processor to iteratively train the anti-fraud security measure model to learn true positives, false positives, true negatives, and false negatives. Claim 7. The system of claim 6, wherein instructions, which, when executed, cause the processor to iteratively train the first anti-fraud security measure model comprise instructions, which, when executed, cause the processor to iteratively train the first anti-fraud security measure model to learn true positives, false positives, true negatives, and false negatives. Claim 11. The system of claim 8, wherein instructions, which, when executed, cause the processor to publish the first anti-fraud security measure model comprise instructions, which, when executed, cause the processor to replace an incumbent anti-fraud security measure model with the first anti-fraud security measure model. Claim 8. The system of claim 6, wherein instructions, which, when executed, cause the processor to publish the first anti-fraud security measure model comprise instructions, which, when executed, cause the processor to replace an incumbent anti-fraud security measure model with the first anti-fraud security measure model. Claim 12. The system of claim 8, wherein instructions, which, when executed, cause the processor to publish the first anti-fraud security measure model comprise instructions, which, when executed, cause the processor to run the first anti-fraud security measure model in parallel with an incumbent model. Claim 9. The system of claim 6, wherein instructions, which, when executed, cause the processor to publish the first anti-fraud security measure model comprise instructions, which, when executed, cause the processor to run the first anti-fraud security measure model in parallel with an incumbent model. Claim 13. The system of claim 8, wherein the plurality of models includes at least one of: a text model, an image model, and a language model. Claim 10. The system of claim 6, wherein the plurality of models includes at least one of: a text model, an image model, and a language model. Claims 1-2, 4-6, 8-9 and 11-13 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-10 of U.S. Patent No.12,307,525 in view reference Karlsson et al., PCT WO2023/183494A1. 
This is an obviousness-type non-provisional double patenting rejection. Although the claims at issue are not identical, they are not patentably distinct from each other because they recite substantively similar limitation with an obvious substitution of asset type. Particularly, the instant application is of slightly broader scope and specifies assets as private credit or private debt. Karlsson discloses [0109] “onboarding loan asset to a blockchain” encrypted with private key, Fig 4. Person having ordinary skill in the art would have considered it obvious prior to the effective filing date to onboard debt/loan assets per Karlsson in combination as simple substitution of one known asset type for another to obtain predictable results. Financial assets in capitalist market are customarily private. See the following comparison table: Instant Application: 19/366,395 Issued Patent: 12,307,525 Claim 1. A computer-implemented method comprising: onboarding asset data defining an asset selected from among: a private credit asset or a private debt asset, comprising: utilizing appropriate models, from among a plurality of different models, based on document type to extract a portion of the asset data from each document in a plurality of documents; and formulating a checklist reflecting a summary of the asset data collectively including data for listing the asset for trading; utilizing a further model annotating the plurality of documents forming a plurality of annotated documents indicating correct examples and incorrect examples; and automatically, as part of a continuous training cycle, and concurrently with onboarding the asset data, training the plurality of different models based on the plurality of annotated documents, wherein the plurality of different models includes at least (1) a regulatory model that identifies modifications to the formulated checklist and asset data locations within the plurality of submitted documents based on changing laws and regulations associated with the asset and (2) an anti-fraud security measure model that identifies manipulative actions or irregularities within the plurality of submitted documents, the training evolving and improving performance of the plurality of different models, including: receiving a selection of a first model from among the plurality of different models; iteratively training the first model utilizing at least a subset of the plurality of annotated documents and at least one previously annotated document until a first user feedback score is above a first predetermined threshold indicative of model improvement for at least one of: recall or precision, including: running the first model producing first responses; receiving the first user feedback score indicating correctness of the produced first responses; and checking the first feedback score relative to the first predetermined threshold; and publishing the first model responsive to the user feedback score being above the first predetermined threshold; receiving a selection of a second model from among the plurality of different models; iteratively training the second model in parallel with the first model utilizing at least another subset of the plurality of annotated documents and at least one other previously annotated document until a second user feedback score is above a second predetermined threshold indicative of model improvement for at least one of: recall or precision, including: running the second model producing second responses; receiving the second user feedback score 
indicating correctness of the second produced responses; and checking the second user feedback score relative to the second predetermined threshold; and publishing the second model side by side with the first model responsive to the user feedback score being above the second predetermined threshold. Claim 1. A computer-implemented method comprising: onboarding asset data defining an asset to be listed for trading at an Alternative Trading System (ATS), comprising: accessing a plurality of submitted documents, each document in the plurality of submitted documents containing at least some of the asset data; evaluating that the asset data includes required data for listing the asset for trading, including for each document in the plurality of submitted documents: identifying a document type of the document; matching an appropriate model, from among a plurality of different models, to the document based on the document type; utilizing the matched appropriate model extracting a portion of the asset data from the document; and sending the extracted portion of the asset data to a data correlator; and formulating a checklist reflecting a summary of the asset data collectively including the required data; utilizing a further model annotating the plurality of submitted documents forming a plurality of annotated documents indicating correct and incorrect examples, wherein a subset of the plurality of annotated documents include legal documents; and automatically, as part of a continuous training cycle, and concurrently with onboarding the asset data, training the plurality of different models based on the plurality of annotated documents, wherein the plurality of different models includes at least a regulatory model and an anti-fraud security measure model, the training evolving and improving performance of the plurality of different models, including for each model in the plurality of different models: receiving a selection of the a regulatory model from among the plurality of different models, wherein the regulatory model identifies modifications to the formulated checklist and asset data locations within the plurality of submitted documents based on changing laws and regulations associated with the asset; iteratively training the a first regulatory model utilizing at least a subset of the plurality of annotated legal documents and at least one other previously annotated legal document until a first user feedback score is above a first predetermined threshold indicative of model improvement for at least one of: recall or precision, including: running the first regulatory model producing first responses; receiving the first user feedback score indicating correctness of the produced first responses; and checking the first feedback score relative to the first predetermined threshold; and publishing the first regulatory model responsive to the user feedback score being above the first predetermined threshold; iteratively training a second regulatory model in parallel with the first regulatory model utilizing at least a subset of the plurality of annotated legal documents and at least one other previously annotated legal document until a second user feedback score is above a second predetermined threshold indicative of model improvement for at least one of: recall or precision, including: running the second regulatory model producing second responses; receiving the second user feedback score indicating correctness of the second produced responses; and checking the second user feedback score relative to 
the second predetermined threshold; and publishing the second regulatory model side-by-side with the first regulatory model responsive to the second user feedback score being above the second predetermined threshold; receiving a selection of an anti-fraud security measure model from among the plurality of different models, wherein the anti-fraud security measure model identifies manipulative actions or irregularities within the plurality of submitted documents; iteratively training a first anti-fraud security measure model in parallel with the first and second regulatory model utilizing at least a subset of the plurality of annotated documents and at least one other previously annotated document until a third user feedback score is above a third predetermined threshold indicative of model improvement for at least one of: recall or precision, including: running the first anti-fraud security measure model producing third responses; receiving the third user feedback score indicating correctness of the third produced responses; and checking the third user feedback score relative to the third predetermined threshold; and publishing the first anti-fraud security measure model responsive to the user feedback score being above the predetermined threshold; iteratively training a second anti-fraud security measure model in parallel with the first and second regulatory model, and first anti-fraud security measure model utilizing at least a subset of the plurality of annotated documents and at least one other previously annotated document until a fourth user feedback score is above a fourth predetermined threshold indicative of model improvement for at least one of: recall or precision, including: running the second anti-fraud security measure model producing fourth responses; receiving the fourth user feedback score indicating correctness of the fourth produced responses; and checking the fourth user feedback score relative to the fourth predetermined threshold; and publishing the second anti-fraud security measure model side-by-side with the first anti-fraud security measure model responsive to the user feedback score being above the fourth predetermined threshold. In the above, limitations in bold are differences and limitations underlined are similarities. The underlined similar limitations are relocated within the claim, and the bold difference limitation is an obvious substitution of asset type as noted in header. Claim 2. The method of claim 1, wherein iteratively training the first model comprises iteratively training the anti-fraud security measure model to learn true positives, false positives, true negatives, and false negatives. Claim 2. The method of claim 1, wherein iteratively training the model comprises iteratively training the model to learn true positives, false positives, true negatives, and false negatives. Claim 4. The method of claim 1, wherein publishing the first model comprises replacing an incumbent anti-fraud security measure model with another anti-fraud security measure model. Claim 3. The method of claim 1, wherein publishing the model comprises replacing an incumbent model with the model. Claim 5. The method of claim 1, wherein publishing the first model comprises running a first anti-fraud security measure model in parallel with an incumbent anti-fraud security measure model. Claim 4. The method of claim 1, wherein publishing the model comprises running the model in parallel with an incumbent model. Claim 6. 
The method of claim 1, wherein the plurality of different models includes at least one of: a text model, an image model, and a language model. Claim 5. The method of claim 1, wherein the plurality of different models includes at least one of: a text model, an image model, and a language model. Claim 8. A system comprising: a processor; system memory coupled to the processor and storing instructions, which, when executed, cause the processor to: onboard asset data defining an asset selected from among: a private credit asset or a private debt asset, comprising: utilize appropriate models, from among a plurality of different models, based on document type to extract a portion of the asset data from each document in a plurality of documents; and formulate a checklist reflecting a summary of the asset data collectively including data for listing the asset for trading; utilize a further model annotating the plurality of documents forming a plurality of annotated documents indicating correct examples and incorrect examples; and automatically, as part of a continuous training cycle, and concurrently with onboarding the asset data, train the plurality of different models based on the plurality of annotated documents, wherein the plurality of different models includes at least (1) a regulatory model that identifies modifications to the formulated checklist and asset data locations within the plurality of submitted documents based on changing laws and regulations associated with the asset and (2) an anti-fraud security measure model that identifies manipulative actions or irregularities within the plurality of submitted documents, the training evolving and improving performance of the plurality of different models, including: receive a selection of a first model from among the plurality of different models; iteratively train the first model utilizing at least a subset of the plurality of annotated documents and at least one previously annotated document until a first user feedback score is above a first predetermined threshold indicative of model improvement for at least one of: recall or precision, including: run the first model producing first responses; receive the first user feedback score indicating correctness of the produced first responses; and check the first feedback score relative to the first predetermined threshold; and publish the first model responsive to the user feedback score being above the first predetermined threshold; receive a selection of a second model from among the plurality of different models; iteratively train the second model in parallel with the first model utilizing at least another subset of the plurality of annotated documents and at least one other previously annotated document until a second user feedback score is above a second predetermined threshold indicative of model improvement for at least one of: recall or precision, including: running the second model producing second responses; receive the second user feedback score indicating correctness of the second produced responses; and check the second user feedback score relative to the second predetermined threshold; and publish the second model side by side with the first model responsive to the user feedback score being above the second predetermined threshold. Claim 6. 
A system comprising: a processor; system memory coupled to the processor and storing instructions, which, when executed, cause the processor to: onboard asset data defining an asset to be listed for trading at an Alternative Trading System (ATS), comprising: access a plurality of submitted documents, each document in the plurality of submitted documents containing at least some of the asset data; evaluate that the asset data includes required data for listing the asset for trading, including for each document in the plurality of submitted documents: identify a document type of the document; match an appropriate model, from among a plurality of different models, to the document based on the document type; utilize the matched appropriate model extracting a portion of the asset data from the document; and send the extracted portion of the asset data to a data correlator; and formulate a checklist reflecting a summary of asset data collectively including the required data; utilize a further model annotating the plurality of submitted documents forming a plurality of annotated documents indicating correct and incorrect examples, wherein a subset of the plurality of annotated documents include legal documents; and automatically, as part of a continuous training cycle, and concurrently with onboarding the asset data, train the plurality of different models based on the plurality of annotated documents, wherein the plurality of different models includes at least a regulatory model and an anti-fraud security measure model, the training evolving and improving performance of the plurality of different models, including for each model in the plurality of different models: receive a selection of the a regulatory model from among the plurality of different models, wherein the regulatory model identifies modifications to the formulated checklist and asset data locations within the plurality of submitted documents based on changing laws and regulations associated with the asset; iteratively train the a first regulatory model utilizing at least a subset of the plurality of annotated legal documents until a first user feedback score is above a first predetermined threshold indicative of model improvement for at least one of: recall or precision, including: run the first regulatory model producing first responses; receive the first user feedback score indicating correctness of the produced first responses; and check the first feedback score relative to the first predetermined threshold; and publish the first regulatory model responsive to the first user feedback score being above the first predetermined threshold; iteratively train a second regulatory model in parallel with the first regulatory model utilizing at least a subset of the plurality of annotated legal documents and at least one other previously annotated legal document until a second user feedback score is above a second predetermined threshold indicative of model improvement for at least one of: recall or precision, including: run the second regulatory model producing second responses; receive the second user feedback score indicating correctness of the second produced responses; and check the second user feedback score relative to the second predetermined threshold; and publish the second regulatory model side-by-side with the first regulatory model responsive to the second user feedback score being above the second predetermined threshold; receive a selection of an anti-fraud security measure model from among the plurality of different models, 
wherein the anti-fraud security measure model identifies manipulative actions or irregularities within the plurality of submitted documents; iteratively train a first anti-fraud security measure model in parallel with the first and second regulatory model utilizing at least a subset of the plurality of annotated documents and at least one other previously annotated document until a third user feedback score is above a third predetermined threshold indicative of model improvement for at least one of: recall or precision, including: run the first anti-fraud security measure model producing third responses; receive the third user feedback score indicating correctness of the third produced responses; and check the third user feedback score relative to the third predetermined threshold; and publish the first anti-fraud security measure model responsive to the user feedback score being above the predetermined threshold; iteratively train a second anti-fraud security measure model in parallel with the first and second regulatory model, and first anti-fraud security measure model utilizing at least a subset of the plurality of annotated documents and at least one other previously annotated document until a fourth user feedback score is above a fourth predetermined threshold indicative of model improvement for at least one of: recall or precision, including: run the second anti-fraud security measure model producing fourth responses; receive the fourth user feedback score indicating correctness of the fourth produced responses; and check the fourth user feedback score relative to the fourth predetermined threshold; and publish the second anti-fraud security measure model side-by-side with the first anti-fraud security measure model responsive to the user feedback score being above the fourth predetermined threshold. In the above, limitations in bold are differences and limitations underlined are similarities. The underlined similar limitations are relocated within the claim, and the bold difference limitation is an obvious substitution of asset types as noted in header. Claim 9. The system of claim 8, wherein instructions, which, when executed, cause the processor to iteratively train the first model comprise instructions, which, when executed, cause the processor to iteratively train the anti-fraud security measure model to learn true positives, false positives, true negatives, and false negatives. Claim 7. The system of claim 6, wherein instructions, which, when executed, cause the processor to iteratively train the model comprise instructions, which, when executed, cause the processor to iteratively train the model to learn true positives, false positives, true negatives, and false negatives. Claim 11. The system of claim 8, wherein instructions, which, when executed, cause the processor to publish the first anti-fraud security measure model comprise instructions, which, when executed, cause the processor to replace an incumbent anti-fraud security measure model with the first anti-fraud security measure model. Claim 8. The system of claim 6, wherein instructions, which, when executed, cause the processor to publish the model comprise instructions, which, when executed, cause the processor to replace an incumbent model with the model. Claim 12. 
The system of claim 8, wherein instructions, which, when executed, cause the processor to publish the first anti-fraud security measure model comprise instructions, which, when executed, cause the processor to run the first anti-fraud security measure model in parallel with an incumbent model. Claim 9. The system of claim 6, wherein instructions, which, when executed, cause the processor to publish the model comprise instructions, which, when executed, cause the processor to run the model in parallel with an incumbent model. Claim 13. The system of claim 8, wherein the plurality of models includes at least one of: a text model, an image model, and a language model. Claim 10. The system of claim 6, wherein the plurality of models includes at least one of: a text model, an image model, and a language model.

Claims 1-2, 4-6, 8-9 and 11-13 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-10 of co-pending Application 19/286,270 in view of Karlsson et al., PCT WO2023/183494A1. This is an obviousness-type, provisional nonstatutory double patenting rejection. Although the claims at issue are not identical, they are not patentably distinct from each other because they recite substantively similar limitations with an obvious substitution of asset type. In particular, the instant application is of slightly broader scope and specifies assets as private credit or private debt. Karlsson discloses, at [0109] and Fig. 4, "onboarding loan asset to a blockchain" encrypted with a private key. A person having ordinary skill in the art would have considered it obvious, prior to the effective filing date, to onboard debt/loan assets per Karlsson in combination, as a simple substitution of one known asset type for another to obtain predictable results. Financial assets in a capitalist market are customarily private. See the following comparison table:

Instant Application: 19/366,395 – Co-pending Application: 19/286,270

Claim 1.
A computer-implemented method comprising: onboarding asset data defining an asset selected from among: a private credit asset or a private debt asset, comprising: utilizing appropriate models, from among a plurality of different models, based on document type to extract a portion of the asset data from each document in a plurality of documents; and formulating a checklist reflecting a summary of the asset data collectively including data for listing the asset for trading; utilizing a further model annotating the plurality of documents forming a plurality of annotated documents indicating correct examples and incorrect examples; and automatically, as part of a continuous training cycle, and concurrently with onboarding the asset data, training the plurality of different models based on the plurality of annotated documents, wherein the plurality of different models includes at least (1) a regulatory model that identifies modifications to the formulated checklist and asset data locations within the plurality of submitted documents based on changing laws and regulations associated with the asset and (2) an anti-fraud security measure model that identifies manipulative actions or irregularities within the plurality of submitted documents, the training evolving and improving performance of the plurality of different models, including: receiving a selection of a first model from among the plurality of different models; iteratively training the first model utilizing at least a subset of the plurality of annotated documents and at least one previously annotated document until a first user feedback score is above a first predetermined threshold indicative of model improvement for at least one of: recall or precision, including: running the first model producing first responses; receiving the first user feedback score indicating correctness of the produced first responses; and checking the first feedback score relative to the first predetermined threshold; and publishing the first model responsive to the user feedback score being above the first predetermined threshold; receiving a selection of a second model from among the plurality of different models; iteratively training the second model in parallel with the first model utilizing at least another subset of the plurality of annotated documents and at least one other previously annotated document until a second user feedback score is above a second predetermined threshold indicative of model improvement for at least one of: recall or precision, including: running the second model producing second responses; receiving the second user feedback score indicating correctness of the second produced responses; and checking the second user feedback score relative to the second predetermined threshold; and publishing the second model side by side with the first model responsive to the user feedback score being above the second predetermined threshold. Claim 1. 
A computer-implemented method comprising: onboarding asset data defining an asset to be listed for trading at an Alternative Trading System (ATS), comprising: utilizing appropriate models, from among a plurality of different models, based on document type to extract a portion of the asset data from each document in a plurality of documents; and formulating a checklist reflecting a summary of the asset data collectively including data for listing the asset for trading; utilizing a further model annotating the plurality of documents forming a plurality of annotated documents indicating correct examples and incorrect examples; and automatically, as part of a continuous training cycle, and concurrently with onboarding the asset data, training the plurality of different models based on the plurality of annotated documents, wherein the plurality of different models includes at least (1) a regulatory model that identifies modifications to the formulated checklist and asset data locations within the plurality of submitted documents based on changing laws and regulations associated with the asset and (2) an anti-fraud security measure model that identifies manipulative actions or irregularities within the plurality of submitted documents, the training evolving and improving performance of the plurality of different models, including: receiving a selection of a first model from among the plurality of different models; iteratively training the first model utilizing at least a subset of the plurality of annotated documents and at least one previously annotated document until a first user feedback score is above a first predetermined threshold indicative of model improvement for at least one of: recall or precision, including: running the first model producing first responses; receiving the first user feedback score indicating correctness of the produced first responses; and checking the first feedback score relative to the first predetermined threshold; and publishing the first model responsive to the user feedback score being above the first predetermined threshold; receiving a selection of a second model from among the plurality of different models; iteratively training the second model in parallel with the first model utilizing at least another subset of the plurality of annotated documents and at least one other previously annotated document until a second user feedback score is above a second predetermined threshold indicative of model improvement for at least one of: recall or precision, including: running the second model producing second responses; receiving the second user feedback score indicating correctness of the second produced responses; and checking the second user feedback score relative to the second predetermined threshold; and publishing the second model side by side with the first model responsive to the user feedback score being above the second predetermined threshold. In the above, limitations in bold are differences which is an obvious substitution of asset type as noted in header. Claim 2. The method of claim 1, wherein iteratively training the first model comprises iteratively training the anti-fraud security measure model to learn true positives, false positives, true negatives, and false negatives. Claim 2. The method of claim 1, wherein iteratively training the first anti-fraud security measure model comprises iteratively training the anti-fraud security measure model to learn true positives, false positives, true negatives, and false negatives. Claim 4. 
The method of claim 1, wherein publishing the first model comprises replacing an incumbent anti-fraud security measure model with another anti-fraud security measure model. Claim 3. The method of claim 1, wherein publishing the first anti-fraud security measure model comprises replacing an incumbent anti-fraud security measure model with the first anti- fraud security measure model. Claim 5. The method of claim 1, wherein publishing the first model comprises running a first anti-fraud security measure model in parallel with an incumbent anti-fraud security measure model. Claim 4. The method of claim 1, wherein publishing the first anti-fraud security measure model comprises running the first anti-fraud security measure model in parallel with an incumbent anti-fraud security measure model. Claim 6. The method of claim 1, wherein the plurality of different models includes at least one of: a text model, an image model, and a language model. Claim 5. The method of claim 1, wherein the plurality of different models includes at least one of: a text model, an image model, and a language model. Claim 8. A system comprising: a processor; system memory coupled to the processor and storing instructions, which, when executed, cause the processor to: onboard asset data defining an asset selected from among: a private credit asset or a private debt asset, comprising: utilize appropriate models, from among a plurality of different models, based on document type to extract a portion of the asset data from each document in a plurality of documents; and formulate a checklist reflecting a summary of the asset data collectively including data for listing the asset for trading; utilize a further model annotating the plurality of documents forming a plurality of annotated documents indicating correct examples and incorrect examples; and automatically, as part of a continuous training cycle, and concurrently with onboarding the asset data, train the plurality of different models based on the plurality of annotated documents, wherein the plurality of different models includes at least (1) a regulatory model that identifies modifications to the formulated checklist and asset data locations within the plurality of submitted documents based on changing laws and regulations associated with the asset and (2) an anti-fraud security measure model that identifies manipulative actions or irregularities within the plurality of submitted documents, the training evolving and improving performance of the plurality of different models, including: receive a selection of a first model from among the plurality of different models; iteratively train the first model utilizing at least a subset of the plurality of annotated documents and at least one previously annotated document until a first user feedback score is above a first predetermined threshold indicative of model improvement for at least one of: recall or precision, including: run the first model producing first responses; receive the first user feedback score indicating correctness of the produced first responses; and check the first feedback score relative to the first predetermined threshold; and publish the first model responsive to the user feedback score being above the first predetermined threshold; receive a selection of a second model from among the plurality of different models; iteratively train the second model in parallel with the first model utilizing at least another subset of the plurality of annotated documents and at least one other previously annotated 
document until a second user feedback score is above a second predetermined threshold indicative of model improvement for at least one of: recall or precision, including: running the second model producing second responses; receive the second user feedback score indicating correctness of the second produced responses; and check the second user feedback score relative to the second predetermined threshold; and publish the second model side by side with the first model responsive to the user feedback score being above the second predetermined threshold. Claim 6. A system comprising: a processor; system memory coupled to the processor and storing instructions, which, when executed, cause the processor to: onboard asset data defining an asset to be listed for trading at an Alternative Trading System (ATS), comprising: utilize appropriate models, from among a plurality of different models, based on document type to extract a portion of the asset data from each document in a plurality of documents; and formulate a checklist reflecting a summary of the asset data collectively including data for listing the asset for trading; utilize a further model annotating the plurality of documents forming a plurality of annotated documents indicating correct examples and incorrect examples; and automatically, as part of a continuous training cycle, and concurrently with onboarding the asset data, train the plurality of different models based on the plurality of annotated documents, wherein the plurality of different models includes at least (1) a regulatory model that identifies modifications to the formulated checklist and asset data locations within the plurality of submitted documents based on changing laws and regulations associated with the asset and (2) an anti-fraud security measure model that identifies manipulative actions or irregularities within the plurality of submitted documents, the training evolving and improving performance of the plurality of different models, including: receive a selection of a first model from among the plurality of different models; iteratively train the first model utilizing at least a subset of the plurality of annotated documents and at least one previously annotated document until a first user feedback score is above a first predetermined threshold indicative of model improvement for at least one of: recall or precision, including: run the first model producing first responses; receive the first user feedback score indicating correctness of the produced first responses; and check the first feedback score relative to the first predetermined threshold; and publish the first model responsive to the user feedback score being above the first predetermined threshold; receive a selection of a second model from among the plurality of different models; iteratively train the second model in parallel with the first model utilizing at least another subset of the plurality of annotated documents and at least one other previously annotated document until a second user feedback score is above a second predetermined threshold indicative of model improvement for at least one of: recall or precision, including: running the second model producing second responses; receive the second user feedback score indicating correctness of the second produced responses; and check the second user feedback score relative to the second predetermined threshold; and publish the second model side by side with the first model responsive to the user feedback score being above the second predetermined 
threshold. In the above, the limitations in bold are the differences, which amount to an obvious substitution of asset type as noted in the header. Claim 9. The system of claim 8, wherein instructions, which, when executed, cause the processor to iteratively train the first model comprise instructions, which, when executed, cause the processor to iteratively train the anti-fraud security measure model to learn true positives, false positives, true negatives, and false negatives. Claim 7. The system of claim 6, wherein instructions, which, when executed, cause the processor to iteratively train the first anti-fraud security measure model comprise instructions, which, when executed, cause the processor to iteratively train the first anti-fraud security measure model to learn true positives, false positives, true negatives, and false negatives. Claim 11. The system of claim 8, wherein instructions, which, when executed, cause the processor to publish the first anti-fraud security measure model comprise instructions, which, when executed, cause the processor to replace an incumbent anti-fraud security measure model with the first anti-fraud security measure model. Claim 8. The system of claim 6, wherein instructions, which, when executed, cause the processor to publish the first anti-fraud security measure model comprise instructions, which, when executed, cause the processor to replace an incumbent anti-fraud security measure model with the first anti-fraud security measure model. Claim 12. The system of claim 8, wherein instructions, which, when executed, cause the processor to publish the first anti-fraud security measure model comprise instructions, which, when executed, cause the processor to run the first anti-fraud security measure model in parallel with an incumbent model. Claim 9. The system of claim 6, wherein instructions, which, when executed, cause the processor to publish the first anti-fraud security measure model comprise instructions, which, when executed, cause the processor to run the first anti-fraud security measure model in parallel with an incumbent model. Claim 13. The system of claim 8, wherein the plurality of models includes at least one of: a text model, an image model, and a language model. Claim 10. The system of claim 6, wherein the plurality of models includes at least one of: a text model, an image model, and a language model. Claim Rejections - 35 USC § 101 35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title. Claims 1-14 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. In determining whether the claims are subject matter eligible, the examiner applies guidance set forth under MPEP 2106. Step 1: Is the claim to a process, machine, manufacture, or composition of matter? Yes—all claims fall within one of the four statutory categories: claims 1-7 are a method/process, and claims 8-14 are a system/machine. Thus, the analysis should proceed per MPEP 2106.03. Step 2A, prong one: Does the claim recite an abstract idea, law of nature or natural phenomenon? Yes—the claims, under the broadest reasonable interpretation, recite an abstract idea.
In this case, claims fall within the enumerated grouping of abstract idea being “Mental Processes” and/or “Certain Methods of Organizing Human activity”, but for the recitation of generic computer components. In particular, claims recite: “utilizing appropriate models, from among a plurality of different models, based on document type to extract a portion of the asset data from each document in a plurality of documents” (mental evaluation, e.g. [0093] “A human, (typically an Analyst working with the technology that uses the Models” [0103] “not limited to such models”) “formulating a checklist reflecting a summary of the asset data collectively including data for listing the asset for trading” (mental opinion) “utilizing a further model annotating the plurality of documents forming a plurality of annotated documents indicating correct examples and incorrect examples” (mental judgment, e.g. [0116] “Humans label/classify data points that are then used” and/or [0099] “manual review”) automatically… “wherein the plurality of different models includes at least (1) a regulatory model that identifies modifications to the formulated checklist and asset data locations within the plurality of submitted documents based on changing laws and regulations associated with the asset and (2) an anti-fraud security measure model that identifies manipulative actions or irregularities with the plurality of submitted documents” (mental observations/identifies modification/irregularities e.g. detect anomalies in finance, and/or legal interactions including agreements in the form of legal obligations) “receiving a selection of a first model from among the plurality of different models” (mental judgment, indicate choice of appropriate model) “receiving the first user feedback score indicating correctness of the produced first responses” (mental evaluation, e.g. grading rubric pass/fail) “checking the first feedback score relative to the first predetermined threshold” (mental evaluation or judgment, e.g. comparison to min/max) “receiving a selection of a second model” from among the plurality of different models” (mental judgment, indicate choice of appropriate model) “receiving the second user feedback score indicating correctness of the second produced responses” (mental evaluation, e.g. grading rubric pass/fail) “checking the second user feedback score relative to the second predetermined threshold” (mental evaluation or judgment, e.g. comparison to min/max) Focus of the claims concern models of regulatory model and anti-fraud security measure model utilized for asset data subject to checklist for trading. The above identified functions and models do not preclude mental performance and may provide a framework of legal/regulatory checks for fraud detection. A human may detect fraud against a set of rules for example. As such, the claims comprise at least mental processes and/or certain methods of organizing human activity as the abstract idea enumerated under MPEP 2106.04(a)(2)(II)-(III). Step 2A, prong two: Does the claim recite additional elements that integrate the judicial exception into a practical application? No—a practical application is not integrated by the judicial exception because the additional elements are as follows: “computer-implemented” MPEP 2106.05(f) mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea, e.g. 
[0078] “general purpose microprocessors” “onboarding asset data defining an asset selected from among: a private credit asset or a private debt asset” MPEP 2106.05(g) adding insignificant extra-solution activity to the judicial exception, e.g. mere data gathering or selecting the type of data to be manipulated “automatically, as part of a continuous training cycle, and concurrently with onboarding the asset data, training the plurality of different models […], the training evolving and improving performance of the plurality of different models” MPEP 2106.05(g) adding insignificant extra-solution activity to the judicial exception, no meaningful limitation on training that is merely repetitive with routine optimization “iteratively training the first model utilizing at least a subset of the plurality of annotated documents and at least one previously annotated document until a first user feedback score is above a first predetermined threshold indicative of model improvement for at least one of: recall or precision” MPEP 2106.05(g) adding insignificant extra-solution activity to the judicial exception, no meaningful limitation on the training that is merely repetitive and scored against common performance measures “running the first model producing the results” MPEP 2106.05(f) adding the words ‘apply-it’ (or an equivalent) with the judicial exception “publishing the first model responsive to the user feedback score being above the first predetermined threshold” MPEP 2106.05(g) adding insignificant extra-solution activity to the judicial exception, e.g. necessary data output “iteratively training the second model in parallel with the first model utilizing at least another subset of the plurality of annotated documents and at least one other previously annotated document until a second user feedback score is above a second predetermined threshold indicative of model improvement for at least one of: recall or precision” MPEP 2106.05(g) adding insignificant extra-solution activity to the judicial exception, no meaningful limitation on training that is merely repetitive and scored against common performance measures “running the second model producing second responses” MPEP 2106.05(f) adding the words ‘apply-it’ (or an equivalent) with the judicial exception “publishing the second model side by side with the first model responsive to the user feedback score being above the second predetermined threshold” MPEP 2106.05(g) adding insignificant extra-solution activity to the judicial exception, e.g. necessary data output or drafting columns of table-data by hand Balance of the claim concerns training iteratively, running the models and publishing of results as well as data onboarding and computer implementation. These additional elements do not provide technical details sufficient to carry out a technical solution in a non-conventional way. The training broadly applies established techniques so that the models may be run or tested. This can be performed as off-the-shelf model optimization performed by established functions. Further, onboarding and publishing amounts to little more than input and output, respectively. Finally, computer implementation is recited at a high level of generality. As is set forth under MPEP 2106.04(a)(2)(III) “A claim that requires a computer may still recite a mental process.” Accordingly, the claim remains directed to the abstract idea and the additional elements do not integrate the abstract idea into a practical application. 
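
To make concrete the pattern the examiner characterizes above as routine, off-the-shelf optimization, the following is a minimal sketch of an iterate-until-threshold training loop for two models trained alongside one another. All names, scores, document lists, and thresholds are hypothetical illustrations; nothing in the sketch is taken from the application, the specification paragraphs cited above, or any reference of record.

# Minimal sketch (hypothetical names only): iteratively retrain a model until a
# user feedback score exceeds a predetermined threshold, then "publish" it, and
# do the same for a second model trained alongside the first.

from dataclasses import dataclass, field
from typing import List


@dataclass
class ModelStub:
    name: str
    score: float = 0.0                      # stands in for a user feedback score
    published: bool = False
    history: List[float] = field(default_factory=list)

    def retrain(self, annotated_docs: List[str]) -> None:
        # Placeholder for one training pass over annotated documents.
        self.score += 0.07 * len(annotated_docs) / 10
        self.history.append(self.score)


def train_until_threshold(model: ModelStub, annotated_docs: List[str],
                          threshold: float, max_rounds: int = 50) -> ModelStub:
    """Repeat training rounds until the feedback score clears the threshold."""
    for _ in range(max_rounds):
        if model.score > threshold:
            break
        model.retrain(annotated_docs)
    model.published = model.score > threshold
    return model


if __name__ == "__main__":
    docs = [f"annotated_doc_{i}" for i in range(10)]
    regulatory = train_until_threshold(ModelStub("regulatory"), docs, threshold=0.8)
    anti_fraud = train_until_threshold(ModelStub("anti_fraud"), docs[:5], threshold=0.9)
    for m in (regulatory, anti_fraud):
        print(m.name, round(m.score, 2), "published" if m.published else "not published")

The sketch is offered only to show how little structure the "retrain, score, publish on a threshold" language requires on its face, which is the point the rejection presses when it labels these steps extra-solution activity.
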
Step 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception? No—the claims do not include additional elements that amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea in to a practical application, the additional elements are identified with respect to MPEP 2106.05 and are as follows: “computer-implemented” MPEP 2106.05(f) mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea, e.g. [0078] “general purpose microprocessors” More particularly, the computer does not qualify as a particular machine under MPEP 2106.05(b). “onboarding asset data defining an asset selected from among: a private credit asset or a private debt asset” MPEP 2106.05(g) adding insignificant extra-solution activity to the judicial exception, e.g. mere data gathering or selecting the type of data to be manipulated. Particularly, said extra-solution activity is a well-understood, routine and conventional (WURC) activity under MPEP 2106.05(d)(II)(i) receiving or gathering data. “automatically, as part of a continuous training cycle, and concurrently with onboarding the asset data, training the plurality of different models […], the training evolving and improving performance of the plurality of different models” MPEP 2106.05(g) adding insignificant extra-solution activity to the judicial exception, no meaningful limitation on training that is merely repetitive with routine optimization. More particularly, said extra-solution activity is a well-understood, routine and conventional (WURC) activity as evidenced by Gijsbers et al., “AMLB: an AutoML Benchmark” arXiv: 2207.12560v1 at [Abst; P.3 ¶2] “well-known AutoML” which [P.2 ¶1] “allow novice users to train well-performing models” automatically (Auto)ML [P.6 ¶3] “AutoML frameworks have been developed, either by iteratively improving on old designs” including e.g. “ensemble models” [P.8 ¶4]. “iteratively training the first model utilizing at least a subset of the plurality of annotated documents and at least one previously annotated document until a first user feedback score is above a first predetermined threshold indicative of model improvement for at least one of: recall or precision” MPEP 2106.05(g) adding insignificant extra-solution activity to the judicial exception, no meaningful limitation on the training that is merely repetitive and scored against common performance measures. More particularly, said extra-solution activity is a well-understood, routine and conventional (WURC) activity as evidenced by Baker et al., US PG Pub No 2022/0335296A1 at [0041] “well-known training process comprises an iterative process… well-known optimization technique” e.g. ensembles [0011]; and Hall et al., US PG Pub No 2023/0148321A1 at [0107] “it is common to train models so that the threshold is set” describes threshold of confusion matrix in terms of TP/TN/FP/FN regard precision and recall defined at [0101,111]. “running the first model producing the results” MPEP 2106.05(f) adding the words ‘apply-it’ (or an equivalent) with the judicial exception. Particularly, the limitation does not satisfy the test of particular transformation or impart meaningful limitation under MPEP 2106.05(c)(e). 
“publishing the first model responsive to the user feedback score being above the first predetermined threshold” MPEP 2106.05(g) adding insignificant extra-solution activity to the judicial exception, e.g. necessary data output. Particularly, said extra-solution activity is a well-understood, routine and conventional (WURC) activity under MPEP 2106.05(d)(II)(iii) electronic recordkeeping. “iteratively training the second model in parallel with the first model utilizing at least another subset of the plurality of annotated documents and at least one other previously annotated document until a second user feedback score is above a second predetermined threshold indicative of model improvement for at least one of: recall or precision” MPEP 2106.05(g) adding insignificant extra-solution activity to the judicial exception, no meaningful limitation on training that is merely repetitive and scored against common performance measures. Particularly, said extra-solution activity is a well-understood, routine and conventional (WURC) activity evidenced by Baker et al., US PG Pub No 2022/0335296A1 at [0041] “well-known training process comprises an iterative process… well-known optimization technique” e.g. training ensembles [0011]; and Tomasic et al., US PG Pub No 2023/0131393A1 at [0186] “Common metrics applied in accuracy measurement include: Precision=TP/(TP+FP) and Recall=TP/(TP+FN)… iteratively re-trains the machine learning model until the occurrence of a stopping condition, such as the accuracy measurement” “running the second model producing second responses” MPEP 2106.05(f) adding the words ‘apply-it’ (or an equivalent) with the judicial exception. More particularly, the limitation does not satisfy the test of particular transformation or impart meaningful limitation under MPEP 2106.05(c)(e). “publishing the second model side by side with the first model responsive to the user feedback score being above the second predetermined threshold” MPEP 2106.05(g) adding insignificant extra-solution activity to the judicial exception, e.g. necessary data output or drafting columns of table-data by hand. More particularly, said extra-solution activity is a well-understood, routine and conventional (WURC) activity under MPEP 2106.05(d)(II)(iii) electronic recordkeeping. Significantly more is not supported by the balance of the claim for the reasons indicated above. Particularly, the training lacks technical granularity beyond that which was already established prior to the effective filing date (circa 2023). Evidence shows that the functionality does not impart an inventive concept. If the claim language provides only a result-oriented solution, with insufficient detail for how a computer accomplishes it, then the claims do not contain an inventive concept. Taken alone, the additional elements do not amount to significantly more than the above-identified judicial exception (the abstract idea). Looking at the limitations as an ordered combination and as a whole, the contribution to the art is deficient and the drafting fails to distill what is considered new. The collective functions merely provide conventional computer implementation. For at least the foregoing reasons, the claims are not patent eligible. This rejection applies equally to independent claims 1 and 8 as well as to dependent claims 2-7 and 9-14. Dependent claims when analyzed as a whole are held to be patent ineligible under 35 U.S.C.
101 because the additional recited limitations fail to establish that the claims are not directed to an abstract idea, or that they include additional elements which integrate the judicial exception into a practical application or amount to significantly more. Independent claim 8 further embellishes the computer implementation to include processor coupled to memory storing executable instructions for performing the limitations of method claim 1. The processor and memory are additional elements understood for computer implementation and which fall under MPEP2106.05(f). More particularly, said additional elements do not qualify as a particular machine under MPEP 2106.05(b). Therefore, the additional elements fail to integrate the judicial exception into a practical application or amount to significantly more. Dependent claims 2-3 and 9-10 disclose wherein training the models includes learning true positives, false positives, true negatives, and false negatives. The limitation is considered additional elements which amount to adding insignificant extra-solution activity under MPEP 2106.05(g). Particularly, said extra-solution activity is a well-understood, routine and conventional activity as evidenced by Tomasic at [0186] “Common metrics applied in accuracy measurement include: Precision=TP/(TP+FP) and Recall=TP/(TP+FN)… iteratively re-trains the machine learning model until the occurrence of a stopping condition, such as the accuracy measurement”. Thus, the additional elements are not sufficient to integrate the judicial exception into a practical application or as amounting to significantly more. Dependent claims 4 and 11 disclose replacing an incumbent anti-fraud model with another. This is considered part of the abstract idea being a mental process of judgment or evaluation to substitute among plurality of models. There are no additional elements. Dependent claims 5 and 12 disclose wherein publishing comprises running models in parallel. The limitation is considered additional element amounting to the words ‘apply-it’ (or an equivalent) with the judicial exception under MPEP 2106.05(f). The run/apply-it models being in parallel does not satisfy the test of particular transformation under MPEP 2106.05(c) or meaningfully limit the claim because parallelization is long established in the field of endeavor e.g. ensembles, meta-learning, multi-threaded distributed learning etc. Simply running models in parallel is not found to be an inventive concept as of the effective filing date. As such, the additional elements do not integrate the judicial exception into a practical application or amount to significantly more. Dependent claims 6 and 13 disclose wherein the model plurality includes at least one among alternatives of text model, image model, and language model. This is considered part of the abstract idea being mental process of evaluation such as estimation by hypothesis based on information type. Multi-modal models for heterogeneous inputs are established. There are no additional elements. Dependent claims 7 and 14 disclose wherein onboarding asset data comprises onboarding a private credit asset. The limitation is treated as additional element which amounts to adding insignificant extra-solution activity to the judicial exception under MPEP 2106.05(g) mere data gathering or selecting the type of data to be manipulated. Particularly, the limitation is a well-understood, routine and conventional activity under MPEP 2106.05(d)(II)(i) receiving or gathering data. 
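
The Step 2B analysis repeatedly points to confusion-matrix metrics as conventional stopping conditions (see the Tomasic quotation above, Precision = TP/(TP+FP) and Recall = TP/(TP+FN)). A short worked example of that arithmetic follows; the counts and the threshold are invented for illustration and do not come from the application or any cited reference.

# Worked example of the confusion-matrix arithmetic cited in the rejection
# (Tomasic [0186]); the counts and the 0.95 threshold are hypothetical.

def precision(tp: int, fp: int) -> float:
    return tp / (tp + fp) if (tp + fp) else 0.0

def recall(tp: int, fn: int) -> float:
    return tp / (tp + fn) if (tp + fn) else 0.0

tp, fp, tn, fn = 90, 5, 880, 25                  # hypothetical evaluation counts
p, r = precision(tp, fp), recall(tp, fn)
print(f"precision={p:.3f} recall={r:.3f}")       # precision=0.947 recall=0.783

threshold = 0.95                                 # a stopping condition of the kind cited
print("stop training" if (p > threshold or r > threshold) else "keep training")
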
Claim Rejections - 35 USC § 103 In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness. Claims 1, 6-8 and 13-14 are rejected under 35 U.S.C. 103 as being unpatentable over: Karlsson et al., PCT WO2023/183494A1 hereinafter Karlsson, in view of Fang et al., US PG Pub No 2022/0067738A1 hereinafter Fang, in view of Saadi, Saad, US PG Pub No 2023/0297831A1 hereinafter Saadi, in view of Sardanopoli et al., US Patent No 12,217,271B1 hereinafter Sardanopoli, as evidenced by Provisional 63/593,163, in view of Hosseinali et al., US PG Pub No 2024/0211966A1 hereinafter Hosseinali, in view of Lin et al., US Patent No 11,314,620B1, hereinafter Lin. With respect to claim 1, Karlsson teaches: A computer-implemented method comprising: {Karlsson [150] “Methods as described herein can be implemented by way of a machine (e.g., computer processor)” similar at [144,146]} onboarding asset data defining an asset selected from among: a private credit asset or a private debt asset, comprising: {Karlsson [33-37] “onboarding digital assets” comprising [46,43] “mortgage asset” or “loan asset” are debt assets; see Fig 1:120, [027-29], [108-09,117] the assets are tokenized and represent documents such as smart contracts} However, Karlsson does not disclose the following limitations which are met by Fang: utilizing appropriate models, from among a plurality of different models, based on document type to extract a portion of the asset data from each document in a plurality of documents; and {Fang [0048] “AutoML may be utilized” where [0080] “During the AutoML Model training… ensemble models” is plurality of different models, Fig 4 shows with models 410 employ feature extraction 408 from input data 404, extracted data shown Figs 1:132, 3:304 receiving blockchain information, documents are “records” [0176] e.g. 
“smart contract” [0091]} formulating a checklist reflecting a summary of the asset data collectively including data for listing the asset for trading; {Fang at [0299] “blockchain is a continuously growing list of records… verifiable” where [0307] “There owner of a transaction can examine each previous transaction to verify the chain of ownership” verifying is checking i.e. check-list, further [0108] “a list of any suspicious transactions provided as a summary” and may use blacklist Fig 1:112} iteratively training the second model in parallel with the first model utilizing at least another subset of the plurality of annotated documents and at least one other previously annotated document until a second user feedback score is above a second predetermined threshold indicative of model improvement for at least one of: recall or precision, including: {Fang Fig 4:418 “AutoML Parallel Training” emphasis parallel [0075-85], iterative is per [0181] “repeatedly resampling training data” for “ensemble by training”. The documents are “records” [0176] e.g. smart contract [0091], blockchain ledger data is extracted Figs 1:132, 3:304; and annotation includes labeled data Fig 4:404. Both recall and precision are disclosed [0081] and a threshold thereof is per [0078] “Precision>95%”} utilizing a further model annotating the plurality of documents forming a plurality of annotated documents indicating correct examples and incorrect examples; and {Fang [0170-79] “decision trees that may be utilized” describes models using “labels” corresponding to annotations, “based on truth or falsity” is correct and incorrect, shown per Fig 11 and [0194] “incorrectly labeled”, see also ground truth [0240]} receiving a selection of a first model from among the plurality of different models; {Fang Fig 4:414 “Model Selection” described at length e.g. [0082-85], [0133]} Fang is directed to financial assets and models with machine learning training thus being analogous. A person having ordinary skill in the art would have considered it obvious prior to the effective filing date to utilize AutoML for training select models per Fang in combination for a motivation “auto-machine learning is capable of automatically executing all the manual, tedious modeling tasks of data scientists… with auto-machine learning, you can do any tasks like developing or comparing between models, making predictions from the insights, finding any pattern or solving any business problems within days” [0127] and/or “optimize the creation of model” [0122]. However, Fang in combination does not prima facie disclose the following limitations which are met by Saadi: automatically, as part of a continuous training cycle, and concurrently with onboarding the asset data, training the plurality of different models based on the plurality of annotated documents, […] the training evolving and improving performance of the plurality of different models, including: {Saadi [0058-59] “AutoML… iteratively improves the model by retraining” Fig 4 loop illustrates, the model(s) shown Fig 3B:58a-n described [0050,48] comprise e.g. random forest which is an ensemble, and shows documents Figs 9,12, further disclosing feature data [0073] e.g. 
text or labeled data [0003]} iteratively training the first model utilizing at least a subset of the plurality of annotated documents and at least one previously annotated document until a first user feedback score is above a first predetermined threshold indicative of model improvement for at least one of: recall or precision, including: {Saadi Fig 4 loop with performance threshold 110 [0058] “precision value, a recall value, and a specificity value. If the performance of the improved version of the model is not greater than the predetermined threshold, then the process iteratively returns”} running the first model producing first responses; {Saadi Fig 3A output 62 from trained model 58, see [0053] output and/or [0050-51] execute} receiving the first user feedback score indicating correctness of the produced first responses; and {Saadi Figs 4:106, 3A:64 shows feedback loop, described [0053-56] “user and/or community may provide the correct feedback”} checking the first feedback score relative to the first predetermined threshold; and {Saadi Fig 4 loop with feedback 106 subject to threshold 110, described [0056-58]} publishing the first model responsive to the user feedback score being above the first predetermined threshold {Saadi Fig 4:112 Deploy model after feedback 106 and threshold 110, described e.g. [0045], [0058]}; Saadi is directed to AutoML for training machine learning models thus being analogous. A person having ordinary skill in the art would have considered it obvious prior to the effective filing date to specify AutoML’s iterative training per Saadi to support Fang’s AutoML in combination for a motivation “improves the training of a machine learning system by retraining” [0043] and/or “continual learning” [0065]. However, Saadi in combination does not disclose the following limitation which is met by Sardanopoli: wherein the plurality of different models includes at least (1) a regulatory model that identifies modifications to the formulated checklist and asset data locations within the plurality of submitted documents based on changing laws and regulations associated with the asset and {Sardanopoli Fig 2:212-18 AI model(s) and regulation data, see [Col10 Line67 – Col11 Line4] “AI models can be trained on rules, regulations… trained on prior responses to compliance questionnaires” notably [Col16 Lines63-65] “AI can be trained on the SEC rules, FINRA rules… FINRA regulatory notices” so as to [Col12 Lines1-56] “identify violations trained to the details of the rules and audit…ChatGPT model can be augmented by training on complete/incomplete responses” Figs 4-18 screenshots with columnar table data convey checklist formulated by questionnaire responses and audits, further discloses automatic updates, check boxes [Col17 Lines49-55] as well as automatically identifying location [Col13 Line20-21]. Corresponding provisional support comprises ‘163 Fig 2, [P.11 Lines8-10], [P.19 Lines9-10], [P.12 Line16], [P.13 Lines15-17] and Appendix Figures} receiving a selection of a second model from among the plurality of different models; {Sardanopoli [Col19 L38-55] “model selected based on the associated rules, regulations …select AI models that are tailored” Fig 2:212-216. Corresponding provisional support comprises ‘163 Fig 2, [P.22 Lines25 – P.23 Line1]} Sardanopoli is directed to trained machine learning models for finance thus being analogous. 
A person having ordinary skill in the art would have considered it obvious prior to the effective filing date to use a regulatory model per Sardanopoli in combination for a motivation “enable the AI to perform/analyze the review and determine if there are any issues and/or violations” [Col17 Lines35-38] e.g. “automated review that uses AI to identify violations or gaps based on promulgated standards” [Col19 Lines65-66] and/or address “significant need for a technical solution to compliance and data management that can be configured to integrate artificial intelligence (‘AI’), for example, into guided solutions. Various embodiments leverage artificial intelligence in identify and resolving compliance issues (e.g., with regulatory requirements, client-specified requirements, certification conditions, etc.), or preventing violations of law, rules and regulations” [Col1 Lines51-59]. Corresponding provisional support comprises ‘163 [P.20 Lines2-3], [P.23 Lines13-14], [P.1 Lines17-22]. However, Sardanopoli in combination does not disclose the following limitation which is met by Hosseinali: (2) an anti-fraud security measure model that identifies manipulative actions or irregularities within the plurality of submitted documents, {Hosseinali Fig 2:224 described [0039-42] “fraud MLM” is Machine Learning Model “ensemble of MLM(s)… each of the MLM(s) are pretrained to detect fraudulent merchant activities” detecting is identifying, actions are disclosed and include transactions replete, further includes [0065] “document fraud detection” Fig 5:508 emphasis documents, see also Figs 3A-3C} running the second model producing second responses; {Hosseinali [0035] “models used to make fraud determinations” [0063] “response to a fraud detection” Figs 2-3 arrows indicating output from models} receiving the second user feedback score indicating correctness of the second produced responses; and {Hosseinali Fig 5:502 “receive a feedback” being [0031] “verified as correct and/or incorrect as part of a feedback loop” similar at [0065]} checking the second user feedback score relative to the second predetermined threshold; and {Hosseini Figs 7:712-14, 4:412, compare fraud threshold, e.g. [0047] “compares the combined score to a merchant fraud detection threshold”} Hosseinali is directed to ensemble models trained for fraud detection thus being analogous. A person having ordinary skill in the art would have considered it obvious prior to the effective filing date to use the fraud detection models of Hosseinali in combination for a motivation “achieve improved fraud detection performance by improving the models” [0035,73]. However, Hosseinali in combination does not disclose the following limitation which is met by Lin: publishing the second model side by side with the first model responsive to the user feedback score being above the second predetermined threshold. {Lin [Col4 Lines48-50] “multiple models may be displayed simultaneously (e.g., in a side-by-side comparison)” similar [Col1 Line66 – Col2 Line3] “compare the benchmark model and the model being validated. The model validation platform may then provide the users substantive analysis of a model and its performance through one or more user interface tools such as side-by-side comparisons” e.g. Figs 1-4 with Fig 3-middle “threshold” and Fig 1-bottom Minimum/Maximum in table columns} Lin is directed to trained machine learning models thus being analogous. 
A person having ordinary skill in the art would have considered it obvious prior to the effective filing date to perform side-by-side comparison of models per Lin in combination as applying known techniques to known methods ready for improvement to yield predictable results and/or for a motivation of validating against benchmark models to compare performance that is useful for simulation and version control [Col1 Lines18-21, 66 – Col2 Line5]. Considered as a whole, the combination primarily combines AutoML (Fang & Saadi) with the models of regulatory and fraud detection (Sardanopoli and Hosseinali, respectively). Lesser limitations regard the pre- and post-processing to input asset type (Karlsson) and output side-by-side comparison (Lin). Taken together, the skilled artisan would have a powerful training tool of AutoML for the finance models specified by Sardanopoli and Hosseinali, both of whom employ a plurality of models. Accordingly, it is respectfully submitted that the combined teachings are sufficient to render the invention as claimed obvious under 35 U.S.C. 103. With respect to claim 6, the combination of Karlsson, Fang, Saadi, Sardanopoli, Hosseinali and Lin teaches the method of claim 1, wherein the plurality of different models includes at least one of: a text model, an image model, and a language model. {Sardanopoli discloses “ChatGPT” and “LLM” large language models [Col12 Line51], [Col 11 Lines34-35]. Corresponding provisional support at [P.13 Line16], [P.15 Line28]} With respect to claim 7, the combination of Karlsson, Fang, Saadi, Sardanopoli, Hosseinali and Lin teaches the method of claim 1, wherein onboarding asset data defining an asset selected from among: a private credit asset or a private debt asset comprises onboarding a private credit asset. {Karlsson [4] “assets may be onboarded to the blockchain” which includes [117] “asset-backed securities, cryptocurrency, or tokenized assets” e.g. [122] “coins, cryptocurrency” is a private credit asset} With respect to claim 8, the rejection of claim 1 is incorporated. The difference in scope being a system comprising processor coupled to memory storing instructions to perform limitations of claim 1 method. Karlsson discloses [145] “computer system 901 may include a central processing unit (CPU… memory” describes Fig 9. The remainder of this claim is rejected for the same rationale as claim 1. With respect to claim 13, the combination of Karlsson, Fang, Saadi, Sardanopoli, Hosseinali and Lin teaches the system of claim 8 and further discloses the limitation of claim 6. Therefore, the rejection of claim 6 is applied to claim 13. With respect to claim 14, the combination of Karlsson, Fang, Saadi, Sardanopoli, Hosseinali and Lin teaches the system of claim 8 and further discloses the limitation of claim 7. Therefore, the rejection of claim 7 is applied to claim 14. Claims 2 and 9 are rejected under 35 U.S.C. 103 as being unpatentable over: Karlsson, Fang, Saadi, Sardanopoli, Hosseinali and Lin in view of Laptiev et al., US PG Pub No 2023/0177512A1 hereinafter Laptiev. With respect to claim 2, the combination of Karlsson, Fang, Saadi, Sardanopoli, Hosseinali and Lin teaches the method of claim 1. Laptiev teaches wherein iteratively training the first model comprises Iteratively training the anti-fraud security measure model to learn true positives, false positives, true negatives, and false negatives. 
{Laptiev [0083] “iterative training process to fit a fraud detection machine-learning model” shown Fig 4B and discloses [0103] “true-positive-fraudulent digital claims… false-positive-fraudulent digital claims” and [0107] “true-negative-fraudulent digital claims… false-negative-fraudulent digital claims” describes precision and recall, plots Figs 7-9} Laptiev is directed to training machine learning models for fraud detection thus being analogous. A person having ordinary skill in the art would have considered it obvious prior to the effective filing date to specify true/false positive/negative per Laptiev in combination for a motivation “improvements in true positive and false positive rates for the intelligent fraud detection” [0102-03] and/or “tunes a fraud detection machine-learning model… fits a fraud detection machine-learning model” [0083]. With respect to claim 9, the combination of Karlsson, Fang, Saadi, Sardanopoli, Hosseinali and Lin teaches the system of claim 8 and further combination with Laptiev teaches the limitation of claim 2. Therefore, the rejection of claim 2 is applied to claim 9. Claims 3 and 10 are rejected under 35 U.S.C. 103 as being unpatentable over: Karlsson, Fang, Saadi, Sardanopoli, Hosseinali and Lin in view of Fitzgerald et al., US PG Pub No 2024/0202339A1 hereinafter Fitzgerald. With respect to claim 3, the combination of Karlsson, Fang, Saadi, Sardanopoli, Hosseinali and Lin teaches the method of claim 1. Fitzgerald teaches wherein iteratively training the first model comprises Iteratively training the regulatory model to learn true positives, false positives, true negatives, and false negatives. {Fitzgerald [0047] “train (or retrain) the machine-learning models” similar [0023], Fig 3 shows TP/TN/FN/FP are true positive, true negative, false negative and false positive, respectively [0077], a regulation is a rule to suppress false positives, abbreviated FPS [0045] described throughout} Fitzgerald is directed to training machine learning models thus being analogous. A person having ordinary skill in the art would have considered it obvious prior to the effective filing date to use the teachings of Fitzgerald in combination to arrive at the invention as claimed for a motivation of an “improved false positive” [0140] for example “a relatively low number of false positives output from a machine-learning model may not outweigh the benefit of many true positive… To address this imbalance, some machine-learning models may be retrained so that the machine-learning models no longer identify samples that cause false positive” [0023]. With respect to claim 10, the combination of Karlsson, Fang, Saadi, Sardanopoli, Hosseinali and Lin teaches the system of claim 8 and further combination with Fitzgerald teaches the limitation of claim 3. Therefore, the rejection of claim 3 is applied to claim 10. Claims 4-5 and 11-12 are rejected under 35 U.S.C. 103 as being unpatentable over: Karlsson, Fang, Saadi, Sardanopoli, Hosseinali and Lin in view of Umesh et al., US PG Pub No 2023/0316282A1 hereinafter Umesh. With respect to claim 4, the combination of Karlsson, Fang, Saadi, Sardanopoli, Hosseinali and Lin teaches the method of claim 1. Umesh teaches wherein publishing the first model comprises replacing an incumbent anti-fraud security measure model with another anti-fraud security measure model. {Umesh [0023] “replacing, via the one or computers, the incumbent automated fraud or abuse detection workflow” e.g. 
[0045] “primary… secondary machine learning model” and/or [0089] versioning, models may include “GPT” [0047]. Figs 2:S240 and 4-5 deploying includes replacing [0023]} Umesh is directed to fraud detection with trained machine learning models thus being analogous. A person having ordinary skill in the art would have considered it obvious prior to the effective filing date to replace incumbent fraud detection per Umesh in combination to arrive at the invention as claimed for a motivation of “perpetually evolving and tunable machine learning models” [0036] and/or “deploy the probationary or the tuned incumbent automated-decisioning workflow to production” [0114] the deployment suitable to “distinct types of digital fraud” [0062]. With respect to claim 5, the combination of Karlsson, Fang, Saadi, Sardanopoli, Hosseinali and Lin teaches the method of claim 1. Umesh teaches wherein publishing the first model comprises running a first anti-fraud security measure model in parallel with an incumbent anti-fraud security measure model. {Umesh [0094,101] “parallel” incumbent fraud workflow which comprises models [0044] “ensemble of machine learning models may include hundreds or thousands of machine learning models that work together” to [0089] “simultaneously dispatch or forward the same digital events (or a copy of the same digital events) dispatched to the currently deployed version incumbent automated-decisioning workflow (e.g., in-production work-flow or live version) to the tuned incumbent automated-decisioning workflow such that decisioning outputs produced by the tuned incumbent automated-decisioning workflow can be compared, tracked, and/or measures against the decisioning outputs produced by the live version of the incumbent automated-decision workflow”} Motivation is applied similar to claim 4 and with further benefit of comparing versions. With respect to claim 11, the combination of Karlsson, Fang, Saadi, Sardanopoli, Hosseinali and Lin teaches the system of claim 8 and further combination with Umesh teaches the limitation of claim 4. Therefore, the rejection of claim 4 is applied to claim 11. With respect to claim 12, the combination of Karlsson, Fang, Saadi, Sardanopoli, Hosseinali and Lin teaches the system of claim 8 and further combination with Umesh teaches the limitation of claim 5. Therefore, the rejection of claim 5 is applied to claim 12. Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to Chase P Hinckley whose telephone number is (571)272-7935. The examiner can normally be reached M-F 9:00 - 5:00. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Miranda M. Huang can be reached at 571-270-7092. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. 
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /CHASE P. HINCKLEY/Examiner, Art Unit 2124

Prosecution Timeline

Oct 22, 2025
Application Filed
Feb 18, 2026
Non-Final Rejection — §101, §103, §DP
Apr 15, 2026
Interview Requested

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12585443
COMPILER FOR NEURAL ACCELERATOR
2y 5m to grant Granted Mar 24, 2026
Patent 12585960
DYNAMICALLY TUNING HYPERPARAMETERS DURING ML MODEL TRAINING
2y 5m to grant Granted Mar 24, 2026
Patent 12585989
FEATURE EFFECTIVENESS ASSESSMENT METHOD AND APPARATUS, ELECTRONIC DEVICE, AND STORAGE MEDIUM
2y 5m to grant Granted Mar 24, 2026
Patent 12572899
SYSTEM AND METHODS FOR GENERATING TASKS BASED ON AGENT PROFILES
2y 5m to grant Granted Mar 10, 2026
Patent 12561575
SYSTEM AND METHOD FOR GENERATING TRAINING SETS FOR NEURAL NETWORKS
2y 5m to grant Granted Feb 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
68%
Grant Probability
78%
With Interview (+9.3%)
3y 11m
Median Time to Grant
Low
PTA Risk
Based on 196 resolved cases by this examiner. Grant probability derived from career allow rate.
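
As a rough check on the figures above, the projections appear consistent with simple arithmetic on the examiner's career statistics. The snippet below reproduces the headline numbers under that assumption; it is a guess at the methodology, not the tool's actual formula.

# Assumed (not confirmed) reconstruction of the projection arithmetic from
# the examiner statistics shown on this page.
granted, resolved = 134, 196
allow_rate = granted / resolved                  # ~0.684 -> displayed as 68%
interview_lift = 0.093                           # +9.3 percentage points
with_interview = allow_rate + interview_lift     # ~0.777 -> displayed as 78%
print(f"{allow_rate:.0%} base grant probability, {with_interview:.0%} with interview")
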
