Prosecution Insights
Last updated: April 19, 2026
Application No. 18/733,830

PLATFORM FOR INTEGRATION OF MACHINE LEARNING MODELS UTILIZING MARKETPLACES AND CROWD AND EXPERT JUDGMENT AND KNOWLEDGE CORPORA

Non-Final Office Action: §101, §103, §112
Filed: Jun 04, 2024
Examiner: EL-CHANTI, KARMA AHMAD
Art Unit: 3629
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Qomplx LLC
OA Round: 3 (Non-Final)

Grant Probability: 37% (At Risk)
OA Rounds: 3-4
To Grant: 2y 7m
With Interview: 72%

Examiner Intelligence

Career Allow Rate: 37% (31 granted / 83 resolved; -14.7% vs TC avg)
Interview Lift: +34.2% among resolved cases with interview
Avg Prosecution: 2y 7m; 25 applications currently pending
Total Applications: 108 across all art units
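As a sanity check, the headline examiner figures above follow from the raw counts. Reading the "vs TC avg" and interview-lift values as percentage-point differences is an assumption about the dashboard's methodology, not something it states:

```python
# Reproduce the Examiner Intelligence headline figures from the raw counts.
granted, resolved = 31, 83
allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")        # 37.3%, shown as 37%

# Back out the Tech Center average implied by the -14.7 point delta.
implied_tc_avg = allow_rate * 100 + 14.7
print(f"Implied TC average: {implied_tc_avg:.1f}%")  # ~52.0%

# Back out the no-interview allow rate implied by the +34.2 point lift
# on the 72% with-interview figure.
implied_without = 72.0 - 34.2
print(f"Implied allow rate without interview: {implied_without:.1f}%")
```

The implied no-interview rate (~37.8%) lands close to the 37% career rate, which is consistent with the lift being reported as a simple percentage-point difference.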

Statute-Specific Performance

§101: 33.7% (-6.3% vs TC avg)
§103: 38.3% (-1.7% vs TC avg)
§102: 10.4% (-29.6% vs TC avg)
§112: 12.5% (-27.5% vs TC avg)

Tech Center average is an estimate. Based on career data from 83 resolved cases.
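The statute figures above are internally consistent in a checkable way: treating each "vs TC avg" value as a percentage-point difference (again, an assumption about the methodology), every row backs out the same Tech Center baseline:

```python
# Cross-check the statute table: subtracting each "vs TC avg" delta from
# its allow rate should recover the Tech Center baseline, and the result
# should agree across rows if a single baseline underlies the chart.
rows = {
    "101": (33.7, -6.3),
    "103": (38.3, -1.7),
    "102": (10.4, -29.6),
    "112": (12.5, -27.5),
}
implied = {s: round(rate - delta, 1) for s, (rate, delta) in rows.items()}
for statute, tc_avg in implied.items():
    print(f"§{statute}: implied TC average = {tc_avg}%")
# All four rows imply the same 40.0% baseline, consistent with a single
# Tech Center average estimate behind the chart.
```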

Office Action

Grounds of rejection: §101, §103, §112
DETAILED ACTION

Status of Claims

This communication is a non-final action on the merits in response to the amendments and arguments filed on October 17, 2025. Claims 1-2, 9-10, and 17-18 were amended. Claims 3, 6, 11, 14, and 19 were canceled. Claims 1-2, 4-5, 7-10, 12-13, 15-18, 20-24, and 31-36 are currently pending and have been examined.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on October 17, 2025 has been entered.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

Claims 8, 16, and 24 are rejected under 35 U.S.C. 112(b) as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor regards as the invention. Claims 8, 16, and 24 recite the limitation "the hyperparameters." There is insufficient antecedent basis for this limitation in the claims. For purposes of compact prosecution, Examiner is interpreting this limitation to recite "hyperparameters." Claims 8, 16, and 24 recite the limitation "the retrieval and generation components." There is insufficient antecedent basis for this limitation in the claims.
For purposes of compact prosecution, Examiner is interpreting this limitation to recite "retrieval and generation components."

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-2, 4-5, 7-10, 12-13, 15-18, 20-24, and 31-36 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1

Claims 1-2, 4-5, 7-8, 17-18, 20-24, and 31-36 are directed to a machine. Claims 9-10, 12-13, and 15-16 are directed to a process. As such, each claim is directed to a statutory category of invention.

Step 2A Prong 1

The examiner has identified independent Claim 17 as the claim that represents the claimed invention for analysis and is similar to independent Claims 1 and 9. Independent Claim 17 recites the following abstract ideas: “facilitates the transaction of machine learning models between buyers, sellers, and experts: list, search, and transact marketplace goods comprising machine learning and artificial intelligence assets, the assets comprising models, datasets, embeddings, Retrieval Augmented Generations (RAG) models, knowledge corpora, simulations, expert responses, surveys, each asset associated with one or more defined characteristics selected from: quality, relevance, suitability, credibility, reliability, efficiency, scalability, interoperability, and delivery status; implement data contract specification and enforcement for both the marketplace and the marketplace goods; facilitating selection and integration of machine learning, artificial intelligence, and simulation models based on user requirements, compliance constraints, privacy requirements, and compatibility with other assets; collect and aggregate evaluations and ratings from human experts on
the one or more defined characteristics of the listed marketplace goods; securely process payments, licensing, and delivery of the acquired marketplace goods between buyers and sellers, the processing including enforcing usage rights and restrictions and holding of funds until delivery and acceptance of the marketplace goods is confirmed; assessing the interoperability and combinability of selected artificial intelligence assets for use in one or more integrated assets, and evaluating the efficiency and scalability of the integrated assets; suggesting improvements to the selected artificial intelligence assets based on performance results; and enable policy enforcement mechanisms and logging features to support transparency, accountability, and compliance with marketplace quality standards and intellectual property rights associated with multi-stakeholder collaboration and downstream system integration, wherein the policy enforcement mechanisms implement the data contract specifications.” The limitations, as drafted, are a process that, under its broadest reasonable interpretation, relates to commercial interactions including marketing or sales activities or behaviors (i.e., facilitates the transaction of machine learning models between buyers, sellers, and experts: list, search, and transact marketplace goods comprising machine learning and artificial intelligence assets, the assets comprising models, datasets, embeddings, Retrieval Augmented Generations (RAG) models, knowledge corpora, simulations, expert responses, surveys, each asset associated with one or more defined characteristics selected from: quality, relevance, suitability, credibility, reliability, efficiency, scalability, interoperability, and delivery status; implement data contract specification and enforcement for both the marketplace and the marketplace goods; facilitating selection and integration of machine learning, artificial intelligence, and simulation models based on user requirements, 
compliance constraints, privacy requirements, and compatibility with other assets; collect and aggregate evaluations and ratings from human experts on the one or more defined characteristics of the listed marketplace goods; securely process payments, licensing, and delivery of the acquired marketplace goods between buyers and sellers, the processing including enforcing usage rights and restrictions and holding of funds until delivery and acceptance of the marketplace goods is confirmed; assessing the interoperability and combinability of selected artificial intelligence assets for use in one or more integrated assets, and evaluating the efficiency and scalability of the integrated assets; suggesting improvements to the selected artificial intelligence assets based on performance results; and enable policy enforcement mechanisms and logging features to support transparency, accountability, and compliance with marketplace quality standards and intellectual property rights associated with multi-stakeholder collaboration and downstream system integration, wherein the policy enforcement mechanisms implement the data contract specifications), but for the recitation of generic computer components (i.e., A system for providing a marketplace platform, comprising one or more computers with executable instructions, and artificial intelligence expert models). If a claim limitation, under its broadest reasonable interpretation, relates to commercial interactions including marketing or sales activities or behaviors, but for the recitation of generic computer components, then it falls within the “Certain Methods of Organizing Human Activity” grouping of abstract ideas. Accordingly, the claim recites an abstract idea.

Step 2A Prong 2

This judicial exception is not integrated into a practical application.
Limitations that are not indicative of integration into a practical application include:

(1) Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea (MPEP 2106.05(f));
(2) Adding insignificant extra-solution activity to the judicial exception (MPEP 2106.05(g));
(3) Generally linking the use of the judicial exception to a particular technological environment or field of use (MPEP 2106.05(h)).

In particular, the claim recites the additional elements of a system for providing a marketplace platform, comprising one or more computers with executable instructions, and artificial intelligence expert models (in addition to the computing system and one or more hardware processors of Claim 1). The computer hardware is recited at a high level of generality (i.e., generic marketplace platform for listing, searching, and transacting goods, generic computers receiving, processing, generating, and displaying information, and generic AI models providing data) such that it amounts to no more than mere instructions to apply the exception using generic computer components.
Accordingly, these additional elements, when considered separately and as an ordered combination, do not integrate the abstract idea into a practical application, since they do not involve improvements to the functioning of a computer or to any other technology or technical field (MPEP 2106.05(a)), they do not apply the abstract idea with, or by use of, a particular machine (MPEP 2106.05(b)), they do not effect a transformation or reduction of a particular article to a different state or thing (MPEP 2106.05(c)), and they do not apply or use the abstract idea in some other meaningful way beyond generally linking its use to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception (MPEP 2106.05(e)). Therefore, the claim is directed to an abstract idea without a practical application.

Step 2B

The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception because, when considered separately and as an ordered combination, they do not add significantly more (also known as an “inventive concept”) to the exception. The additional elements of using computer hardware (a system for providing a marketplace platform, comprising one or more computers with executable instructions, and artificial intelligence expert models (in addition to the computing system and one or more hardware processors of Claim 1)) amount to no more than mere instructions to apply the exception using generic computer components. Mere instructions to apply an exception using generic computer components cannot provide an inventive concept. Therefore, the claim is not patent-eligible.

Dependent claims 7, 15, and 23 recite selecting, creating, and incorporating “trained models,” which is described in paragraph [0068] of the specification.
Dependent claims 8, 16, and 24 recite optimizing the hyperparameters of and updating “machine learning and artificial intelligence models,” which is described in paragraphs [0232] – [0235] of the specification. Dependent claim 31 recites implementing “federated learning across a plurality of edge devices,” and dependent claim 33 recites “distributed computational graph workflows” and “distributed computing resources.” The additional elements are generic technology/models used to implement the abstract idea, and they do not integrate the abstract idea into a practical application, nor are they sufficient to amount to significantly more than the abstract idea when considered both individually and as an ordered combination.

Dependent claims 2, 4-5, 10, 12-13, 18, 20-22, 32, and 34-36 do not include any additional elements beyond those identified above. They further define the abstract idea that is present in their respective independent claims and hence are abstract for at least the reasons presented above. As such, they do not integrate the abstract idea into a practical application, nor are they sufficient to amount to significantly more than the abstract idea when considered both individually and as an ordered combination.

Therefore, dependent claims 2, 4-5, 7-8, 10, 12-13, 15-16, 18, 20-24, and 31-36 are directed to an abstract idea, and do not include additional elements that integrate the abstract idea into a practical application, or that are sufficient to amount to significantly more than the abstract idea. Thus, the aforementioned claims are not patent-eligible.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-2, 5, 7, 9-10, 13, 15, 17-18, 21-23, and 31-35 are rejected under 35 U.S.C. 103 as being unpatentable over Ambite Molina et al. (US-20230394516) in view of Yuksel et al. (US-20220318619) and Kawas et al. (US-20230316389).
Claim 1 (and Similarly Claims 9 and 17)

Ambite Molina teaches the following limitations: A computing system for providing a marketplace platform that facilitates the transaction of machine learning models between buyers, sellers, and experts, the computing system comprising: one or more hardware processors configured for: listing, searching, and transacting marketplace goods comprising machine learning and artificial intelligence assets, the assets comprising models, datasets, embeddings, Retrieval Augmented Generations (RAG) models, knowledge corpora, simulations, expert responses, surveys, each asset associated with one or more defined characteristics selected from: quality, relevance, suitability, credibility, reliability, efficiency, scalability, interoperability, and delivery status ([0006] a federated learning marketplace is provided. The federated learning marketplace includes a central coordinator that is responsible to orchestrate the execution of the federated learning environment, and a plurality of clients that jointly train machine learning and deep learning models (i.e., federated models) on client computing devices without sharing their local private datasets. The clients only share their locally trained model parameters with the central coordinator. The central coordinator aggregates local models and computes a new global model. This process repeats for a number of synchronization periods until specific convergence criteria are met.
The federated learning marketplace also includes a plurality of model consumers that are provided licenses to use trained machine learning and deep learning models; [0048] a model provider 20 provides the necessary toolset (e.g., Nevron.AI) to automate the formation of these coalitions and enable institutions to train machine learning and deep learning models on their local datasets without ever sharing their local dataset, but their locally trained model parameters; the entire disclosure of the Nevron.AI website is hereby incorporated by reference in its entirety. Once a federated model has been trained, it is stored in the model repository for versioning, bookkeeping, and serving. All institutions that contributed to the training of the federated model have model ownership claims (model owners 22). Any institution (e.g., model consumer 24) that wants to use an already trained federated model to perform predictions over its own private or any other public dataset and has not contributed to its training, needs to pay the corresponding model serving fee. The collected fee is then distributed back to the institutions that contributed to the training of the federated model and to the model provider that enabled the model transaction. This synergy between model owners, model provider (Nevron.AI) and model consumers constitutes the federated learning marketplace); implementing data contract specification and enforcement for both the marketplace and the marketplace goods ([0048] All institutions that contributed to the training of the federated model have model ownership claims (model owners 22). Any institution (e.g., model consumer 24) that wants to use an already trained federated model to perform predictions over its own private or any other public dataset and has not contributed to its training, needs to pay the corresponding model serving fee. 
The collected fee is then distributed back to the institutions that contributed to the training of the federated model and to the model provider that enabled the model transaction; [0049] there are two revenue streams that form the monetization plan of Nevron.AI's federated learning platform, “MetisFL.” The first stream 36 is related to the training of the federated models and the second stream 38 to the federated models serving; [0050] With respect to the first stream, to enable institutions (i.e., clients) to form or join coalitions/federations and collaboratively train machine learning/deep learning models on their own private datasets, every participating institution needs to pay an annual license fee. A secure software package will be installed on-premise for each institution to handle the private training of the federated model on the institution's local dataset and securely exchange encrypted model parameters with the rest of the federation; [0051] Institutions that have contributed to the training of any federated model have free access to this specific model for lifetime. After federated training is completed, a pricing value is assigned to the model. Model price is determined by the quantity and quality of the data, computational resources that each client contributed to training, final learning performance (e.g., accuracy, F1, RMSE, MAE, etc.) of the model, as well as societal/market demand. All institutions that contributed to the training of the final federated model are referred to as federated model owners; [0052] With respect to the second stream, there are institutions that may not be equipped with the necessary computational resources (e.g., no GPU) and/or the necessary type, amount, or quality of data needed to participate in the federated training process. 
To allow these institutions to gain access via their computing devices to previously trained federated models an annual access license fee that covers the on-premise installation and subscription to the model serving infrastructure is required. Thereafter, the institutions can download and use the federated model on their own premises to perform predictions over their own private or other public datasets. These institutions are referred to as the federated model consumers. The subscription license includes a number of inference queries that can be executed over any federated model at no cost. An inference query is a forward pass over the machine/deep learning model for a single data sample (batch size=1). If the institution exceeds the allocated number of “free” queries, then the cost of every subsequent inference query will be based on the value of the federated model upon which it is executed, i.e., the query price depends on the specifications of the federation (clients) that trained the federated model (see Model Training section). The model inference/serving service package enables access to the federated model's repository and prediction queries over the federated models); facilitating selection and integration of machine learning, artificial intelligence, and simulation models based on user requirements, compliance constraints, privacy requirements, and compatibility with other assets ([0006] The federated learning marketplace includes a central coordinator that is responsible to orchestrate the execution of the federated learning environment, and a plurality of clients that jointly train machine learning and deep learning models (i.e., federated models) on client computing devices without sharing their local private datasets. The clients only share their locally trained model parameters with the central coordinator. The central coordinator aggregates local models and computes a new global model. 
This process repeats for a number of synchronization periods until specific convergence criteria are met. The federated learning marketplace also includes a plurality of model consumers that are provided licenses to use trained machine learning and deep learning models; [0046] To prevent undesired disclosure of the models to parties outside of the federation, the locally-trained machine learning model of each site is encrypted before transmission to the coordinator. The coordinator aggregates these encrypted site models in encrypted space using secure aggregation methods, such as fully homomorphic encryption, masking, or other methods of secure aggregation); securely processing payments, licensing, and delivery of the acquired marketplace goods between buyers and sellers of the acquired marketplace goods, the processing including enforcing usage rights and restrictions and holding of funds until delivery and acceptance of the marketplace goods is confirmed ([0006] The federated learning marketplace also includes a plurality of model consumers that are provided licenses to use trained machine learning and deep learning models. Advantageously, the central coordinator is configured to receive a first revenue stream from the plurality of clients and a second revenue stream from the plurality of model consumers; [0007] at least a portion of collected fees from model consumers are distributed to clients that contributed to training of a machine learning and deep learning model that is used by model consumers; [0048] All institutions that contributed to the training of the federated model have model ownership claims (model owners 22). Any institution (e.g., model consumer 24) that wants to use an already trained federated model to perform predictions over its own private or any other public dataset and has not contributed to its training, needs to pay the corresponding model serving fee. 
The collected fee is then distributed back to the institutions that contributed to the training of the federated model and to the model provider that enabled the model transaction; [0049] there are two revenue streams that form the monetization plan of Nevron.AI's federated learning platform, “MetisFL.” The first stream 36 is related to the training of the federated models and the second stream 38 to the federated models serving; [0050] With respect to the first stream, to enable institutions (i.e., clients) to form or join coalitions/federations and collaboratively train machine learning/deep learning models on their own private datasets, every participating institution needs to pay an annual license fee. A secure software package will be installed on-premise for each institution to handle the private training of the federated model on the institution's local dataset and securely exchange encrypted model parameters with the rest of the federation; [0051] Institutions that have contributed to the training of any federated model have free access to this specific model for lifetime. After federated training is completed, a pricing value is assigned to the model. Model price is determined by the quantity and quality of the data, computational resources that each client contributed to training, final learning performance (e.g., accuracy, F1, RMSE, MAE, etc.) of the model, as well as societal/market demand. All institutions that contributed to the training of the final federated model are referred to as federated model owners; [0052] With respect to the second stream, there are institutions that may not be equipped with the necessary computational resources (e.g., no GPU) and/or the necessary type, amount, or quality of data needed to participate in the federated training process. 
To allow these institutions to gain access via their computing devices to previously trained federated models an annual access license fee that covers the on-premise installation and subscription to the model serving infrastructure is required. Thereafter, the institutions can download and use the federated model on their own premises to perform predictions over their own private or other public datasets. These institutions are referred to as the federated model consumers. The subscription license includes a number of inference queries that can be executed over any federated model at no cost. An inference query is a forward pass over the machine/deep learning model for a single data sample (batch size=1). If the institution exceeds the allocated number of “free” queries, then the cost of every subsequent inference query will be based on the value of the federated model upon which it is executed, i.e., the query price depends on the specifications of the federation (clients) that trained the federated model (see Model Training section). The model inference/serving service package enables access to the federated model's repository and prediction queries over the federated models); assessing the interoperability and combinability of selected artificial intelligence assets for use in one or more integrated assets ([0006] The central coordinator aggregates local models and computes a new global model. This process repeats for a number of synchronization periods until specific convergence criteria are met), and suggesting improvements to the selected artificial intelligence assets based on performance results ([0045] The updated parameters for the global model can then be passed to the clients for further optimization. For example, the process repeats for a number of synchronization periods or asynchronously until specific convergence criteria are met. 
In this context, convergence criteria refer to the conditions under which a machine learning or deep learning algorithm is considered to have “converged” or reached its optimal performance. For example, convergence occurs when the model has learned all the relevant patterns and relationships in the training data, and can no longer improve its performance on the validation or test data. Convergence criteria assist to prevent overfitting. Examples of convergence criteria include loss function and validation accuracy. The loss function measures how well the model is able to predict the output for a given input. Convergence is achieved when the loss function reaches a minimum value or plateaus. The validation accuracy measures how well the model is able to generalize to new, unseen data. In a refinement, convergence is achieved when the validation accuracy reaches a plateau or no longer improves. In a refinement, early stopping can be used to prevent over-fitting by stopping the training process before the model has fully converged. In a further refinement, early stopping is triggered when the validation accuracy or loss function does not improve after a certain number of epochs); and enabling policy enforcement mechanisms and logging features to support transparency, accountability, and compliance with marketplace quality standards and intellectual property rights associated with multi-stakeholder collaboration and downstream system integration, wherein the policy enforcement mechanisms implement the data contract specifications ([0048] All institutions that contributed to the training of the federated model have model ownership claims (model owners 22). Any institution (e.g., model consumer 24) that wants to use an already trained federated model to perform predictions over its own private or any other public dataset and has not contributed to its training, needs to pay the corresponding model serving fee. 
The collected fee is then distributed back to the institutions that contributed to the training of the federated model and to the model provider that enabled the model transaction; [0049] there are two revenue streams that form the monetization plan of Nevron.AI's federated learning platform, “MetisFL.” The first stream 36 is related to the training of the federated models and the second stream 38 to the federated models serving; [0050] With respect to the first stream, to enable institutions (i.e., clients) to form or join coalitions/federations and collaboratively train machine learning/deep learning models on their own private datasets, every participating institution needs to pay an annual license fee. A secure software package will be installed on-premise for each institution to handle the private training of the federated model on the institution's local dataset and securely exchange encrypted model parameters with the rest of the federation; [0051] Institutions that have contributed to the training of any federated model have free access to this specific model for lifetime. After federated training is completed, a pricing value is assigned to the model. Model price is determined by the quantity and quality of the data, computational resources that each client contributed to training, final learning performance (e.g., accuracy, F1, RMSE, MAE, etc.) of the model, as well as societal/market demand. All institutions that contributed to the training of the final federated model are referred to as federated model owners; [0052] With respect to the second stream, there are institutions that may not be equipped with the necessary computational resources (e.g., no GPU) and/or the necessary type, amount, or quality of data needed to participate in the federated training process. 
To allow these institutions to gain access via their computing devices to previously trained federated models an annual access license fee that covers the on-premise installation and subscription to the model serving infrastructure is required. Thereafter, the institutions can download and use the federated model on their own premises to perform predictions over their own private or other public datasets. These institutions are referred to as the federated model consumers. The subscription license includes a number of inference queries that can be executed over any federated model at no cost. An inference query is a forward pass over the machine/deep learning model for a single data sample (batch size=1). If the institution exceeds the allocated number of “free” queries, then the cost of every subsequent inference query will be based on the value of the federated model upon which it is executed, i.e., the query price depends on the specifications of the federation (clients) that trained the federated model (see Model Training section). The model inference/serving service package enables access to the federated model's repository and prediction queries over the federated models).

However, Ambite Molina does not explicitly teach the following limitations: collecting and aggregating evaluations and ratings from human experts and artificial intelligence expert models on the one or more defined characteristics of the listed marketplace goods; evaluating the efficiency and scalability of the integrated assets.

Yuksel, in the same field of endeavor, teaches the following limitations: collecting and aggregating evaluations and ratings from… artificial intelligence expert models on the one or more defined characteristics of the listed marketplace goods ([0049] FIG. 3 is a block diagram 300 illustrating a first component of a machine learning model generation platform… the operations and components described with respect to FIG.
3 (e.g., “the Mentalist” 302) may be the first AI-powered virtual architect that empowers product owners to architect AI solutions; [0053] Mentalist 302 data may rely on thousands of model and AI/ML publications from internet sources, commercial AI/ML, and data suppliers in the platform's community… Using measurements and knowledge from previously designed architectures, it provides the specified solution, the recommended vendors, time estimates and indications of off-the-shelf vs the need of custom built technology using available AI/ML resources; [0057] FIG. 4 is a block diagram 400 illustrating a second component of a machine learning model generation platform… the operations and components described with respect to FIG. 4 (e.g., “the Matchmaker” 402) may be a virtual AI implementer; [0059] Matchmaker 402 presents a recommended, well catalogued, bill of material that implements the blueprint (e.g., architecture 306) of the solution and matches the budget and metrics to any specified requirements. It includes an easy to understand description for each AI asset need with examples and a fair and single-number benchmark with explanation; [0060] Matchmaker 402 provides a variety of benefits over existing technologies, including: [0061] 1. 
Generating the best implementation that fits the budget, quality and product success metrics; [0066] Matchmaker 402 may provide any relevant information for display and selection (e.g., purchase) on the platform; [0070] display, or provide for display, an option to access (e.g., purchase) the AI-based solution in a marketplace platform); evaluating the efficiency and scalability of the integrated assets ([0020] a variety of machine learning models that each perform an individual task may be combined together as subcomponents in a larger system, such that the combination accomplishes an operator or objective; [0022] The variety of embodiments described herein provide the infrastructure for building and deploying portable and scalable end-to-end artificial intelligence (AI) solution workflows. The embodiments allow for end-to-end orchestration, enabling & simplifying the orchestration of full AI workflows (e.g., pipelines) during both training and inference (deployment). The embodiments further allow for easy experimentation—making it easy to try numerous ideas and techniques and manage various trials/experiments for hyper-parameter tuning and benchmarking. The embodiments also allow for easy re-use—enabling the re-use of AI components and pipelines to quickly cobble together end-to-end solutions, without rebuilding each time); Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the marketplace platform of Ambite Molina with the limitations taught by Yuksel. One of ordinary skill in the art would have been motivated to make this modification for the benefit of providing AI solutions that match specified requirements, and eliminating the cost and time to hire AI specialists (Yuksel – [0059] - [0062]). 
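For illustration, the "subcomponents combined into a larger system" arrangement quoted from Yuksel [0020] and [0022] might be reduced to practice along the following lines. The stage functions, the pipeline helper, and the timing harness (standing in for an efficiency/scalability evaluation) are assumptions made for this sketch only, not code from any cited reference:

```python
import time

# Hypothetical sketch: individual model components chained into one
# end-to-end pipeline, with a timing harness standing in for an
# efficiency/scalability evaluation. All names are illustrative only.

def normalize(values):
    # Min-max scale a list of numbers into [0, 1].
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in values]

def score(values):
    # Stand-in "model" stage: reduce features to a single score.
    return sum(values) / len(values)

def make_pipeline(*stages):
    # Compose stages so the output of each feeds the next.
    def run(x):
        for stage in stages:
            x = stage(x)
        return x
    return run

pipeline = make_pipeline(normalize, score)

start = time.perf_counter()
result = pipeline([2.0, 4.0, 6.0])  # -> 0.5
elapsed = time.perf_counter() - start
```

The composition helper reflects the re-use point in Yuksel [0022]: stages can be swapped or recombined without rebuilding the workflow.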
However, Ambite Molina, in combination with Yuksel, does not explicitly teach the following limitations: collecting and aggregating evaluations and ratings from human experts… on the one or more defined characteristics of the listed marketplace goods; Kawas, in the same field of endeavor, teaches the following limitations: collecting and aggregating evaluations and ratings from human experts… on the one or more defined characteristics of the listed marketplace goods ([0038] the online marketplace described herein can enable a user to buy, sell, or trade any asset (e.g. product, service); [0111] product audio fault determiner 304 uses machine-trained models to identify faults. In some embodiments, for example those selling or auctioning vehicles, platform product audio fault determiner 304 can use machine-trained models capable of identifying the most common sounds from a running vehicle. These include sounds from the engine bay, including motor sounds, belt noise, throttle and others to determine the mechanical condition of the vehicle including potential flaws and maintenance problems. In further embodiments, product audio fault determiner 304 can make use of buyer reviews of past products to update its training); Kawas shows that collecting evaluations from human experts was known in the prior art before the effective filing date of the claimed invention. Since each individual element and its function are shown in the prior art, albeit shown in separate references, the difference between the claimed subject matter and the prior art rests not on any individual element or function, but in the very combination itself; that is, in the substitution of Kawas's collecting of evaluations from human experts for Yuksel's collecting of evaluations from AI expert models. Thus, the simple substitution of one known element for another producing a predictable result renders the claim obvious.
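For illustration, the claimed "collecting and aggregating evaluations and ratings from human experts and artificial intelligence expert models" could be reduced to practice as a trust-weighted aggregation. The source-type weights below are an assumption made for this sketch, not taught by any cited reference:

```python
# Hypothetical sketch of aggregating ratings from mixed sources.
# The relative trust weights are an assumption, not from any reference.

def aggregate_ratings(ratings):
    """ratings: list of (source_type, score) pairs, with source_type in
    {'human', 'ai'}; returns a trust-weighted mean score."""
    weights = {"human": 1.0, "ai": 0.8}  # assumed relative weights
    total = sum(weights[src] * score for src, score in ratings)
    norm = sum(weights[src] for src, _ in ratings)
    return total / norm if norm else 0.0

# Two human ratings and one AI-model rating of the same characteristic:
combined = aggregate_ratings([("human", 4.0), ("human", 5.0), ("ai", 3.0)])
```

A single aggregated score per defined characteristic is one plausible output of the limitation as claimed.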
Claim 2 (and Similarly Claims 10 and 18) Yuksel further teaches the following limitations: wherein the one or more hardware processors are further configured for: collecting the evaluations and ratings from the human experts or the artificial intelligence expert models while browsing external data sources ([0053] Mentalist 302 data may rely on thousands of model and AI/ML publications from internet sources, commercial AI/ML, and data suppliers in the platform's community); quantifying the one or more defined characteristics ([0050] Mentalist 302 may provide an AI solution architecture through understanding the product needs, the relevant product success metrics, and any data resource constraints; [0059] Matchmaker 402 presents a recommended, well catalogued, bill of material that implements the blueprint (e.g., architecture 306) of the solution and matches the budget and metrics to any specified requirements. It includes an easy to understand description for each AI asset need with examples and a fair and single-number benchmark with explanation; [0060] Matchmaker 402 provides a variety of benefits over existing technologies, including: [0061] 1. Generating the best implementation that fits the budget, quality and product success metrics); and assessing the credibility and reliability of experts providing evaluations based on their historical evaluations and community feedback ([0053] Using measurements and knowledge from previously designed architectures, it provides the specified solution; [0066] Matchmaker 402 may provide estimates of benchmark data, costs, and time to build. Such data may be generated based on past knowledge of relative values for each model provided in the solution, or estimated using any number of statistical methods). This known technique is applicable to the system of Ambite Molina as they both share characteristics and capabilities, namely, they are directed to an online marketplace.
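The "assessing the credibility and reliability of experts … based on their historical evaluations and community feedback" limitation admits a simple numeric sketch. The 5-point rating scale and the 70/30 accuracy/approval blend below are illustrative assumptions, not drawn from any cited reference:

```python
# Hypothetical sketch of an expert-credibility score combining historical
# evaluation accuracy with community feedback. Scale and blend are assumed.

def credibility(history, feedback):
    """history: past (predicted, actual) rating pairs on a 5-point scale;
    feedback: community approvals as 0/1 flags. Returns a score in [0, 1]."""
    if history:
        mean_err = sum(abs(p - a) for p, a in history) / len(history)
        accuracy = max(0.0, 1.0 - mean_err / 5.0)
    else:
        accuracy = 0.0
    approval = sum(feedback) / len(feedback) if feedback else 0.0
    return 0.7 * accuracy + 0.3 * approval  # assumed 70/30 blend

# An expert with two past evaluations and three community feedback flags:
expert_score = credibility([(4.0, 5.0), (3.0, 3.0)], [1, 1, 0])
```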
One of ordinary skill in the art, before the effective filing date of the claimed invention, would have recognized that applying the known technique of Yuksel would have yielded predictable results and resulted in an improved system. It would have been recognized that applying the technique of Yuksel to the teachings of Ambite Molina would have yielded predictable results because one of ordinary skill in the art would have known to incorporate such features (i.e., collecting data from external sources on the internet, assessing the credibility of an expert based on historical evaluations / feedback) into similar systems. Claim 5 (and Similarly Claims 13 and 21) Yuksel further teaches the following limitations: wherein the one or more hardware processors are further configured for proactively suggesting relevant goods, experts, or collaborators based on user preferences, transaction history, and platform interactions ([0059] Matchmaker 402 presents a recommended, well catalogued, bill of material that implements the blueprint (e.g., architecture 306) of the solution and matches the budget and metrics to any specified requirements; [0095] In some embodiments, in order to generate models, a dataset must be used. However, the entity generating a model may not have rights to the dataset that is needed to generate the model. Further, there may be use restrictions on the dataset. For example, the dataset or a subset thereof may include sensitive information and be subject to compliance laws, such as privacy regulations regarding children's personal information. Traditionally, if the dataset owner licensed or otherwise gave rights to the entity to access the dataset, the dataset owner would have a limited ability to monitor or track the entity's usage.
In some embodiments of the present invention, the entity may still access and use the dataset in a secure environment, such that the dataset owner is satisfied with its ability to verify the entity's usage and access privileges to the dataset). This known technique is applicable to the system of Ambite Molina as they both share characteristics and capabilities, namely, they are directed to an online marketplace. One of ordinary skill in the art, before the effective filing date of the claimed invention, would have recognized that applying the known technique of Yuksel would have yielded predictable results and resulted in an improved system. It would have been recognized that applying the technique of Yuksel to the teachings of Ambite Molina would have yielded predictable results because one of ordinary skill in the art would have known to incorporate such features (i.e., providing a suggestion based on user preferences or transaction history) into similar systems. Claim 7 (and Similarly Claims 15 and 23) Ambite Molina further teaches the following limitations: wherein the one or more hardware processors are further configured for selecting, creating, and incorporating trained models based on expert judgment inputs ([0006] The federated learning marketplace includes a central coordinator that is responsible to orchestrate the execution of the federated learning environment, and a plurality of clients that jointly train machine learning and deep learning models (i.e., federated models) on client computing devices… The central coordinator aggregates local models and computes a new global model. This process repeats for a number of synchronization periods until specific convergence criteria are met. The federated learning marketplace also includes a plurality of model consumers that are provided licenses to use trained machine learning and deep learning models).
Claim 22 Ambite Molina further teaches the following limitations: wherein the system is further caused to: assess the interoperability and combinability of the selected machine learning models and datasets ([0006] The federated learning marketplace includes a central coordinator that is responsible to orchestrate the execution of the federated learning environment, and a plurality of clients that jointly train machine learning and deep learning models (i.e., federated models) on client computing devices without sharing their local private datasets. The clients only share their locally trained model parameters with the central coordinator. The central coordinator aggregates local models and computes a new global model. This process repeats for a number of synchronization periods until specific convergence criteria are met); suggest improvements and enhancements to the selected models and datasets ([0045] The updated parameters for the global model can then be passed to the clients for further optimization. For example, the process repeats for a number of synchronization periods or asynchronously until specific convergence criteria are met. In this context, convergence criteria refer to the conditions under which a machine learning or deep learning algorithm is considered to have “converged” or reached its optimal performance. For example, convergence occurs when the model has learned all the relevant patterns and relationships in the training data, and can no longer improve its performance on the validation or test data. Convergence criteria assist to prevent overfitting. Examples of convergence criteria include loss function and validation accuracy. The loss function measures how well the model is able to predict the output for a given input. Convergence is achieved when the loss function reaches a minimum value or plateaus. The validation accuracy measures how well the model is able to generalize to new, unseen data. 
In a refinement, convergence is achieved when the validation accuracy reaches a plateau or no longer improves. In a refinement, early stopping can be used to prevent over-fitting by stopping the training process before the model has fully converged. In a further refinement, early stopping is triggered when the validation accuracy or loss function does not improve after a certain number of epochs). Yuksel further teaches the following limitations: evaluate the efficiency and scalability of the integrated models ([0020] a variety of machine learning models that each perform an individual task may be combined together as subcomponents in a larger system, such that the combination accomplishes an operator or objective; [0022] The variety of embodiments described herein provide the infrastructure for building and deploying portable and scalable end-to-end artificial intelligence (AI) solution workflows. The embodiments allow for end-to-end orchestration, enabling & simplifying the orchestration of full AI workflows (e.g., pipelines) during both training and inference (deployment). The embodiments further allow for easy experimentation—making it easy to try numerous ideas and techniques and manage various trials/experiments for hyper-parameter tuning and benchmarking. The embodiments also allow for easy re-use—enabling the re-use of AI components and pipelines to quickly cobble together end-to-end solutions, without rebuilding each time). Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the marketplace platform of Ambite Molina with the limitations taught by Yuksel. One of ordinary skill in the art would have been motivated to make this modification for the benefit of easy experimentation, and easy re-use of AI models and components to quickly cobble together end-to-end solutions, without rebuilding each time (Yuksel – [0022]).
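The early-stopping refinement quoted from Ambite Molina [0045] (training halts when the validation metric fails to improve for a certain number of epochs) corresponds to a standard patience loop. The sketch below is a generic illustration of that technique, not code from the reference:

```python
# Generic early-stopping sketch matching the quoted refinement: stop when
# the validation loss fails to improve for `patience` consecutive epochs.

def stopping_epoch(val_losses, patience=3):
    best = float("inf")
    stale = 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best, stale = loss, 0  # improvement: reset the patience counter
        else:
            stale += 1
            if stale >= patience:
                return epoch  # training would halt here
    return len(val_losses) - 1  # ran to completion without triggering

# Loss plateaus after epoch 2, so training halts three epochs later:
halt_at = stopping_epoch([1.0, 0.8, 0.7, 0.7, 0.71, 0.72])  # -> 5
```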
Claim 31 Ambite Molina further teaches the following limitations: wherein the one or more hardware processors are configured for implementing federated learning across a plurality of edge devices while maintaining data privacy by keeping data on local nodes ([0006] The federated learning marketplace includes a central coordinator that is responsible to orchestrate the execution of the federated learning environment, and a plurality of clients that jointly train machine learning and deep learning models (i.e., federated models) on client computing devices without sharing their local private datasets. The clients only share their locally trained model parameters with the central coordinator. The central coordinator aggregates local models and computes a new global model. This process repeats for a number of synchronization periods until specific convergence criteria are met. The federated learning marketplace also includes a plurality of model consumers that are provided licenses to use trained machine learning and deep learning models). Claim 32 Kawas further teaches the following limitations: wherein the experts and expert models comprise both human experts and artificial intelligence systems registered as experts on the marketplace platform ([0111] product audio fault determiner 304 uses machine-trained models to identify faults. In some embodiments, for example those selling or auctioning vehicles, platform product audio fault determiner 304 can use machine-trained models capable of identifying the most common sounds from a running vehicle. These include sounds from the engine bay, including motor sounds, belt noise, throttle and others to determine the mechanical condition of the vehicle including potential flaws and maintenance problems. In further embodiments, product audio fault determiner 304 can make use of buyer reviews of past products to update its training). 
Kawas shows that collecting evaluations from human experts was known in the prior art before the effective filing date of the claimed invention. Since each individual element and its function are shown in the prior art, albeit shown in separate references, the difference between the claimed subject matter and the prior art rests not on any individual element or function, but in the very combination itself; that is, in the substitution of Kawas's collecting of evaluations from human experts for Yuksel's collecting of evaluations from AI expert models. Thus, the simple substitution of one known element for another producing a predictable result renders the claim obvious. Claim 33 Ambite Molina further teaches the following limitations: wherein the one or more hardware processors are further configured for incorporating the marketplace goods into distributed computational graph workflows that dynamically orchestrate the selection and execution of models across distributed computing resources ([0006] The federated learning marketplace includes a central coordinator that is responsible to orchestrate the execution of the federated learning environment, and a plurality of clients that jointly train machine learning and deep learning models (i.e., federated models) on client computing devices without sharing their local private datasets. The clients only share their locally trained model parameters with the central coordinator. The central coordinator aggregates local models and computes a new global model. This process repeats for a number of synchronization periods until specific convergence criteria are met. The federated learning marketplace also includes a plurality of model consumers that are provided licenses to use trained machine learning and deep learning models).
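The aggregation step quoted from Ambite Molina [0006] (the central coordinator combines locally trained parameters into a new global model) is commonly implemented as a data-size-weighted average in the FedAvg style. The sketch below is a generic illustration of that convention, assumed here rather than taken from the reference:

```python
# FedAvg-style sketch: the coordinator forms the new global model as a
# weighted average of the clients' locally trained parameter vectors.
# Weighting by local dataset size is a common convention, assumed here.

def federated_average(client_params, client_sizes):
    total = sum(client_sizes)
    dim = len(client_params[0])
    return [
        sum(params[i] * n for params, n in zip(client_params, client_sizes)) / total
        for i in range(dim)
    ]

# Two clients holding 1 and 3 local samples respectively; only their
# parameters (never the underlying data) reach the coordinator:
global_model = federated_average([[1.0, 2.0], [3.0, 4.0]], [1, 3])  # -> [2.5, 3.5]
```

In a real deployment this step repeats each synchronization period until the convergence criteria quoted above are met.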
Claim 34 Kawas further teaches the following limitations: wherein artificial intelligence experts in the marketplace are trained to specialize in specific niches or sub-domains within a larger field ([0111] product audio fault determiner 304 uses machine-trained models to identify faults. In some embodiments, for example those selling or auctioning vehicles, platform product audio fault determiner 304 can use machine-trained models capable of identifying the most common sounds from a running vehicle. These include sounds from the engine bay, including motor sounds, belt noise, throttle and others to determine the mechanical condition of the vehicle including potential flaws and maintenance problems). This known technique is applicable to the system of Ambite Molina, in combination with Yuksel, as they both share characteristics and capabilities, namely, they are directed to an online marketplace. One of ordinary skill in the art, before the effective filing date of the claimed invention, would have recognized that applying the known technique of Kawas would have yielded predictable results and resulted in an improved system. It would have been recognized that applying the technique of Kawas to the teachings of Ambite Molina, in combination with Yuksel, would have yielded predictable results because one of ordinary skill in the art would have known to incorporate such features (i.e., experts being trained to specialize in niches) into similar systems.
Claim 35 Ambite Molina further teaches the following limitations: wherein the one or more hardware processors are further configured for incorporating user feedback and preferences into an optimization process for continuous improvement and alignment with user expectations ([0006] The federated learning marketplace includes a central coordinator that is responsible to orchestrate the execution of the federated learning environment, and a plurality of clients that jointly train machine learning and deep learning models (i.e., federated models) on client computing devices without sharing their local private datasets. The clients only share their locally trained model parameters with the central coordinator. The central coordinator aggregates local models and computes a new global model. This process repeats for a number of synchronization periods until specific convergence criteria are met; [0045] The updated parameters for the global model can then be passed to the clients for further optimization. For example, the process repeats for a number of synchronization periods or asynchronously until specific convergence criteria are met. In this context, convergence criteria refer to the conditions under which a machine learning or deep learning algorithm is considered to have “converged” or reached its optimal performance. For example, convergence occurs when the model has learned all the relevant patterns and relationships in the training data, and can no longer improve its performance on the validation or test data. Convergence criteria assist to prevent overfitting. Examples of convergence criteria include loss function and validation accuracy. The loss function measures how well the model is able to predict the output for a given input. Convergence is achieved when the loss function reaches a minimum value or plateaus. The validation accuracy measures how well the model is able to generalize to new, unseen data. 
In a refinement, convergence is achieved when the validation accuracy reaches a plateau or no longer improves. In a refinement, early stopping can be used to prevent over-fitting by stopping the training process before the model has fully converged. In a further refinement, early stopping is triggered when the validation accuracy or loss function does not improve after a certain number of epochs). Claims 4, 12, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Ambite Molina et al. (US-20230394516) in view of Yuksel et al. (US-20220318619) and Kawas et al. (US-20230316389), and further in view of Luk et al. (US-20240119486). Claim 4 (and Similarly Claims 12 and 20) Ambite Molina further teaches the following limitations: wherein the one or more hardware processors are further configured for: sharing and co-developing machine learning projects ([0006] The federated learning marketplace includes a central coordinator that is responsible to orchestrate the execution of the federated learning environment, and a plurality of clients that jointly train machine learning and deep learning models (i.e., federated models) on client computing devices… The federated learning marketplace also includes a plurality of model consumers that are provided licenses to use trained machine learning and deep learning models). However, Ambite Molina, in combination with Yuksel and Kawas, does not explicitly teach the following limitations: documenting best practices, tutorials, and case studies related to the listed goods. Luk, in the same field of endeavor, teaches the following limitations: documenting best practices, tutorials, and case studies related to the listed goods ([0018] Companies and individuals produce short-form videos using multiple formats and approaches, demonstrating the use of products and related services, comparing their products to competitors, etc.
In some cases, endorsements by celebrities and known experts are included, as are instructional videos detailing the uses of products in particular applications… consumers can comment on their experiences with products and companies and share them with others; [0019] Techniques for dynamic population of contextually relevant videos in an ecommerce environment are disclosed. First, a repository of short-form videos is assembled. The short-form video collection may be composed of professionally produced videos on behalf of a vendor or group of vendors, short-form video demonstrations or commentaries by users of products, celebrity endorsements, or a combination of these and other related videos. As the collection is brought together, metadata related to the short-form videos is captured. The metadata may include hashtags, user history, ranking, view history, and so on. As the videos are added to the repository, associations are made with one or more products or services for sale. The associations indicate what products or services are highlighted by the short-form videos. The associations can be generated using short-form video metadata or machine learning techniques. In some cases, multiple products or related services may be highlighted by a single video, leading to multiple associations within the repository. Machine learning can also aid in analyzing and categorizing the videos, using additional user data or conversion rate information). This known technique is applicable to the system of Ambite Molina, in combination with Yuksel and Kawas, as they both share characteristics and capabilities, namely, they are directed to an online marketplace. One of ordinary skill in the art, before the effective filing date of the claimed invention, would have recognized that applying the known technique of Luk would have yielded predictable results and resulted in an improved system. 
It would have been recognized that applying the technique of Luk to the teachings of Ambite Molina, in combination with Yuksel and Kawas, would have yielded predictable results because one of ordinary skill in the art would have known to incorporate such features (i.e., documenting best practices and tutorials for goods in an online marketplace) into similar systems. Claims 8, 16, and 24 are rejected under 35 U.S.C. 103 as being unpatentable over Ambite Molina et al. (US-20230394516) in view of Yuksel et al. (US-20220318619) and Kawas et al. (US-20230316389), and further in view of Hightower (US-20240403984; supported by US Provisional Application 63/505,524 filed 6/1/2023). Claim 8 (and Similarly Claims 16 and 24) Yuksel further teaches the following limitations: wherein the one or more hardware processors are further configured for: continuously adjusting and optimizing the hyperparameters of the machine learning and artificial intelligence models based on performance metrics and user feedback ([0022] The variety of embodiments described herein provide the infrastructure for building and deploying portable and scalable end-to-end artificial intelligence (AI) solution workflows. The embodiments allow for end-to-end orchestration, enabling & simplifying the orchestration of full AI workflows (e.g., pipelines) during both training and inference (deployment). The embodiments further allow for easy experimentation—making it easy to try numerous ideas and techniques and manage various trials/experiments for hyper-parameter tuning and benchmarking.
The embodiments also allow for easy re-use—enabling the re-use of AI components and pipelines to quickly cobble together end-to-end solutions, without rebuilding each time; [0098] Additional tuning of the model, in Model Hypertuning 742, may be done after the model has been validated); dynamically updating and fine-tuning machine learning and artificial intelligence models based on newly available data and evolving user requirements ([0047] a ML inference component may use one or more trained machine learning models to make a recommendation and/or prediction. Such machine learning models may be provided directly to the ML inference component or be received as output from a machine learning training component; [0048] ML training component may optionally receive the output of the ML inference component and, along with other information (e.g., data, references, metrics, etc.) generate new ML models or fine-tune existing ML models; [0059] Matchmaker 402 presents a recommended, well catalogued, bill of material that implements the blueprint (e.g., architecture 306) of the solution and matches the budget and metrics to any specified requirements. It includes an easy to understand description for each AI asset need with examples and a fair and single-number benchmark with explanation. Whenever needed, Matchmaker 402 may auto-procure non-existing assets (inference nodes, models, datasets) and connects the product owner with two or more recommended suppliers. In one embodiment, each asset may have at least one swappable replacement option, if desired). Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the marketplace platform of Ambite Molina with the limitations taught by Yuksel.
One of ordinary skill in the art would have been motivated to make this modification for the benefit of providing AI solutions that match specified requirements, and eliminating the cost and time to hire AI specialists (Yuksel – [0059] - [0062]). However, Ambite Molina, in combination with Yuksel and Kawas, does not explicitly teach the following limitations: optimizing the retrieval and generation components of the RAG models, including fine-tuning retrieval algorithms, updating knowledge bases, and enhancing generation quality; and incorporating user feedback and preferences into the optimization process, ensuring continuous improvement and alignment with user expectations. Hightower, in the same field of endeavor, teaches the following limitations: optimizing the retrieval and generation components of the RAG models, including fine-tuning retrieval algorithms, updating knowledge bases, and enhancing generation quality ([0025] the resulting answer from block 206 may be parsed for possible hallucinations. In certain implementations, the data that exists in the proprietary system may be used to validate the answer and/or to determine if the resulting answer includes any hallucinations. One approach that may be used to reduce hallucinations is the Retrieval-Augmented Generation (RAG) model in conjunction with vector databases. This approach may enable efficient leveraging of large language models (LLM) with proprietary data. In accordance with certain exemplary implementations of the disclosed technology, a trusted knowledge source (i.e., data from the proprietary system) may be searched for relevant data. The model may use those results to generate a user-friendly response and consolidate the pertinent details into a single concise answer. In certain implementations, vector databases may be used to improve the performance of the RAG model. In certain implementations, vector databases may store text as embeddings, or numerical vectors that capture its meaning. 
Questions may also be converted into a numerical vector. Relevant documents or passages can then be found in the vector database, even when they don't share the same words); and incorporating user feedback and preferences into the optimization process, ensuring continuous improvement and alignment with user expectations ([0035] Certain implementation of the disclosed technology may provide improvements in flexibility. For example, the code generation step can be in the SQL language to run in proprietary databases, or code to execute within systems. Certain implementations may create surveys or other tools to collect information from customers in proprietary systems without sharing the data. In certain implementations, the full process may be automated to run in seconds or over a predetermined time period (such as months, for example) with collection of external user input as part of the steps). Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the marketplace platform of Ambite Molina, in combination with Yuksel and Kawas, with the limitations taught by Hightower. One of ordinary skill in the art would have been motivated to make this modification for the benefit of mitigating the risk of hallucinations that can occur when relying on an AI/ML/LLM model to generate answers (Hightower – [0025]). Claim 36 is rejected under 35 U.S.C. 103 as being unpatentable over Ambite Molina et al. (US-20230394516) in view of Yuksel et al. (US-20220318619) and Kawas et al. (US-20230316389), and further in view of Chandra et al. (US-20240281742). 
Claim 36

Ambite Molina further teaches the following limitations: wherein the marketplace goods include… packages combining machine learning models ([0006] The federated learning marketplace includes a central coordinator that is responsible to orchestrate the execution of the federated learning environment, and a plurality of clients that jointly train machine learning and deep learning models (i.e., federated models) on client computing devices without sharing their local private datasets. The clients only share their locally trained model parameters with the central coordinator. The central coordinator aggregates local models and computes a new global model. This process repeats for a number of synchronization periods until specific convergence criteria are met).

However, Ambite Molina, in combination with Yuksel and Kawas, does not explicitly teach the following limitations: neurosymbolic packages combining machine learning models with symbolic reasoning rules and workflows.

Chandra, in the same field of endeavor, teaches the following limitations: neurosymbolic packages combining machine learning models with symbolic reasoning rules and workflows ([0072] The machine learning engine 412 may, in some approaches, perform predetermined observing and/or assessing operations defined within an AI reasoning model. In some preferred approaches, the AI reasoning model is a neuro-symbolic AI model).

Chandra shows that neurosymbolic models were known in the prior art before the effective filing date of the claimed invention. Since each individual element and its function are shown in the prior art, albeit shown in separate references, the difference between the claimed subject matter and the prior art rests not on any individual element or function, but in the very combination itself; that is, in the substitution of the neurosymbolic models of Chandra for the general machine learning models of Ambite Molina. Thus, the simple substitution of one known element for another producing a predictable result renders the claim obvious.

Response to Arguments

Applicant’s Argument Regarding 35 USC 112(b) Rejections of Claims 2-3, 6, 8, 10-11, 14, 16, 18-19, 22, and 24: Claims 1, 9, and 17 have been amended to provide proper antecedent basis.

Examiner’s Response: Applicant’s amendments have been fully considered and they resolve the identified issues, except for claims 8, 16, and 24, which have remaining antecedent basis issues. The terms “hyperparameters” and “retrieval and generation components” are preceded by “the” in the claims; however, those terms were not introduced earlier in the same claims or in the independent claims. Refer to the 112(b) rejection above.

Applicant’s Argument Regarding 35 USC 101 Rejection of Claims 1-24 and 31-36: As amended, Claim 1 is directed to a specific technological system that governs the integration, assessment, optimization, and orchestration of machine learning and artificial intelligence assets in a distributed computing marketplace.
The claim recites a computing system with hardware processors specifically configured for operations including (1) transacting AI/ML assets (models, datasets, embeddings, RAG models, knowledge corpora, simulations, expert responses, surveys) each associated with defined technical characteristics; (2) implementing data contract specification and enforcement for the marketplace and its goods; (3) facilitating selection and integration based on user requirements, compliance constraints, privacy requirements, and compatibility with other assets; (4) collecting and aggregating evaluations from both human experts and AI expert models; (5) assessing interoperability and combinability of selected AI assets and evaluating efficiency and scalability of integrated assets; (6) suggesting improvements based on performance results; and (7) enabling policy enforcement mechanisms with logging features that implement the data contract specifications.

These features cannot be performed in the human mind or with pencil and paper. They represent concrete computing operations that involve technical assessments of AI asset characteristics, automated evaluation of model interoperability and combinability, performance-based optimization recommendations, and enforcement of data contracts across distributed systems, operations that go far beyond generic "buying and selling."

The August 4, 2025 USPTO Memorandum on subject matter eligibility (the "Kim Memorandum") emphasizes that examiners must not expand the "mental process" grouping to encompass limitations that cannot practically be performed mentally, and that claims which involve AI in ways that cannot be mentally performed fall outside of this grouping.
The automated assessment of interoperability and combinability of multiple AI assets, the evaluation of efficiency and scalability of integrated systems, and the generation of performance-based improvement suggestions recited in amended Claim 1 are quintessential examples of operations that cannot be performed mentally. These limitations require computational analysis of technical parameters across multiple assets, quantitative evaluation of system performance metrics, and algorithmic generation of optimization recommendations, none of which can be accomplished through mental processes or manual methods.

Claim 1 integrates any abstract concept into a practical application by providing a compliance-aware, provenance-enforced marketplace infrastructure for AI/ML assets with automated technical assessment capabilities. The USPTO's Appeals Review Panel (ARP) decision in Ex parte Desjardins (Sept. 26, 2025) underscores that improvements to the functioning of machine learning models are patent-eligible subject matter. In Desjardins, the Board had initially entered a new ground of rejection under §101, characterizing the claims as merely reciting a mathematical calculation, specifically an "approximation of a posterior distribution." On rehearing, the ARP vacated that rejection, holding that the claims were not directed to an abstract idea but instead to a practical application that improved the operation of machine learning systems. The ARP recognized that the claimed approach to sequential training directly addressed a technical problem in the field, catastrophic forgetting, by enabling a model to learn new tasks while preserving knowledge of previously learned tasks. This resulted in reduced storage requirements and lower system complexity, technical benefits that improved the functioning of the model itself.
The panel stressed that such claims must not be dismissed as abstract merely because they involve mathematical techniques, and warned against equating all AI or machine learning innovations with unpatentable "algorithms."

Claim 1 is closely analogous and, if anything, more clearly directed to technological improvements. It does not merely recite generic marketplace interactions or aspirational outcomes. Rather, it sets forth concrete technical mechanisms that improve the governance, orchestration, assessment, and trustworthiness of AI/ML assets in distributed computing environments.

Specifically, the recited assessment of "interoperability and combinability of selected artificial intelligence assets for use in one or more integrated assets" addresses a fundamental technical challenge in AI system integration: determining whether heterogeneous models, datasets, and other AI components can be effectively combined. This assessment requires technical evaluation of factors such as data format compatibility, input/output dimensionality matching, processing pipeline alignment, and semantic consistency across assets, technical considerations that are specific to the field of AI system architecture and cannot be reduced to conventional marketplace evaluation.

Similarly, the recited evaluation of "efficiency and scalability of the integrated assets" provides a technical solution to the problem of predicting system performance characteristics before deployment. This evaluation necessarily involves computational analysis of resource utilization, processing throughput, latency characteristics, and scaling behavior, metrics that are specific to distributed computing systems and AI model execution, not generic commercial considerations.
Furthermore, the recited functionality of "suggesting improvements to the selected artificial intelligence assets based on performance results" implements a technical feedback mechanism that enhances the marketplace's ability to guide users toward optimal configurations. This goes beyond simple recommendation systems by tying suggestions directly to measured technical performance, thereby improving the overall functioning of how AI assets are selected, integrated, and deployed through the marketplace platform.

Additional features such as compliance-aware model selection (based on user requirements, compliance constraints, and privacy requirements), expert and AI-based scoring of asset characteristics (including quality, relevance, suitability, credibility, reliability, efficiency, scalability, and interoperability), and policy enforcement with logging mechanisms provide enforceable controls that ensure only assets meeting defined technical and legal criteria are selected and integrated.

Similarly, the recited ability to transact AI/ML assets under data contract enforcement, and to orchestrate distributed execution, are not abstract commercial activities. These are specific architectural solutions to recognized technical challenges in AI integration, including preservation of intellectual property rights in downstream systems and compliance with privacy and usage restrictions in distributed contexts.

Just as the Desjardins panel found that continual learning improvements transformed a mathematical calculation into a patent-eligible technological advance, Claim 1 transforms a high-level marketplace concept into a technical infrastructure that ensures compliance, accountability, interoperability assessment, performance evaluation, and optimization guidance for AI model integration.
These improvements are inseparable from the functioning of the computing system itself: they define how the system processes, evaluates, optimizes, enforces, and orchestrates AI assets in ways that cannot be accomplished manually or with generic e-commerce platforms. The claims therefore reflect an improvement in the functioning of the technology itself, not a mere recitation of an abstract idea or method of organizing human activity.

Applicant further submits that Claim 1 does not risk preemption of any alleged abstract idea. The Examiner has characterized Claim 1 as a form of marketplace activity. However, Claim 1 is meaningfully limited to a specific technological implementation that governs the integration and transaction of AI/ML assets. Claim 1 requires: (1) data contract specification and enforcement; (2) selection and integration based on user requirements, compliance constraints, privacy requirements, and compatibility with other assets; (3) collection and aggregation of evaluations from both human experts and AI expert models on defined characteristics; (4) secure processing with enforcement of usage rights and restrictions and fund holding until confirmed delivery and acceptance; (5) assessment of interoperability and combinability of selected AI assets and evaluation of efficiency and scalability of integrated assets; (6) suggesting improvements based on performance results; and (7) policy enforcement mechanisms with logging that implement the data contract specifications.

These limitations are not generic e-commerce activities; they are concrete technical requirements that confine Claim 1 to a particular approach to interoperability assessment, performance evaluation, optimization guidance, compliance enforcement, and provenance tracking in distributed AI environments. As a result, Claim 1 leaves ample room for others to implement marketplace transactions or model exchanges using different techniques.
For example, one could build a marketplace for AI models without compliance-aware enforcement, without policy-based logging, without interoperability and combinability assessment, without efficiency and scalability evaluation, without performance-based improvement suggestions, or without expert/AI hybrid scoring. All such implementations fall outside the scope of Claim 1. Claim 1 therefore does not attempt to monopolize the idea of "trading AI models" in the abstract, but is instead narrowly directed to a specific, non-conventional technological solution that incorporates automated technical assessment and optimization capabilities alongside governance and compliance features. Since Claim 1 is meaningfully limited and avoids preemption, it integrates any alleged abstract idea into a practical application.

Examiner’s Response: Applicant’s arguments have been fully considered but they are not persuasive. The steps of transacting assets, implementing data contract specification and enforcement for the marketplace and its goods, facilitating selection and integration based on user requirements, compliance constraints, privacy requirements, and compatibility with other assets, collecting and aggregating evaluations from experts, assessing interoperability and combinability of selected assets and evaluating efficiency and scalability of integrated assets, suggesting improvements based on performance results, and enabling policy enforcement mechanisms with logging features that implement the data contract specifications, are all steps that recite an abstract idea; more specifically, they fall within the Certain Methods of Organizing Human Activity grouping, and within the commercial interactions subgrouping of abstract ideas. Though the assets are AI/ML assets, this is a very high-level recitation of AI and machine learning.
Further, regarding the collection and aggregation of evaluations from experts that include AI expert models, the AI expert models are recited at a high level of generality, as tools to implement the abstract idea, which is further demonstrated by the fact that the experts also include human experts. Further, the platform, computing system, and hardware processors are also recited at a high level of generality, as tools to implement the abstract idea.

Regarding the argument that these features cannot be performed in the human mind or with pencil and paper, the claims were not rejected under the Mental Processes grouping of abstract ideas; they were rejected under the Certain Methods of Organizing Human Activity grouping of abstract ideas.

Regarding the claims of Desjardins, they provided a technical improvement: an improvement to the functioning of machine learning models. However, the present claims do not provide any improvement to the functioning of machine learning models or any other technology. Rather, the claims recite the models at a high level, as assets in a marketplace. Further, improving the “governance, orchestration, assessment, and trustworthiness” of AI/ML assets is a recitation of an improvement to the abstract idea, and not an improvement to the technology.

Regarding the assessment of "interoperability and combinability of selected artificial intelligence assets for use in one or more integrated assets," and the evaluation of "efficiency and scalability of the integrated assets," the claim recites only the idea of a solution or outcome, without reciting details of how a solution to a problem is accomplished; further, the specification does not provide any details as to how the claimed invention provides any improvement to the functioning of machine learning models.
Regarding the recited functionality of "suggesting improvements to the selected artificial intelligence assets based on performance results," the step of suggesting improvements based on performance results is a recitation of the abstract idea itself, and the AI assets are used as tools upon which the abstract idea is implemented. Compliance-aware selection based on user requirements, compliance constraints, and privacy requirements; expert scoring of asset characteristics including quality, relevance, suitability, credibility, reliability, efficiency, scalability, and interoperability; and policy enforcement with logging mechanisms are all part of the abstract idea. The AI expert models, as previously stated, are used as tools to implement the abstract idea.

The providing of enforceable controls that ensure only assets meeting defined technical and legal criteria are selected and integrated is a recitation of an improvement to the abstract idea itself, rather than a recitation of an improvement to technology. The ability to transact assets under data contract enforcement is part of the abstract idea, and the data contract enforcement also falls under the legal interactions subgrouping, including agreements in the form of contracts. The orchestrating of distributed execution of models recites distributed computing resources and models at a high level of generality, and does not provide any improvement to technology.

As previously stated, the present claims do not provide any improvement to the functioning of machine learning models or any other technology. The claim recites only the idea of a solution or outcome, without reciting details of how a solution to a problem is accomplished; further, the specification does not provide any details as to how the claimed invention provides any improvement to the functioning of machine learning models.
Data contract specification and enforcement; selection and integration based on user requirements, compliance constraints, privacy requirements, and compatibility with other assets; collection and aggregation of evaluations from experts on defined characteristics; secure processing with enforcement of usage rights and restrictions and fund holding until confirmed delivery and acceptance; assessment of interoperability and combinability of selected assets and evaluation of efficiency and scalability of integrated assets; suggesting improvements based on performance results; and policy enforcement mechanisms with logging that implement the data contract specifications are all steps that are directed to the abstract idea. As previously stated, regarding the collection and aggregation of evaluations from experts that include AI expert models, the AI expert models are recited at a high level of generality, as tools to implement the abstract idea, which is further demonstrated by the fact that the experts also include human experts. As previously stated, the present claims do not provide any improvement to the functioning of machine learning models or any other technology. The claim recites only the idea of a solution or outcome, without reciting details of how a solution to a problem is accomplished; further, the specification does not provide any details as to how the claimed invention provides any improvement to the functioning of machine learning models.

Applicant’s Argument Regarding 35 USC 103 Rejections of Claims 1-24 and 31-36: Claims 1, 9, and 17 have been amended, and the cited references, whether alone or in combination, do not teach or suggest the full scope of the claims.

Examiner’s Response: Applicant’s arguments have been considered but are moot in light of the new ground of rejection above.
Conclusion

The prior art made of record and not relied upon, considered pertinent to applicant’s disclosure or directed to the state of the art, is listed on the enclosed PTO-892.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KARMA EL-CHANTI, whose telephone number is (571) 272-3404. The examiner can normally be reached T-Sa 10am-6pm ET.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Sarah Monfeldt, can be reached at (571) 270-1833. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/KARMA A EL-CHANTI/
Examiner, Art Unit 3629

/SARAH M MONFELDT/
Supervisory Patent Examiner, Art Unit 3629

Prosecution Timeline

Jun 04, 2024
Application Filed
Feb 03, 2025
Non-Final Rejection — §101, §103, §112
Jun 11, 2025
Response Filed
Jul 09, 2025
Final Rejection — §101, §103, §112
Oct 17, 2025
Request for Continued Examination
Oct 20, 2025
Response after Non-Final Action
Jan 05, 2026
Non-Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12536567
PROVIDING TRAVEL-BASED AUGMENTED REALITY CONTENT RELATING TO USER-SUBMITTED REVIEWS
2y 5m to grant Granted Jan 27, 2026
Patent 12518247
SYSTEM AND METHOD OF TRANSLATING A TRACKING MODULE TO A UNIQUE IDENTIFIER
2y 5m to grant Granted Jan 06, 2026
Patent 12511699
SYSTEMS AND METHODS FOR CREATING SOCIAL ROOMS BASED ON PREDICTED FUTURE EVENTS
2y 5m to grant Granted Dec 30, 2025
Patent 12469060
AUTOMOBILE TRADE BROKERAGE PLATFORM SYSTEM, AUTOMOBILE TRADE BROKERAGE METHOD, AND COMPUTER PROGRAM THEREFOR
2y 5m to grant Granted Nov 11, 2025
Patent 12333500
DISTRIBUTED LEDGER AND BLOCKCHAIN TECHNOLOGY-BASED RECRUITMENT, JOB SEARCHING AND/OR PROJECT SEARCHING, SCHEDULING, AND/OR ASSET TRACKING AND/OR MONITORING, APPARATUS AND METHOD
2y 5m to grant Granted Jun 17, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
37%
Grant Probability
72%
With Interview (+34.2%)
2y 7m
Median Time to Grant
High
PTA Risk
Based on 83 resolved cases by this examiner. Grant probability derived from career allow rate.
