Prosecution Insights
Last updated: April 19, 2026
Application No. 18/633,118

SYSTEM AND METHOD FOR HETEROGENEOUS MODEL COMPOSITION

Final Rejection: §101, §103, Double Patenting
Filed: Apr 11, 2024
Examiner: MISIR, DAYWAYSHWAR D
Art Unit: 2127
Tech Center: 2100 — Computer Architecture & Software
Assignee: Grid AI Inc.
OA Round: 2 (Final)

Predictions:
Grant Probability: 84% (Favorable)
Estimated OA Rounds: 3-4
Estimated Time to Grant: 2y 9m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 84%, above average (451 granted / 538 resolved; +28.8% vs TC avg)
Interview Lift: strong, +47.8% on resolved cases with an interview
Typical Timeline: 2y 9m average prosecution; 11 applications currently pending
Career History: 549 total applications across all art units

Statute-Specific Performance

§101: 22.1% (-17.9% vs TC avg)
§103: 32.5% (-7.5% vs TC avg)
§102: 11.8% (-28.2% vs TC avg)
§112: 22.5% (-17.5% vs TC avg)

TC avg = Tech Center average estimate. Based on career data from 538 resolved cases.

Office Action

§101 / §103 / Double Patenting
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

The previous 35 USC 112 rejections on the relevant claim(s) are withdrawn based on the amendments submitted on those claim(s).

Response to Arguments

Applicant's arguments have been fully considered but they are not persuasive. Regarding applicant's arguments concerning the abstract idea rejections on the claims (Remarks, pp. 8-11), the Examiner respectfully disagrees and maintains that the human mind is capable of converting inputs from one format to another using evaluation, judgment, and/or pen and paper. The other arguments are addressed in the rejections below. Regarding applicant's arguments concerning the limitation added to the independent claims (Remarks, pp. 11-12), the Examiner points to the rejection(s) below where this is addressed.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission.
For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 1, 2, 3, 5, 6, and 10 are rejected on the ground of nonstatutory double patenting as being unpatentable over corresponding Claims 1, 1, 6, 4, 1, and 3 of U.S. Patent No. 11367021. Although the claims at issue are not identical, they are not patentably distinct from each other because the application claims are anticipated by the corresponding patent claims. For example, the limitation "wherein each model in the series of models comprises a machine learning model" of application Claim 5 is anticipated by "wherein the set of models comprise at least one of neural network models, regression models, or ruleset models" of patent Claim 4.

Claims 1, 10, 11, 12, 14, 16, 17, and 20 are rejected on the ground of nonstatutory double patenting as being unpatentable over corresponding Claims 1 and 13, 16, 7, 7, 7, 8, 7, and 11 of U.S. Patent No. 11983614. Although the claims at issue are not identical, they are not patentably distinct from each other because the application claims are anticipated by the corresponding patent claims.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1: All claims are directed towards either a method or a system and thus satisfy Step 1 as falling into one of the statutory categories.
Step 2A, Prong One: Independent Claim 1 recites:

determining a set of heterogeneous models; this limitation, under its broadest reasonable interpretation, covers concepts that can be performed in the human mind and therefore would fall under the "Mental Processes" grouping of abstract ideas. That is, the human mind is capable of determining a set of different/heterogeneous machine learning models based on observation and evaluation.

identifying a series of models based on a request, the series of models comprising a subset of the set of heterogeneous models, wherein model layers of the series of models connect a root model of the set of heterogeneous models to an intermediary model of the set of heterogeneous models; this limitation, under its broadest reasonable interpretation, also covers concepts that can be performed in the human mind and therefore would fall under the "Mental Processes" grouping of abstract ideas. That is, the human mind is capable of identifying/picking a subset of connected models (from root/starting model to termination/intermediate model) based on observation and evaluation. (The request is considered as insignificant extra-solution activity - see MPEP 2106.05(g)).

for each model in the series of models: converting a standard-formatted input into a model-specific (MS) formatted input; this limitation, under its broadest reasonable interpretation, also covers concepts that can be performed in the human mind and therefore would fall under the "Mental Processes" grouping of abstract ideas. That is, the human mind is capable of converting inputs from one format to another using evaluation, judgment, and/or pen and paper.

executing the model using the MS-formatted input to generate an MS-formatted output; this limitation, under its broadest reasonable interpretation, also covers concepts that can be performed in the human mind and therefore would fall under the "Mental Processes" grouping of abstract ideas.
That is, the human mind is capable of using inputs to generate outputs from a model using evaluation, judgment, and/or pen and paper.

and converting the MS-formatted output into a standard-formatted output; this limitation, under its broadest reasonable interpretation, also covers concepts that can be performed in the human mind and therefore would fall under the "Mental Processes" grouping of abstract ideas. That is, the human mind is capable of converting inputs/outputs from one format to another using evaluation, judgment, and/or pen and paper.

Step 2A, Prong Two: Claim 1 recites the additional elements of: retrieving a standard-formatted input from session storage; wherein the standard-formatted output is stored in the session storage and retrieved as the standard-formatted input for a successive model in the series of models; and providing an output based on a standard-formatted output from the intermediary model. These limitations are considered as adding insignificant extra-solution activity (storing and retrieving/providing data) to the judicial exception - see MPEP 2106.05(g). As such, the additional elements do not provide a practical application.

Step 2B: As pointed out above, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements are considered as appending well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception - see MPEP 2106.05(d).

Dependent Claim 2, under its broadest reasonable interpretation, also covers concepts that can be performed in the human mind and therefore would fall under the "Mental Processes" grouping of abstract ideas. That is, the human mind is capable of identifying a path between two models using observation and evaluation.
Dependent Claim 3, under its broadest reasonable interpretation, also covers concepts that can be performed in the human mind and therefore would fall under the "Mental Processes" grouping of abstract ideas. That is, the human mind is capable of determining different/incompatible formats between two models using observation and evaluation.

Dependent Claims 4, 6, and 7, under their broadest reasonable interpretation, also cover concepts that can be performed in the human mind and therefore would fall under the "Mental Processes" grouping of abstract ideas. That is, the human mind is capable of converting inputs/outputs from one format to another using evaluation, judgment, and/or pen and paper. (The pre- and post-processor layers are considered as merely using a computer as a tool to perform an abstract idea - see MPEP 2106.05(f)).

Dependent Claims 5, 8, and 10 are considered as adding insignificant extra-solution activity to the judicial exception - see MPEP 2106.05(g). Dependent Claim 9 is considered as merely using a computer as a tool to perform an abstract idea - see MPEP 2106.05(f).

Step 2A, Prong One: Independent Claim 11 recites:

identify model layers based on the request, the model layers corresponding to a subset of models within the series of models, wherein the model layers connect a root model of the series of models to the model of interest; this limitation, under its broadest reasonable interpretation, covers concepts that can be performed in the human mind and therefore would fall under the "Mental Processes" grouping of abstract ideas. That is, the human mind is capable of identifying a subset of connected models or model layers (from root/starting model to termination model/model of interest) based on observation and evaluation. (The request is considered as insignificant extra-solution activity - see MPEP 2106.05(g)).
determining a model-specific input (MSI) object from a standard input object; this limitation, under its broadest reasonable interpretation, also covers concepts that can be performed in the human mind and therefore would fall under the "Mental Processes" grouping of abstract ideas. That is, the human mind is capable of determining different input/output objects based on observation and evaluation.

executing the model using the MSI object to generate a model-specific output (MSO) object; this limitation, under its broadest reasonable interpretation, also covers concepts that can be performed in the human mind and therefore would fall under the "Mental Processes" grouping of abstract ideas. That is, the human mind is capable of using inputs to generate outputs from a model using evaluation, judgment, and/or pen and paper.

determining a standard output object from the MSO object; this limitation, under its broadest reasonable interpretation, also covers concepts that can be performed in the human mind and therefore would fall under the "Mental Processes" grouping of abstract ideas. That is, the human mind is capable of determining different input/output objects based on observation and evaluation.

and determine a final output based on a standard output object from the model of interest; this limitation, under its broadest reasonable interpretation, also covers concepts that can be performed in the human mind and therefore would fall under the "Mental Processes" grouping of abstract ideas. That is, the human mind is capable of using inputs to generate outputs from a model using evaluation, judgment, and/or pen and paper, and then determining a final output.
Step 2A, Prong Two: Claim 11 recites the additional elements of: receive a request associated with a model of interest within a series of models; retrieving a standard-formatted input from session storage; facilitate execution of the model layers, comprising, for each model in the subset of models: wherein the standard output object from the model is stored in the session storage and retrieved as the standard input object for a successive model in the subset of models. These limitations are considered as adding insignificant extra-solution activity (receiving requests and storing and retrieving data for further actions) to the judicial exception - see MPEP 2106.05(g). The additional limitation of "a processing system" is recited at a high level of generality such that it amounts to no more than mere instructions to apply the exception using a generic computer component. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is therefore directed to an abstract idea.

Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements are considered as appending well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception - see MPEP 2106.05(d). The additional element of "a processing system" amounts to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claim therefore is not patent eligible.
Dependent Claims 12, 16, and 18-20 are considered as adding insignificant extra-solution activity to the judicial exception - see MPEP 2106.05(g).

Dependent Claims 13 and 15, under their broadest reasonable interpretation, also cover concepts that can be performed in the human mind and therefore would fall under the "Mental Processes" grouping of abstract ideas. That is, the human mind is capable of determining compatibility/incompatibility of input/output objects for a model using observation and evaluation.

Dependent Claim 14, under its broadest reasonable interpretation, also covers concepts that can be performed in the human mind and therefore would fall under the "Mental Processes" grouping of abstract ideas. That is, the human mind is capable of converting inputs/outputs from one format to another using evaluation, judgment, and/or pen and paper. (The pre- and post-processor handlers are considered as merely using a computer as a tool to perform an abstract idea - see MPEP 2106.05(f)).

Dependent Claim 17, under its broadest reasonable interpretation, also covers concepts that can be performed in the human mind and therefore would fall under the "Mental Processes" grouping of abstract ideas. That is, the human mind is capable of deserializing/converting inputs from one format to another using evaluation, judgment, and/or pen and paper; and of using inputs to generate outputs from a model using evaluation, judgment, and/or pen and paper, and then determining a final output.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Atcheson, US 2020/0160299 A1, in view of Goodsitt, US 2021/0192282 A1.

Regarding Claim 1, Atcheson teaches: A method, comprising:

determining a set of heterogeneous models (Abstract; paragraphs 3-4, 39: multiple and different machine learning models for selection/composition. Examiner's note: see also Pinel, US 2020/0089817 A1, for example Abstract; and Perumalla, US 12067482 B1, for example Abstract);

identifying a series of models based on a request (Abstract; paragraphs 39, 65: multiple and different machine learning models used sequentially, based on receiving an input/request);

for each model in the series of models: retrieving a standard-formatted input from session storage (paragraph 54: "Based on the available data included in the profile information 112, the outcome selection module 114 is configured to communicate with a database storing machine learning models, such as data base 120, and identify one or more machine learning models 122 that are useable to generate an output given the data included in the profile information 112. The database 120 may be implemented locally in storage of the computing device 102 or may be implemented in a storage location remote from the computing device".
That is, various machine learning models, such as a standard-formatted input type, are stored in the database and retrieved based on appropriateness of use);

converting the standard-formatted input into a model-specific (MS) formatted input; executing the model using the MS-formatted input to generate an MS-formatted output; and converting the MS-formatted output into a standard-formatted output, wherein the standard-formatted output is stored in the session storage and retrieved as the standard-formatted input for a successive model in the series of models (Abstract; paragraphs 4, 33-36, 38-39, 54-55, 63, 65: formatting data so that it is acceptable for use in the machine learning models for generating their respective outputs, which are also formatted for use in subsequent machine learning models or in plain text for user understanding. The process may be repeated for as many iterations as necessary such that the output from one model is input to the subsequent model, the inputs/outputs being appropriately formatted for use);

and providing an output based on a standard-formatted output from the intermediary model (paragraphs 39, 41, 55-56, 63, 65: generating the outputs using the selected/subset machine learning models, and further describing the formatting and generating of the machine learning model output in a final comprehensible form).

Atcheson may not have explicitly taught: the series of models comprising a subset of the set of heterogeneous models, wherein model layers of the series of models connect a root model of the set of heterogeneous models to an intermediary model of the set of heterogeneous models. However, Goodsitt shows this (Abstract; paragraphs 6, 60, 77-78, 95, 99-100: wherein the machine learning models are different types of models arranged in a parent-node and child-node configuration and a subset of these nodes may be selected).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to use the teachings of Goodsitt with those of Atcheson for determining a subset of models connecting a root model of the series of heterogeneous models to the intermediary model. The ordinary artisan would have been motivated to modify Atcheson in the manner set forth above for the purpose of building an ensemble machine learning model for use on data in different categories and subcategories [Goodsitt: paragraph 6].

Regarding Claim 2, Goodsitt further teaches: The method of Claim 1, wherein identifying the series of models comprises identifying a path between the root model and the intermediary model (Fig. 4; paragraphs 78, 99-100: showing and describing the graph with nodes consisting of various models connected by edges or paths).

Regarding Claim 3, Atcheson further teaches: The method of Claim 1, wherein at least two models in the series of models have incompatible MS-formats (paragraphs 4, 35, 39: usage of a data translation module to ensure that the incompatible input and output data generated and/or used by the various machine learning models will be of an appropriate type).

Regarding Claim 4, Atcheson further teaches: The method of Claim 1, wherein, for each model in the series of models, the standard-formatted input for the model is converted into the MS-formatted input for the model using a pre-processor layer of the model, wherein the MS-formatted output from the model is converted into the standard-formatted output from the model using a post-processor layer of the model (paragraph 39: wherein, for each model, a data translation module, considered the pre- and post-processors, is used to standardize inputs/outputs for use with the other models).
Regarding Claim 5, Atcheson further teaches: The method of Claim 1, wherein each model in the series of models comprises a machine learning model (paragraphs 43, 54: describing the series of machine learning models, which includes neural networks, linear and logistic regression models, decision trees, and support vector machines that are ruleset models).

Regarding Claim 6, Atcheson further teaches: The method of Claim 1, further comprising, for the intermediary model: converting the standard-formatted output from a final model of the series of models into an MS-formatted input for the intermediary model; executing the intermediary model using the MS-formatted input for the intermediary model to generate an MS-formatted output from the intermediary model; and converting the MS-formatted output from the intermediary model to the standard-formatted output from the intermediary model (Abstract; paragraphs 4, 33-36, 38-39, 54-55, 63, 65: formatting data so that it is acceptable for use in the machine learning models for generating their respective outputs, which are also formatted for use in subsequent machine learning models or in plain text for user understanding. The process may be repeated for as many iterations as necessary such that the output from one model is input to the subsequent model, the inputs/outputs being appropriately formatted for use).
Regarding Claim 7, Atcheson further teaches: The method of Claim 1, further comprising, for the root model: converting an input into a standard-formatted input for the root model; converting the standard-formatted input for a parent model to an MS-formatted input for the root model; executing the root model using the MS-formatted input for the root model to generate an MS-formatted output from the root model; and converting the MS-formatted output from the root model to a standard-formatted output from the root model, wherein the standard-formatted output from the root model is used as the standard-formatted input for a first model in the series of models (Abstract; paragraphs 4, 33-36, 38-39, 54-55, 63, 65: formatting data so that it is acceptable for use in the machine learning models for generating their respective outputs, which are also formatted for use in subsequent machine learning models or in plain text for user understanding. The process may be repeated for as many iterations as necessary such that the output from one model is input to the subsequent model, the inputs/outputs being appropriately formatted for use).

Regarding Claim 8, Atcheson further teaches: The method of Claim 7, wherein the input is determined based on the request (Abstract; paragraphs 39, 65: multiple and different machine learning models used sequentially, based on receiving an input/request).

Regarding Claim 9, Goodsitt further teaches: The method of Claim 1, wherein the series of models are executed using a set of GPUs (paragraph 30: the computing resources can use GPUs).

Regarding Claim 10, Atcheson further teaches: The method of Claim 1, wherein at least two models within the series of models are authored by different entities (paragraph 39: wherein the machine learning models are disparate machine learning models designed by different data scientists).
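For readers tracking the technology at issue, the per-model loop recited in method Claim 1 (retrieve a standard-formatted input from session storage, pre-process it into a model-specific format, execute the model, post-process the result back to the standard format, and store it for the successive model) can be sketched as follows. This is purely an editorial illustration of the claim language as characterized above; every class, function, and variable name here is hypothetical and comes from neither the application nor the cited references.

```python
# Hypothetical sketch of the per-model loop recited in Claim 1.
# All names are illustrative; nothing here comes from the application itself.

class WrappedModel:
    """A model wrapped with pre-/post-processor layers (cf. Claim 4)."""

    def __init__(self, name, pre, run, post):
        self.name = name
        self.pre = pre    # standard format -> model-specific (MS) formatted input
        self.run = run    # execute the model on the MS-formatted input
        self.post = post  # MS-formatted output -> standard format

def execute_series(models, initial_input, session_storage):
    """Run a series of models, passing standard-formatted data via session storage."""
    session_storage["current"] = initial_input
    for model in models:
        std_input = session_storage["current"]   # retrieve from session storage
        ms_input = model.pre(std_input)          # convert to MS format
        ms_output = model.run(ms_input)          # execute the model
        std_output = model.post(ms_output)       # convert back to standard format
        session_storage["current"] = std_output  # stored for the successive model
    return session_storage["current"]            # output of the final model

# Toy usage: two "models" that double and then negate a number,
# each using a different native format (float) than the standard one (str).
doubler = WrappedModel("doubler", pre=float, run=lambda x: x * 2, post=str)
negator = WrappedModel("negator", pre=float, run=lambda x: -x, post=str)
storage = {}
print(execute_series([doubler, negator], "3", storage))  # "-6.0"
```

The sketch makes concrete why the pre- and post-processor layers matter in the claim: each model's native format is hidden behind a shared standard format, so any model can feed any successive model through the session storage.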
Regarding Claim 11, Atcheson teaches: A system, comprising: a processing system configured to (paragraph 126):

receive a request associated with a model of interest within a series of models (Abstract; paragraphs 39, 65: multiple and different machine learning models used sequentially, based on receiving an input/request);

facilitate execution of the model layers, comprising, for each model in the subset of models: retrieving a standard-formatted input from session storage (paragraph 54: "Based on the available data included in the profile information 112, the outcome selection module 114 is configured to communicate with a database storing machine learning models, such as data base 120, and identify one or more machine learning models 122 that are useable to generate an output given the data included in the profile information 112. The database 120 may be implemented locally in storage of the computing device 102 or may be implemented in a storage location remote from the computing device". That is, various machine learning models, such as a standard-formatted input type, are stored in the database and retrieved based on appropriateness of use);

determining a model-specific input (MSI) object from a standard input object; executing the model using the MSI object to generate a model-specific output (MSO) object; determining a standard output object from the MSO object, wherein the standard output object from the model is stored in the session storage and retrieved as the standard input object for a successive model in the subset of models (Abstract; paragraphs 4, 33-36, 38-39, 54-55, 63, 65: formatting data so that it is acceptable for use in the machine learning models for generating their respective outputs, which are also formatted for use in subsequent machine learning models or in plain text for user understanding.
The process may be repeated for as many iterations as necessary such that the output from one model is input to the subsequent model, the inputs/outputs being appropriately formatted for use);

and determine a final output based on a standard output object from the model of interest (paragraphs 39, 41, 55-56, 63, 65: generating the outputs using the selected/subset machine learning models, and further describing the formatting and generating of the machine learning model output in a final comprehensible form).

Atcheson may not have explicitly taught: identify model layers based on the request, the model layers corresponding to a subset of models within the series of models, wherein the model layers connect a root model of the series of models to the model of interest. However, Goodsitt shows this (Abstract; paragraphs 6, 60, 77-78, 95, 99-100: wherein the machine learning models are different types of models arranged in a parent-node and child-node configuration and a subset of these nodes may be selected; and paragraphs 26, 34-37: wherein, as described, each node may be a layer for the model and a sequence of nodes is used in performing the operations/request on the datasets).

It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to use the teachings of Goodsitt with those of Atcheson for identifying model layers based on the request, the model layers corresponding to a subset of models within the series of models, wherein the model layers connect a root model of the series of models to the model of interest. The ordinary artisan would have been motivated to modify Atcheson in the manner set forth above for the purpose of building an ensemble machine learning model for use on data in different categories and subcategories [Goodsitt: paragraph 6].
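The "model layers" limitation at the center of this combination, a chain of models connecting a root model to a model of interest through a graph of models (compare Claims 2, 11, and 19, which recite a path and a directed acyclic graph), can be illustrated with a small graph search. This is again an editorial sketch under assumed names; the DAG contents and the function are hypothetical, not drawn from Atcheson or Goodsitt.

```python
# Illustrative sketch of identifying "model layers": the chain of models
# connecting a root model to a model of interest in a directed acyclic
# graph of models (cf. Claims 2, 11, and 19). All names are hypothetical.

def find_model_layers(dag, root, model_of_interest):
    """Depth-first search for the path of models from root to the model of interest."""
    def dfs(node, path):
        if node == model_of_interest:
            return path
        for child in dag.get(node, []):
            found = dfs(child, path + [child])
            if found is not None:
                return found
        return None
    return dfs(root, [root])

# Toy DAG of models: tokenizer -> (classifier, embedder), embedder -> ranker.
dag = {
    "tokenizer": ["classifier", "embedder"],
    "embedder": ["ranker"],
}
print(find_model_layers(dag, "tokenizer", "ranker"))
# ['tokenizer', 'embedder', 'ranker']
```

Only the models on the returned path (the "subset of models within the series of models") would then be executed, which is the selection behavior the rejection attributes to Goodsitt's parent/child node configuration.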
Regarding Claim 12, Atcheson further teaches: The system of Claim 11, wherein the series of models comprises a series of heterogeneous models (Abstract; paragraphs 3-4, 39: multiple and different machine learning models for selection/composition. Examiner's note: see also Pinel, US 2020/0089817 A1, for example Abstract; and Perumalla, US 12067482 B1, for example Abstract).

Regarding Claim 13, Atcheson further teaches: The system of Claim 11, wherein the MSO object from a first model in the subset of models is incompatible with the MSI object for a successive model of the first model in the subset of models (paragraphs 4, 35, 39: usage of a data translation module to ensure that the incompatible input and output data generated and/or used by the various machine learning models will be of an appropriate type).

Regarding Claim 14, Atcheson further teaches: The system of Claim 11, wherein, for each model in the subset of models, the standard input object for the model is converted into the MSI object for the model using a pre-processor of a model-specific handler, wherein the MSO object from the model is converted into the standard output object from the model using a post-processor of the model-specific handler (paragraph 39: wherein, for each model, a data translation module, considered the pre- and post-processors, is used to standardize inputs/outputs for use with the other models).

Regarding Claim 15, Atcheson further teaches: The system of Claim 11, wherein the processing system is further configured to, for each model in the subset of models, verify compatibility of the standard output object from the model for the successive model in the subset of models (paragraphs 4, 35, 39: usage of a data translation module to ensure that the incompatible input and output data generated and/or used by the various machine learning models will be of an appropriate type).
Regarding Claim 16, Atcheson further teaches: The system of Claim 11, wherein at least two models in the subset of models are authored by different entities (paragraph 39: wherein the machine learning models are disparate machine learning models designed by different data scientists).

Regarding Claim 17, Atcheson further teaches: The system of Claim 11, further comprising: a deserializer connected to the root model, wherein the deserializer is configured to deserialize an input into a standard input object for the root model; and a serializer connected to the model of interest, wherein the serializer is configured to determine the final output based on the standard output object from the model of interest (paragraph 39: wherein for each model, a data translation module, considered the serializer and deserializer, is used to standardize inputs/outputs for use with the other models. Examiner’s note: see also GHANEA-HERCOCK, US 2021/0150416 A1, for serializing, for example paragraphs 66, 73).

Regarding Claim 18, Goodsitt further teaches: The system of Claim 11, wherein each standard input object comprises a tensor (paragraphs 56, 74: data can be in tensor form. Examiner’s note: Matsuo, US 2020/0234120 A1, also teaches this, see for example paragraph 28; as well as Hu, US 2021/0271986 A1, see for example paragraph 29).

Regarding Claim 19, Goodsitt further teaches: The system of Claim 11, wherein the series of models is represented as a directed acyclic graph (paragraphs 24, 60, 100: models represented in a directed acyclic graph).

Regarding Claim 20, Atcheson further teaches: The system of Claim 11, wherein the series of models is selected by a user using an interface (paragraphs 38, 88: selection of the models by the user via a user interface).

Examiner’s Note: The Examiner cites particular pages, sections, columns, line numbers, and/or paragraphs in the references as applied to the claims above for the convenience of the applicant.
Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claims, other passages and figures may apply as well. It is respectfully requested that, in preparing responses, the applicant fully consider each reference in its entirety as potentially teaching all or part of the claimed invention, as well as the context of the cited passages and the additional related prior art made of record that is considered pertinent to applicant’s disclosure and further shows the general state of the art. The Examiner’s interpretations in parentheses are provided with the cited references to assist the applicant in better understanding how the examiner interprets the prior art to read on the claims. Such comments are entirely consistent with the intent and spirit of compact prosecution.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure. See the PTO-892 for the relevant prior art relating to this application, where, for example, Givental, US 2021/0279644 A1, teaches the use of ensemble machine learning models.

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action.
In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DAVE MISIR, whose telephone number is (571) 272-5243. The examiner can normally be reached M-R 8-5 pm, F some hours. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Abdullah Al Kawsar, can be reached at (571) 270-3169. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/DAVE MISIR/
Primary Examiner, Art Unit 2127
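Taken together, claims 17-19 as discussed above describe a deserializer feeding a root model, a directed acyclic graph of models executed in sequence, and a serializer producing the final output. The following editorial sketch illustrates that arrangement under assumed JSON serialization and toy numeric "models"; the graph, names, and formats are assumptions, not drawn from the application or the cited references.

```python
# Hedged sketch of the claim 17/19 arrangement: deserialize the raw input for
# the root model, run the DAG of models in topological order with standard
# objects flowing between them, then serialize the model of interest's output.

import json
from graphlib import TopologicalSorter

def run_dag(dag: dict[str, set[str]], models: dict, raw_input: str) -> str:
    value = json.loads(raw_input)             # deserializer at the root model
    for node in TopologicalSorter(dag).static_order():
        value = models[node](value)           # standard objects between models
    return json.dumps(value)                  # serializer at model of interest

# Toy DAG a -> b -> c with illustrative numeric "models".
dag = {"b": {"a"}, "c": {"b"}}
models = {"a": lambda v: v + 1, "b": lambda v: v * 10, "c": lambda v: v - 2}
print(run_dag(dag, models, "4"))  # "48"
```

`graphlib.TopologicalSorter` (Python 3.9+) guarantees each model runs only after its predecessors, which is what a DAG representation of the series of models buys over a flat list.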

Prosecution Timeline

Apr 11, 2024
Application Filed
Aug 28, 2025
Non-Final Rejection — §101, §103, §DP
Feb 02, 2026
Response Filed
Feb 12, 2026
Final Rejection — §101, §103, §DP (current)

Precedent Cases

Applications granted by the same examiner in similar technology

Patent 12602619
MACHINE LEARNING SYSTEM AND MACHINE LEARNING METHOD
2y 5m to grant • Granted Apr 14, 2026
Patent 12585991
DIGITAL RIGHTS MANAGEMENT OF MACHINE LEARNING MODELS
2y 5m to grant • Granted Mar 24, 2026
Patent 12579475
ARTIFICIAL INTELLIGENCE MODEL GENERATED USING AGENTIC WORKFLOW SYSTEM AND METHOD FOR ARTIFICIAL INTELLIGENCE MODEL ALIGNED WITH DOMAIN-SPECIFIC PRINCIPLES
2y 5m to grant • Granted Mar 17, 2026
Patent 12572802
METHODS AND DEVICES IN PERFORMING A VISION TESTING PROCEDURE ON A PERSON
2y 5m to grant • Granted Mar 10, 2026
Patent 12562242
DATA DRIVEN FEATURIZATION AND MODELING
2y 5m to grant • Granted Feb 24, 2026
Based on the examiner's 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
84%
Grant Probability
99%
With Interview (+47.8%)
2y 9m
Median Time to Grant
Moderate
PTA Risk
Based on 538 resolved cases by this examiner. Grant probability derived from career allow rate.
