Prosecution Insights
Last updated: April 19, 2026
Application No. 17/671,079

SYSTEM AND METHOD FOR FACILITATING HIGH FREQUENCY PROCESSING USING STORED MODELS

Non-Final OA: §101, §103, §112
Filed: Feb 14, 2022
Examiner: KIM, SEHWAN
Art Unit: 2129
Tech Center: 2100 (Computer Architecture & Software)
Assignee: BANK OF AMERICA CORPORATION
OA Round: 3 (Non-Final)
Grant Probability: 60% (Moderate)
OA Rounds: 3-4
To Grant: 4y 1m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 60% (86 granted / 144 resolved; +4.7% vs TC avg)
Interview Lift: +65.6% (resolved cases with an interview vs. without)
Avg Prosecution: 4y 1m (typical timeline); 35 applications currently pending
Total Applications: 179 (across all art units)

Statute-Specific Performance

§101: 20.8% (-19.2% vs TC avg)
§103: 46.2% (+6.2% vs TC avg)
§102: 6.3% (-33.7% vs TC avg)
§112: 23.3% (-16.7% vs TC avg)
Comparison baseline is a Tech Center average estimate • Based on career data from 144 resolved cases

Office Action

Grounds of rejection: §101, §103, §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 01/26/2026 has been entered.

Examiner’s Note

For clarification, claim 1 may be amended (e.g., “the system comprising: a processor; a non-transitory storage device containing instructions when executed by the processor, causes the processor”) to make sure that the claim clearly falls within one of the four statutory categories. The Examiner encourages Applicant to schedule an interview to discuss, for example, the rejections noted below under 35 U.S.C. §§ 101 and 103, in order to move the application toward allowance. Providing supporting paragraph(s) for each limitation of the amended/new claim(s) in the Remarks is strongly requested, so that the Examiner can interpret the claims clearly and definitely.

Priority

Acknowledgment is made of Applicant's claim for the present application filed on 02/14/2022.

Response to Arguments

Applicant’s arguments regarding 35 U.S.C. § 103 with respect to the independent claims have been considered but are moot, because the arguments are directed to amended limitations that have not been previously examined.

Claim Objections

Claims 1-3, 6-10, 13-17, and 20 are objected to because of the following informalities.

Claim 1 is objected to because it appears that “inputted data” (line 8) needs to read “the inputted data” or something else. Appropriate correction is required.
In addition, claims 8 and 15 are objected to for the same reason. Correcting this will avoid rejections under 35 U.S.C. § 112 based on “the inputted data” in their dependent claims (e.g., claims 2, 6, 9, 13, 16, and 20).

Claim 15 is objected to because it appears that “a processing decision” (third-to-last line) needs to read “determining a processing decision” or something else. Appropriate correction is required.

Claims 1, 8, and 15 each recite limitations that raise issues of indefiniteness as set forth above, and their dependent claims are objected to at least on the basis of their direct and/or indirect dependency from the claims listed above. Appropriate explanation and/or amendment is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-3, 6-10, 13-17, and 20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claim 1 no longer recites the limitation “receiving a set of code relating to a machine learning model configured to process data” (between line 5 and line 6), yet no strikethrough was provided to show that the limitation has been removed. Thus, it is not clear whether it has been removed. It appears it has been removed by mistake, in view of the other, analogous independent claims.
For the purposes of examination, “receiving a set of code relating to a machine learning model configured to process data” is used.

Claim 1 recites the limitation “the set of code” (line 6). There is insufficient antecedent basis for this limitation in the claim, and it is not clear what it refers to. It appears it may need to read “a set of code”, or something else. For the purposes of examination, “a set of code” is used.

Claim 1 recites the limitation “the machine learning model” (line 6). There is insufficient antecedent basis for this limitation in the claim, and it is not clear what it refers to. It appears it may need to read “a machine learning model”, or something else. For the purposes of examination, “a machine learning model” is used.

The term “high” (claim 1, line 9) is a relative term which renders the claim indefinite. The term “high” is not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. In addition, claim 1 (three further occurrences) and claims 8 and 15 are rejected for the same reason.

Claim 2 recites the limitation “the inputted data” (line 3). There is insufficient antecedent basis for this limitation in the claim. It is not clear what it refers to, since it may indicate “inputted data” (claim 1, line 7), “inputted data” (claim 1, line 8), or something else. It appears “inputted data” (claim 1, line 8) may need to read “the inputted data”, or something else. For the purposes of examination, “the inputted data” (claim 1, line 8) is used. In addition, claims 6 (twice), 9, 13 (twice), 16, and 20 (twice) are rejected for the same reason.
Claims 1-2, 6, 8-9, 13, 15-16, and 20 each recite limitations that raise issues of indefiniteness as set forth above, and their dependent claims are rejected at least on the basis of their direct and/or indirect dependency from the claims listed above. Appropriate explanation and/or amendment is required.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-3, 6-10, 13-17, and 20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Regarding claim 1

The claim is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1: The claim recites a system; therefore, it falls into the statutory category of a machine.

Step 2A, Prong 1: The limitations of “… for facilitating processing using stored models, …: …: …, …, …; …; …: …; …; and determining a processing decision for high frequency trading based at least in part on an output of the model executable file and at least one of one or more additional outputs from the one or more additional model executable files”, as drafted, cover, under their broadest reasonable interpretation, performance of the limitations in the mind. That is, nothing in the claim elements precludes the steps from practically being performed in the mind. For example, the limitations in the context of this claim encompass the user mentally thinking with a physical aid (e.g., pencil and paper). If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas.
Accordingly, the claim recites an abstract idea.

Step 2A, Prong 2: This judicial exception is not integrated into a practical application. The claim recites additional elements that are mere instructions to implement an abstract idea on a computer, or that merely use a computer as a tool to perform an abstract idea. See MPEP 2106.05(f). In particular, the claim recites additional elements (“the system comprising: a processing device; a non-transitory storage device containing instructions when executed by the processing device, causes the processing device to perform the steps of: generating a model executable file from the set of code relating to the machine learning model, wherein the model executable file is configured to process inputted data using the machine learning model upon execution”, “executing the model executable file on the inputted high frequency trading data on the in-memory of the local device”, “creating one or more additional model executable files based on converting a set of code of one or more additional machine learning models, wherein the one or more additional model executable files are created via interpreter conversion of the set of code of one or more additional machine learning models into the one or more additional model executable files”) – using a device and/or a model to process data. The device and the model in each step are recited at a high level of generality (i.e., as a generic computer performing a generic computer function of processing data), such that they amount to no more than mere instructions to apply the exception using a generic computer component. Accordingly, these additional elements do not integrate the abstract idea into a practical application, because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.

In particular, the claim also recites an additional element (“wherein inputted data comprises inputted high frequency trading data”).
This is a recitation of a particular type or source of model/data to be used in performing the abstract idea. Limiting the abstract idea to a particular type or source of model/data is an attempt to limit the abstract idea to a particular field of use or technological environment, which does not integrate the abstract idea into a practical application. See MPEP 2106.05(h).

In particular, the claim recites an additional element (“storing the model executable file on an in-memory of a local device”) – the act of storing data. The claim is adding an insignificant extra-solution activity to the judicial exception – see MPEP 2106.05(g). The act of storing data is recited at a high level of generality (i.e., as a generic act of storing data), such that it amounts to no more than a mere instruction to apply the exception using a generic act of storing. Accordingly, this additional element does not integrate the abstract idea into a practical application, because it does not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.

In particular, the claim recites an additional element (“transmitting the model executable file to a remote device to allow the remote device to store and run the model executable file on additional high frequency trading data received by the remote device”) – the act of receiving/transmitting data. The claim is adding an insignificant extra-solution activity to the judicial exception – see MPEP 2106.05(g). The act of receiving/transmitting data is recited at a high level of generality (i.e., as a generic act of receiving/transmitting data), such that it amounts to no more than a mere instruction to apply the exception using a generic act of transmitting. Accordingly, this additional element does not integrate the abstract idea into a practical application, because it does not impose any meaningful limits on practicing the abstract idea.
The claim is directed to an abstract idea.

Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of using a generic computer component to perform each step amount to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. See MPEP 2106.05(f). The claim is not patent eligible.

As discussed above, the claim also recites a particular type or source of model/data to be used in performing the abstract idea. Limiting the abstract idea to a particular type or source of model/data is an attempt to limit the abstract idea to a particular field of use or technological environment, which does not amount to significantly more than the abstract idea. See MPEP 2106.05(h).

As discussed above, the claim recites the additional element of storing data at a high level of generality and adds an insignificant extra-solution activity – see MPEP 2106.05(g). However, the addition of insignificant extra-solution activity does not amount to an inventive concept, particularly when the activity is well-understood, routine, and conventional. See MPEP 2106.05(d)(II) – “Receiving or transmitting data over a network” and “Storing and retrieving information in memory”. Accordingly, this additional element does not provide an inventive concept or significantly more than the abstract idea. Thus, the claim is not patent eligible.

As discussed above, the claim recites the additional element of receiving/transmitting data at a high level of generality and adds an insignificant extra-solution activity – see MPEP 2106.05(g).
However, the addition of insignificant extra-solution activity does not amount to an inventive concept, particularly when the activity is well-understood, routine, and conventional. See MPEP 2106.05(d)(II) – “Receiving or transmitting data over a network” and “Storing and retrieving information in memory”. Accordingly, this additional element does not provide an inventive concept or significantly more than the abstract idea. Thus, the claim is not patent eligible.

Regarding claim 2

The claim is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1: The claim recites a system; therefore, it falls into the statutory category of a machine.

Step 2A, Prong 1: The claim recites the abstract idea identified above regarding claim 1.

Step 2A, Prong 2: This judicial exception is not integrated into a practical application. The claim recites additional elements that are mere instructions to implement an abstract idea on a computer, or that merely use a computer as a tool to perform an abstract idea. See MPEP 2106.05(f). In particular, the claim recites an additional element (“further comprising executing the model executable file on the in-memory of the local device, wherein the model executable file is configured to process the inputted data”) – using a device and/or a model to process data. The device and the model are recited at a high level of generality (i.e., as a generic computer performing a generic computer function of processing data), such that they amount to no more than mere instructions to apply the exception using a generic computer component. Accordingly, this additional element does not integrate the abstract idea into a practical application, because it does not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.
Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of using a generic computer component to perform each step amount to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. See MPEP 2106.05(f). The claim is not patent eligible.

Regarding claim 3

The claim is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1: The claim recites a system; therefore, it falls into the statutory category of a machine.

Step 2A, Prong 1: The limitation of “further comprising determining a processing decision based at least in part on an output of the model executable file”, as drafted, covers, under its broadest reasonable interpretation, performance of the limitation in the mind. That is, nothing in the claim element precludes the step from practically being performed in the mind. For example, the limitation in the context of this claim encompasses the user mentally thinking with a physical aid (e.g., pencil and paper). If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claim recites an abstract idea.

Step 2A, Prong 2: This judicial exception is not integrated into a practical application. In particular, the claim does not recite additional elements. Thus, the claim is directed to an abstract idea.

Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
Thus, the claim is not patent eligible.

Regarding claim 6

The claim is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1: The claim recites a system; therefore, it falls into the statutory category of a machine.

Step 2A, Prong 1: The claim recites the abstract idea identified above regarding claim 1.

Step 2A, Prong 2: This judicial exception is not integrated into a practical application. In particular, the claim recites an additional element (“wherein the inputted data is streaming data received from a plurality of sources”). This is a recitation of a particular type or source of model/data to be used in performing the abstract idea. Limiting the abstract idea to a particular type or source of model/data is an attempt to limit the abstract idea to a particular field of use or technological environment, which does not integrate the abstract idea into a practical application. See MPEP 2106.05(h).

The claim also recites additional elements that are mere instructions to implement an abstract idea on a computer, or that merely use a computer as a tool to perform an abstract idea. See MPEP 2106.05(f). In particular, the claim recites an additional element (“wherein the model executable file is configured to process the inputted data from the plurality of sources, wherein the model executable file is executed simultaneously for two sets of inputted data”) – using a device and/or a model to process data. The device and the model are recited at a high level of generality (i.e., as a generic computer performing a generic computer function of processing data), such that they amount to no more than mere instructions to apply the exception using a generic computer component. Accordingly, these additional elements do not integrate the abstract idea into a practical application, because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.
Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above, the recitation of a particular type or source of model/data to be used in performing the abstract idea is an attempt to limit the abstract idea to a particular field of use or technological environment, which does not amount to significantly more than the abstract idea. See MPEP 2106.05(h). As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of using a generic computer component to perform each step amount to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. See MPEP 2106.05(f). The claim is not patent eligible.

Regarding claim 7

The claim is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1: The claim recites a system; therefore, it falls into the statutory category of a machine.

Step 2A, Prong 1: The claim recites the abstract idea identified above regarding claim 1.

Step 2A, Prong 2: This judicial exception is not integrated into a practical application. In particular, the claim recites an additional element (“wherein a plurality of model executable files are stored for a plurality of machine learning models, wherein each of the plurality of executable files is stored on the in-memory of the local device”) – the act of storing data. The claim is adding an insignificant extra-solution activity to the judicial exception – see MPEP 2106.05(g).
The act of storing data is recited at a high level of generality (i.e., as a generic act of storing data), such that it amounts to no more than a mere instruction to apply the exception using a generic act of storing. Accordingly, this additional element does not integrate the abstract idea into a practical application, because it does not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.

Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above, the claim recites the additional element of storing data at a high level of generality and adds an insignificant extra-solution activity – see MPEP 2106.05(g). However, the addition of insignificant extra-solution activity does not amount to an inventive concept, particularly when the activity is well-understood, routine, and conventional. See MPEP 2106.05(d)(II) – “Receiving or transmitting data over a network” and “Storing and retrieving information in memory”. Accordingly, this additional element does not provide an inventive concept or significantly more than the abstract idea. Thus, the claim is not patent eligible.

Regarding claim 8

The claim recites “A computer program product for facilitating processing using stored models, the computer program product comprising at least one non-transitory computer-readable medium having computer-readable program code portions embodied therein, the computer-readable program code portions comprising:” to perform precisely the system of claim 1. As performance of an abstract idea on generic computer components (see MPEP 2106.05(f)) cannot integrate the abstract idea into a practical application nor provide significantly more than the abstract idea itself, the claim is rejected for the reasons set forth in the rejection of claim 1.
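Purely as an illustrative aside, the arrangement the rejections above keep reciting (generating a model executable file from a set of model code, holding it in in-memory storage on a local device, executing it on inputted high frequency trading data, and determining a processing decision from the outputs) can be caricatured in a few lines of Python. Every name, the toy "models", and the decision rule below are hypothetical; this is a sketch of the claim language only, not the applicant's implementation or an actual trading system.

```python
# Caricature of the claimed flow; all names and "models" are hypothetical.

def generate_executable(model_source: str):
    # "generating a model executable file from the set of code": compile
    # the model source once so it can be executed repeatedly.
    return compile(model_source, "<model>", "exec")

IN_MEMORY_STORE = {}  # "storing ... on an in-memory of a local device"

def run_model(name: str, ticks):
    # "executing the model executable file on the inputted
    #  high frequency trading data"
    env = {"ticks": ticks}
    exec(IN_MEMORY_STORE[name], env)
    return env["output"]

MODEL_SRC = "output = sum(ticks) / len(ticks)"  # toy 'model': mean price
EXTRA_SRC = "output = max(ticks) - min(ticks)"  # toy 'model': price range

IN_MEMORY_STORE["primary"] = generate_executable(MODEL_SRC)
IN_MEMORY_STORE["extra"] = generate_executable(EXTRA_SRC)

def processing_decision(ticks):
    # "determining a processing decision ... based at least in part on an
    #  output of the model executable file and at least one of one or more
    #  additional outputs"
    mean = run_model("primary", ticks)
    spread = run_model("extra", ticks)
    return "act" if ticks[-1] > mean and spread > 0 else "hold"
```

Read against the §101 analysis, the sketch also makes the Examiner's point concrete: each step is generic compilation, storage, and execution that a person could mirror with pencil and paper on a short list of prices.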
Regarding claim 9

The claim is rejected for the reasons set forth in the rejection of claim 2 under 35 U.S.C. 101, mutatis mutandis, as reciting an abstract idea without integrating the judicial exception into a practical application or providing significantly more than the judicial exception.

Regarding claim 10

The claim is rejected for the reasons set forth in the rejection of claim 3 under 35 U.S.C. 101, mutatis mutandis, as reciting an abstract idea without integrating the judicial exception into a practical application or providing significantly more than the judicial exception.

Regarding claim 13

The claim is rejected for the reasons set forth in the rejection of claim 6 under 35 U.S.C. 101, mutatis mutandis, as reciting an abstract idea without integrating the judicial exception into a practical application or providing significantly more than the judicial exception.

Regarding claim 14

The claim is rejected for the reasons set forth in the rejection of claim 7 under 35 U.S.C. 101, mutatis mutandis, as reciting an abstract idea without integrating the judicial exception into a practical application or providing significantly more than the judicial exception.

Regarding claim 15

The claim recites “A computer-implemented method for facilitating processing using stored models, the method comprising:” to perform precisely the system of claim 1. As performance of an abstract idea on generic computer components (see MPEP 2106.05(f)) cannot integrate the abstract idea into a practical application nor provide significantly more than the abstract idea itself, the claim is rejected for the reasons set forth in the rejection of claim 1.

Regarding claim 16

The claim is rejected for the reasons set forth in the rejection of claim 2 under 35 U.S.C. 101, mutatis mutandis, as reciting an abstract idea without integrating the judicial exception into a practical application or providing significantly more than the judicial exception.
Regarding claim 17

The claim is rejected for the reasons set forth in the rejection of claim 3 under 35 U.S.C. 101, mutatis mutandis, as reciting an abstract idea without integrating the judicial exception into a practical application or providing significantly more than the judicial exception.

Regarding claim 20

The claim is rejected for the reasons set forth in the rejection of claim 6 under 35 U.S.C. 101, mutatis mutandis, as reciting an abstract idea without integrating the judicial exception into a practical application or providing significantly more than the judicial exception.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention, in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3, 6-10, 13-17, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Li et al. (US 2021/0405990 A1) in view of Hassanzadeh et al. (US 2022/0414661 A1), and further in view of Konda et al. (US 2023/0156038 A1).

Regarding claim 1

Li teaches A system for facilitating processing using stored models, the system comprising: (Li [par(s) 41] “For example, software manager 416 not only is responsible for automatic deployment and automatic execution, but also may store program files (for example, the executable files) of the machine learning models generated by code generator 414 and information about the files (for example, DL frameworks, DL models, target device configurations or types, and so on). For example, N recent program files may be stored, and the number of N may be predefined.”;)

a processing device; (Li [par(s) 49-55] “FIG. 6 is a schematic block diagram of device 600 that may be configured to implement an embodiment of the present disclosure. As shown in FIG. 6, device 600 includes central processing unit (CPU) 601 that may perform various appropriate actions and processing according to computer program instructions stored in read-only memory (ROM) 602 or computer program instructions loaded from storage unit 608 to random access memory (RAM) 603. Various programs and data required for the operation of device 600 may also be stored in RAM 603.
CPU 601, ROM 602, and RAM 603 are connected to each other through bus 604. Input/output (I/O) interface 605 is also connected to bus 604.”;)

a non-transitory storage device containing instructions when executed by the processing device, causes the processing device to perform the steps of: (Li [par(s) 49-55] “FIG. 6 is a schematic block diagram of device 600 that may be configured to implement an embodiment of the present disclosure. As shown in FIG. 6, device 600 includes central processing unit (CPU) 601 that may perform various appropriate actions and processing according to computer program instructions stored in read-only memory (ROM) 602 or computer program instructions loaded from storage unit 608 to random access memory (RAM) 603. Various programs and data required for the operation of device 600 may also be stored in RAM 603. CPU 601, ROM 602, and RAM 603 are connected to each other through bus 604. Input/output (I/O) interface 605 is also connected to bus 604.”;)

receiving a set of code relating to a machine learning model configured to process data; (Li [fig(s) 4-5] [par(s) 31-33] “User 302 may further provide application analyzer 312 with information of a machine learning model to be deployed by each edge computing device, for example, specify a deep learning (DL) framework. FIG. 3 shows deep learning framework MXNet 322, TensorFlow 324, and other deep learning frameworks 326. User 302 may specify a deep learning framework used by each edge computing device, so that code generator 314 can generate code according to the specified deep learning framework.
… Code generator 314 may acquire a configuration of an edge computing device and a specified deep learning framework from application analyzer 312, and automatically generate a machine learning model, for example, an executable file of a machine learning model, based on the configuration of the edge computing device and the specified deep learning framework.” [par(s) 35-36] “In 451, user 402 may configure a machine learning model (for example, a reasoning program), for example, specify a deep learning framework, through client terminal 404.”; e.g., “deep learning framework MXNet 322, TensorFlow 324, and other deep learning frameworks 326” reads on “code”. In addition, e.g., “User 302 may further provide application analyzer 312 with information of a machine learning model to be deployed by each edge computing device, for example, specify a deep learning (DL) framework” along with “Code generator 314 may acquire … a specified deep learning framework from application analyzer 312” reads on “receive a set of code relating to a machine learning model”. For more details about MXNet, please refer to Chen et al. (MXNet: A Flexible and Efficient Machine Learning Library for Heterogeneous Distributed Systems) (e.g., [sec(s) Abs] “MXNet is a multi-language machine learning (ML) library to ease the development of ML algorithms, especially for deep neural networks”). In addition, for more details about TensorFlow, please refer to Abadi et al. (TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems) (e.g., [sec(s) 12] “We have described TensorFlow, a flexible data flow-based programming model, as well as single machine and distributed implementations of this programming model” [sec(s) Abs] “TensorFlow [1] is an interface for expressing machine learning algorithms, and an implementation for executing such algorithms.”))

(Note: Hereinafter, if a limitation has bold brackets (i.e., [·]) around claim language, the bracketed claim language indicates that it has not yet been taught by the current prior art reference but will be taught by another prior art reference afterwards.)

generating a model executable file from the set of code relating to the machine learning model, wherein the model executable file is configured to process inputted data using the machine learning model upon execution, wherein inputted data comprises inputted high frequency [trading] data; (Li [fig(s) 4-5] [par(s) 31-33] “code generator 314 can generate code according to the specified deep learning framework. … Code generator 314 may acquire a configuration of an edge computing device and a specified deep learning framework from application analyzer 312, and automatically generate a machine learning model, for example, an executable file of a machine learning model, based on the configuration of the edge computing device and the specified deep learning framework.” [par(s) 35-37] “In 454, application analyzer 412 sends an analysis result to code generator 414. In 455, code generator 414 generates a program or code of the machine learning model, for example, an executable file, based on the configuration of the edge computing device and a target deep learning framework. In 456, software manager 416 acquires the device list and the analysis result from application analyzer 412. In 457, software manager 416 may establish a connection, for example, a remote procedure call (RPC) connection, with client terminal 404, edge devices 406, 408, and other edge computing devices according to the acquired device list. Through the RPC, the platform may remotely deploy runtime libraries and code on the edge computing devices and start executing the code” [par(s) 22] “Data collector 105 may be any device capable of collecting data, which may be, for example, various similar sensors.
Examples of data collector 105 include an image sensor, a motion sensor, a temperature sensor, a position sensor, an illumination sensor, a humidity sensor, a power sensing sensor, a gas sensor, a smoke sensor, a humidity sensor, a pressure sensor, a positioning sensor, an accelerometer, a gyroscope, a meter, a decibel sensor, and so on.”;) storing the model executable file [on an in-memory] of a local device; (Li [par(s) 41] “For example, software manager 416 not only is responsible for automatic deployment and automatic execution, but also may store program files (for example, the executable files) of the machine learning models generated by code generator 414 and information about the files (for example, DL frameworks, DL models, target device configurations or types, and so on). For example, N recent program files may be stored, and the number of N may be predefined. After application analyzer 412 obtains the configuration information from client terminal 404, application analyzer 412 may check whether a new machine learning model uses the same configuration as the stored N program files. If the new machine learning model has the same configuration as one of the N program files, application analyzer 412 may not trigger code generator 414 to perform code generation, but may trigger software manager 416 to start automatic deployment and execution.”;) transmitting the model executable file to a remote device to allow the remote device to store and run the model executable file on additional high frequency [trading] data received by the remote device; and (Li [fig(s) 4-5] [par(s) 31-33] “code generator 314 can generate code according to the specified deep learning framework. 
… Code generator 314 may acquire a configuration of an edge computing device and a specified deep learning framework from application analyzer 312, and automatically generate a machine learning model, for example, an executable file of a machine learning model, based on the configuration of the edge computing device and the specified deep learning framework.” [par(s) 35-37] “In 454, application analyzer 412 sends an analysis result to code generator 414. In 455, code generator 414 generates a program or code of the machine learning model, for example, an executable file, based on the configuration of the edge computing device and a target deep learning framework. In 456, software manager 416 acquires the device list and the analysis result from application analyzer 412. In 457, software manager 416 may establish a connection, for example, a remote procedure call (RPC) connection, with client terminal 404, edge devices 406, 408, and other edge computing devices according to the acquired device list. Through the RPC, the platform may remotely deploy runtime libraries and code on the edge computing devices and start executing the code” [par(s) 39] “In 458, software manager 416 selects a machine learning model corresponding to each edge computing device, for example, an executable file of the machine learning model, from the code generated by code generator 414. 
In 459, software manager 416 deploys the corresponding machine learning model (for example, the executable file) to each edge computing device and starts running the executable files.” [par(s) 41] “software manager 416 not only is responsible for automatic deployment and automatic execution, but also may store program files (for example, the executable files) of the machine learning models generated by code generator 414 and information about the files (for example, DL frameworks, DL models, target device configurations or types, and so on).”; e.g., “remotely deploy runtime libraries and code on the edge computing devices” read(s) on “transmitting the model executable file to a remote device”.) creating one or more additional model executable files based on converting a set of code of one or more additional machine learning models, wherein the one or more additional model executable files are created via interpreter conversion of the set of code of one or more additional machine learning models into the one or more additional model executable files; and (Li [fig(s) 4-5] [par(s) 31-33] “code generator 314 can generate code according to the specified deep learning framework. … Code generator 314 may acquire a configuration of an edge computing device and a specified deep learning framework from application analyzer 312, and automatically generate a machine learning model, for example, an executable file of a machine learning model, based on the configuration of the edge computing device and the specified deep learning framework.” [par(s) 35-37] “In 454, application analyzer 412 sends an analysis result to code generator 414. In 455, code generator 414 generates a program or code of the machine learning model, for example, an executable file, based on the configuration of the edge computing device and a target deep learning framework. 
In 456, software manager 416 acquires the device list and the analysis result from application analyzer 412.” [par(s) 41] “software manager 416 not only is responsible for automatic deployment and automatic execution, but also may store program files (for example, the executable files) of the machine learning models generated by code generator 414 and information about the files (for example, DL frameworks, DL models, target device configurations or types, and so on). For example, N recent program files may be stored, and the number of N may be predefined. After application analyzer 412 obtains the configuration information from client terminal 404, application analyzer 412 may check whether a new machine learning model uses the same configuration as the stored N program files. If the new machine learning model has the same configuration as one of the N program files, application analyzer 412 may not trigger code generator 414 to perform code generation, but may trigger software manager 416 to start automatic deployment and execution.”; e.g., “program files (for example, the executable files) of the machine learning models generated by code generator” along with “DL frameworks” read(s) on “creating one or more additional model executable files based on converting a set of code of one or more additional machine learning models”. In addition, e.g., “Application analyzer” and “Code generator” read(s) on “interpreter” since they analyze and interpret configurations of edge computing devices and specified deep learning frameworks, and then generate executable files of machine learning models based on them. 
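For reference, the file-reuse behavior Li describes in paragraph 41 (retain the N most recent program files and skip code generation when a new model's configuration matches a stored file) can be sketched as follows. This is a minimal illustration under stated assumptions; all class, function, and variable names are hypothetical and are not taken from Li:

```python
# Minimal sketch (hypothetical names) of the caching behavior described
# in Li, paragraph 41: the software manager stores the N most recent
# generated program files keyed by configuration; when a new machine
# learning model uses a configuration that matches a stored file, code
# generation is skipped and deployment proceeds with the stored file.
from collections import OrderedDict

class SoftwareManagerCache:
    def __init__(self, n_recent=5):  # "the number of N may be predefined"
        self.n_recent = n_recent
        self._files = OrderedDict()  # configuration -> stored program file

    def store(self, configuration, program_file):
        self._files[configuration] = program_file
        self._files.move_to_end(configuration)  # mark as most recent
        while len(self._files) > self.n_recent:
            self._files.popitem(last=False)  # drop the oldest entry

    def lookup(self, configuration):
        return self._files.get(configuration)

def deploy(cache, configuration, generate_code):
    """Return a deployable program file, regenerating only on a cache miss."""
    program_file = cache.lookup(configuration)
    if program_file is None:
        program_file = generate_code(configuration)  # e.g., build an executable
        cache.store(configuration, program_file)
    return program_file
```

On this sketch, two requests that share the same (device configuration, deep learning framework) pair trigger code generation only once; the second request reuses the stored file.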
Examiner notes that paragraph 60 of the Instant Specification describes “The interpreter may convert the machine learning model into a model executable file”) However, Li does not appear to explicitly teach: wherein inputted data comprises inputted high frequency [trading] data; storing the model executable file [on an in-memory] of a local device; executing the model executable file on the inputted high frequency trading data on the in-memory of the local device; transmitting the model executable file to a remote device to allow the remote device to store and run the model executable file on additional high frequency [trading] data received by the remote device; and determining a processing decision for high frequency trading based at least in part on an output of the model executable file and at least one of one or more additional outputs from the one or more additional model executable files. (Note: Hereinafter, if a limitation has one or more bold underlines, the one or more underlined claim languages indicate that they are taught by the current prior art reference, while the one or more non-underlined claim languages indicate that they have been taught already by one or more previous art references.) Hassanzadeh teaches wherein inputted data comprises inputted high frequency trading data; (Hassanzadeh [par(s) 35] “To illustrate, the ML model may be trained to predict whether a financial transaction is fraudulent based on input financial data, and the clients may include or correspond to banks, credit unions, credit card providers, lenders, investment agencies, brokers, other financial institutions, or the like. 
In some implementations, the input financial data” [par(s) 68] “The training data may include feature data labeled with whether the features correspond to a fraudulent financial transaction or a legitimate financial transaction, and the features may include or indicate transaction history, a billing address, a user identifier, a signature, an available credit line, a last transaction location, a transaction time, an amount of transactions during a threshold time period, other financial information, or a combination thereof. The client data from which the features are extracted may include text data, image data, audio data, or a combination thereof may indicate a transaction history, a billing address, an available credit line, a last transaction location, a transaction time, an amount of transactions during a threshold time period, or a combination thereof. For example, the ML model may be trained to predict that a financial transaction is fraudulent based on one or more underlying combinations of billing address, number of transactions during a threshold time period, and an amount of a transaction, as a non-limiting example. Although described herein in the context of fraud prediction for financial transactions, in other implementations, the ML model may be cooperatively trained to perform other types of predictions for other clients, such as in the health industry, network service providers, government agencies, or any other environment in which data privacy is important or required.”;) storing the model executable file on an in-memory of a local device; (Hassanzadeh [par(s) 8] “a system for cooperative training, by a service provider, of machine learning models using distributed executable file packages includes a memory and one or more processors communicatively coupled to the memory. The memory is configured to store an executable file package. The executable file package includes one or more configuration files and one or more cooperative ML libraries. 
The one or more processors are configured to execute the executable file package to cause the one or more processors to generate and provide, to multiple client devices: a parameter set corresponding to an initial ML model, a parameter set corresponding to a partial ML model that is split from the initial ML model, or multiple parameter sets corresponding to multiple partial ML models that are split from the initial ML model. The one or more processors are also configured to receive, from the multiple client devices, respective output data or output parameter sets based on training of the initial ML model, the partial ML model, or the multiple partial ML models at the multiple client devices. Execution of the executable file package further causes the one or more processors to aggregate the output data or the output parameter sets to generate an ML model configured to generate a prediction based on input data. The one or more processors are further configured to initiate deployment of the ML model to at least one of the multiple client devices, an endpoint node, or a combination thereof.”;) executing the model executable file on the inputted high frequency trading data on the in-memory of the local device; (Hassanzadeh [par(s) 11] “a device for cooperative training, by a client, of machine learning models using distributed executable file packages includes a memory and one or more processors communicatively coupled to the memory. The memory is configured to store client data and an executable file package. The executable file package includes one or more configuration files and one or more cooperative ML libraries. The one or more processors are configured to obtain, from a server, a parameter set corresponding to a ML model. 
The one or more processors are also configured to execute the executable file package to cause the one or more processors to provide the client data as training data to the ML model to train the ML model.” [par(s) 35] “To illustrate, the ML model may be trained to predict whether a financial transaction is fraudulent based on input financial data, and the clients may include or correspond to banks, credit unions, credit card providers, lenders, investment agencies, brokers, other financial institutions, or the like. In some implementations, the input financial data” [par(s) 68] “The training data may include feature data labeled with whether the features correspond to a fraudulent financial transaction or a legitimate financial transaction, and the features may include or indicate transaction history, a billing address, a user identifier, a signature, an available credit line, a last transaction location, a transaction time, an amount of transactions during a threshold time period, other financial information, or a combination thereof.”;) transmitting the model executable file to a remote device to allow the remote device to store and run the model executable file on additional high frequency trading data received by the remote device; and (Hassanzadeh [par(s) 11] “a device for cooperative training, by a client, of machine learning models using distributed executable file packages includes a memory and one or more processors communicatively coupled to the memory. The memory is configured to store client data and an executable file package. The executable file package includes one or more configuration files and one or more cooperative ML libraries. The one or more processors are configured to obtain, from a server, a parameter set corresponding to a ML model. 
The one or more processors are also configured to execute the executable file package to cause the one or more processors to provide the client data as training data to the ML model to train the ML model.” [par(s) 35] “To illustrate, the ML model may be trained to predict whether a financial transaction is fraudulent based on input financial data, and the clients may include or correspond to banks, credit unions, credit card providers, lenders, investment agencies, brokers, other financial institutions, or the like. In some implementations, the input financial data” [par(s) 68] “The training data may include feature data labeled with whether the features correspond to a fraudulent financial transaction or a legitimate financial transaction, and the features may include or indicate transaction history, a billing address, a user identifier, a signature, an available credit line, a last trans action location, a transaction time, an amount of transactions during a threshold time period, other financial information, or a combination thereof.”;) determining a processing decision for high frequency trading based at least in part on an output of the model executable file and at least one of one or more additional outputs from the one or more additional model executable files. 
(Hassanzadeh [fig(s) 1] “executable file package” [par(s) 5] “The executable file packages may include configuration files, ML libraries, pre-processing libraries, operating systems, scripts, other files, and the like that enable cooperative ML model training” [par(s) 53] “The user device 150 may transmit the request 190 to the endpoint node, or to the server 102 … The endpoint node, or the server 102, may transmit the prediction 192 to the user device 150.” [par(s) 48] “if the first client device 140 has significantly fewer available computer resources than the Nth client device 142, the server 102 may assign a relatively low weight to the first training output 170 and a relatively high weight to the Nth training output 172, such as a first weight of 0.3 and a second weight of 0.7, respectively.” [par(s) 86] “The method 600 includes receiving, from the multiple client devices, respective output data or output parameter sets based on training of the initial ML model, the partial ML model, or the multiple partial ML models at the multiple client devices, at 604. Execution of the executable file package further causes aggregation of the output data or the output parameter sets to generate a ML model configured to generate a prediction based on input data. For example, the output data or output parameter sets may include or correspond to the first training output 170 and the Nth training output 172 of FIG. 1, and the ML model configured to generate the prediction may include or correspond to the aggregated ML model parameter set 120 of FIG. 
1.”; e.g., generating a prediction using outputs from executable files on multiple client devices by updating an ML model on the server read(s) on “determining a processing decision for high frequency trading based at least in part on an output of the model executable file and at least one of one or more additional outputs from the one or more additional model executable files.”) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Li with the processing decision from multiple model executable file outputs of Hassanzadeh. One of ordinary skill in the art would have been motivated to combine in order to provide more flexibility and improve the robustness of a resulting trained ML model as well as improve computing resource utilization across the server, the first client device, and the Nth client device. (Hassanzadeh [par(s) 40] “It will be appreciated that the differences between client-side partial ML models of different clients may be similarly based on any desired characteristic or information associated with the clients. Performing individual splits on a client-by-client basis is more flexible and may improve the robustness of a resulting trained ML model as well as improve computing resource utilization across the server 102, the first client device 140, and the Nth client device 142 as compared to performing the same split for all clients, or not splitting the initial ML model. Specific examples of splitting ML models into different partial ML models for different clients are described herein with reference to FIG. 2.”) In the alternative, Konda can also be interpreted to teach the following limitations: Konda teaches storing the model executable file on an in-memory of a local device. 
(Konda [fig(s) 2] [par(s) 87] “the machine-learning model datastore 280 may store one or more executable files that, when executed, implement and/or otherwise cause an execution of the machine-learning model(s) 282. In some examples, the machine-learning model(s) 282 may be obtained from the central facility server 110. In some examples, the machine-learning model(s) 282 may be generated and/or otherwise trained by the security handler 260. The machine-learning model datastore 280 may be implemented by a volatile memory (e.g., an SDRAM, a DRAM, an RDRAM, etc.) and/or a non-volatile memory (e.g., flash memory). The machine-learning model datastore 280 may additionally or alternatively be implemented by one or more DDR memories, such as DDR, DDR2, DDR3, DDR4, mDDR, etc. … Furthermore, the machine-learning model(s) 282 stored in the machine-learning model datastore 280 may be in any data format such as, for example, binary data, a file (e.g., an executable file), etc.” See also [sec(s) 128];) executing the model executable file on the inputted high frequency trading data on the in-memory of the local device; (Konda [fig(s) 2] [par(s) 81] “the security handler 260 may invoke execution of the machine-learning model(s) 282 to process input data (e.g., one or more TLS parameters of a TLS flow, a hash value, a vendor identifier, an IP address, a MAC address, a serial number, a certificate, etc.) to generate an output (e.g., a label of a TLS flow as malicious or benign (e.g., legitimate, trustworthy, etc.)) based on patterns and/or associations previously learned by the machine-learning model(s) 282 via a training process. 
For example, the central facility server 110 may train the machine-learning model(s) 282 with data to recognize patterns and/or associations and follow such patterns and/or associations when processing input data such that other input(s) result in output(s) (e.g., machine-learning output(s)) consistent with the recognized patterns and/or associations.” [par(s) 87] “the machine-learning model datastore 280 may store one or more executable files that, when executed, implement and/or otherwise cause an execution of the machine-learning model(s) 282. In some examples, the machine-learning model(s) 282 may be obtained from the central facility server 110. In some examples, the machine-learning model(s) 282 may be generated and/or otherwise trained by the security handler 260. The machine-learning model datastore 280 may be implemented by a volatile memory (e.g., an SDRAM, a DRAM, an RDRAM, etc.) and/or a non-volatile memory (e.g., flash memory). The machine-learning model datastore 280 may additionally or alternatively be implemented by one or more DDR memories, such as DDR, DDR2, DDR3, DDR4, mDDR, etc. … Furthermore, the machine-learning model(s) 282 stored in the machine-learning model datastore 280 may be in any data format such as, for example, binary data, a file (e.g., an executable file), etc.” See also [sec(s) 128]; Note that Hassanzadeh teaches “trading data.”) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Li, Hassanzadeh with the executable file on a memory of Konda. One of ordinary skill in the art would have been motivated to combine in order to improve the efficiency at which a TLS (Transport Layer Security) flow may be identified as either malware or legitimate because a reduced number of TLS flows may be profiled rather than all TLS flows. 
In addition, the reduced number may achieve a significant reduction in resources and time duration needed to classify such TLS flows. (Konda [par(s) 29] “Advantageously, examples disclosed herein improve the efficiency at which a TLS flow may be identified as either malware (e.g., a malicious flow) or legitimate (e.g., a benign flow) because a reduced number of TLS flows may be profiled rather than all TLS flows, and the reduced number may achieve a significantly reduction in resources and time duration needed to classify such TLS flows.”)
Regarding claim 2
The combination of Li, Hassanzadeh, Konda teaches claim 1. Konda further teaches executing the model executable file on the in-memory of the local device, wherein the model executable file is configured to process the inputted data. (Konda [fig(s) 2] [par(s) 81] “the security handler 260 may invoke execution of the machine-learning model(s) 282 to process input data (e.g., one or more TLS parameters of a TLS flow, a hash value, a vendor identifier, an IP address, a MAC address, a serial number, a certificate, etc.) to generate an output (e.g., a label of a TLS flow as malicious or benign (e.g., legitimate, trustworthy, etc.)) based on patterns and/or associations previously learned by the machine-learning model(s) 282 via a training process. For example, the central facility server 110 may train the machine-learning model(s) 282 with data to recognize patterns and/or associations and follow such patterns and/or associations when processing input data such that other input(s) result in output(s) (e.g., machine-learning output(s)) consistent with the recognized patterns and/or associations.” [par(s) 87] “the machine-learning model datastore 280 may store one or more executable files that, when executed, implement and/or otherwise cause an execution of the machine-learning model(s) 282. In some examples, the machine-learning model(s) 282 may be obtained from the central facility server 110. 
In some examples, the machine-learning model(s) 282 may be generated and/or otherwise trained by the security handler 260. The machine-learning model datastore 280 may be implemented by a volatile memory (e.g., an SDRAM, a DRAM, an RDRAM, etc.) and/or a non-volatile memory (e.g., flash memory). The machine-learning model datastore 280 may additionally or alternatively be implemented by one or more DDR memories, such as DDR, DDR2, DDR3, DDR4, mDDR, etc. … Furthermore, the machine-learning model(s) 282 stored in the machine-learning model datastore 280 may be in any data format such as, for example, binary data, a file (e.g., an executable file), etc.” See also [sec(s) 128];) The combination of Li, Hassanzadeh, Konda is combinable with Konda for the same rationale as set forth above with respect to claim 1.
Regarding claim 3
The combination of Li, Hassanzadeh, Konda teaches claim 1. Konda further teaches determining a processing decision based at least in part on an output of the model executable file. (Konda [fig(s) 2] [par(s) 81] “the security handler 260 may invoke execution of the machine-learning model(s) 282 to process input data (e.g., one or more TLS parameters of a TLS flow, a hash value, a vendor identifier, an IP address, a MAC address, a serial number, a certificate, etc.) to generate an output (e.g., a label of a TLS flow as malicious or benign (e.g., legitimate, trustworthy, etc.)) based on patterns and/or associations previously learned by the machine-learning model(s) 282 via a training process.” [par(s) 83] “the security handler 260 determines whether malicious behavior is detected associated with the data communication. For example, the security handler 260 may determine that malicious behavior is detected in response to at least one of determining that one or more firewall rules have been violated or an output of the machine-learning model(s) 282 indicates that the data communication is indicative of malicious computing activity. 
In some such examples, in response to detecting malicious computing behavior or activity, the security handler 260 may execute one or more mitigation measures. For example, the security handler 260 may block or discard communications from a source (e.g., one(s) of the IoT devices 104, 106, 108, one(s) of the servers 118, 120, 122, etc.) of the malicious behavior, store the communications in a sandbox or trusted execution environment (TEE), or generate an alert to a user, a computing device, etc., indicative of the malicious behavior.” [par(s) 87] “the machine-learning model datastore 280 may store one or more executable files that, when executed, implement and/or otherwise cause an execution of the machine-learning model(s) 282. In some examples, the machine-learning model(s) 282 may be obtained from the central facility server 110. In some examples, the machine-learning model(s) 282 may be generated and/or otherwise trained by the security handler 260.”; e.g., “in response to detecting malicious computing behavior or activity, the security handler 260 may execute one or more mitigation measures” read(s) on “determine a processing decision”.) The combination of Li, Hassanzadeh, Konda is combinable with Konda for the same rationale as set forth above with respect to claim 1.
Regarding claim 6
The combination of Li, Hassanzadeh, Konda teaches claim 1. Konda further teaches wherein the inputted data is streaming data received from a plurality of sources, wherein the model executable file is configured to process the inputted data from the plurality of sources, wherein the model executable file is executed simultaneously for two sets of inputted data. 
(Konda [fig(s) 1-2] [par(s) 191] “At block 1106, the telemetry controller 102 receives a data communication including Transport Layer Security (TLS) telemetry data from the device(s) and server(s).” [par(s) 80-83] “the security handler 260 compares a data communication, such as a TLS flow, received from the IoT devices 104, 106, 108 to one or more firewall rules. … the security handler 260 detects malicious behavior associated with one(s) of the IoT devices 104, 106, 108 in response to executing one(s) of the more machine-learning model(s) 282. For example, the security handler 260 may execute one of the machine-learning model(s) 282 stored in the machine-learning model datastore 280. … the security handler 260 determines whether malicious behavior is detected associated with the data communication.” [par(s) 85-87] “the means for executing is to execute a machine-learning model to generate a machine-learning output based on at least one of the first telemetry data, the second telemetry data, the TLS client sub-profile, the TLS server sub-profile, or the hash value. … the machine-learning model datastore 280 may store one or more executable files that, when executed, implement and/or otherwise cause an execution of the machine-learning model(s) 282.” [par(s) 201] “At block 1120, the telemetry controller 102 determines whether malicious behavior associated with the data communication is detected. For example, in response to the firewall rule comparison(s) and/or the machine-learning model execution(s), the security handler 260 may not detect malicious behavior associated with the data communication from the first IoT device 104. … the security handler 260 may discard or block the TLS flow in response to a detection of malicious computing behavior to protect the first network device 112, the second IoT device 106, the third IoT device 108, and/or, more generally, the network environment 126 of FIG. 
1, from being compromised by a malicious actor or entity.”; e.g., “the security handler 260 may execute one of the machine-learning model(s)” along with “machine-learning model datastore 280 may store one or more executable files” read(s) on “model executable file”.) The combination of Li, Hassanzadeh, Konda is combinable with Konda for the same rationale as set forth above with respect to claim 1.
Regarding claim 7
The combination of Li, Hassanzadeh, Konda teaches claim 1. Li further teaches wherein a plurality of model executable files are stored for a plurality of machine learning models, wherein each of the plurality of executable files is stored [on the in-memory] of the local device. (Li [fig(s) 4-5] [par(s) 41] “For example, software manager 416 not only is responsible for automatic deployment and automatic execution, but also may store program files (for example, the executable files) of the machine learning models generated by code generator 414 and information about the files (for example, DL frameworks, DL models, target device configurations or types, and so on). For example, N recent program files may be stored, and the number of N may be predefined. After application analyzer 412 obtains the configuration information from client terminal 404, application analyzer 412 may check whether a new machine learning model uses the same configuration as the stored N program files. If the new machine learning model has the same configuration as one of the N program files, application analyzer 412 may not trigger code generator 414 to perform code generation, but may trigger software manager 416 to start automatic deployment and execution.”;) Konda further teaches wherein each of the plurality of executable files is stored on the in-memory of the local device. 
(Konda [fig(s) 2] [par(s) 87] “the machine-learning model datastore 280 may store one or more executable files that, when executed, implement and/or otherwise cause an execution of the machine-learning model(s) 282. In some examples, the machine-learning model(s) 282 may be obtained from the central facility server 110. In some examples, the machine-learning model(s) 282 may be generated and/or otherwise trained by the security handler 260. The machine-learning model datastore 280 may be implemented by a volatile memory (e.g., an SDRAM, a DRAM, an RDRAM, etc.) and/or a non-volatile memory (e.g., flash memory). The machine-learning model datastore 280 may additionally or alternatively be implemented by one or more DDR memories, such as DDR, DDR2, DDR3, DDR4, mDDR, etc. … Furthermore, the machine-learning model(s) 282 stored in the machine-learning model datastore 280 may be in any data format such as, for example, binary data, a file (e.g., an executable file), etc.” See also [sec(s) 128].)

Konda is combinable with Li and Hassanzadeh for the same rationale as set forth above with respect to claim 1.

Regarding claim 8

The claim is a computer program product claim corresponding to system claim 1, and is directed to largely the same subject matter. Thus, it is rejected for the same reasons as given in the rejections of the system claim.

Regarding claim 9

The claim is a computer program product claim corresponding to system claim 2, and is directed to largely the same subject matter. Thus, it is rejected for the same reasons as given in the rejections of the system claim.

Regarding claim 10

The claim is a computer program product claim corresponding to system claim 3, and is directed to largely the same subject matter. Thus, it is rejected for the same reasons as given in the rejections of the system claim.
Regarding claim 13

The claim is a computer program product claim corresponding to system claim 6, and is directed to largely the same subject matter. Thus, it is rejected for the same reasons as given in the rejections of the system claim.

Regarding claim 14

The claim is a computer program product claim corresponding to system claim 7, and is directed to largely the same subject matter. Thus, it is rejected for the same reasons as given in the rejections of the system claim.

Regarding claim 15

The claim is a method claim corresponding to system claim 1, and is directed to largely the same subject matter. Thus, it is rejected for the same reasons as given in the rejections of the system claim.

Regarding claim 16

The claim is a method claim corresponding to system claim 2, and is directed to largely the same subject matter. Thus, it is rejected for the same reasons as given in the rejections of the system claim.

Regarding claim 17

The claim is a method claim corresponding to system claim 3, and is directed to largely the same subject matter. Thus, it is rejected for the same reasons as given in the rejections of the system claim.

Regarding claim 20

The claim is a method claim corresponding to system claim 6, and is directed to largely the same subject matter. Thus, it is rejected for the same reasons as given in the rejections of the system claim.

Prior Art

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. SARDESAI et al. (US 20210105624 A1) teaches receiving file packages. Pandey et al. (US 20220292334 A1) teaches generating an inference output from different machine learning model outputs. Sugino et al. (US 20210301983 A1) teaches storing executable files in a memory.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SEHWAN KIM whose telephone number is (571)270-7409. The examiner can normally be reached Mon - Thu 7:00 AM - 5:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Michael J Huntley, can be reached on (303) 297-4307. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SEHWAN KIM/
Examiner, Art Unit 2129
2/17/2026

Prosecution Timeline

Feb 14, 2022
Application Filed
Mar 27, 2025
Non-Final Rejection — §101, §103, §112
Jul 02, 2025
Response Filed
Oct 22, 2025
Final Rejection — §101, §103, §112
Jan 26, 2026
Request for Continued Examination
Jan 31, 2026
Response after Non-Final Action
Feb 17, 2026
Non-Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602595
SYSTEM AND METHOD OF USING A KNOWLEDGE REPRESENTATION FOR FEATURES IN A MACHINE LEARNING CLASSIFIER
2y 5m to grant Granted Apr 14, 2026
Patent 12602580
Dataset Dependent Low Rank Decomposition Of Neural Networks
2y 5m to grant Granted Apr 14, 2026
Patent 12602581
Systems and Methods for Out-of-Distribution Detection
2y 5m to grant Granted Apr 14, 2026
Patent 12602606
APPARATUSES, COMPUTER-IMPLEMENTED METHODS, AND COMPUTER PROGRAM PRODUCTS FOR IMPROVED GLOBAL QUBIT POSITIONING IN A QUANTUM COMPUTING ENVIRONMENT
2y 5m to grant Granted Apr 14, 2026
Patent 12541722
MACHINE LEARNING TECHNIQUES FOR VALIDATING AND MUTATING OUTPUTS FROM PREDICTIVE SYSTEMS
2y 5m to grant Granted Feb 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
60%
Grant Probability
99%
With Interview (+65.6%)
4y 1m
Median Time to Grant
High
PTA Risk
Based on 144 resolved cases by this examiner. Grant probability derived from career allow rate.
