Prosecution Insights
Last updated: April 18, 2026
Application No. 18/098,397

SYSTEMS AND METHODS FOR INTEGRATION OF MACHINE LEARNING MODELS WITH CLIENT APPLICATIONS

Final Rejection — §112

Filed: Jan 18, 2023
Examiner: HAEFNER, KAITLYN RENEE
Art Unit: 2148
Tech Center: 2100 — Computer Architecture & Software
Assignee: FMR LLC
OA Round: 2 (Final)
Grant Probability: 50% (Moderate)
Expected OA Rounds: 3-4
Time to Grant: 4y 2m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 50% (2 granted / 4 resolved; -5.0% vs TC avg)
Interview Lift: +66.7% (strong; based on resolved cases with interview)
Avg Prosecution: 4y 2m (typical timeline)
Total Applications: 36 across all art units (32 currently pending)

Statute-Specific Performance

§101: 32.6% (-7.4% vs TC avg)
§103: 31.1% (-8.9% vs TC avg)
§102: 13.8% (-26.2% vs TC avg)
§112: 22.2% (-17.8% vs TC avg)
Tech Center averages are estimates • Based on career data from 4 resolved cases

Office Action

§112
DETAILED ACTION

This action is in response to the amendment filed 02/03/2026. Claims 1-9, 12-20, and 23-26 are pending and have been examined.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 1-9, 12-20, and 23-26 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
Regarding claim 1, lines 8-13 reciting "…during deployment of the software container, automatically generating, by the server computing device, a protocol buffer profile derived from container-declared input and output interfaces of the machine learning model, the protocol buffer profile defining one or more executable RPC functions bound to the software container…" is not supported by the specification. Examiner notes that paragraph 0030 states "once the model container 108a is deployed, server computing device 106 generates (step 204) a protocol buffer profile 109a from the model container 108a image. As described above, protocol buffer profile 109a defines one or more Remote Procedure Call (RPC) functions for interactions between ML classification model 110a and a consuming client application (e.g., application 103) using the RPC server module 111a and RPC client module 109a." This excerpt shows support for generating a protocol buffer profile, but does not show support for the protocol buffer profile being derived from the container-declared input and output interfaces. Paragraph 0030 also does not show support that the generation of the protocol buffer profile occurs during deployment, since the container is already deployed according to the specification. Examiner further notes that the specification does not show support for RPC functions that are bound to the software container.

Regarding claim 1, lines 14-17 reciting "enforcing, by the server computing device, compatibility constraints between the client application and the software container based on the protocol buffer profile, including validation of input structure and output schema prior to runtime execution…" is not supported by the specification. There is no mention of validation in the specification.
Examiner notes that paragraph 0025 states "Server computing device 106 can be configured to execute many software containers, in isolation from each other, that access a single operating system (OS) kernel. Server computing device 106 executes each software container in a separate OS process and constrains each container's access to physical resources (e.g., CPU, memory) of server computing device 106 so that a single container does not utilize all of the available physical resources. Upon execution, server computing device 106 executes the software application code stored in one or more of the model containers 108a-108n to, e.g., launch RPC server module 111a-111n and make the corresponding ML classification model 110a-110n available to downstream computing applications (e.g., client application 103 of device 102) using Remote Procedure Calls defined by the associated protocol buffer profile 109a-109n for the container." This shows support for constraints between the physical resources and the software container based on the protocol buffer profile, but does not show support for constraints between the client application and the software container, including validation of input structure and output schema prior to runtime execution.

Regarding claim 1, lines 21-24 reciting "executing, by the server computing device, the machine learning model within the software container using the protocol buffer profile as a native invocation interface without intermediate translation layers, to generate a classification value for input provided in the request" is not supported by the specification. There is no mention of a native invocation interface within the specification. Additionally, there is no mention of translation layers within the specification.
Regarding claim 1, lines 27-30 reciting "…wherein the automatic generation of the protocol buffer profile and enforcement of compatibility constraints reduce runtime latency and prevent execution failures caused by interface mismatch between the client application and the machine learning model" is not supported by the specification. Examiner notes that there is no mention of latency in the specification, and thus claiming to reduce runtime latency has no support within the specification. Examiner further notes that preventing execution failures is also not supported by the specification, since there is no mention of preventing failures. Lastly, examiner notes that there is no mention of interface mismatch in the specification, and thus claiming to prevent execution failures caused by interface mismatch between the client application and the machine learning model has no support from the specification.

Regarding claims 2-9 and 23-24, these claims are rejected for at least the same reasons as claim 1 since claims 2-9 and 23-24 depend on claim 1.

Regarding claim 2, lines 8-13 reciting "…in accordance with the protocol buffer profile generated during deployment" is not supported by the specification. Examiner notes that paragraph 0030 states "once the model container 108a is deployed, server computing device 106 generates (step 204) a protocol buffer profile 109a from the model container 108a image. As described above, protocol buffer profile 109a defines one or more Remote Procedure Call (RPC) functions for interactions between ML classification model 110a and a consuming client application (e.g., application 103) using the RPC server module 111a and RPC client module 109a." Paragraph 0030 does not show support that the generation of the protocol buffer profile occurs during deployment, since the container is already deployed according to the specification.
Regarding claim 3, lines 6-8 reciting "mapping, by the RPC server module, the input provided in the request to one or more input parameters for the machine learning model based on schema definitions enforced by the protocol buffer profile" is not supported by the specification. Examiner notes that paragraph 0030 states "once the model container 108a is deployed, server computing device 106 generates (step 204) a protocol buffer profile 109a from the model container 108a image. As described above, protocol buffer profile 109a defines one or more Remote Procedure Call (RPC) functions for interactions between ML classification model 110a and a consuming client application (e.g., application 103) using the RPC server module 111a and RPC client module 109a. For example, protocol buffer profile 109a can map an RPC request function to an input API call for ML classification model 110a that includes one or more input parameters. Protocol buffer profile 109a can map an RPC response function to an API response call returned from ML classification model 110a that includes output parameters from execution of the model." While paragraph 0030 describes mapping input to input parameters, it does not describe schema definitions that are enforced by the protocol buffer profile.

Regarding claim 5, lines 7-8 reciting "mapping… the input provided by the machine learning model to an output parameter of the second RPC function validated against the protocol buffer profile" is not supported by the specification. Examiner notes that the term "validate" cannot be found in the specification, and thus this limitation is not supported by the specification.

Regarding claim 23, "wherein automatically generating the protocol buffer during deployment eliminates a runtime interface translation layer between the client application and the software container, thereby reducing end-to-end request latency during execution of the machine learning model" is not supported by the specification.
Examiner notes that paragraph 0030 states "once the model container 108a is deployed, server computing device 106 generates (step 204) a protocol buffer profile 109a from the model container 108a image. As described above, protocol buffer profile 109a defines one or more Remote Procedure Call (RPC) functions for interactions between ML classification model 110a and a consuming client application (e.g., application 103) using the RPC server module 111a and RPC client module 109a." Paragraph 0030 does not show support that the generation of the protocol buffer profile occurs during deployment, since the container is already deployed according to the specification. Examiner further notes that the specification has no mention of an interface translation nor latency and thus cannot support claim 23.

Regarding claim 24, "wherein enforcing compatibility constraints comprises preventing execution of the machine learning model when an invoked RPC function does not correspond to the software container to which the protocol buffer profile is bound" is not supported by the specification. Examiner notes that paragraph 0025 states "Server computing device 106 can be configured to execute many software containers, in isolation from each other, that access a single operating system (OS) kernel. Server computing device 106 executes each software container in a separate OS process and constrains each container's access to physical resources (e.g., CPU, memory) of server computing device 106 so that a single container does not utilize all of the available physical resources. Upon execution, server computing device 106 executes the software application code stored in one or more of the model containers 108a-108n to, e.g., launch RPC server module 111a-111n and make the corresponding ML classification model 110a-110n available to downstream computing applications (e.g., client application 103 of device 102) using Remote Procedure Calls defined by the associated protocol buffer profile 109a-109n for the container." This shows support for constraints between the physical resources and the software container based on the protocol buffer profile, but does not show support for constraints between the client application and the software container as claimed in independent claim 1. The specification also has no mention of the term "prevent" nor "bound" and thus cannot support the claim limitation of "preventing execution of the machine learning model when an invoked RPC function does not correspond to the software container to which the buffer profile is bound".

Regarding claim 12, lines 9-14 reciting "…during deployment of the software container, automatically generating, by the server computing device, a protocol buffer profile derived from container-declared input and output interfaces of the machine learning model, the protocol buffer profile defining one or more executable RPC functions bound to the software container…" is not supported by the specification. Examiner notes that paragraph 0030 states "once the model container 108a is deployed, server computing device 106 generates (step 204) a protocol buffer profile 109a from the model container 108a image. As described above, protocol buffer profile 109a defines one or more Remote Procedure Call (RPC) functions for interactions between ML classification model 110a and a consuming client application (e.g., application 103) using the RPC server module 111a and RPC client module 109a." This excerpt shows support for generating a protocol buffer profile, but does not show support for the protocol buffer profile being derived from the container-declared input and output interfaces. Paragraph 0030 also does not show support that the generation of the protocol buffer profile occurs during deployment, since the container is already deployed according to the specification. Examiner further notes that the specification does not show support for RPC functions that are bound to the software container.

Regarding claim 12, lines 15-17 reciting "enforcing, by the server computing device, compatibility constraints between the client application and the software container based on the protocol buffer profile, including validation of input structure and output schema prior to runtime execution…" is not supported by the specification. There is no mention of validation in the specification. Examiner notes that paragraph 0025, quoted above, shows support for constraints between the physical resources and the software container based on the protocol buffer profile, but does not show support for constraints between the client application and the software container, including validation of input structure and output schema prior to runtime execution.

Regarding claim 12, lines 21-23 reciting "executing, by the server computing device, the machine learning model within the software container using the protocol buffer profile as a native invocation interface without intermediate translation layers, to generate a classification value for input provided in the request" is not supported by the specification. There is no mention of a native invocation interface within the specification. Additionally, there is no mention of translation layers within the specification.

Regarding claim 12, lines 26-29 reciting "…wherein the automatic generation of the protocol buffer profile and enforcement of compatibility constraints reduce runtime latency and prevent execution failures caused by interface mismatch between the client application and the machine learning model" is not supported by the specification. Examiner notes that there is no mention of latency in the specification, and thus claiming to reduce runtime latency has no support within the specification. Examiner further notes that preventing execution failures is also not supported by the specification, since there is no mention of preventing failures. Lastly, examiner notes that there is no mention of interface mismatch in the specification, and thus claiming to prevent execution failures caused by interface mismatch between the client application and the machine learning model has no support from the specification.

Regarding claims 13-20 and 25-26, these claims are rejected for at least the same reasons as claim 12 since claims 13-20 and 25-26 depend on claim 12. Further, claim 13 recites substantially similar limitations to claim 2; claim 14 to claim 3; claim 15 to claim 4; claim 16 to claim 5; claim 25 to claim 23; and claim 26 to claim 24. Each is therefore rejected under the same analysis as its counterpart.

Allowable Subject Matter

Claims 1-9, 12-20, and 23-26 are allowable over the prior art of record under the current interpretation noted in the 112(a) rejection. Examiner notes that if the 112(a) rejection is resolved, further search and consideration is required.
Specifically, regarding claim 1, "enforcing, by the server computing device, compatibility constraints between the client application and the software container based on the protocol buffer profile, including validation of input structure and output schema prior to runtime execution" and "wherein the automatic generation of the protocol buffer profile and enforcement of compatibility constraints reduce runtime latency and prevent execution failures caused by interface mismatch between the client application and the machine learning model", in conjunction with the other limitations of the claims, are not taught by the prior art of record.

The closest prior art is Sanapo et al. (US 2024/0256313 A1) (hereafter Sanapo), Kearney et al. (US 10,891,539 B1) (hereafter Kearney), and Zhang et al. (US 2019/0034937 A1) (hereafter Zhang). Sanapo discloses a computerized method of integrating one or more machine learning models with a client application using Remote Procedure Calls (RPCs) (Sanapo, page 10, paragraphs 0033-0035; page 9, paragraph 0024); deploying, by a server computing device, a software container associated with a client application, the software container comprising executable code corresponding to a machine learning model of a plurality of machine learning models, a plurality of inputs to the machine learning model, and a plurality of outputs of the machine learning model (Sanapo, page 10, paragraphs 0033-0034; page 9, paragraph 0024; page 11, paragraphs 0039 and 0044); during deployment of the software container, automatically generating, by the server computing device, a protocol buffer profile derived from container-declared input and output interfaces of the machine learning model, the protocol buffer profile defining one or more executable RPC functions bound to the software container (Sanapo, page 10, paragraphs 0033-0035; page 9, paragraph 0024); receiving, by the server computing device from the client application, a request invoking a first one of the RPC functions to access the machine learning model (Sanapo, page 10, paragraphs 0033-0035); executing, by the server computing device, the machine learning model within the software container using the protocol buffer profile as a native invocation interface without intermediate translation layers, to generate a classification value for input provided in the request (Sanapo, page 10, paragraphs 0033-0036; page 9, paragraph 0024; page 6, Fig. 5); and transmitting, by the server computing device, the classification value to the client application using a second one of the RPC functions (Sanapo, page 10, paragraph 0034; page 11, paragraph 0039).

Sanapo does not disclose enforcing, by the server computing device, compatibility constraints between the client application and the software container based on the protocol buffer profile, including validation of input structure and output schema prior to runtime execution, or wherein the automatic generation of the protocol buffer profile and enforcement of compatibility constraints reduce runtime latency and prevent execution failures caused by interface mismatch between the client application and the machine learning model. Kearney discloses reducing latency (Kearney, page 22, column 3, lines 41-43), but does not disclose a protocol buffer or a protocol buffer profile, nor an interface mismatch. Zhang discloses validating input structure and output schema prior to runtime execution (Zhang, page 8, paragraph 0045), but does not disclose a protocol buffer profile nor a container based on the protocol buffer profile. Therefore, the prior art of record, individually or in combination, does not disclose claim 1 as a whole.
Claims 2-9 and 23-24 are allowable at least due to their dependencies on claim 1 under the current interpretation noted in the 112(a) rejection. Claim 12 recites substantially similar limitations as claim 1 and is therefore allowable under the same rationale. Claims 13-20 and 25-26 are allowable at least due to their dependencies on claim 12. Examiner notes that if the 112(a) rejection is resolved, further search and consideration is required.

Response to Arguments

Applicant's arguments with respect to § 101 on pages 8-15, in light of the instant amendments, have been fully considered and are persuasive. Specifically, the amendments to the claims, which provided more structure to the server computing device, were persuasive. The § 101 rejections of the claims have been withdrawn. Examiner notes that should the 112(a) rejection be resolved, further search and consideration regarding the § 101 rejection will be required.

Applicant's arguments with respect to the prior art rejections on pages 15-19, in light of the instant amendments, have been fully considered and are persuasive. Specifically, the prior art of record does not disclose "enforcing, by the server computing device, compatibility constraints between the client application and the software container based on the protocol buffer profile, including validation of input structure and output schema prior to runtime execution" and "wherein the automatic generation of the protocol buffer profile and enforcement of compatibility constraints reduce runtime latency and prevent execution failures caused by interface mismatch between the client application and the machine learning model". The prior art rejections of the claims have been withdrawn.
Examiner notes that should the 112(a) rejection be resolved, further search and consideration regarding the prior art rejections will be required.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KAITLYN R HAEFNER, whose telephone number is (571) 272-1429. The examiner can normally be reached Monday - Thursday, 7:15 am - 5:15 pm EST. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Michelle Bechtold, can be reached at (571) 431-0762. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center.
Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/K.R.H./
Examiner, Art Unit 2148

/MICHELLE T BECHTOLD/
Supervisory Patent Examiner, Art Unit 2148
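Editor's note: the disputed limitation, "validation of input structure and output schema prior to runtime execution," describes a compatibility check that can be pictured as comparing each RPC message against the profile's declared fields before the model is invoked. The Python sketch below is purely illustrative; all names (PROFILE, validate_message, ClassifyRequest, etc.) are hypothetical and are not drawn from the application's specification, which describes the profile as a protocol buffer artifact rather than this dict-based stand-in.

```python
from typing import Any

# Toy stand-in for a "protocol buffer profile": the expected input structure
# and output schema for one RPC function exposed by a model container.
PROFILE = {
    "ClassifyRequest": {"features": list, "model_version": str},
    "ClassifyResponse": {"classification": str, "score": float},
}

def validate_message(message: dict[str, Any], schema_name: str) -> list[str]:
    """Return a list of mismatch errors; an empty list means the message conforms."""
    schema = PROFILE[schema_name]
    errors = []
    for field, expected_type in schema.items():
        if field not in message:
            errors.append(f"missing field: {field}")
        elif not isinstance(message[field], expected_type):
            errors.append(f"{field}: expected {expected_type.__name__}, "
                          f"got {type(message[field]).__name__}")
    for field in message:
        if field not in schema:
            errors.append(f"unexpected field: {field}")
    return errors

# A conforming request passes; a malformed one is caught before model execution.
ok = validate_message({"features": [0.2, 0.8], "model_version": "v3"},
                      "ClassifyRequest")
bad = validate_message({"features": "not-a-list"}, "ClassifyRequest")
```

In a real gRPC deployment this check is largely implicit in protobuf deserialization itself; the sketch only makes explicit what "interface mismatch prevented before runtime" could mean.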

Prosecution Timeline

Jan 18, 2023 — Application Filed
Oct 23, 2025 — Non-Final Rejection (§112)
Feb 03, 2026 — Response Filed
Mar 23, 2026 — Final Rejection (§112, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602431 — METHODS FOR PERFORMING INPUT-OUTPUT OPERATIONS IN A STORAGE SYSTEM USING ARTIFICIAL INTELLIGENCE AND DEVICES THEREOF
Granted Apr 14, 2026 • 2y 5m to grant

Patent 12572828 — METHOD FOR INDUSTRY TEXT INCREMENT AND ELECTRONIC DEVICE
Granted Mar 10, 2026 • 2y 5m to grant
Study what changed to get past this examiner. Based on 2 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 50%
With Interview: 99% (+66.7%)
Median Time to Grant: 4y 2m
PTA Risk: Moderate
Based on 4 resolved cases by this examiner. Grant probability derived from career allow rate.
