Prosecution Insights
Last updated: April 19, 2026
Application No. 17/159,639

USING CONTAINER INFORMATION TO SELECT CONTAINERS FOR EXECUTING MODELS

Final Rejection (§103, §DP)

Filed: Jan 27, 2021
Examiner: CHEN, WUJI
Art Unit: 2449
Tech Center: 2400 (Computer Networks)
Assignee: Salesforce Inc.
OA Round: 6 (Final)

Grant Probability: 71% (Favorable)
OA Rounds: 7-8
To Grant: 3y 1m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 71% (above average; 170 granted / 239 resolved; +13.1% vs TC avg)
Interview Lift: strong, +37.8% (resolved cases with interview)
Typical Timeline: 3y 1m avg prosecution; 26 currently pending
Career History: 265 total applications across all art units

Statute-Specific Performance

§101: 5.5% (-34.5% vs TC avg)
§103: 65.6% (+25.6% vs TC avg)
§102: 9.5% (-30.5% vs TC avg)
§112: 10.9% (-29.1% vs TC avg)

Black line = Tech Center average estimate. Based on career data from 239 resolved cases.
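The figures above are internally consistent, which is worth a quick check: the career allow rate matches 170/239, and each statute-specific rate minus its "vs TC avg" delta recovers the same Tech Center baseline. Note the 40% baseline below is inferred from the displayed deltas, not a figure stated directly in this report.

```python
# Sanity-check the dashboard figures above (values copied from this report).

granted, resolved = 170, 239
career_allow_rate = granted / resolved * 100  # displayed as 71%

# Statute-specific rates and their "vs TC avg" deltas, as shown above.
# Subtracting each delta from its rate recovers the implied TC average.
statute_stats = {
    "101": (5.5, -34.5),
    "103": (65.6, +25.6),
    "102": (9.5, -30.5),
    "112": (10.9, -29.1),
}
implied_tc_avg = {s: round(rate - delta, 1) for s, (rate, delta) in statute_stats.items()}

print(round(career_allow_rate, 1))  # 71.1
print(implied_tc_avg)  # every statute implies the same 40.0% TC baseline
```

All four deltas point at a single 40.0% Tech Center average, suggesting the dashboard computes them against one shared baseline rather than per-statute averages.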

Office Action

§103, §DP
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

This action is in response to the communication filed on 11/21/2025. Claims 1-20 are pending.

Response to Arguments

Applicant's response to the double patenting rejection is acknowledged, and the double patenting rejection is maintained. Applicant's arguments filed on 11/21/2025 with respect to claims 1-20 have been fully considered, but they are not persuasive. In the communication, Applicant argues in substance that:

a. Regarding claims 1, 8, and 14, Applicant argues (Remarks, pages 10-12): "For example, the asserted combination does not teach the features of 'watch, by a watcher associated with a routing container, for changes in available serving containers associated with the routing container' and 'providing, by the watcher, information about the changes in the available serving containers to the routing container to update a mapping of the available serving containers based on the information' of claims 1, 8, and 14 (emphasis added)."

In response to argument [a], the Examiner respectfully disagrees. The Examiner interprets the claim limitation as "a watcher/software/program/device etc. monitors and collects all containers data/information, determines available serving containers and updates mapping/table/list/database/data store etc. of the available serving containers based on monitoring data/information". Therefore, Fichtenholtz teaches this interpretation because "[0029], Figs. 1-2 illustrate a diagram of container instances 108 being prepared to service requests for tenants, according to some embodiments. At this stage, the router 106 has received a request to be serviced. First, the router 106 can work in conjunction with the data store 112 to determine whether an available container instance is operating and available to service the request.
The data store 112 can include a service registry 214 that catalogs each of the available instances in the system. The router 106 can keep a local copy of at least a portion of the service registry 214. The data store 112 can intermittently update the router 106 with a list of changes to the service registry 214. In some embodiments, the data store 112 can update the router 106 with a list of available container instances that can receive requests. [0033], once a service is assigned to a container in the gateway, the service may perform a periodic 'heartbeat' as an indication to other services that it is alive and functioning properly. For example, when a service is loaded into a container, it may perform a heartbeat to let the router(s) know that it is available to service requests. Performing a heartbeat may include updating a corresponding entry in the service registry of the data store 112. [examiner notes: the data store 112 is interpreted to be the watcher. The routers 106 perform the same function as the routing container to monitor and update available serving containers.]"

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the claims at issue are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); and In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on a nonstatutory double patenting ground, provided the reference application or patent either is shown to be commonly owned with this application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b). The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit http://www.uspto.gov/forms/. The filing date of the application will determine what form should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to http://www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.

Claims 1-20 are provisionally rejected on the ground of nonstatutory double patenting over claims 1-20 of co-pending Application No. 17159805 in view of Fichtenholtz (US 20190102206 A1), hereinafter "Fichtenholtz", as set forth in the table below. Although the claims at issue are not identical, they are not patentably distinct from each other because it would be obvious to one of ordinary skill in the art at the time of filing that the claims cover substantially the same subject matter, i.e., tracking content sharing. The following table illustrates how the claims of the instant application correspond with the claims of the co-pending application. This is a provisional double patenting rejection because the patentably indistinct claims have not in fact been patented.
Instant Application Co-pending Application # US 17159805 Claim 1 A system for using container information to select containers for executing models, the system comprising: one or more processors; and a non-transitory computer readable medium storing a plurality of instructions, which when executed, cause the one or more processors to: watch, by a watcher associated with a routing container, for changes in available serving containers; providing, by the watcher, information about the changes in the available serving containers to a routing container; identify a version of a machine-learning model associated with a request, in response to receiving the request from an application; identify a set of serving containers corresponding to the version of the machine- learning model from a cluster of available serving containers associated with the version of the machine-learning model, wherein the set of serving containers is identified based, at least in part, on executing a hashing function applied to identifiers of the set of serving containers and an identifier of any corresponding machine-learning model; select a serving container from the set of serving containers corresponding to the version of the machine-learning model; load the machine-learning model in the serving container, in response to a determination that the machine-learning model having the version is not loaded in the serving container; execute, in the serving container, the machine-learning model on behalf of the request, in response to a determination that the machine-learning model is loaded in the serving container; and respond to the request based on executing the machine-learning model on behalf of the request. 
Claim 1 A system for using container and model information to select containers for executing models, the system comprising: one or more processors; and a non-transitory computer readable medium storing a plurality of instructions, which when executed, cause the one or more processors to: identify a version of a machine-learning model associated with a request, in response to receiving the request from an application; identify model information associated with machine learning models corresponding to a cluster of available serving containers associated with the version of the machine-learning model; select, based on the model information, a serving container from the cluster of available serving containers; load the machine-learning model in the serving container, in response to a determination that the machine-learning model is not loaded in the serving container; execute, in the serving container, the machine-learning model on behalf of the request, in response to a determination that the machine-learning model is loaded in the serving container; and respond to the request based on executing the machine-learning model on behalf of the request. 2. The system of claim 1, wherein selecting from any cluster of available serving containers is based on updating a data structure comprising container information associated with serving containers in any corresponding cluster of serving containers. 2. The system of claim 1, comprising further instructions, which when executed, cause the one or more processors to update a data structure comprising at least one of model information associated with machine learning models corresponding to any serving containers in any cluster of serving containers and container information associated with serving containers in any corresponding cluster of serving containers. 3. 
The system of claim 1, comprising further instructions, which when executed, cause the one or more processors to: identify another version of another machine-learning model associated with the request; identify another set of each serving container corresponding to the other machine- learning model from another cluster of available serving containers associated with the other version of the other machine-learning model; select another serving container from the other set of each serving container corresponding to the other machine-learning model; load the other machine-learning model in the other serving container, in response to a determination that the other machine-learning model is not loaded in the other serving container; and execute, in the other serving container, the other machine-learning model on behalf of the request, in response to a determination that the other machine-learning model is loaded in the other serving container; wherein responding to the request is further based on executing the other machine-learning model on behalf of the request. 3. 
The system of claim 1, comprising further instructions, which when executed, cause the one or more processors to: identify another version of another machine-learning model associated with the request; identify model information associated with machine learning models corresponding to another cluster of available serving containers associated with the other version of the other machine-learning model; select, based on the model information, another serving container from the other cluster of available serving containers; load the other machine-learning model in the other serving container, in response to a determination that the other machine-learning model is not loaded in the other serving container; and execute, in the other serving container, the other machine-learning model on behalf of the request, in response to a determination that the other machine-learning model is loaded in the other serving container; wherein responding to the request is further based on executing the other machine-learning model on behalf of the request. 4. 
The system of claim 1, comprising further instructions, which when executed, cause the one or more processors to: identify the version of the machine-learning model associated with an additional request, in response to receiving the additional request from the application; identify the set of each serving container corresponding to the machine-learning model from the cluster of available serving containers associated with the version of the machine-learning model; select an additional serving container from the set of each serving container corresponding to the machine-learning model; load a copy of the machine-learning model in the additional serving container, in response to a determination that the copy of the machine-learning model is not loaded in the additional serving container; execute, in the additional serving container, the copy of the machine-learning model on behalf of the additional request, in response to a determination that the copy of the machine-learning model is loaded in the additional serving container; and respond to the additional request based on executing the copy of the machine-learning model on behalf of the additional request. 4. 
The system of claim 1, comprising further instructions, which when executed, cause the one or more processors to: identify the version of the machine-learning model associated with an additional request, in response to receiving the additional request from the application; identify model information associated with machine learning models corresponding to the cluster of available serving containers associated with the version of the machine- learning model; select, based on the model information, an additional serving container from the cluster of available serving containers; load a copy of the machine-learning model in the additional serving container, in response to a determination that the copy of the machine-learning model is not loaded in the additional serving container; execute, in the additional serving container, the copy of the machine-learning model on behalf of the additional request, in response to a determination that the copy of the machine-learning model is loaded in the additional serving container; and respond to the additional request based on executing the copy of the machine-learning model on behalf of the additional request. 5. 
The system of claim 1, comprising further instructions, which when executed, cause the one or more processors to: identify an extra version of an extra machine-learning model associated with an extra request, in response to receiving the extra request from an extra application; identify an extra set of each serving container corresponding to the extra machine- learning model from the cluster of available serving containers which is associated with both the extra version of the extra machine-learning model and the version of the machine-learning model; select an extra serving container from the extra set of each serving container corresponding to the extra machine-learning model; load the extra machine-learning model in the extra serving container, in response to a determination that the extra machine-learning model is not loaded in the extra serving container; execute, in the extra serving container, the extra machine-learning model on behalf of the extra request, in response to a determination that the extra machine-learning model is loaded in the extra serving container; and respond to the extra request based on executing the extra machine-learning model on behalf of the extra request. 5. 
The system of claim 1, comprising further instructions, which when executed, cause the one or more processors to: identify an extra version of an extra machine-learning model associated with an extra request, in response to receiving the extra request from an extra application; identify model information associated with machine learning models corresponding to the cluster of available serving containers which is associated with both the extra version of the extra machine-learning model and the version of the machine- learning model; select, based on the model information, an extra serving container from the cluster of available serving containers; load the extra machine-learning model in the extra serving container, in response to a determination that the extra machine-learning model is not loaded in the extra serving container; execute, in the extra serving container, the extra machine-learning model on behalf of the extra request, in response to a determination that the extra machine-learning model is loaded in the extra serving container; and respond to the extra request based on executing the extra machine-learning model on behalf of the extra request. 6. The system of claim 5, wherein the application is associated with a first tenant and the extra application is associated with a second tenant. 6. The system of claim 5, wherein the application is associated with a first tenant and the extra application is associated with a second tenant 7. The system of claim 1, wherein any set of each serving container corresponding to any machine-learning model is identified based on executing a consistent hashing function applied to identifiers of each serving container associated with any version of any corresponding machine-learning model and an identifier of any corresponding machine-learning model. 7. 
The system of claim 1, wherein identifying any serving container from any cluster of available serving containers is based on one of leveraging a bin-packing algorithm and leveraging a consistent hashing algorithm with identifiers of each serving container associated with any version of the corresponding machine-learning model and with an identifier of any corresponding machine-learning model. 8. A computer program product comprising computer-readable program code to be executed by one or more processors when retrieved from a non- transitory computer- readable medium, the program code including instructions to: watch, by a watcher associated with a routing container, for changes in available serving containers; providing, by the watcher, information about the changes in the available serving containers to a routing container; identify a version of a machine-learning model associated with a request, in response to receiving the request from an application; identify a set of serving containers corresponding to the version of the machine- learning model from a cluster of available serving containers associated with the version of the machine-learning model, wherein the set of serving containers is identified based, at least in part, on executing a hashing function applied to identifiers of the serving containers and an identifier of any corresponding machine-learning model; select a serving container from the set of each serving containers corresponding to the version of the machine-learning model; load the machine-learning model in the serving container, in response to a determination that the machine-learning model having the version is not loaded in the serving container; execute, in the serving container, the machine-learning model on behalf of the request, in response to a determination that the machine-learning model is loaded in the serving container; and respond to the request based on executing the machine-learning model on behalf of the request. 8. 
A computer program product comprising computer-readable program code to be executed by one or more processors when retrieved from a non-transitory computer- readable medium, the program code including instructions to: identify a version of a machine-learning model associated with a request, in response to receiving the request from an application; identify model information associated with machine learning models corresponding to a cluster of available serving containers associated with the version of the machine- learning model; select, based on the model information, a serving container from the cluster of available serving containers; load the machine-learning model in the serving container, in response to a determination that the machine-learning model is not loaded in the serving container; execute, in the serving container, the machine-learning model on behalf of the request, in response to a determination that the machine-learning model is loaded in the serving container; and respond to the request based on executing the machine-learning model on behalf of the request. 9. The computer program product of claim 8, wherein selecting from any cluster of available serving containers is based on updating a data structure comprising container information associated with serving containers in any corresponding cluster of serving containers. 9. The computer program product of claim 8, wherein the program code comprises further instructions to update a data structure comprising at least one of model information associated with machine learning models corresponding to any serving containers in any cluster of serving containers and container information associated with serving containers in any corresponding cluster of serving containers. 10. 
The computer program product of claim 8, wherein the program code comprises further instructions to: identify another version of another machine-learning model associated with the request; identify another set of each serving container corresponding to the other machine- learning model from another cluster of available serving containers associated with the other version of the other machine-learning model; select another serving container from the other set of each serving container corresponding to the other machine-learning model; load the other machine-learning model in the other serving container, in response to a determination that the other machine-learning model is not loaded in the other serving container; and execute, in the other serving container, the other machine-learning model on behalf of the request, in response to a determination that the other machine-learning model is loaded in the other serving container; wherein responding to the request is further based on executing the other machine-learning model on behalf of the request. 10. 
The computer program product of claim 8, wherein the program code comprises further instructions to: identify another version of another machine-learning model associated with the request; identify model information associated with machine learning models corresponding to another cluster of available serving containers associated with the other version of the other machine-learning model; select, based on the model information, another serving container from the other cluster of available serving containers; load the other machine-learning model in the other serving container, in response to a determination that the other machine-learning model is not loaded in the other serving container; and execute, in the other serving container, the other machine-learning model on behalf of the request, in response to a determination that the other machine-learning model is loaded in the other serving container; wherein responding to the request is further based on executing the other machine-learning model on behalf of the request. 11. 
The computer program product of claim 8, wherein the program code comprises further instructions to: identify the version of the machine-learning model associated with an additional request, in response to receiving the additional request from the application; identify the set of each serving container corresponding to the machine-learning model from the cluster of available serving containers associated with the version of the machine-learning model; select an additional serving container from the set of each serving container corresponding to the machine-learning model; load a copy of the machine-learning model in the additional serving container, in response to a determination that the copy of the machine-learning model is not loaded in the additional serving container; execute, in the additional serving container, the copy of the machine-learning model on behalf of the additional request, in response to a determination that the copy of the machine-learning model is loaded in the additional serving container; and respond to the additional request based on executing the copy of the machine-learning model on behalf of the additional request. 11. 
The computer program product of claim 8, wherein the program code comprises further instructions to: identify the version of the machine-learning model associated with an additional request, in response to receiving the additional request from the application; identify model information associated with machine learning models corresponding to the cluster of available serving containers associated with the version of the machine- learning model; select, based on the model information, an additional serving container from the cluster of available serving containers; load a copy of the machine-learning model in the additional serving container, in response to a determination that the copy of the machine-learning model is not loaded in the additional serving container; execute, in the additional serving container, the copy of the machine-learning model on behalf of the additional request, in response to a determination that the copy of the machine-learning model is loaded in the additional serving container; and respond to the additional request based on executing the copy of the machine-learning model on behalf of the additional request. 12. 
The computer program product of claim 8, wherein the program code comprises further instructions to: identify an extra version of an extra machine-learning model associated with an extra request, in response to receiving the extra request from an extra application, wherein the application is associated with a first tenant and the extra application is associated with a second tenant; identify an extra set of each serving container corresponding to the extra machine- learning model from the cluster of available serving containers which is associated with both the extra version of the extra machine-learning model and the version of the machine-learning model; select an extra serving container from the extra set of each serving container corresponding to the extra machine-learning model; load the extra machine-learning model in the extra serving container, in response to a determination that the extra machine-learning model is not loaded in the extra serving container; execute, in the extra serving container, the extra machine-learning model on behalf of the extra request, in response to a determination that the extra machine-learning model is loaded in the extra serving container; and respond to the extra request based on executing the extra machine-learning model on behalf of the extra request. 12. 
The computer program product of claim 8, wherein the program code comprises further instructions to: identify an extra version of an extra machine-learning model associated with an extra request, in response to receiving the extra request from an extra application, wherein the application is associated with a first tenant and the extra application is associated with a second tenant; identify model information associated with machine learning models corresponding to the cluster of available serving containers which is associated with both the extra version of the extra machine-learning model and the version of the machine- learning model; select, based on the model information, an extra serving container from the cluster of available serving containers; load the extra machine-learning model in the extra serving container, in response to a determination that the extra machine-learning model is not loaded in the extra serving container; execute, in the extra serving container, the extra machine-learning model on behalf of the extra request, in response to a determination that the extra machine-learning model is loaded in the extra serving container; and respond to the extra request based on executing the extra machine-learning model on behalf of the extra request. 14. 
A computer-implemented method for using container information to select containers for executing models, the computer-implemented method comprising: watching, by a watcher associated with a routing container, for changes in available serving containers; providing, by the watcher, information about the changes in the available serving containers to a routing container; identifying a version of a machine-learning model associated with a request, in response to receiving the request from an application; identifying a set of serving containers corresponding to the version of the machine-learning model from a cluster of available serving containers associated with the version of the machine-learning model, wherein the set of serving containers is identified based, at least in part, on executing a hashing function applied to identifiers of the serving containers and an identifier of any corresponding machine-learning model; selecting a serving container from the set of serving containers corresponding to the version of the machine-learning model; loading the machine-learning model in the serving container, in response to a determination that the machine-learning model having the version is not loaded in the serving container; executing, in the serving container, the machine-learning model on behalf of the request, in response to a determination that the machine-learning model is loaded in the serving container; and responding to the request based on executing the machine-learning model on behalf of the request. 14. 
A computer-implemented method for using container and model information to select containers for executing models, the computer-implemented method comprising: identifying a version of a machine-learning model associated with a request, in response to receiving the request from an application; identifying model information associated with machine learning models corresponding to a cluster of available serving containers associated with the version of the machine-learning model; selecting, based on the model information, a serving container from the cluster of available serving containers; loading the machine-learning model in the serving container, in response to a determination that the machine-learning model is not loaded in the serving container; executing, in the serving container, the machine-learning model on behalf of the request, in response to a determination that the machine-learning model is loaded in the serving container; and responding to the request based on executing the machine-learning model on behalf of the request. 15. The computer-implemented method of claim 14, wherein selecting from any cluster of available serving containers is based on updating a data structure comprising container information associated with serving containers in any corresponding cluster of serving containers. 15. The computer-implemented method of claim 14, the computer-implemented method further comprising updating a data structure comprising at least one of model information associated with machine learning models corresponding to any serving containers in any cluster of serving containers and container information associated with serving containers in any corresponding cluster of serving containers. 16. 
The computer-implemented method of claim 14, the computer-implemented method further comprising: identifying another version of another machine-learning model associated with the request; identifying another set of each serving container corresponding to the other machine- learning model from another cluster of available serving containers associated with the other version of the other machine-learning model; selecting another serving container from the other set of each serving container corresponding to the other machine-learning model; loading the other machine-learning model in the other serving container, in response to a determination that the other machine-learning model is not loaded in the other serving container; and executing, in the other serving container, the other machine-learning model on behalf of the request, in response to a determination that the other machine-learning model is loaded in the other serving container; wherein responding to the request is further based on executing the other machine-learning model on behalf of the request. 16. 
The computer-implemented method of claim 14, the computer-implemented method further comprising: identifying another version of another machine-learning model associated with the request; identifying model information associated with machine learning models corresponding to another cluster of available serving containers associated with the other version of the other machine-learning model; selecting, based on the model information, another serving container from the other cluster of available serving containers; loading the other machine-learning model in the other serving container, in response to a determination that the other machine-learning model is not loaded in the other serving container; and executing, in the other serving container, the other machine-learning model on behalf of the request, in response to a determination that the other machine-learning model is loaded in the other serving container; wherein responding to the request is further based on executing the other machine-learning model on behalf of the request. 17. 
The computer-implemented method of claim 14, the computer-implemented method further comprising: identifying the version of the machine-learning model associated with an additional request, in response to receiving the additional request from the application; identifying the set of each serving container corresponding to the machine-learning model from the cluster of available serving containers associated with the version of the machine-learning model; selecting an additional serving container from the set of each serving container corresponding to the machine-learning model; loading a copy of the machine-learning model in the additional serving container, in response to a determination that the copy of the machine-learning model is not loaded in the additional serving container; executing, in the additional serving container, the copy of the machine-learning model on behalf of the additional request, in response to a determination that the copy of the machine-learning model is loaded in the additional serving container; and responding to the additional request based on executing the copy of the machine-learning model on behalf of the additional request. 17. The computer-implemented method of claim 14, the computer-implemented method further comprising: identifying the version of the machine-learning model associated with an additional request, in response to receiving the additional request from the application; identifying model information associated with machine learning models corresponding to the cluster of available serving containers associated with the version of the machine-learning model; selecting, 
based on the model information, an additional serving container from the cluster of available serving containers; loading a copy of the machine-learning model in the additional serving container, in response to a determination that the copy of the machine-learning model is not loaded in the additional serving container; executing, in the additional serving container, the copy of the machine-learning model on behalf of the additional request, in response to a determination that the copy of the machine-learning model is loaded in the additional serving container; and responding to the additional request based on executing the copy of the machine-learning model on behalf of the additional request. 18. The computer-implemented method of claim 14, the computer-implemented method further comprising: identifying an extra version of an extra machine-learning model associated with an extra request, in response to receiving the extra request from an extra application; identifying an extra set of each serving container corresponding to the extra machine-learning model from the cluster of available serving containers which is associated with both the extra version of the extra machine-learning model and the version of the machine-learning model; selecting an extra serving container from the extra set of each serving container corresponding to the extra machine-learning model; loading the extra machine-learning model in the extra serving container, in response to a determination that the extra machine-learning model is not loaded in the extra serving container; executing, in the extra serving container, the extra machine-learning model on behalf of the extra request, in response to a determination that the extra machine-learning model is loaded in the extra serving container; and responding to the extra request based on executing the extra machine-learning model on behalf of the extra request. 18. 
The computer-implemented method of claim 14, the computer-implemented method further comprising: identifying an extra version of an extra machine-learning model associated with an extra request, in response to receiving the extra request from an extra application; identifying model information associated with machine learning models corresponding to the cluster of available serving containers which is associated with both the extra version of the extra machine-learning model and the version of the machine-learning model; selecting, based on the model information, an extra serving container from the cluster of available serving containers; loading the extra machine-learning model in the extra serving container, in response to a determination that the extra machine-learning model is not loaded in the extra serving container; executing, in the extra serving container, the extra machine-learning model on behalf of the extra request, in response to a determination that the extra machine-learning model is loaded in the extra serving container; and responding to the extra request based on executing the extra machine-learning model on behalf of the extra request. 19. The computer-implemented method of claim 18, wherein the application is associated with a first tenant and the extra application is associated with a second tenant. 19. The computer-implemented method of claim 18, wherein the application is associated with a first tenant and the extra application is associated with a second tenant. 20. The computer-implemented method of claim 14, wherein any set of each serving container corresponding to any machine-learning model is identified based on executing a consistent hashing function applied to identifiers of each serving container associated with any version of any corresponding machine-learning model and an identifier of any corresponding machine-learning model. 20. 
The computer-implemented method of claim 14, wherein identifying any serving container from any cluster of available serving containers is based on one of leveraging a bin-packing algorithm and leveraging a consistent hashing algorithm with identifiers of each serving container associated with any version of the corresponding machine-learning model and with an identifier of any corresponding machine-learning model. Co-pending Application No. 17159805 does not explicitly disclose “watch, by a watcher associated with a routing container, for changes in available serving containers associated with a routing container; providing, by the watcher, information about the changes in the available serving containers to the routing container to update a mapping of the available serving containers based on the information;”. Fichtenholtz discloses monitoring the availability of containers, updating the availability of containers in a data store, and notifying a router of the availability of containers (Fichtenholtz, para [0024], [0029], [0044]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of the embodiment shown in Figure 1 of Fichtenholtz to watch, by a watcher associated with a routing container, for changes in available serving containers associated with a routing container; providing, by the watcher, information about the changes in the available serving containers to the routing container to update a mapping of the available serving containers based on the information as taught by the related embodiment of Fichtenholtz. 
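As an illustrative aside, the consistent-hashing selection recited in claim 20 (hashing container identifiers onto a ring and mapping a model identifier to the nearest containers) can be sketched as follows. This is a minimal sketch, not the applicant's or any reference's implementation; the class name, container identifiers, and replication count are all hypothetical.

```python
import bisect
import hashlib


def _ring_hash(key: str) -> int:
    """Map an arbitrary string key onto the hash ring."""
    return int(hashlib.sha256(key.encode()).hexdigest(), 16)


class ConsistentHashRing:
    """Select serving containers for a model via consistent hashing.

    Container identifiers are placed on a ring; a model identifier is
    hashed onto the same ring, and the next n containers clockwise
    form the set that may serve the model.
    """

    def __init__(self, container_ids):
        self._ring = sorted((_ring_hash(cid), cid) for cid in container_ids)

    def containers_for_model(self, model_id: str, n: int = 2):
        point = _ring_hash(model_id)
        keys = [h for h, _ in self._ring]
        # First container whose ring position follows the model's position.
        start = bisect.bisect(keys, point) % len(self._ring)
        count = min(n, len(self._ring))
        return [self._ring[(start + i) % len(self._ring)][1] for i in range(count)]
```

A property worth noting: because only the ring segment adjacent to a joining or leaving container is remapped, most model-to-container assignments survive membership changes, which is the usual motivation for consistent hashing over a plain modulo scheme.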
One of ordinary skill in the art would have been motivated to combine watch, by a watcher associated with a routing container, for changes in available serving containers associated with a routing container; providing, by the watcher, information about the changes in the available serving containers to the routing container to update a mapping of the available serving containers based on the information to prevent the container from simultaneously servicing requests associated with different tenants. Claim Rejections - 35 USC § 103 The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. 1. Claim(s) 1, 3-8, 10-14 and 16-20 are rejected under 35 U.S.C. 103 as being unpatentable over FAULHABER (US 20190155633 A1) in view of Lewis (US20220083363 A1) in view of Vishnoi (US 20210319360 A1) in view of Fichtenholtz (US 20190102206 A1). 
With respect to independent claims: Regarding claim(s) 1, FAULHABER teaches a system for using container information to select containers for executing models, the system comprising: one or more processors; and a non-transitory computer readable medium storing a plurality of instructions, which when executed, cause the one or more processors to: (FAULHABER, [0185], each of the one or more electronic devices 1520 may include an operating system that provides executable program instructions for the general administration and operation of that device and typically will include computer-readable medium storing instructions that, when executed by a processor of the device, allow the device to perform its intended functions.) identify a version of a machine-learning model associated with a request, in response to receiving the request from an application; (FAULHABER, [0029], Fig.1; The user devices 102 can interact with the model training system 120 via frontend 129 of the model training system 120. For example, a user device 102 can provide a training request to the frontend 129 that includes a container image (or multiple container images, or an identifier of one or multiple locations where container images are stored), an indicator of input data (e.g., an address or location of input data), one or more hyperparameter values (e.g., values indicating how the algorithm will operate, how many algorithms to run in parallel, how many clusters into which to separate data, etc.), and/or information describing the computing machine on which to train a machine learning model (e.g., a graphical processing unit (GPU) instance type, a central processing unit (CPU) instance type, an amount of memory to allocate, a type of virtual machine instance to use for training, etc.). 
identify a set of serving containers corresponding to the version of the machine-learning model from a cluster of available serving containers associated with the version of the machine-learning model, (FAULHABER, [0029], Fig.1; the user devices 102 can interact with the model training system 120 via frontend 129 of the model training system 120. For example, a user device 102 can provide a training request to the frontend 129 that includes a container image (or multiple container images, or an identifier of one or multiple locations where container images are stored), an indicator of input data (e.g., an address or location of input data), one or more hyperparameter values (e.g., values indicating how the algorithm will operate, how many algorithms to run in parallel, how many clusters into which to separate data, etc.), and/or information describing the computing machine on which to train a machine learning model (e.g., a graphical processing unit (GPU) instance type, a central processing unit (CPU) instance type, an amount of memory to allocate, a type of virtual machine instance to use for training, etc.).) select a serving container from the set of serving containers corresponding to the version of the machine-learning model; (FAULHABER, [0125], the training and/or hosting of machine learning models can be performed without needing significant knowledge on the part of users as to how these models are to be trained or used. For example, in some embodiments users can select or create a container including machine learning related code—potentially using any language(s)/package(s) that the user desires—that adheres to a specification (or a “schema”) proscribed by the machine learning service 1006. With a container that follows this specification, the machine learning service 1006 can transparently implement all of the training and/or hosting of the model without specific user instructions or knowledge of how these tasks are being performed.) 
execute, in the serving container, the machine-learning model on behalf of the request, in response to a determination that the machine-learning model is loaded in the serving container; and respond to the request based on executing the machine-learning model on behalf of the request. (FAULHABER, [0046], container manager 110 may provide a machine learning model agnostic functionality that allows client device 102 to request a particular type of machine learning model that container manager 110 may leverage to provision a container 114 having the particular machine learning model type pre-loaded therein.) FAULHABER does not teach wherein the set of serving containers is identified based, at least in part, on executing a hashing function applied to identifiers of the set of serving containers and an identifier of any corresponding machine-learning model; load the machine-learning model in the serving container, in response to a determination that the machine-learning model having the version is not loaded in the serving container; watch, by a watcher associated with a routing container, for changes in available serving containers associated with a routing container; providing, by the watcher, information about the changes in the available serving containers to the routing container to update a mapping of the available serving containers based on the information; Lewis, however, in the same field of computer networking teaches wherein the set of serving containers is identified based, at least in part, on executing a hashing function applied to identifiers of the set of serving containers and an identifier of any corresponding machine-learning model; (Lewis, [0048], [0061]-[0065], Fig.6, based on the deployment identifier, at block 610, container configuration computing platform 110 may select a deployment by matching the deployment identifier to the correct identifier. 
For example, based on the deployment identifier, at block 610, container configuration computing platform 110 may identify a type of deployment, identify types of virtualization containers the request has to be routed to, and so forth. Container configuration computing platform 110 may, based on a deployment identifier and a request, determine a type of virtualization container, machine learning models, scripts, test data sets, and so forth, that may be needed. The deployment identifier associated with the request may be “abc123,” the model identifier may be “123,” and a hash reference identifier for a second version may be “dxyj2,” while a hash reference identifier for a first version may be “dbty.” Accordingly, container configuration computing platform 110 may compare “dxyj2” and “dbty,” detect a change, and retrieve the second version of the model associated with the model identifier “123.” [examiner notes: the deployment identifier, model identifier, and hash reference identifier are hashes. A hash is a fixed-length string of characters that is created from a longer message or data. Hashing is the process of creating a hash, and a hash function is the specific algorithm or mathematical function used to create the hash.]) Therefore, it would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to have modified the method/system of FAULHABER to specify wherein the set of serving containers is identified based, at least in part, on executing a hashing function applied to identifiers of the serving containers and an identifier of any corresponding machine-learning model as taught by Lewis. The motivation/suggestion would have been that ensuring the models are available in a timely and efficient manner, and that changes and/or updates are performed seamlessly, may be highly advantageous to providing an efficient and effective platform to users of such models (Lewis, [0002]). 
FAULHABER does not teach load the machine-learning model in the serving container, in response to a determination that the machine-learning model having the version is not loaded in the serving container; watch, by a watcher associated with a routing container, for changes in available serving containers associated with a routing container; providing, by the watcher, information about the changes in the available serving containers to the routing container to update a mapping of the available serving containers based on the information; Vishnoi, however, in the same field of computer networking teaches load the machine-learning model in the serving container, in response to a determination that the machine-learning model having the version is not loaded in the serving container; (Vishnoi, [0082], in step 320 (i.e., when the skillbot is not being served by any deployment), the serving gateway of the query serving system transmits a request to the serving operator of the query serving system to instantiate a new deployment to host a model associated with the skillbot.) Therefore, it would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to have modified the method/system of FAULHABER to specify load the machine-learning model in the serving container, in response to a determination that the machine-learning model having the version is not loaded in the serving container as taught by Vishnoi. The motivation/suggestion would have been because there is a need to provide a fast, efficient, and scalable multi-tenant serve pool for chatbot systems (Vishnoi, [0002]). 
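The "load if not loaded" limitation discussed above amounts to lazy loading keyed on model identity and version. The following is a minimal sketch of that behavior under stated assumptions; the class, the `(model_id, version)` cache key, and the injected `load_fn` loader are hypothetical names, not drawn from the application or the cited references.

```python
class ServingContainer:
    """Sketch of load-if-absent model serving: a model is loaded only
    when the requested (model_id, version) is not already resident."""

    def __init__(self):
        self._loaded = {}  # (model_id, version) -> callable model

    def execute(self, model_id, version, request, load_fn):
        key = (model_id, version)
        if key not in self._loaded:
            # Determination: the model having this version is not loaded,
            # so load it into the container before executing.
            self._loaded[key] = load_fn(model_id, version)
        # Determination: the model is loaded; execute it for the request.
        return self._loaded[key](request)
```

In use, two requests for the same version trigger a single load, while a request for a different version triggers a fresh one, mirroring the claimed pair of determinations.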
FAULHABER does not teach watch, by a watcher associated with a routing container, for changes in available serving containers associated with a routing container; providing, by the watcher, information about the changes in the available serving containers to the routing container to update a mapping of the available serving containers based on the information; Fichtenholtz, however, in the same field of computer networking teaches watch, by a watcher associated with a routing container, for changes in available serving containers associated with a routing container; (Fichtenholtz, [0029], Figs. 1-2 illustrates a diagram of container instances 108 being prepared to service requests for tenants, according to some embodiments. At this stage, the router 106 has received a request to be serviced. First, the router 106 can work in conjunction with the data store 112 to determine whether an available container instance is operating and available to service the request. The data store 112 can include a service registry 214 that catalogs each of the available instances in the system. The router 106 can keep a local copy of at least a portion of the service registry 214. The data store 112 can intermittently update the router 106 with a list of changes to the service registry 214. In some embodiments, the data store 112 can update the router 106 with a list of available container instances that can receive requests. [0033], once a service is assigned to a container in the gateway, the service may perform a periodic “heartbeat” as an indication to other services that it is alive and functioning properly. For example, when a service is loaded into a container, it may perform a heartbeat to let the router(s) know that it is available to service requests. Performing a heartbeat may include updating a corresponding entry in the service registry of the data store 112. [examiner notes: the data store 112 is interpreted to be the watcher. 
The routers 106 perform the same function as the routing container to monitor and update the available serving containers.]) providing, by the watcher, information about the changes in the available serving containers to the routing container to update a mapping of the available serving containers based on the information; (Fichtenholtz, [0029], Figs. 1-2 illustrates a diagram of container instances 108 being prepared to service requests for tenants, according to some embodiments. At this stage, the router 106 has received a request to be serviced. First, the router 106 can work in conjunction with the data store 112 to determine whether an available container instance is operating and available to service the request. The data store 112 can include a service registry 214 that catalogs each of the available instances in the system. The router 106 can keep a local copy of at least a portion of the service registry 214. The data store 112 can intermittently update the router 106 with a list of changes to the service registry 214. In some embodiments, the data store 112 can update the router 106 with a list of available container instances that can receive requests. [0033], once a service is assigned to a container in the gateway, the service may perform a periodic “heartbeat” as an indication to other services that it is alive and functioning properly. For example, when a service is loaded into a container, it may perform a heartbeat to let the router(s) know that it is available to service requests. Performing a heartbeat may include updating a corresponding entry in the service registry of the data store 112. [examiner notes: the data store 112 is interpreted to be the watcher. 
The routers 106 perform the same function as the routing container to monitor and update the available serving containers.]) Therefore, it would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to have modified the method/system of FAULHABER to specify watch, by a watcher associated with a routing container, for changes in available serving containers associated with a routing container; providing, by the watcher, information about the changes in the available serving containers to the routing container to update a mapping of the available serving containers based on the information as taught by Fichtenholtz. The motivation/suggestion would have been because there is a need to prevent the container from simultaneously servicing requests associated with different tenants (Fichtenholtz, [0006]). Claims 8 and 14 are substantially similar to claim 1, and are thus rejected under substantially the same rationale. With respect to dependent claims: Regarding claim(s) 3, the system of claim 1, FAULHABER-Lewis-Vishnoi-Fichtenholtz teach comprising further instructions, which when executed, cause the one or more processors to: identify another version of another machine-learning model associated with the request; (FAULHABER, [0061], the deployment request can identify multiple model data files corresponding to different trained machine learning models because the trained machine learning models are related (e.g., the output of one trained machine learning model is used as an input to another trained machine learning model). Thus, the user may desire to deploy multiple machine learning models to eventually receive a single output that relies on the outputs of multiple machine learning models. 
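The watcher limitation mapped above (a registry-backed watcher pushing membership deltas to a router so the router can update its container mapping) can be sketched as follows. This is an illustrative sketch of the general pattern only, assuming a pull-style snapshot interface; the `Watcher` and `RoutingContainer` classes and their method names are hypothetical and are not drawn from the application or from Fichtenholtz.

```python
class RoutingContainer:
    """Holds the mapping of available serving containers (here, a set)."""

    def __init__(self):
        self.available = set()

    def update_mapping(self, added, removed):
        # Apply the delta reported by the watcher.
        self.available |= added
        self.available -= removed


class Watcher:
    """Watches for changes in available serving containers and provides
    the change information to its routing container."""

    def __init__(self, router):
        self._router = router
        self._known = set()

    def observe(self, registry_snapshot):
        # Compare the latest registry snapshot (e.g., built from container
        # heartbeats) against the last known membership.
        current = set(registry_snapshot)
        added, removed = current - self._known, self._known - current
        if added or removed:
            # Provide only the changes; the router updates its mapping.
            self._router.update_mapping(added=added, removed=removed)
        self._known = current
```

Driving `observe` with successive snapshots shows the router's mapping tracking containers as they join and leave, which is the behavior the claim language attributes to the watcher/routing-container pair.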
identify another set of each serving container corresponding to the other machine-learning model from another cluster of available serving containers associated with the other version of the other machine-learning model; (FAULHABER, [0029], Fig.1, a user device 102 can provide a training request to the frontend 129 that includes a container image (or multiple container images, or an identifier of one or multiple locations where container images are stored), an indicator of input data (e.g., an address or location of input data), one or more hyperparameter values (e.g., values indicating how the algorithm will operate, how many algorithms to run in parallel, how many clusters into which to separate data, etc.), and/or information describing the computing machine on which to train a machine learning model. [0159], Fig.10; when executed, the container 1022 can be provided the model, e.g., by mounting or importing model artifacts 1040 for the container 1022 to use. Thus, when the container is executed, it can read the model from a specification-defined location (in whatever format the user wants), and can start serving requests. [0068], the operating environment 100 supports many different types of machine learning models, such as multi arm bandit models, reinforcement learning models, ensemble machine learning models, deep learning models, and/or the like.) select another serving container from the other set of each serving container corresponding to the other machine-learning model; (FAULHABER, [0125], the training and/or hosting of machine learning models can be performed without needing significant knowledge on the part of users as to how these models are to be trained or used. For example, in some embodiments users can select or create a container including machine learning related code—potentially using any language(s)/package(s) that the user desires—that adheres to a specification (or a “schema”) proscribed by the machine learning service 1006. 
With a container that follows this specification, the machine learning service 1006 can transparently implement all of the training and/or hosting of the model without specific user instructions or knowledge of how these tasks are being performed.) and execute, in the other serving container, the other machine-learning model on behalf of the request, in response to a determination that the other machine-learning model is loaded in the other serving container; (FAULHABER,[0046], container manager 110 may provide a machine learning model agnostic functionality that allows client device 102 to request a particular type of machine learning model that container manager 110 may leverage to provision a container 114 having the particular machine learning model type pre-loaded therein.) wherein responding to the request is further based on executing the other machine-learning model on behalf of the request. (FAULHABER, [0101] FIG. 6 is a block diagram of the operating environment 100 of FIG. 1 illustrating the operations performed by the components of the operating environment 100 to execute related machine learning models, according to some embodiments. As illustrated in FIG. 6, user device 102 transmits a machine learning model execution request to the frontend 149 at (1). The frontend 149 then forwards the execution request to a first ML scoring container 150A initialized in a virtual machine instance 142 at (2).) load the other machine-learning model in the other serving container, in response to a determination that the other machine-learning model is not loaded in the other serving container; (Vishnoi, [0082], in step 320 (i.e., when the skillbot is not being served by any deployment), the serving gateway of the query serving system transmits a request to the serving operator of the query serving system to instantiate a new deployment to host a model associated with the skillbot.) The same motivation to combine as the independent claim 1 applies here. 
Regarding claim(s) 4, the system of claim 1, FAULHABER-Lewis-Vishnoi-Fichtenholtz teach comprising further instructions, which when executed, cause the one or more processors to: identify the version of the machine-learning model associated with an additional request, in response to receiving the additional request from the application; (FAULHABER, [0029], Fig.1, a user device 102 can provide a training request to the frontend 129 that includes a container image (or multiple container images, or an identifier of one or multiple locations where container images are stored), an indicator of input data (e.g., an address or location of input data), one or more hyperparameter values (e.g., values indicating how the algorithm will operate, how many algorithms to run in parallel, how many clusters into which to separate data, etc.), and/or information describing the computing machine on which to train a machine learning model (e.g., a graphical processing unit (GPU) instance type, a central processing unit (CPU) instance type, an amount of memory to allocate, a type of virtual machine instance to use for training, etc.). [0068], the operating environment 100 supports many different types of machine learning models, such as multi arm bandit models, reinforcement learning models, ensemble machine learning models, deep learning models, and/or the like.) identify the set of each serving container corresponding to the machine-learning model from the cluster of available serving containers associated with the version of the machine-learning model; (FAULHABER, [0026], users, on the other hand, may desire to train and use many different types of machine learning models that can receive different types of input data formats. 
[0029], Fig.1, a user device 102 can provide a training request to the frontend 129 that includes a container image (or multiple container images, or an identifier of one or multiple locations where container images are stored), an indicator of input data (e.g., an address or location of input data), one or more hyperparameter values (e.g., values indicating how the algorithm will operate, how many algorithms to run in parallel, how many clusters into which to separate data, etc.), and/or information describing the computing machine on which to train a machine learning model. [0159], Fig.10; when executed, the container 1022 can be provided the model, e.g., by mounting or importing model artifacts 1040 for the container 1022 to use. Thus, when the container is executed, it can read the model from a specification-defined location (in whatever format the user wants), and can start serving requests. [0068], the operating environment 100 supports many different types of machine learning models, such as multi arm bandit models, reinforcement learning models, ensemble machine learning models, deep learning models, and/or the like.) select an additional serving container from the set of each serving container corresponding to the machine-learning model; (FAULHABER, [0125], the training and/or hosting of machine learning models can be performed without needing significant knowledge on the part of users as to how these models are to be trained or used. For example, in some embodiments users can select or create a container including machine learning related code—potentially using any language(s)/package(s) that the user desires—that adheres to a specification (or a “schema”) proscribed by the machine learning service 1006. With a container that follows this specification, the machine learning service 1006 can transparently implement all of the training and/or hosting of the model without specific user instructions or knowledge of how these tasks are being performed.) 
execute, in the additional serving container, the copy of the machine-learning model on behalf of the additional request, in response to a determination that the copy of the machine-learning model is loaded in the additional serving container; and (FAULHABER,[0046], container manager 110 may provide a machine learning model agnostic functionality that allows client device 102 to request a particular type of machine learning model that container manager 110 may leverage to provision a container 114 having the particular machine learning model type pre-loaded therein.) respond to the additional request based on executing the copy of the machine-learning model on behalf of the additional request. (FAULHABER, [0101] FIG. 6 is a block diagram of the operating environment 100 of FIG. 1 illustrating the operations performed by the components of the operating environment 100 to execute related machine learning models, according to some embodiments. As illustrated in FIG. 6, user device 102 transmits a machine learning model execution request to the frontend 149 at (1). The frontend 149 then forwards the execution request to a first ML scoring container 150A initialized in a virtual machine instance 142 at (2).) load a copy of the machine-learning model in the additional serving container, in response to a determination that the copy of the machine-learning model is not loaded in the additional serving container; (Vishnoi, [0082], in step 320 (i.e., when the skillbot is not being served by any deployment), the serving gateway of the query serving system transmits a request to the serving operator of the query serving system to instantiate a new deployment to host a model associated with the skillbot.) The same motivation to combine as the independent claim 1 applies here. 
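The claim-4 flow the examiner maps above (identify the model version from the request, identify the set of serving containers for that version, select one, load a copy of the model only if it is not already loaded, then execute and respond) can be sketched as follows. This is a minimal illustrative sketch; the names `ServingContainer`, `route_request`, and the `clusters` mapping are assumptions, not drawn from the cited references:

```python
from dataclasses import dataclass, field

@dataclass
class ServingContainer:
    container_id: str
    loaded_models: set = field(default_factory=set)

def route_request(request: dict, clusters: dict) -> dict:
    """Route a scoring request following the claim-4 steps (illustrative)."""
    # 1. Identify the model version associated with the request.
    model_key = (request["model_id"], request["version"])
    # 2. Identify the set of serving containers for that model version.
    candidates = clusters.get(model_key, [])
    if not candidates:
        raise LookupError(f"no serving containers for {model_key}")
    # 3. Select a serving container (here: the one with fewest loaded models).
    container = min(candidates, key=lambda c: len(c.loaded_models))
    # 4. Load a copy of the model only if it is not already loaded.
    if model_key not in container.loaded_models:
        container.loaded_models.add(model_key)  # stand-in for a real model load
    # 5. Execute the model copy and respond to the request.
    return {"served_by": container.container_id, "model": model_key}
```

The load-if-absent check in step 4 is the distinction the examiner maps to Vishnoi's step 320 (instantiating a deployment only when no deployment is serving the model).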
Regarding claim(s) 5, the system of claim 1, FAULHABER-Lewis-Vishnoi-Fichtenholtz teach comprising further instructions, which when executed, cause the one or more processors to: identify an extra version of an extra machine-learning model associated with an extra request, in response to receiving the extra request from an extra application; (FAULHABER, [0029], Fig.1, a user device 102 can provide a training request to the frontend 129 that includes a container image (or multiple container images, or an identifier of one or multiple locations where container images are stored), an indicator of input data (e.g., an address or location of input data), one or more hyperparameter values (e.g., values indicating how the algorithm will operate, how many algorithms to run in parallel, how many clusters into which to separate data, etc.), and/or information describing the computing machine on which to train a machine learning model (e.g., a graphical processing unit (GPU) instance type, a central processing unit (CPU) instance type, an amount of memory to allocate, a type of virtual machine instance to use for training, etc.). [0068], the operating environment 100 supports many different types of machine learning models, such as multi arm bandit models, reinforcement learning models, ensemble machine learning models, deep learning models, and/or the like.) identify an extra set of each serving container corresponding to the extra machine-learning model from the cluster of available serving containers which is associated with both the extra version of the extra machine-learning model and the version of the machine-learning model; (FAULHABER, [0029], Fig.1; The user devices 102 can interact with the model training system 120 via frontend 129 of the model training system 120. 
For example, a user device 102 can provide a training request to the frontend 129 that includes a container image (or multiple container images, or an identifier of one or multiple locations where container images are stored), an indicator of input data (e.g., an address or location of input data), one or more hyperparameter values (e.g., values indicating how the algorithm will operate, how many algorithms to run in parallel, how many clusters into which to separate data, etc.), and/or information describing the computing machine (the version of the machine-learning model) on which to train a machine learning model (e.g., a graphical processing unit (GPU) instance type, a central processing unit (CPU) instance type, an amount of memory to allocate, a type of virtual machine instance to use for training, etc.). [0159], Fig.10; For the hosting container 1022, the specification may indicate the container 1022 is to act as a web server, respond to certain types of requests on a particular port or ports (e.g., 8080), respond to certain types of requests in a certain way, etc. When executed, the container 1022 can be provided the model, e.g., by mounting or importing model artifacts 1040 for the container 1022 to use. Thus, when the container is executed, it can read the model from a specification-defined location (in whatever format the user wants), and can start serving requests. [0068], the operating environment 100 supports many different types of machine learning models, such as multi arm bandit models, reinforcement learning models, ensemble machine learning models, deep learning models, and/or the like.) select an extra serving container from the extra set of each serving container corresponding to the extra machine-learning model; (FAULHABER, [0125], the training and/or hosting of machine learning models can be performed without needing significant knowledge on the part of users as to how these models are to be trained or used. 
For example, in some embodiments users can select or create a container including machine learning related code—potentially using any language(s)/package(s) that the user desires—that adheres to a specification (or a “schema”) proscribed by the machine learning service 1006. With a container that follows this specification, the machine learning service 1006 can transparently implement all of the training and/or hosting of the model without specific user instructions or knowledge of how these tasks are being performed.) execute, in the extra serving container, the extra machine-learning model on behalf of the extra request, in response to a determination that the extra machine-learning model is loaded in the extra serving container; and (FAULHABER,[0046], container manager 110 may provide a machine learning model agnostic functionality that allows client device 102 to request a particular type of machine learning model that container manager 110 may leverage to provision a container 114 having the particular machine learning model type pre-loaded therein.) respond to the extra request based on executing the extra machine-learning model on behalf of the extra request. (FAULHABER, [0101] FIG. 6 is a block diagram of the operating environment 100 of FIG. 1 illustrating the operations performed by the components of the operating environment 100 to execute related machine learning models, according to some embodiments. As illustrated in FIG. 6, user device 102 transmits a machine learning model execution request to the frontend 149 at (1). The frontend 149 then forwards the execution request to a first ML scoring container 150A initialized in a virtual machine instance 142 at (2).) 
load the extra machine-learning model in the extra serving container, in response to a determination that the extra machine-learning model is not loaded in the extra serving container; (Vishnoi, [0082], in step 320 (i.e., when the skillbot is not being served by any deployment), the serving gateway of the query serving system transmits a request to the serving operator of the query serving system to instantiate a new deployment to host a model associated with the skillbot.) The same motivation to combine as the independent claim 1 applies here. Regarding claim(s) 6, the system of claim 5, FAULHABER-Lewis-Vishnoi-Fichtenholtz teach wherein the application is associated with a first tenant and the extra application is associated with a second tenant. (FAULHABER [0124], users are able to write arbitrary pieces of code for machine learning, and using a defined packaging mechanism, the users can “inject” their code into a machine learning environment (e.g., provided by machine learning service 1006), where the models can seamlessly be trained (e.g., in model training system 120) based on training data 1016, and the resulting models may or may not thereafter be deployed in a hosted environment (e.g., model hosting system 140). With these hosted models (e.g., inference code 1024 executed by a container 1022), client applications 1008—whether hosted within the provider network 199 or external to the provider network 199—can issue requests via one or more inference endpoints 1010 (e.g., as HTTP requests) to perform inference using the model.) Regarding claim(s) 7, the system of claim 1, FAULHABER does not teach wherein any set of each serving container corresponding to any machine-learning model is identified based on executing a consistent hashing function applied to identifiers of each serving container associated with any version of any corresponding machine-learning model and an identifier of any corresponding machine-learning model. 
Lewis, however, teaches wherein any set of each serving container corresponding to any machine-learning model is identified based on executing a consistent hashing function applied to identifiers of each serving container associated with any version of any corresponding machine-learning model and an identifier of any corresponding machine-learning model. (Lewis, [0048], [0061]-[0065], Fig.6, based on the deployment identifier, at block 610, container configuration computing platform 110 may select a deployment by matching the deployment identifier to the correct identifier. For example, based on the deployment identifier, at block 610, container configuration computing platform 110 may identify a type of deployment, identify types of virtualization containers the request has to be routed to, and so forth. Container configuration computing platform 110 may, based on a deployment identifier and a request, determine a type of virtualization container, machine learning models, scripts, test data sets, and so forth, that may be needed. The deployment identifier associated with the request may be “abc123,” the model identifier may be “123,” and a hash reference identifier for a second version may be “dxyj2,” while a hash reference identifier for a first version may be “dbty.” Accordingly, container configuration computing platform 110 may compare “dxyj2” and “dbty,” detect a change, and retrieve the second version of the model associated with the model identifier “123.” [Examiner notes: the deployment identifier, model identifier, and hash reference identifier are hashes. A hash is a fixed-length string of characters that is created from a longer message or data. Hashing is the process of creating a hash, and a hash function is the specific algorithm or mathematical function used to create the hash.]) The same motivation to combine as the independent claim 1 applies here. Claim(s) 10 and 16 is/are substantially similar to claim 3, and is thus rejected under substantially the same rationale. 
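Claim 7 recites selecting a container set by applying a consistent hashing function to container identifiers and a model identifier. A minimal consistent-hash-ring sketch of that technique is below; the function names and ring construction are illustrative assumptions, not taken from Lewis:

```python
import hashlib

def _hash(key: str) -> int:
    # Stable 64-bit integer hash derived from a string key.
    return int.from_bytes(hashlib.sha256(key.encode()).digest()[:8], "big")

def select_containers(model_id: str, container_ids: list[str], n: int = 2) -> list[str]:
    """Return the n containers whose hashes follow the model's hash on the ring."""
    ring = sorted((_hash(cid), cid) for cid in container_ids)
    point = _hash(model_id)
    # Walk clockwise around the ring starting at the model's position.
    ordered = [cid for h, cid in ring if h >= point] + \
              [cid for h, cid in ring if h < point]
    return ordered[:n]
```

The appeal of consistent hashing in this setting is stability: adding or removing one serving container remaps only the models whose ring positions fall near that container, leaving most model-to-container assignments untouched.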
Claim(s) 11 and 17 is/are substantially similar to claim 4, and is thus rejected under substantially the same rationale. Claim(s) 12 and 18 is/are substantially similar to claim 5, and is thus rejected under substantially the same rationale. Claim(s) 19 is/are substantially similar to claim 6, and is thus rejected under substantially the same rationale. Claim(s) 13 and 20 is/are substantially similar to claim 7, and is thus rejected under substantially the same rationale.

2. Claim(s) 2, 9 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over FAULHABER in view of Lewis in view of Vishnoi in view of Fichtenholtz, further in view of Meduri (US 11055273 B1). Regarding claim(s) 2, the system of claim 1, FAULHABER-Lewis-Vishnoi-Fichtenholtz do not teach wherein selecting from any cluster of available serving containers is based on updating a data structure comprising container information associated with serving containers in any corresponding cluster of serving containers. Meduri, however, in the same field of computer networking teaches wherein selecting from any cluster of available serving containers is based on updating a data structure comprising container information associated with serving containers in any corresponding cluster of serving containers. (Meduri, col.2, lines 1-21; the state change information includes a version number that may be used to indicate an ordering of state changes for the corresponding software container. Col.10, lines 34-55; the customers 202 need not choose where the tasks should be executed. The placement scheme of the scheduler 208 may be configured to distribute tasks evenly over the cluster (e.g., round robin fashion, stochastic distribution scheme, etc.), and may be configured to distribute tasks based on a current or projected resource consumption by the cluster, in order to make the most efficient use of available resources. 
The scheduler 208 may obtain cluster manager metadata and other information about the availability of the container instances 218 in a cluster via the container manager backend services 214. The cluster manager metadata and other information may include data about the current state of the container instances 218 assigned to the cluster, available resources within the container instances, containers running within the container instances, and other information usable by the scheduler 208 to make placement decisions. The DescribeCluster application programming interface call may cause the cluster manager backend service to provide the cluster metadata for the specified cluster.) Therefore, it would have been obvious to one with ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the method/system of FAULHABER to specify wherein selecting from any cluster of available serving containers is based on updating a data structure comprising container information associated with serving containers in any corresponding cluster of serving containers, as taught by Meduri. The motivation/suggestion would have been that customers seeking to reduce the expense and overhead associated with maintaining their own computing resources have turned instead to purchasing remote computing services, such as remote program execution over multiple virtual machine instances and remote data storage, offered by computing resource service providers (Meduri, BACKGROUND). Claim(s) 9 and 15 is/are substantially similar to claim 2, and is thus rejected under substantially the same rationale.

Conclusion THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. 
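The claim-2 feature the examiner maps to Meduri (selection driven by an updated data structure of per-container information, with version numbers ordering state changes) can be sketched as a small cluster-state tracker. The class and field names here are illustrative assumptions, not Meduri's actual implementation:

```python
import itertools

class ClusterState:
    """Tracks per-container info and versions each update (illustrative sketch)."""

    def __init__(self):
        self._info = {}                    # container_id -> state dict
        self._version = itertools.count(1) # monotonically increasing versions

    def update(self, container_id: str, **info) -> None:
        # Each state change carries a version number so consumers can order
        # updates for the corresponding container (cf. Meduri, col. 2).
        self._info[container_id] = {**info, "version": next(self._version)}

    def select(self) -> str:
        # Placement decision driven by the current container information;
        # here: pick the container reporting the most free CPU.
        return max(self._info, key=lambda cid: self._info[cid]["cpu_free"])
```

A round-robin or stochastic placement scheme, as Meduri also describes, would simply swap out the `select` policy while keeping the same updated data structure underneath.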
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to WUJI CHEN whose telephone number is (571)270-0365. The examiner can normally be reached from 9am to 6pm. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, VIVEK SRIVASTAVA, can be reached at (571) 272-7304. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /WUJI CHEN/ Examiner, Art Unit 2449 /VIVEK SRIVASTAVA/ Supervisory Patent Examiner, Art Unit 2449

Prosecution Timeline

Jan 27, 2021
Application Filed
Mar 05, 2024
Non-Final Rejection — §103, §DP
Jun 11, 2024
Response Filed
Jul 02, 2024
Final Rejection — §103, §DP
Oct 01, 2024
Interview Requested
Oct 10, 2024
Applicant Interview (Telephonic)
Oct 10, 2024
Examiner Interview Summary
Oct 15, 2024
Request for Continued Examination
Oct 17, 2024
Response after Non-Final Action
Oct 22, 2024
Non-Final Rejection — §103, §DP
Feb 25, 2025
Response Filed
Apr 08, 2025
Final Rejection — §103, §DP
Jun 02, 2025
Interview Requested
Aug 12, 2025
Applicant Interview (Telephonic)
Aug 12, 2025
Examiner Interview Summary
Aug 13, 2025
Request for Continued Examination
Aug 20, 2025
Response after Non-Final Action
Oct 06, 2025
Non-Final Rejection — §103, §DP
Nov 18, 2025
Examiner Interview Summary
Nov 18, 2025
Examiner Interview (Telephonic)
Nov 21, 2025
Response Filed
Feb 04, 2026
Final Rejection — §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603932
REMOTE DESKTOP INFRASTRUCTURE
2y 5m to grant Granted Apr 14, 2026
Patent 12598155
GEOCODING WITH GEOFENCES
2y 5m to grant Granted Apr 07, 2026
Patent 12572482
A NOVEL DATA PROCESSING ARCHITECTURE AND RELATED PROCEDURES AND HARDWARE IMPROVEMENTS
2y 5m to grant Granted Mar 10, 2026
Patent 12549924
SYSTEMS, METHODS AND APPARATUS FOR GEOFENCE NETWORKS
2y 5m to grant Granted Feb 10, 2026
Patent 12526224
METHODS, SYSTEMS, AND COMPUTER READABLE MEDIA FOR SELECTING NETWORK FUNCTION (NF) PROFILES OF NF SET MATES TO ENABLE ALTERNATE ROUTING
2y 5m to grant Granted Jan 13, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

7-8
Expected OA Rounds
71%
Grant Probability
99%
With Interview (+37.8%)
3y 1m
Median Time to Grant
High
PTA Risk
Based on 239 resolved cases by this examiner. Grant probability derived from career allow rate.
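The headline figures above are simple ratios over the examiner's resolved cases. Using only the counts shown on this page (170 granted of 239 resolved), the career allow rate works out as follows; the subgroup counts behind the interview lift are not shown here, so only its definition is noted:

```python
# Counts shown on this page for the examiner's career record.
granted, resolved = 170, 239
allow_rate = granted / resolved
assert round(allow_rate * 100) == 71  # matches the 71% career allow rate above

# "Interview lift" is the difference in allow rate between resolved cases
# with and without an interview (reported above as +37.8 points); the
# underlying with/without subgroup counts are not shown on this page.
```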
