Prosecution Insights
Last updated: April 19, 2026
Application No. 18/376,856

MANAGING DEPLOYMENT OF CUSTOM RESOURCES IN A CONTAINER ORCHESTRATION SYSTEM

Status: Non-Final OA (§103)
Filed: Oct 05, 2023
Examiner: ESPANA, CARLOS ALBERTO
Art Unit: 2199
Tech Center: 2100 — Computer Architecture & Software
Assignee: VMware, Inc.
OA Round: 1 (Non-Final)
Grant Probability: 74% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 6m
With Interview: 91%

Examiner Intelligence

Career Allow Rate: 74% (above average; 17 granted / 23 resolved; +18.9% vs TC avg)
Interview Lift: +17.5% across resolved cases with interview (strong)
Typical Timeline: 3y 6m average prosecution; 29 applications currently pending
Career History: 52 total applications across all art units

Statute-Specific Performance

§101: 17.7% (-22.3% vs TC avg)
§103: 56.8% (+16.8% vs TC avg)
§102: 12.0% (-28.0% vs TC avg)
§112: 9.5% (-30.5% vs TC avg)
Tech Center averages are estimates. Based on career data from 23 resolved cases.
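A side note on the statute panel: subtracting each "vs TC avg" delta from the examiner's rate recovers the Tech Center baseline, and it comes out to 40.0% for every statute listed, suggesting a single baseline estimate is applied across statutes. A quick sketch of that arithmetic (all figures copied from the panel above):

```go
package main

import "fmt"

func main() {
	// Examiner allow rate per statute and its delta vs. the Tech Center
	// average (percentage points), as listed in the panel above.
	stats := []struct {
		statute string
		rate    float64
		delta   float64
	}{
		{"§101", 17.7, -22.3},
		{"§103", 56.8, +16.8},
		{"§102", 12.0, -28.0},
		{"§112", 9.5, -30.5},
	}
	for _, s := range stats {
		// The implied Tech Center baseline is simply rate - delta.
		fmt.Printf("%s: examiner %.1f%%, implied TC avg %.1f%%\n",
			s.statute, s.rate, s.rate-s.delta)
	}
}
```

Running this prints an implied baseline of 40.0% for each of the four statutes.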

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. 

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Miriyala (US 20230104568 A1) in view of Baid (US 20220350687 A1).

Regarding claim 1, Miriyala teaches:

A method of managing a custom resource in a container orchestration (CO) system, the method comprising: (Claim 13. A method comprising:)

receiving, from a manager at a management cluster of the CO system, an intent identifier, the management cluster including a controller configured to manage the custom resource, the management cluster storing the intent identifier in a database. ([0235] In conventional SDN architectures, the network controller handles the orchestration for all use cases. The configuration nodes translate intents into configuration objects based on the data model and write them into a database (e.g., Cassandra). In some cases, at the same time, a notification is sent to all clients awaiting the configuration, e.g., via RabbitMQ. [0284] Central cluster 902 includes respective SDN controller managers 303-1-303-N (collectively, “SDN controller managers 303”) for workload clusters 930. SDN controller manager 303-1 is the interface between native resources of the orchestration platform (e.g., Service, Namespace, Pod, Network Policy, Network Attachment Definition) and custom resources for SDN architecture configuration and, more particularly, for custom resources for workload cluster 930-1 configuration. [0287] In multicluster mode of multicluster deployment for SDN architecture 1000, each of the distributed workload clusters 930 is associated to central cluster 902 via a dedicated one of SDN controller managers 303. SDN controller managers 303 run on central cluster 902 to facilitate better lifecycle management (LCM) of SDN controller managers 303, configuration nodes 230, and control nodes 232, and better and more manageable handling of security and permissions by consolidating these tasks to a single central cluster 902. [0288] Virtual router agents for virtual routers 910 communicate with control nodes 232 to obtain routing and configuration information, as described elsewhere in this disclosure. When an orchestration platform native resource like Pod or Service is created on workload cluster 930-1, for example, SDN controller manager 303-1 running on central cluster 902 receives an indication of the create event and its reconciler(s) may create/update/delete custom resources for SDN architecture configuration such as VirtualMachine, VirtualMachineInterface, InstanceIP. In addition, SDN controller manager 303-1 may associate those new custom resources with a virtual network for the Pod or Service. This virtual network may be the default virtual network or a virtual network indicated (user annotated) in the manifest for the Pod or Service.)

receiving, from the manager at the management cluster, a request that updates state of the custom resource. ([0285] SDN controller manager 303-1 watches API server 300-1 of workload cluster 930-1 for changes on native resources of the orchestration platform for workload cluster 930-1. SDN controller manager 303-1 also watches custom API server 301 for central cluster 902. This is known as a “double watch”. To implement double watch, SDN controller managers 303 may use, for example, the admiralty multicluster-controller go library or the multicluster manager implementation provided by the Kubernetes community, each of which supports functionality to watch resources in multiple clusters. As a result of the double watch, SDN controller manager 303-1 performs operations on custom resources whether initiated at API server 300-1 of workload cluster 930-1 or at API server 300-C/custom API server 301. In other words, custom resources on central cluster 902 may be created (1) directly or interactively by a user or agent interaction with configuration nodes 230, or (2) indirectly by an event caused by an orchestration platform native resource operation on one of workload clusters 930 and detected by the responsible one of SDN controller managers 303, which may responsively create custom resources in configuration store 920-C using custom API server 301 in order to implement the native resource on the workload cluster.)

executing, by the controller of the management cluster, a management process for the custom resource in the CO system in response to the match and the request.
([0290] With resources properly associated with their corresponding one of workload clusters 930, e.g., using a cluster identifier as described above, SDN controller managers 303 validate and allow users or agents to only use custom resources for SDN architecture configuration that belong to a namespace that is associated with the workload cluster. For example, if a user attempts to create a Pod in workload cluster 930-1 by issuing a request to API server 300-1, and an annotation for the Pod manifest specifies a particular virtual network “VN1”, then SDN controller manager 303-1 for workload cluster 930-1 will validate the request by determining whether “VN1” belongs to a namespace associated with workload cluster 930-1. If valid, then SDN controller manager 303-1 creates the custom resources for SDN architecture configuration using custom API server 301. Control nodes 232 configure the configuration objects for the new custom resources in workload cluster 930-1.)

Miriyala does not appear to explicitly teach:

determining, at the management cluster, a match between an intent identifier in the request and the intent identifier stored in the database; and

However, Baid teaches: ([0067] receiving, by a central controller executing in the cloud computing environment, an indication of a custom resource of the cloud computing environment, the custom resource defining a configuration for an object in one of the Kubernetes clusters in the cloud computing environment, the configuration including a desired state for the object, wherein the Kubernetes cluster containing the object is different from a Kubernetes cluster hosting the central controller; [0068] instantiating an API at the Kubernetes cluster that is to be accessed, the API operable to provide a state of the object and allow the central controller to cause an action by the object; [0069] in response to receiving, by the central controller via the API, an indication that a desired state of the object matches a current state of the object, indicating that the desired state has been reached; and otherwise: [0070] sending, via a corresponding API to a corresponding Kubernetes cluster, a message indicating an action to reconcile the desired state with the current state; and [0071] receiving, via the corresponding API, a message indicating that the action to reconcile the desired state with the current state has been completed.)

Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Miriyala and Baid before them, to include Baid’s state comparison, which determines that a desired state matches a current state, in Miriyala’s management cluster controller workflow. One would have been motivated to make such a combination to more efficiently manage actions and avoid acting on stale or mismatched requests.

Regarding claim 2, Baid teaches:

The method of claim 1, wherein the intent identifier stored in the database comprises a first intent identifier, the request comprises a first request, and the management process comprises a first management process, the method further comprising: receiving, from the manager at the management cluster, a second intent identifier different from the first intent identifier; replacing, by the management cluster, the first intent identifier with the second intent identifier in the database; receiving, from the manager at the management cluster, a second request that updates the state of the custom resource; determining, at the management cluster, another match between an intent identifier in the second request and the second intent identifier stored in the database; and executing, by the controller of the management cluster, a second management process for the custom resource in the CO system in response to the other match and the second request. ([0034] Referring to FIG. 2, illustrated is an example environment 200 where the disclosed techniques can be implemented. FIG. 2 illustrates a general-purpose Kubernetes controller 222 which is configured to receive the desired state information in a cluster 230. A custom resource object 201 can be created in a different cluster which can be acted upon by a domain-specific controller running in cluster 230. The general-purpose Kubernetes controller 222, in the central cluster where the controller is running, may communicate via API calls to Kubernetes clusters 230 in order to create, watch, update, and delete objects based on user-defined inputs in the custom resource object 201. The general-purpose Kubernetes controller 220 may receive as input metadata including the API, entity URL or identifier, access credentials, desired states, and other information needed to run the controller 220. In an embodiment, the files that describe the desired set of Kubernetes resources can be defined by a Helm chart 205 that contains input metadata 210.)

Same motivation as claim 1.

Regarding claim 3, Miriyala teaches:

The method of claim 2, wherein the management cluster stops the first management process in response to replacement of the first intent identifier with the second intent identifier. ([0290] With resources properly associated with their corresponding one of workload clusters 930, e.g., using a cluster identifier as described above, SDN controller managers 303 validate and allow users or agents to only use custom resources for SDN architecture configuration that belong to a namespace that is associated with the workload cluster. For example, if a user attempts to create a Pod in workload cluster 930-1 by issuing a request to API server 300-1, and an annotation for the Pod manifest specifies a particular virtual network “VN1”, then SDN controller manager 303-1 for workload cluster 930-1 will validate the request by determining whether “VN1” belongs to a namespace associated with workload cluster 930-1. If valid, then SDN controller manager 303-1 creates the custom resources for SDN architecture configuration using custom API server 301. Control nodes 232 configure the configuration objects for the new custom resources in workload cluster 930-1. If invalid, SDN controller manager 303-1 may delete the resource using API server 300-1.)

Regarding claim 4, Miriyala teaches:

The method of claim 1, wherein the manager executes in a datacenter and the management cluster executes in a site remote from the datacenter. ([0033] In general, one or more data center(s) 10 provide an operating environment for applications and services for customer sites 11 (illustrated as “customers 11”) having one or more customer networks coupled to the data center by service provider network 7. Each of data center(s) 10 may, for example, host infrastructure equipment, such as networking and storage systems, redundant power supplies, and environmental controls. Service provider network 7 is coupled to public network 15, which may represent one or more networks administered by other providers, and may thus form part of a large-scale public network infrastructure, e.g., the Internet. Public network 15 may represent, for instance, a local area network (LAN), a wide area network (WAN), the Internet, a virtual LAN (VLAN), an enterprise LAN, a layer 3 virtual private network (VPN), an Internet Protocol (IP) intranet operated by the service provider that operates service provider network 7, an enterprise IP network, or some combination thereof.)
Regarding claim 5, Miriyala teaches:

The method of claim 4, wherein the custom resource comprises a workload cluster configured to execute at the site or another site remote from the datacenter. ([0288] Virtual router agents for virtual routers 910 communicate with control nodes 232 to obtain routing and configuration information, as described elsewhere in this disclosure. When an orchestration platform native resource like Pod or Service is created on workload cluster 930-1, for example, SDN controller manager 303-1 running on central cluster 902 receives an indication of the create event and its reconciler(s) may create/update/delete custom resources for SDN architecture configuration such as VirtualMachine, VirtualMachineInterface, InstanceIP. In addition, SDN controller manager 303-1 may associate those new custom resources with a virtual network for the Pod or Service. This virtual network may be the default virtual network or a virtual network indicated (user annotated) in the manifest for the Pod or Service. [0289] Custom resources may have namespace scope for different clusters. Custom resources that are associated with one of workload clusters 930 will have a corresponding namespace created in central cluster 902 along with a cluster identifier (e.g., cluster name or a unique identifier), with custom resources created under this namespace. Cluster-scoped custom resources may be stored with a naming convention, such as clustername-resourcename-unique identifier. The unique identifier may be a hash of the clustername, namespace of the resource, and the resourcename.)

Regarding claim 6, Baid teaches:

The method of claim 1, wherein the request comprises a first request, the method further comprising: receiving, from the manager at the management cluster, a second request that updates the state of the custom resource; determining, by the management cluster, a difference between an intent identifier in the second request and the intent identifier stored in the database; and dropping, by the management cluster, the second request in response to the difference. ([0034] Referring to FIG. 2, illustrated is an example environment 200 where the disclosed techniques can be implemented. FIG. 2 illustrates a general-purpose Kubernetes controller 222 which is configured to receive the desired state information in a cluster 230. A custom resource object 201 can be created in a different cluster which can be acted upon by a domain-specific controller running in cluster 230. The general-purpose Kubernetes controller 222, in the central cluster where the controller is running, may communicate via API calls to Kubernetes clusters 230 in order to create, watch, update, and delete objects based on user-defined inputs in the custom resource object 201. The general-purpose Kubernetes controller 220 may receive as input metadata including the API, entity URL or identifier, access credentials, desired states, and other information needed to run the controller 220. In an embodiment, the files that describe the desired set of Kubernetes resources can be defined by a Helm chart 205 that contains input metadata 210. See also [0069]-[0082].)

Same motivation as claim 1.

Regarding claim 7, Miriyala teaches:

The method of claim 1, wherein the custom resource comprises a workload cluster, wherein the management process comprises deployment of the workload cluster, wherein the CO system comprises a data center and a plurality of sites, the management cluster disposed in the data center, the workload cluster disposed in the plurality of sites, the workload cluster executing containerized network functions (CNFs). ([0036] In some examples, each of data center(s) 10 may represent one of many geographically distributed network data centers, which may be connected to one another via service provider network 7, dedicated network links, dark fiber, or other connections. As illustrated in the example of FIG. 1, data center(s) 10 may include facilities that provide network services for customers. A customer of the service provider may be a collective entity such as enterprises and governments or individuals. For example, a network data center may host web services for several enterprises and end users. Other exemplary services may include data storage, virtual private networks, traffic engineering, file service, data mining, scientific- or super-computing, and so on. Although illustrated as a separate edge network of service provider network 7, elements of data center(s) 10 such as one or more physical network functions (PNFs) or virtualized network functions (VNFs) may be included within the service provider network 7 core.)

Regarding claim 8, Miriyala teaches the elements of claim 1 as outlined above. Miriyala also teaches:

A non-transitory computer readable medium comprising instructions to be executed in a computing device to cause the computing device to carry out a method of managing a custom resource in a container orchestration (CO) system, the method comprising: (Claim 20. A non-transitory computer-readable medium comprising instructions for causing processing circuitry to)

Regarding claims 9-14, these claims recite similar limitations as corresponding claims 2-7, respectively, and are rejected for similar reasons using similar teachings and rationale.

Regarding claim 15, Miriyala teaches the elements of claim 1 as outlined above. Miriyala also teaches:

A computer system, comprising: (Claim 1. A network controller for a software-defined networking (SDN) architecture system, the network controller comprising:)

Regarding claims 16-20, these claims recite similar limitations as corresponding claims 2-6, respectively, and are rejected for similar reasons using similar teachings and rationale.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CARLOS A ESPANA, whose telephone number is (703) 756-1069. The examiner can normally be reached Monday - Friday, 8 a.m. - 5 p.m. EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, LEWIS BULLOCK JR, can be reached at (571) 272-3759. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/C.A.E./ Examiner, Art Unit 2199
/LEWIS A BULLOCK JR/ Supervisory Patent Examiner, Art Unit 2199
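The intent-identifier gate at the heart of claims 1, 2, and 6 (execute the management process on a match, drop the request on a mismatch, and let a new identifier replace the old one) can be sketched as a toy controller handler. This is a minimal illustration only; the names here (intentStore, HandleUpdate, errStaleIntent) are hypothetical and appear in neither reference nor the application:

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

// intentStore is a hypothetical stand-in for the management cluster's
// database entry holding the currently active intent identifier.
type intentStore struct {
	mu     sync.Mutex
	intent string
}

// Set replaces the stored intent identifier (the "replacing" step of
// claim 2); requests keyed to the old identifier become stale.
func (s *intentStore) Set(id string) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.intent = id
}

var errStaleIntent = errors.New("request intent does not match stored intent; dropping")

// HandleUpdate models the claimed gate: run the management process only
// when the request's intent identifier matches the stored one (claim 1),
// and drop mismatched requests (claim 6).
func (s *intentStore) HandleUpdate(requestIntent string, process func()) error {
	s.mu.Lock()
	match := s.intent == requestIntent
	s.mu.Unlock()
	if !match {
		return errStaleIntent
	}
	process()
	return nil
}

func main() {
	db := &intentStore{}
	db.Set("intent-A")

	run := func() { fmt.Println("management process executed") }

	fmt.Println(db.HandleUpdate("intent-A", run)) // match: process runs, error is nil
	db.Set("intent-B")                            // manager supplies a new intent
	fmt.Println(db.HandleUpdate("intent-A", run)) // stale request is dropped
}
```

Note the contrast with Baid's comparison, which matches a desired object state against a current state; the claims instead match an opaque identifier carried by the request against one stored in a database, which is the gap the rejection bridges by combination.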

Prosecution Timeline

Oct 05, 2023
Application Filed
Feb 27, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12554553
DYNAMIC SCALING FOR WORKLOAD EXECUTION
2y 5m to grant; granted Feb 17, 2026
Patent 12541404
ADMISSION CONTROL BASED ON UNIVERSAL REFERENCES FOR HARDWARE AND/OR SOFTWARE CONFIGURATIONS
2y 5m to grant; granted Feb 03, 2026
Patent 12511126
DATA PROCESSING SYSTEM, DATA PROCESSING METHOD, AND DATA PROCESSING PROGRAM
2y 5m to grant; granted Dec 30, 2025
Patent 12474952
TRAFFIC MANAGEMENT ON AN INTERNAL FABRIC OF A STORAGE SYSTEM
2y 5m to grant; granted Nov 18, 2025
Patent 12436790
SCALABLE ASYNCHRONOUS COMMUNICATION FOR ENCRYPTED VIRTUAL MACHINES
2y 5m to grant; granted Oct 07, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 74% (91% with interview, +17.5%)
Median Time to Grant: 3y 6m
PTA Risk: Low
Based on 23 resolved cases by this examiner. Grant probability derived from career allow rate.
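The "with interview" figure is consistent with treating the interview lift as an additive bump on the career allow rate; a minimal sketch of that arithmetic (17/23 and +17.5 points are taken from the page above):

```go
package main

import "fmt"

func main() {
	// Career allow rate: 17 grants out of 23 resolved cases.
	base := 17.0 / 23.0 // ≈ 0.739, displayed as 74%
	lift := 0.175       // interview lift, +17.5 percentage points

	withInterview := base + lift // ≈ 0.914, displayed as 91%
	fmt.Printf("base %.0f%%, with interview %.0f%%\n", base*100, withInterview*100)
}
```

Both displayed percentages round from these values, which is why 74% + 17.5 points shows as 91% rather than 91.5%.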
