Prosecution Insights
Last updated: April 19, 2026
Application No. 17/976,898

AUTOMATIC DISCOVERY OF APPLICATION RESOURCES FOR APPLICATION BACKUP IN A CONTAINER ORCHESTRATION PLATFORM

Final Rejection — §103, §112
Filed: Oct 31, 2022
Examiner: TALUKDAR, ARVIND
Art Unit: 2132
Tech Center: 2100 — Computer Architecture & Software
Assignee: VMware, Inc.
OA Round: 4 (Final)
Grant Probability: 81% (Favorable)
Expected OA Rounds: 5-6
Time to Grant: 2y 9m
With Interview: 84%

Examiner Intelligence

Career Allow Rate: 81% — above average (449 granted / 557 resolved; +25.6% vs TC avg)
Interview Lift: +3.5% — minimal (based on resolved cases with interview)
Typical Timeline: 2y 9m average prosecution; 36 applications currently pending
Career History: 593 total applications across all art units

Statute-Specific Performance

§101: 7.9% (-32.1% vs TC avg)
§103: 51.5% (+11.5% vs TC avg)
§102: 15.1% (-24.9% vs TC avg)
§112: 13.8% (-26.2% vs TC avg)
Tech Center averages are estimates. Based on career data from 557 resolved cases.

Office Action

Rejections: §103, §112
DETAILED ACTION

Claims 1, 5-6, 11-12, 18 are amended. Claims 8-9, 16-17, 20 were previously canceled. Claims 1-7, 10-15, 18-19, 21-25 are pending. Priority: July 22, 2022. Assignee: VMware.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-7, 10-15, 18-19, 21-25 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. Particularly, claims 1-7, 10-15, 18-19, 21-25 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being incomplete for omitting essential structural cooperative relationships of elements, such omission amounting to a gap between the necessary structural connections. See MPEP § 2172.01. The omitted structural cooperative relationships are described below:

1. Amended claim 1 is rejected for reciting a limitation that is incorrect and indefinite. Note: In the Remarks, the Applicant does not mention the relevant specification paragraph(s) that recite the amendment(s). Amended claim 1 recites, 'constructing, …, a comprehensive resource hierarchy of application resources…'. The spec does not recite this limitation. Para-0018 of the spec recites, 'Technology described below addresses these deficiencies by automatically identifying a comprehensive hierarchy of application resources in the container orchestration platform… needed for data protection or recovery'. This recitation, which is the basis of the incorrect amendment, is associated with data protection/backup and recovery, and not with constructing the (initial) resource hierarchy. The word 'identifying' from the above citation has been substituted with 'constructing', and the new amendment has been formulated. But the formulation is out of context, hence incorrect. The incorrect formulation leads to uncertainty about the effectiveness of the controller in constructing a reliable resource hierarchy of the (Kubernetes) application, as required by spec, Fig. 6, step 640. Since claim 1 has been incorrectly amended, dependent claims 4, 5, 6, 7, 15, 19, 22, 25 are inconsistent. See objection below. It has also led to the other 112(b) issues described below. Hence claim 1 is rejected for reciting a limitation that is incorrect and indefinite. Claims 12, 18 also have the same issue. For examination, the spec is used.

2. Amended claim 1 is rejected for reciting a limitation that is incorrect and indefinite. Note: In the Remarks, the Applicant does not mention the relevant specification paragraph(s) that recite the amendment(s). Amended claim 1 recites, 'constructing, by the… controller, a comprehensive resource hierarchy of application resources based on the pod… owner object of the pod…'. The spec does not recite this limitation. The spec does not recite constructing a comprehensive resource hierarchy of application resources. Spec, Fig. 6, Para-0068 clearly recites, 'At 640, a resource hierarchy of the application is constructed based on the pod, the owner object of the pod, and the resources mounted on the pod and on the owner object of the pod'. Other Paras-0005, 0044 also recite the same thing. The 'application' is a Kubernetes application, as per spec Para-0014. Furthermore, spec Para-0083 recites, 'The resources mounted on the pod and on the owner object of the pod comprise one or more of a persistent volume claim, a local storage device, a path on a host device, a secret, or a ConfigMap'. The spec does not recite that these resources are 'application resources'. The spec does not recite that 'application' (i.e., a Kubernetes application) is synonymous with 'application resources'. Therefore, based on Paras-0005, 0044, 0068, the amendment recites an invalid representation of the hierarchy, leading to uncertainty about the scope of the claim and how effectively the controller interacts with the API server to perform its claimed functions to produce correct results. Hence claim 1 is rejected for reciting a limitation that is incorrect and indefinite. Claims 12, 18 also have the same issue. For examination, the spec is used.

3. Amended claim 1 is rejected for reciting a limitation that is incorrect and indefinite. Note: In the Remarks, the Applicant does not mention the relevant specification paragraph(s) that recite the amendment(s). Claim 1 recites, 'the application resources comprising application data'. The spec does not recite this limitation. When constructing the resource hierarchy of the (Kubernetes) application, spec Para-0083 does not recite that the resources mounted on the pod and its owner are associated with 'application data'. More importantly, spec Paras-0014, 0015, 0028 associate 'application data' with backup, and not with constructing the initial resource hierarchy of the application. For example, Para-0015 recites, 'Application resources for application backup in the container orchestration platform… application resources can include objects, configurations, application data, and other workload or API resources that make up the application'. Though not new matter, the amendment, being inconsistent with the spec, raises uncertainty about the claim scope. Hence claim 1 is rejected for reciting a limitation that is incorrect and indefinite. Claims 12, 18 have a similar issue. For examination, the spec is used.

Where applicant acts as his or her own lexicographer to specifically define terminology or limitations of a claim contrary to its ordinary meaning, the written description must clearly redefine the claim terminology/limitation and set forth the uncommon definition so as to put one reasonably skilled in the art on notice that the applicant intended to so redefine that claim terminology/limitation. Process Control Corp. v. HydReclaim Corp., 190 F.3d 1350, 1357, 52 USPQ2d 1029, 1033 (Fed. Cir. 1999). The following terminology/limitation is indefinite because the specification does not clearly redefine the terminology/limitation. This is explained below.

Particularly, amended claim 11 is rejected for reciting a limitation that is unclear, ambiguous and indefinite. Note: In the Remarks, the Applicant does not mention the relevant specification paragraph(s) that recite the amendment(s). Claim 11 recites, 'wherein the resource hierarchy identifies resources associated with the application to avoid missing any critical resources when backing the resources of the application'. Here it is unclear what 'when backing the resources…' means. It is unclear if 'when backing…' suggests that backing up/backup is in progress. The spec does not recite the amendment. Spec, Para-0004 recites, 'For data protection in scenarios such as disaster recovery, …, a successful recovery of an application may require identifying all the resources associated with the application to be backed up to avoid missing any critical resources'. And Para-0018 recites, '… and can help avoid missing resources that are needed for data protection or recovery'. Amended claim 1 recites identifying a pod, checking resources… in limitations #1 and #3, and performing the backup in the last limitation, #8. And as per Paras-0004, 0018 of the spec, the identifying happens before backup, not when backup is in progress or 'when backing'. Hence reciting 'identifies resources… when backing the resources' leads to uncertainty about the claim scope. Accordingly, claim 11 is rejected for reciting a limitation that is unclear, ambiguous and indefinite.

Claim objections

1. Claims 4, 5, 6, 7, 15, 19, 22, 25 are objected to for reciting a limitation that is inconsistent. As shown above in the 112(b), amended claim 1 recites an incorrect limitation: 'constructing a … resource hierarchy of application resources'. However, this has created an inconsistency with dependent claims 4, 5, 6, 7, 15, 19, 22, 25, as they recite 'resource hierarchy of the application'.

2. Claim 14 is objected to for reciting a limitation that is inconsistent with the spec. Note: In the Remarks, the Applicant does not mention the relevant specification paragraph(s) that recite the amendment(s). Claim 14 recites, 'determining an owner object of the owner object of the pod, wherein the resource hierarchy is further based on the owner object of the owner object of the pod'. The spec does not recite this limitation. As shown below, the recitation, 'wherein the resource hierarchy is further based on the owner object of the owner object of the pod', is inconsistent with the spec. Para-0067 of the spec recites, 'determining an owner object of the owner object of the pod, and wherein constructing the resource hierarchy of the application based on the pod, the owner object of the pod, and the resources mounted on the pod and on the owner object of the pod comprises constructing the resource hierarchy of the application based on the pod, the owner object of the pod, and the owner object of the owner object of the pod'. The spec only discloses bottom-up processing. Given the above spec recitation, claim 14 is inconsistent. For examination, the spec is used. Note: This issue was pointed out in the previous O/A but it was not fixed.
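For technical context on the disputed 'owner object of the pod' and 'resources mounted on the pod' language: in Kubernetes these concepts map onto a pod's metadata.ownerReferences and its spec.volumes. Below is a minimal illustrative sketch using the Python kubernetes client; it is not the applicant's claimed implementation, and the helper name build_resource_hierarchy and the hierarchy layout are assumptions for illustration.

    from kubernetes import client, config

    # Assumes a reachable cluster via a local kubeconfig; inside a pod,
    # config.load_incluster_config() would be used instead.
    config.load_kube_config()
    v1 = client.CoreV1Api()

    def build_resource_hierarchy(namespace: str, pod_name: str) -> dict:
        """Sketch of the traversal spec Fig. 6, step 640 describes:
        pod -> owner object of the pod -> resources mounted on the pod."""
        pod = v1.read_namespaced_pod(pod_name, namespace)
        hierarchy = {
            "pod": pod_name,
            # Owner object(s) of the pod, e.g. a ReplicaSet or StatefulSet.
            "owners": [{"kind": r.kind, "name": r.name}
                       for r in (pod.metadata.owner_references or [])],
            # Resources mounted on the pod (cf. spec Para-0083).
            "mounted": [],
        }
        for vol in pod.spec.volumes or []:
            if vol.persistent_volume_claim:
                hierarchy["mounted"].append(
                    {"kind": "PersistentVolumeClaim",
                     "name": vol.persistent_volume_claim.claim_name})
            elif vol.secret:
                hierarchy["mounted"].append(
                    {"kind": "Secret", "name": vol.secret.secret_name})
            elif vol.config_map:
                hierarchy["mounted"].append(
                    {"kind": "ConfigMap", "name": vol.config_map.name})
        return hierarchy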
Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 6-7, 10-12, 14-15, 18-19 are rejected under AIA 35 U.S.C. 103(a) as being unpatentable over Tal et al (20210200814) in view of Raut et al (10944691) and Burns et al ('Managing Kubernetes: Operating Kubernetes Clusters in the Real World', 2019, O'Reilly, pgs. 1-154).

As per Claim 1, Tal discloses a computer-implemented method (Tal, [0224 - Fig. 8G shows a discovery process, used to discover resources associated with a containerized application platform, such as the containerized application platform 609 shown in Fig. 6C. The discovery process includes a remote network management platform interacting with a Fig. 7 computing cluster 604, using an API]), comprising:

identifying, by an application discovery controller deployed (Tal, [0222 – In Fig. 8F, remote network management platform/RNMP 320 determines whether containerized application platform 609 in Fig. 6C shares the namespace associated with containerized orchestration engine 680]; [0150, 0151 – Resource Manager, Master Node 606 = containerized orchestration engine 680 + containerized application platform 609]; [0168 - Computing cluster 604 forms part of RNMP 320 and is collocated therewith; so application discovery controller = RNMP 320 + Master Node 606; since the spec does not recite how the controller is 'deployed' in a COP, the citation is a valid interpretation]) in a container orchestration platform (Tal, [0011 - containerization orchestration platform]; [0151 – In Fig. 5C, containerized application platform 609 streamlines management and deployment of containerized applications across computing cluster 604 using the containerized orchestration engine 680]), a pod of an application deployed in a container orchestration platform (Tal, [0148 – Fig. 6A: pod 620A is a Kubernetes pod and includes containers 622A]; [0237 - RNMP 320/controller queries a route API associated with the containerized application platform 609 to obtain route data associated with a route of the containerized application platform 609. The route refers to/identifies a pod; since the claim does not recite how the 'identifying' is done, the citations imply identifying a pod of an application deployed in a container orchestration platform]);

determining, by the application discovery controller (Tal, [Figs. 6C, 7: RNMP 320 + Master Node 606]), an owner object of the pod (Tal, [0199 – In Fig. 8C, master node 606 periodically polls worker node 612D to determine the state of the pods executing thereon, thereby implying an association between the worker node 612D/owner and the pod]; [0203 - The stored traffic data indicates that pod 808C is hosted by worker node 612C and not by worker node 612D, thereby implying worker node 612C as owner]; [0195 – Fig. 9A shows that requests from remote user 602 are handled by application 800 executing in pod 806 on worker node 612A. Also see Fig. 6B, where a developer/user accesses/owns his resources]);

checking, by the application discovery controller (Tal, [Figs. 6C, 7: RNMP 320 + Master Node 606]), resources mounted on the pod and on the owner object of the pod in the container orchestration platform (Tal, [Fig. 8F]; [0250 – In Fig. 10, analyzing traffic data to determine communicative relationships between worker nodes/owners and pods]; [0247 – In Fig. 10, step 1000 involves requesting and receiving, from a worker node/owner and by a computing device disposed within a remote network management platform that manages a managed network, configuration data identifying containerized software applications/resources executing on the one or more worker nodes, thereby implying checking resources mounted on the pod and on the owner object of the pod]);

constructing, by the application discovery controller (Tal, [Figs. 6C, 7: RNMP 320 + Master Node 606]), a comprehensive (Tal, [See 112(b)]; [0123 – In Fig. 5A, once discovery completes, a snapshot representation of each discovered device, application, and service is available in CMDB 500, thereby implying that the discovered data is complete/comprehensive and ready for construction]) resource hierarchy of the application resources (Tal, [0247-0261 - In Fig. 10, the resource graph/hierarchy is generated by the controller, and the configuration data and mappings/relationships are stored in a database; similar to Para-0031 of the spec]; [0250 – In Fig. 10, analyzing traffic data to determine communicative relationships between worker nodes/owners and pods]) based on the pod, the owner object of the pod, and the resources mounted on the pod and on the owner object of the pod (Tal, [0175 - Packet detection modules 700A and 700B store traffic data indicative of the extent of data transmitted between software applications 624A and 624B, containers 622A and 622B, pods 620A and 620B, and worker nodes 612A and 612B]; [0190 - The network traffic between the first user, pod 806, pod 808A, and pod 810 is monitored by packet detection modules disposed on worker nodes 612A, 612B, and 612C to generate and store traffic data indicative of communicative relationships between software applications 800, 802, and 804, their containers, pods, and/or worker nodes]);

the application resources comprising application data (Tal, [See 112(b)]; [0218 – In Fig. 8F, after identifying that there is an associated containerized application platform 609, the process may involve discovering the resources of the containerized application platform 609]; [0157 – In Fig. 6B, the browser interface includes, among other things, a list of applications running across one or more clusters using the containerized orchestration engine/containerized application platform 609, resources being used to execute the containerized software application, and storage devices being used to store information about the containerized software application, e.g., application data, source code, builds, etc. The browser interface also provides for building and deploying versions of the application, thereby implying the association with backup as per spec Paras-0014, 0015]);

monitoring, by the application discovery controller, the pod (Tal, [Fig. 8C: check pod 808B implies monitoring pod 808B by the controller]; [0199 – In Fig. 8C, master node 606 periodically polls worker node 612D to determine the state of the pods]) for changes (Tal, [0199 – In Fig. 8C, after termination of pod 808B, the master node determines that pod 808B is no-longer-executing (a change) on worker node 612D and, in response, deploys a replacement pod for software application 802 to one of the other worker nodes in computing cluster 604]) after constructing the resource hierarchy (Tal, [Fig. 10 is the initial construction]; [0251 – In Fig. 10, last step 1008 involves storing, in a database within RNMP 320 and by the computing device, the configuration data and the mappings, thereby implying constructing and storing the resource hierarchy]);

updating, by the application discovery controller and in response to detecting a change event on the pod, the resource hierarchy (Tal, [0007 - Master node/controller manages the distribution of pods when they are destroyed and replaces them]; [0200 – In Fig. 8C, after the termination and replacement of the pod, the change/update in the distribution of pods across computing cluster 604 is identified based on the configuration data and additional traffic data generated by packet detection modules disposed on worker nodes 612A, 612B, 612C, thereby implying an updated resource hierarchy]; [Fig. 9A shows how the resource hierarchy changes/updates with time]; [0197 - When displayed on user interface 900, the nodes of the graph are interactive, allowing for the level within the hierarchy represented by each node to be modified, thereby implying generating a change event by the user, detection of the event, and updating the resource hierarchy; here a change event can be addition, deletion, move, update, etc. of platform objects such as a pod]);

identifying a backup specification (Tal, [0017 - The deployment configuration provides a deployment template/backup specification/policy by which executable images of the containerized software applications are deployed/propagated/backed-up across the one or more pods using a replication/backup controller of the containerized orchestration engine. Also see Para-0161]) for backup of the application (Tal, [0208 – In Fig. 8E, step 842, S2I 665 of the containerized application platform 609 creates a new image based on the updated source code using directions stored within the S2I 665]; [0209 - At step 843, the build configuration 663 constantly monitors S2I 665 to identify when a new image is present and then retrieves it, thereby implying identifying/needing a backup specification/policy for backup of the application, because at step 840 the developer updated source code associated with the application and a new application build image is available]);

backing up resources of the application based on the backup specification (Tal, [0212-0215 – In Fig. 8E, step 846, the new/updated image is retrieved and, based on it, at step 847 a new deployment template is created. At step 849, the replication/backup controller 681 deploys/backs-up the deployment template created using the deployment configuration 661 to one or more of the pods 682. The replication controller 681 has instructions regarding how to deploy and monitor containerized software applications/resources in deployments across the pods]) and the updated resource hierarchy of the application (Tal, [0204 – In Fig. 9B, GUI 900 is updated to show the communicative relationships after replacement of pod 808B with pod 808C, thereby implying an updated resource hierarchy]; [0150 - API 608 allows operator 600 to specify a desired number of copies of software applications 624A and/or 624B, i.e., a deployment configuration, to be executed by computing cluster 604, and to roll out updates to software applications 624A and/or 624B; here deployment configuration 661, which provides the backup policy, is associated with the resource hierarchy. See at least Para-0182. This implies backing up resources of the application based on the backup specification/template/policy and the updated resource hierarchy of the application. Since the claim does not define 'backup specification', the citation is a valid interpretation. Also, the spec does not recite how this limitation is implemented]).
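In Kubernetes terms, the 'monitoring … for changes' and 'updating … in response to detecting a change event' limitations mapped above correspond to a watch on the API server. A hedged sketch using the Python kubernetes client follows; update_hierarchy is a hypothetical callback, not an element of the claims or the cited art.

    from kubernetes import client, config, watch

    config.load_kube_config()  # assumes a reachable cluster/kubeconfig
    v1 = client.CoreV1Api()

    def monitor_pods(namespace: str, update_hierarchy) -> None:
        """Stream pod change events and hand them to a hierarchy updater."""
        w = watch.Watch()
        # Each event is a dict: {"type": "ADDED"|"MODIFIED"|"DELETED",
        # "object": V1Pod}.
        for event in w.stream(v1.list_namespaced_pod, namespace=namespace):
            pod = event["object"]
            if event["type"] in ("MODIFIED", "DELETED"):
                update_hierarchy(pod.metadata.name, event["type"])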
Raut clarifies the connectivity-based resource checking, resource hierarchy, and updated resource hierarchy as follows:

checking (Raut, [Figs. 7-9 show container-based connectivity checks]), by the application discovery controller (Raut, [Fig. 1: container plugin 110/controller]), resources mounted on the pod (Raut, [Col. 12, lines 37-45 – In Fig. 7, step 710, container plugin 110 detects a request for a connectivity check between a first container-based resource, POD4 144, and a second container-based resource, POD3 143. At step 720, a first logical network element, LP4 154, associated with the first container-based resource and a second logical network element, LP3 153, associated with the second container-based resource are identified, thereby identifying POD4 144 with resource LP4 154]) and on the owner object of the pod (Raut, [Col. 9, lines 36-40 - Fig. 1: workerNode 122 (owner) for POD4 144]; [Fig. 4: step 422 res=masterNode, lne=LP(type:parent); step 423 res=pod, lne=LP(type:child)]) in the container orchestration platform (Raut, [Fig. 1: container orchestration system 101]);

monitoring, by the application discovery controller, the pod for changes (Raut, [Col. 8, lines 10-17 – In Fig. 4, step 405, container plugin 110/controller monitors events on container orchestration system 101, including configuration of container-based resources and container-based network policies. Container plugin 110 monitors an API server supported by container orchestration system 101 for any CREATE, READ, UPDATE and REMOVE events. When a user creates or modifies a container-based resource, a corresponding CREATE or UPDATE event may be detected by container plugin 110]) after constructing the resource hierarchy (Raut, [Col. 3, lines 15-16 - Fig. 1 shows the resource hierarchy. The container-based resources are cluster 120, master node 121, worker nodes 122-123, and pods 141-144]; [Col. 7, lines 6-8 – In Fig. 3, which is executed by the container plugin 110/controller, the term 'configure' at steps 310 and 340 refers to creating a new entity]; [Col. 1, lines 40-42 - Fig. 3 performs container-based network policy configuration in the SDN environment]; [Col. 7, lines 11-16 – In Fig. 3, step 310, container plugin 110 detects a first request, 181 in Fig. 1, to assign a container-based resource with a first label via container orchestration system 101. In Fig. 1, container-based resource = master node 121 in cluster 120 is associated with logical network element = logical switch port LP1 151, thereby implying that the steps of Fig. 3 construct the resource hierarchy based on network policy]);

updating, by the application discovery controller and in response to detecting a change event on the pod (Raut, [Col. 8, lines 15-17 - When a user modifies a container-based resource, a corresponding UPDATE/change event is detected by container plugin 110]; [Col. 12, lines 65-67 - The container-based network policy may be a firewall rule that is configured to allow or block communication between the container-based resources, thereby implying an update change event]), the resource hierarchy (Raut, [Col. 3, lines 43-48 - By monitoring container orchestration system 101, container plugin 110 detects events, e.g., update and delete, associated with container-based resources and translates the desired state into the necessary configuration/hierarchy of logical network elements via SDN manager 102, thereby implying updating the hierarchy]; [Col. 5, lines 65-67 – In Fig. 2, the mapping/hierarchy may change, such as when the logical port is mapped to a different virtual port on a different virtual switch after migration of the corresponding VM]; [Col. 10, lines 53-58 - In Fig. 6, the network policy is used to define whitelist rules to allow traffic matching a set of match fields. In practice, the network policy may also be used to define blacklist rules/firewall to block traffic matching a set of match fields, thereby implying that the blacklist rules or changed 'connections' between resources result in updating the resource hierarchy]).

Tal discloses analyzing network traffic data to find the connectivity between resources. Raut performs further connectivity checks. Therefore it would have been obvious to a person of ordinary skill at the time of filing to incorporate the resource hierarchy update of Raut into the containerized platform of Tal for the benefit of configuring and managing container-based network policies in the SDN environment, wherein the container-based network policy refers to a set of rules that define how a container-based resource may communicate with another container-based resource, such as between a node and a pod, among multiple nodes or pods, etc., thereby updating the resource hierarchy (Raut, Col. 6, lines 50-56).

Burns further clarifies application resources and application data as follows: the application resources (Burns, [Pg. 92, Para-4 - Resources in Kubernetes are constructs such as Pods, Services, and Deployments]; [Pg. 10, Sec. Kubernetes API, Para-2 - Basic objects: Pods, ReplicaSets, and Services; similar to spec, Para-0015]; [Pg. 13, Para-5 - A ConfigMap represents a collection of configuration files. In Kubernetes, different configurations exist for the same container image; similar to spec, Para-0015]) comprising application data (Burns, [Pg. 143, Sec. Persistent Volumes, Para-3 - Implementing the backup of application data depends on the implementation chosen for how volumes are presented to Kubernetes; this is similar to spec, Para-0015]; [Pg. 145, Sec. Ark - Ark, a tool for backup and recovery, also serves as a framework for managing application data]). Therefore it would have been obvious to a person of ordinary skill at the time of filing to incorporate the application data of Burns into the containerized platform of Tal, Raut for the benefit of recovering stateful Pods by also recovering any persistent data associated with those Pods (Burns, Pg. 142).

As per Claim 6, the rejection of claim 1 is incorporated, and Tal, Raut, Burns disclose determining an owner object of the owner object of the pod (Raut, [Fig. 1: master node 121]; [Col. 9, lines 4-13 – In Fig. 4, step 422, in response to detecting a first request 510 to configure container-based resource = master node 121 for cluster 120, logical network element = LP1 151 is configured. In this case, master node 121 is implemented using VM1 131, and LP1 151 is a logical switch port of type=parent. Through container orchestration system 101, master node 121 is assigned a first label specifying key-value pair 'nodeType: master' to indicate its node type]; [Col. 1, lines 40-44 - Figs. 3-4 perform container-based network policy configuration in the SDN environment, thereby implying that the nodeType: master is determined based on network policy]); wherein constructing the resource hierarchy of the application based on the pod, the owner object of the pod, and the resources mounted on the pod and on the owner object of the pod (Raut, [Fig. 1]) comprises constructing the resource hierarchy of the application (Raut, [Col. 7, lines 6-8 – In Fig. 3, which is executed by the container plugin 110/controller, the term 'configure' at steps 310 and 340 refers to creating a new entity]; [Col. 7, lines 11-16 – In Fig. 3, step 310, container plugin 110/controller detects a first request, 181 in Fig. 1, to assign a container-based resource with a first label via container orchestration system 101, thereby implying that the steps of Fig. 3 construct the resource hierarchy]) based on the pod (Raut, [Fig. 1: pod 143]), the owner object of the pod (Raut, [Col. 9, lines 36-40 - Fig. 1: workerNode 122 (owner) for POD4 144]; [Fig. 4: step 422 res=masterNode, lne=LP(type:parent); step 423 res=pod, lne=LP(type:child)]), and the owner object (Raut, [Fig. 1: master node 121]) of the owner object of the pod (Raut, [Fig. 1: worker node 123]). Therefore it would have been obvious to a person of ordinary skill at the time of filing to incorporate the resource hierarchy update of Raut into the containerized platform of Tal, Burns for the benefit of configuring and managing container-based network policies in the SDN environment, wherein the container-based network policy refers to a set of rules that define how a container-based resource may communicate with another container-based resource, such as between a node and a pod, among multiple nodes or pods, etc., thereby updating the resource hierarchy (Raut, Col. 6, lines 50-56).

As per Claim 7, the rejection of claim 1 is incorporated, and Tal discloses, wherein constructing the resource hierarchy of the application (Tal, [0197 - When displayed on user interface 900, the nodes of the graph are interactive, allowing for the level within the hierarchy represented by each node to be modified. For example, a node representing a pod may be clicked or otherwise selected to view the containerized software applications/resources executing therein]) comprises creating a data structure to represent the resource hierarchy of the application (Tal, [0196 – In Fig. 9A, the mapping shown in user interface 900 is stored in a database as a graph/data structure, with worker node/parent, pod/child, and software application represented as nodes or hierarchical sub-nodes, and each communicative relationship represented as a link between corresponding nodes or sub-nodes]).

As per Claim 10, the rejection of claim 1 is incorporated, and Tal discloses identifying one or more namespaces of the resources of the application (Tal, [Fig. 8F]; [0011 - After executing a discovery of a containerized orchestration engine, a remote network management platform discovers an associated containerized application platform using the namespace associated with the resources of the previously discovered containerized orchestration engine]); identifying resources within the one or more namespaces (Tal, [0223 – In Fig. 8F, step 868, if a containerized application platform shares the namespace associated with the containerized orchestration engine, initiating, by the remote network management platform, discovery of resources associated with the containerized application platform based on the shared namespace]; [0011 - If, when attempting to discover resources associated with a containerized application platform, it is determined that no containerized application platform shares the namespace with the containerized software engine, then it is determined that no associated containerized application platform exists and the discovery process terminates. If, however, there are one or more resources associated with a containerized application platform that share the namespace with the containerized orchestration engine, the discovery process proceeds by identifying those resources; thereby the citations imply identifying resources within the one or more namespaces]); identifying the backup specification comprises identifying the backup specification (Tal, [0214 – In Fig. 8E, step 848, the replication/backup controller 681 retrieves the deployment template from deployment configuration 661]) using the one or more namespaces and the resources within the one or more namespaces (Tal, [0166 - APIs such as REST APIs are queried to identify one or more objects within the containerized namespace 650 of Fig. 6C associated with the containerized orchestration engine 680. The APIs relate to one or more of the following entities/resources within the containerized namespace 650 and containerized orchestration engine 680 of Fig. 6C, namely, replication controller 681, pods 682, services 683, persistent volume claims 684, persistent volumes 685, and cluster 686]).

As per Claim 11, the rejection of claim 1 is incorporated, and Tal, Raut disclose configuring and updating a container-based resource hierarchy. Burns further discloses, wherein the resource hierarchy identifies resources (Burns, [Pgs. 86, 112, 87 - $ kubectl config view; $ kubectl get pods (shows all pods); $ kubectl get pod <pod-name> -o yaml (shows the owner of a pod)]; [Pg. 117 - topology of network]; [Pg. 136, Sec. Monitoring Kubernetes, Applications - Kubernetes Service discovery is used to automatically discover and monitor the Kubernetes components in the cluster]; [Pg. 24, Fig. 3.2, Para-4 - Annotations are general metadata about the object, e.g., the icon to display next to the object when it is rendered graphically]) associated with the application (Burns, [Pg. 76, Sec. Users, Para-1 - kubectl command-line interface/tool; it is well known in the prior art that the kubectl tool allows users to run commands against Kubernetes clusters to manage applications, inspect/view resources, and view logs]) to avoid missing any critical resources (Burns, [Pg. 22, Paras-1, 2 - To achieve automatic self-healing or self-correcting behaviors, Kubernetes is structured based on a large number of independent reconciliation or control loops, thereby implying missing critical resources]) when backing the resources (Burns, [Pg. 95, Para-4 - A Kubernetes controller is interested in watching resources across namespaces and then reconciling cluster states appropriately. ClusterRole policies ensure that controllers only have access to the resources they care about/critical resources]; [Pg. 95, Para-2 – ClusterRole grants very specific permissions to a Kubernetes controller]) of the application (Burns, [Pg. 145, Sec. Ark - A purpose-built tool that is widely used for backup and recovery of Kubernetes clusters is Ark, from Heptio. Ark performs backups by way of the Kubernetes API itself. This ensures that the data is always consistent, thereby implying that misses have been avoided when backing up the application]). Therefore it would have been obvious to a person of ordinary skill at the time of filing to incorporate the ClusterRole of Burns into the containerized platform of Tal, Raut for the benefit of supporting consistent backup and allowing for selective backup strategies such as partial backup and restore, restoration to a new environment, partial restoration, persistent data backup, scheduled backups, and off-cluster backups (Burns, Pg. 146).

As per Claim 12, Tal discloses a non-transitory, computer-readable medium storing one or more instructions executable by a computer system to perform operations (Tal, [0065 – In Fig. 1, memory 104 stores program instructions and data on which program instructions operate. For example, memory 104 stores these program instructions on a non-transitory, computer-readable medium, such that the instructions are executable by processor 102 to carry out methods, processes, or operations]), the operations comprising: The remaining limitations are similar to claim 1 and therefore the same rejections are incorporated.

As per Claim 14, it is similar to claim 6 and therefore the same rejections are incorporated. As per Claim 15, it is similar to claim 7 and therefore the same rejections are incorporated.

As per Claim 18, Tal discloses a computer-implemented system (Tal, [0168 - Fig. 7 shows a system for discovery and mapping of software applications executing on a platform for hosting containerized software applications]), comprising: one or more computers (Tal, [Fig. 7: worker nodes 612A and 612B]; [Fig. 1: computer 602]); one or more computer memory devices inter-operably coupled with the one or more computers and having tangible, non-transitory, machine-readable media storing one or more instructions that (Tal, [0065 – In Fig. 1, memory 104 stores program instructions and data on which program instructions operate. For example, memory 104 stores these program instructions on a non-transitory, computer-readable medium, such that the instructions are executable by processor 102 to carry out methods, processes, or operations]), when executed by the one or more computers, perform one or more operations (Tal, [0070 - In Fig. 2, operations of computing device 100 are distributed between server devices 202, data storage 204, and routers 206, all of which are connected by local cluster network 208]), the one or more operations comprising: The remaining limitations are similar to claims 1, 12 and therefore the same rejections are incorporated.

As per Claim 19, it is similar to claim 7 and therefore the same rejections are incorporated.
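Claim 10's namespace steps (identifying namespaces of the application's resources, identifying resources within them, and identifying the backup specification using both) can be pictured with the standard Kubernetes list APIs. The sketch below, in the Python kubernetes client, is illustrative only; the app label selector and the returned shape are assumptions, not taken from the application or the cited art.

    from kubernetes import client, config

    config.load_kube_config()
    v1 = client.CoreV1Api()

    def resources_by_namespace(app_label: str) -> dict:
        """Enumerate namespaces and the application's resources within them,
        as input to a backup specification."""
        selector = f"app={app_label}"  # assumed labeling convention
        found = {}
        for ns in v1.list_namespace().items:
            name = ns.metadata.name
            pods = v1.list_namespaced_pod(
                name, label_selector=selector).items
            pvcs = v1.list_namespaced_persistent_volume_claim(
                name, label_selector=selector).items
            if pods or pvcs:
                found[name] = {
                    "pods": [p.metadata.name for p in pods],
                    "pvcs": [c.metadata.name for c in pvcs],
                }
        return found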
Claims 2-3, 13, 21, 23-24 are rejected under AIA 35 U.S.C. 103(a) as being unpatentable over Tal et al (20210200814) in view of Raut et al (10944691), Burns et al ('Managing Kubernetes', 2019) and Mitkar et al (20210011812).

As per Claim 2, the rejection of claim 1 is incorporated, and Tal, Raut, Burns disclose configuring and updating a container-based resource hierarchy. Mitkar discloses, wherein the container orchestration platform comprises a Kubernetes system (Mitkar, [0329 – In Fig. 3A, container-orchestration pod 310 is embodied as a Kubernetes pod that operates within a Kubernetes container-orchestration system/platform]), and the resources of the application comprise the pod and other Kubernetes resources of the application (Mitkar, [0335 – In Fig. 3A, containerized applications 320, e.g., MySQL DBMS 320-1, PostgreSQL DBMS 320-2, Microsoft SQL DBMS 320-3, etc., are applications configured to execute within container 319 within pod 310]; [0325 - Fig. 3A depicts data storage management system 302 and container-orchestration pod 310, comprising backup services container 301, container 319 comprising containerized applications 320, and data storage volumes 330]), wherein the other Kubernetes resources comprise one or more of a persistent volume (Mitkar, [0336 - In Fig. 3A, data storage volumes 330 are embodied as Kubernetes volumes when implemented in a Kubernetes pod. A Kubernetes Volume provides persistent storage that exists for the lifetime of the pod itself]), a custom resource definition (Mitkar, [0207 - A storage policy/definition indicates that storage manager 140 should initiate a particular action if a storage metric or other indication drops below or otherwise fails to satisfy specified criteria such as a threshold of data protection, thereby implying a custom resource definition]), a custom resource (Mitkar, [0354 - Discovery logic 450 is broadly inclusive in finding configuration data structures, e.g., container configuration files/definitions, interpreting their contents, and discovering assets present within other containers 319 configured within the Kubernetes pod]; [0317 - A vendor-specific example of how cloud service availability zones are organized in the Google Cloud, thereby implying a custom resource]), or a service account (Mitkar, [0313 - Each cloud service account has authentication features, e.g., passwords, certificates, etc., to restrict and control access to the cloud service]). Therefore it would have been obvious to a person of ordinary skill at the time of filing to incorporate the resources of Mitkar into the containerized platform of Tal, Raut, Burns for the benefit of efficient backup wherein a Kubernetes pod is specially configured with components, e.g., data agents, media agents, storage manager, storage resources, etc., of the proprietary data storage management system, thereby forming a backup services pod. The backup services pod facilitates backup operations and/or improves backup performance for data in the Kubernetes node (Mitkar, 0012).

As per Claim 3, the rejection of claim 1 is incorporated, and Tal, Raut, Burns disclose configuring and updating a container-based resource hierarchy. Mitkar further discloses, wherein the resources mounted on the pod and on the owner object of the pod (Mitkar, [Fig. 3A: container-orchestration pod 310]; [0325 - Fig. 3A depicts data storage management system 302 and container-orchestration Kubernetes pod 310, comprising backup services container 301, container 319 comprising containerized applications 320, and data storage volumes 330]; [0086 - Metadata includes information about data objects. Metadata can include the data owner, e.g., the client or user that generates the data]; [0386 – In Fig. 7, discovery logic 450 includes container 319 identifiers and attributes, containerized applications 320 and attributes, storage volumes 330 and attributes, and labels assigned by users for the various applications 320 and/or volumes 330]; [0292 - The provider's computing resources are pooled to serve multiple consumers/users using a multi-tenant/owner model with different physical and virtual resources dynamically assigned and reassigned according to consumer demand; the citations thereby imply the resources mounted on the pod and on the owner object of the pod. Since the claim does not define 'owner' and does not recite how the resources mounted on the pod and on the owner object of the pod are assigned and determined, the citations are valid]) comprise one or more of a persistent volume claim (Mitkar, [0336 - A Kubernetes Volume provides persistent storage that exists for the lifetime of the pod itself]), a local storage device (Mitkar, [0330 - A pod defines one or more data storage volumes, such as a network disk or local disk directory, which is exposed to the containers in the pod]; [Fig. 3A: storage volumes 330-1 and 330-2]), a path on a host device (Mitkar, [0280 - Where NFS is used, secondary storage subsystem 218 allocates an NFS network path to the client computing device 202/host]), a secret (Mitkar, [0195 - Client computing device 102 has access to an encryption key or passphrase for decrypting the data upon restore]), or a ConfigMap (Mitkar, [0122 – Storage manager 140 maintains database 146, which maintains logical associations between components/resources of the system, mappings of particular information management users or user accounts to certain computing devices/resources or other components, etc.]). Therefore it would have been obvious to a person of ordinary skill at the time of filing to incorporate the resources of Mitkar into the containerized platform of Tal, Raut, Burns for the benefit of efficient backup wherein a Kubernetes pod is specially configured with components, e.g., data agents, media agents, storage manager, storage resources, etc., of the proprietary data storage management system, thereby forming a backup services pod. The backup services pod facilitates backup operations and/or improves backup performance for data in the Kubernetes node (Mitkar, 0012).

As per Claim 13, it is similar to claim 2 and therefore the same rejections are incorporated. As per Claim 21, it is similar to claim 3 and therefore the same rejections are incorporated. As per Claim 23, it is similar to claim 2 and therefore the same rejections are incorporated. As per Claim 24, it is similar to claim 3 and therefore the same rejections are incorporated.

Claim 4 is rejected under AIA 35 U.S.C. 103(a) as being unpatentable over Tal et al (20210200814) in view of Raut et al (10944691), Burns et al ('Managing Kubernetes', 2019) and Zwiegincew et al (20190235775).

As per Claim 4, the rejection of claim 1 is incorporated, and Tal, Raut, Burns disclose constructing a resource hierarchy. Zwiegincew further discloses, wherein the application comprises a plurality of pods (Zwiegincew, [0021 – In Fig. 1, a user submits request 129 to deployment system 120 for deploying application 117]; [Fig. 4: pod objects 226]; [0033 - A pod object 226 represents a pod]), and constructing the resource hierarchy (Zwiegincew, [0039 - As per Figs. 1-4, fault domains 110 are a collection of resources 115. Accordingly, a hierarchical system exists in which different levels in the hierarchy correspond to different scopes or ways of grouping resources 115]; [0036 - FDS controller 130 performs two control loops: one for FDS objects 229 and another for FD objects 228]) further comprises constructing the resource hierarchy of the application (Zwiegincew, [0046 – As per Fig. 4, a user is allowed to specify FD objects 228 and FDS objects 229. As shown in FDS tree 400, particular instantiations of these objects form a hierarchy. This arrangement allows FDS controller 130 to determine which fault domains 110 have been provisioned to a given FDS 135, e.g., by enumerating over FD objects 228]) based on the plurality of pods (Zwiegincew, [0046 – In Fig. 4, Kubernetes allows a user to define pod objects 226 and statefulset objects 227 that respectively specify pods and update domains]), respective owner objects of the plurality of pods (Zwiegincew, [0033 – In Fig. 4, in Kubernetes, a statefulset object 227 corresponds to a collection of one or more pod objects 226 along with storage volumes 217 associated with those objects. A statefulset object 227/owner represents an update domain, which is used to provide an update to some subset of the pods that are running within computer cluster 210]), and respective resources mounted on the plurality of pods and the respective owner objects of the plurality of pods (Zwiegincew, [0047 – In Fig. 4, a parent-child relationship exists between objects handled by system 100. Accordingly, FDS controller 130 uses these relationships to determine which entities, e.g., fault domains 110, update domains, etc., are children of a particular entity, e.g., FDS 135, fault domains 110, etc. Each object of tree 400, except for FDS object 229, includes an owner reference that identifies the object that owns it]; [0039 - A data processing center and a server rack may be considered different scopes of fault domains. A higher-order fault domain 110, e.g., a data processing center, may include multiple distinct lower-order fault domains 110, e.g., server racks. FDS boundary 312 specifies a level in the hierarchy where all pods and volumes 217 of a fault domain 110 are guaranteed to be provisioned]). Therefore it would have been obvious to a person of ordinary skill at the time of filing to incorporate the owner references of Zwiegincew into the containerized platform of Tal, Raut, Burns for the benefit of resource management wherein the FDS controller determines whether a given object in the tree, a resource hierarchy structure, has become orphaned by checking if its owner exists. If an object has become orphaned, then the resources corresponding to that object become available again for provisioning (Zwiegincew, 0048).

Raut clarifies, wherein the application (Raut, [Col. 3, lines 30-32 - An application that is implemented using multiple containers is a containerized application]) comprises a plurality of pods (Raut, [Fig. 1: pods 141-144]); wherein constructing the resource hierarchy (Raut, [Col. 7, lines 6-8 – In Fig. 3, which is executed by the container plugin 110/controller, the term 'configure' at steps 310 and 340 refers to creating a new entity]; [Col. 1, lines 40-42 - Fig. 3 performs container-based network policy configuration in the SDN environment]; [Col. 7, lines 11-16 – In Fig. 3, step 310, container plugin 110/controller detects a first request, 181 in Fig. 1, to assign a container-based resource with a first label via container orchestration system 101, thereby implying that the steps of Fig. 3 construct the resource hierarchy based on network policy]) further comprises constructing the resource hierarchy of the application based on the plurality of pods (Raut, [Fig. 1: pods 141-144]), respective owner objects of the plurality of pods (Raut, [Fig. 1: worker nodes 122-123]), and respective resources mounted on the plurality of pods and the respective owner objects of the plurality of pods (Raut, [Col. 3, lines 54-56 – Fig. 1: logical routers 170-172, logical switches 161-162, logical switch ports 151-154]; [Col. 9, lines 30-33 - Logical switch ports LP2 152, LP4 154 and LP5 155 are logical switch ports of type=child because of their association with respective POD2 142, POD4 144 and POD5 145]). Therefore it would have been obvious to a person of ordinary skill at the time of filing to incorporate the resource hierarchy update of Raut into the containerized platform of Tal, Burns, Zwiegincew for the benefit of configuring and managing container-based network policies in the SDN environment, wherein the container-based network policy refers to a set of rules that define how a container-based resource may communicate with another container-based resource, such as between a node and a pod, among multiple nodes or pods, etc., thereby updating the resource hierarchy (Raut, Col. 6, lines 50-56).

Claims 5, 22, 25 are rejected under AIA 35 U.S.C. 103(a) as being unpatentable over Tal et al (20210200814) in view of Raut et al (10944691), Burns et al ('Managing Kubernetes', 2019) and Thoemmes et al (20220027217).

As per Claim 5, the rejection of claim 1 is incorporated, and Tal discloses, propagating (Tal, [0224 – Fig. 8G shows the message diagram, between the controller and API 608, of the discovery process to discover resources of the containerized application platform 609; this interaction between the two is similar to Para-0031 of the spec]), by the application discovery controller (Tal, [Figs. 6C, 7 - RNMP 320 + Master Node 606]) and through an API server (Tal, [Fig. 6A: API 608/API server]), the resource hierarchy of the application to a management service (Tal, [0166 – APIs/API 608 re
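Claim 3's enumerated mount types (a persistent volume claim, a local storage device, a path on a host device, a secret, or a ConfigMap; see spec Para-0083) correspond to distinct Kubernetes volume sources. A short illustrative classifier in the Python kubernetes client, not drawn from the application or the cited references; the function name classify_volume is an assumption.

    from kubernetes import client

    def classify_volume(vol: client.V1Volume) -> str:
        """Map a pod volume onto the resource types enumerated in claim 3."""
        if vol.persistent_volume_claim:
            return f"persistent volume claim: {vol.persistent_volume_claim.claim_name}"
        if vol.host_path:
            return f"path on a host device: {vol.host_path.path}"
        if vol.empty_dir:
            return "local storage device (emptyDir)"
        if vol.secret:
            return f"secret: {vol.secret.secret_name}"
        if vol.config_map:
            return f"ConfigMap: {vol.config_map.name}"
        return "other volume source"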

Prosecution Timeline

Oct 31, 2022: Application Filed
Apr 06, 2024: Non-Final Rejection — §103, §112
Jul 12, 2024: Response Filed
Aug 24, 2024: Final Rejection — §103, §112
Oct 30, 2024: Request for Continued Examination
Nov 04, 2024: Response after Non-Final Action
Mar 22, 2025: Non-Final Rejection — §103, §112
Jul 29, 2025: Response Filed
Sep 30, 2025: Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602317 — MEMORY DEVICE HARDWARE HOST READ ACTIONS BASED ON LOOKUP OPERATION RESULTS. Granted Apr 14, 2026 (2y 5m to grant).
Patent 12591520 — LINEAR TO PHYSICAL ADDRESS TRANSLATION WITH SUPPORT FOR PAGE ATTRIBUTES. Granted Mar 31, 2026 (2y 5m to grant).
Patent 12591382 — STORAGE DEVICE OPERATION ORCHESTRATION. Granted Mar 31, 2026 (2y 5m to grant).
Patent 12579074 — HARDWARE PROCESSOR CORE HAVING A MEMORY SLICED BY LINEAR ADDRESS. Granted Mar 17, 2026 (2y 5m to grant).
Patent 12566712 — A RING BUFFER WITH MULTIPLE HEAD POINTERS. Granted Mar 03, 2026 (2y 5m to grant).
Study what changed to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 81%
With Interview: 84% (+3.5%)
Median Time to Grant: 2y 9m
PTA Risk: High
Based on 557 resolved cases by this examiner. Grant probability derived from career allow rate.
