Prosecution Insights
Last updated: April 19, 2026
Application No. 18/467,061

CONTAINER GROUP SCHEDULING METHODS AND APPARATUSES

Status: Non-Final Office Action (§103)
Filed: Sep 14, 2023
Examiner: CAO, DIEM K
Art Unit: 2196
Tech Center: 2100 — Computer Architecture & Software
Assignee: Alipay (Hangzhou) Information Technology Co., Ltd.
OA Round: 1 (Non-Final)
Grant Probability: 80% (Favorable)
Predicted OA Rounds: 1-2
Estimated Time to Grant: 3y 7m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 80%, above average (531 granted / 663 resolved; +25.1% vs TC avg)
Interview Lift: +19.4% higher allowance in resolved cases with an interview
Typical Timeline: 3y 7m average prosecution; 29 applications currently pending
Career History: 692 total applications across all art units

Statute-Specific Performance

§101: 10.6% (-29.4% vs TC avg)
§103: 46.7% (+6.7% vs TC avg)
§102: 14.5% (-25.5% vs TC avg)
§112: 20.5% (-19.5% vs TC avg)
Compared against the Tech Center average estimate; based on career data from 663 resolved cases.

Office Action

DETAILED ACTION

Claims 1-20 are presented for examination.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-6 and 8-20 are rejected under 35 U.S.C. 103 as being unpatentable over Shi et al. (US 2023/0259409 A1) in view of Wang et al. (CN 112559130 A – English translation provided by USPTO) further in view of Mani et al. (US 10,733,020 B2).
As to claim 1, Shi teaches a computer-implemented method for container group scheduling applied to a scheduler running on a master node in a container management cluster, wherein the container management cluster comprises multiple nodes configured to run pods (container groups) created in the container management cluster (The methods may include running a plurality of container groups on one or more node groups of a computing system, wherein each of the container groups comprises one or more containers configured to execute one of a plurality of jobs. The one or more node groups may include a first node group designated to host container groups of a first plurality of sizes and a second node group designated to host container groups of a second plurality of sizes. The plurality of container groups includes a first plurality of container groups running on the first node group; paragraph [0014]), comprising: obtaining multiple to-be-scheduled pods (the mechanisms may schedule deployment of a plurality of container groups for executing the jobs; paragraph [0031]); performing equivalence class partitioning on the multiple to-be-scheduled pods to obtain at least one pod set (the mechanisms may classify the container groups into a plurality of categories based on their container sizes. Each of the categories may correspond to one or more particular container sizes (e.g., a range of container sizes); paragraph [0032], [0052] and [0075]); successively determining each of the at least one pod set as a target pod set (to schedule a container group of a given size, scheduler component 142 may identify one or more node groups that are designated to host container groups of the given size. 
In some embodiments, to schedule a first container group of a first size; paragraph [0054] and [0076]); and performing scheduling processing on the target pod set to bind each pod in the target pod set to a node configured to run the pod, wherein the scheduling processing comprises (scheduler component 142 may identify a first node group as a node group that may host the first container group in view of a determination that the first node group is associated with a first plurality of container sizes and that the first plurality of container sizes includes the given size. In some embodiments, scheduler component 142 may identify the first node group in view of a determination that the first container group is classified into the first category corresponding to the first class; paragraph [0054]): determining a target schedulable node set corresponding to the target pod set (scheduler component 142 may identify a first node group as a node group that may host the first container group in view of a determination that the first node group is associated with a first plurality of container sizes and that the first plurality of container sizes includes the given size; paragraph [0054]); determining, from the target schedulable node set, a node corresponding to each pod in the target pod set (The processing device may further identify a node of the first node group that is unfilled; paragraph [0076]); and binding each pod in the target pod set to the node corresponding to each pod in the target pod set (and may schedule the first container group on the identified node; paragraph [0076]). Shi does not teach a pod scheduling queue, caching, as cached correspondence, a correspondence between the target pod set and the target schedulable node set; and deleting the cached correspondence. However, Wang teaches a pod scheduling queue that stores pods to be scheduled (wherein the to-be-allocated container set comprises a plurality of to-be-allocated POD. 
As a possible implementation manner, the embodiment of the invention provides a cache queue, for caching to-be-allocated POD; page 11, 4th paragraph).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the teaching of Wang to the system of Shi because both Shi and Wang are in the same field of endeavor, and Wang teaches a method that stores multiple pods that are waiting for scheduling while selecting a group of pods to be scheduled based on available resources, thereby improving distribution efficiency and reducing resource imbalance on the matching nodes (abstract).

Mani teaches caching, as cached correspondence, a correspondence between the clients and their resources to be assigned (the apparatuses and methods disclosed herein may store a record of a determined resource allocation in a records store (e.g., an escrow) prior to the determined resource allocation being committed to a placement store; col. 2, lines 4-7 and “The processor 102 may fetch, decode, and execute the instructions 114 to determine a resource allocation for the received allocation request 218. Particularly, the processor 102 may send the allocation request 218 or may otherwise access the allocator function 212 to determine a resource allocation for the allocation request 218 … The parameters may also include records stored in the records store 214, such as records of resources allocations that have been determined but that have not been committed to the placement store 216”; col. 5, lines 7-28); and deleting the cached correspondence after allocating the resources to the clients (the processor 102 may, based on receipt of the acknowledgment from the allocator client 204 that the determined resource allocation was successfully placed, delete or clear the record of the determined resource allocation in the records store 214; col. 6, lines 56-60).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the teaching of Mani to the system of Shi as modified by Wang because Mani teaches that, by storing the correspondence between the to-be-allocated resources and the clients, the allocator function 212 may prevent allocation of the same resource to multiple workloads, e.g., overprovisioning of the resources (col. 5, lines 43-45).

As to claim 2, Shi as modified by Wang and Mani teaches the computer-implemented method of claim 1, obtaining multiple to-be-scheduled pods from a pod scheduling queue, comprises: obtaining the multiple to-be-scheduled pods from the pod scheduling queue based on a predetermined time period (see Wang: obtaining the to-be-distributed container set, wherein the to-be-allocated container set comprises a plurality of to-be-allocated POD. As a possible implementation manner, the embodiment of the invention provides a cache queue, for caching to-be-allocated POD. the electronic device in each round distribution period, selecting a plurality of to-be-allocated POD from the buffer queue; page 11, 4th paragraph).

As to claim 3, Shi as modified by Wang and Mani teaches the computer-implemented method of claim 1, wherein performing equivalence class partitioning on the multiple to-be-scheduled pods to obtain at least one pod set, comprises: successively determining each of the multiple to-be-scheduled pods as a target pod (see Shi: the mechanisms may schedule deployment of a plurality of container groups for executing the jobs; paragraph [0031]); performing classification processing on the target pod to perform equivalence class partitioning on the multiple to-be-scheduled pods to obtain the at least one pod set (see Shi: the mechanisms may classify the container groups into a plurality of categories based on their container sizes.
Each of the categories may correspond to one or more particular container sizes (e.g., a range of container sizes); paragraph [0032]), wherein the classification processing comprises: obtaining feature data of the target pod (see Shi: container sizes; paragraph [0032] and [0075]), and calculating, based on the feature data, a classification index corresponding to the target pod (see Shi: In some embodiments, the first plurality of container sizes, the second plurality of container sizes, and the third plurality of container sizes may correspond to a first range of container sizes, a second range of container sizes, and a third range of container sizes, respectively; paragraph [0074] and “the first plurality of container sizes may include the sizes of a first category of containers and/or container groups”; paragraph [0075]); and determining whether a pod set corresponding to the classification index exists (see Shi: classify containers and/or container groups to be deployed in the computing system into a plurality of categories; paragraph [0075]).

As to claim 4, Shi as modified by Wang and Mani teaches the computer-implemented method of claim 3, wherein: if the pod set corresponding to the classification index exists, adding the target pod to the pod set (see Shi: classify containers and/or container groups to be deployed in the computing system into a plurality of categories; paragraph [0075]. The classification module would place the pod set with the container size within the first range into the first category).

As to claim 5, Shi as modified by Wang and Mani teaches the computer-implemented method of claim 4, wherein the feature data comprises at least one or a combination of multiples of general attribute information, resource specifications information, and a scheduling rule (see Shi: Scheduler module 320 may schedule deployment of containers and/or container groups on one or more nodes of the computing system.
For example, the container groups may be scheduled based on respective sizes of the plurality of container groups and the container sizes associated with the node groups. For example, scheduling a first container group of a first size may involve identifying one or more of the node groups that are designated to host container groups of the first size (e.g., by determining that the first node group is associated with the first plurality of container sizes and that the first plurality of container sizes includes the first size); paragraph [0076]).

As to claim 6, Shi as modified by Wang and Mani teaches the computer-implemented method of claim 5, wherein the feature data comprises the general attribute information (container size), the resource specifications information (job size), and the scheduling rule (see Shi: To execute a workload including a plurality of jobs (e.g., containerized tasks), the mechanisms may schedule deployment of a plurality of container groups for executing the jobs. Each of the container groups may include one or more containers with shared storage and network resources. In some embodiments, each of the container groups may be configured to execute one of the jobs. A job of a given size (e.g., a certain resource demand, such as a CPU demand) may be executed by a container group having a container size corresponding to the size of the job (e.g., a container size equal to or greater than the size of the job); paragraph [0031]).
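The equivalence-class partitioning recited in claims 3-6 can be sketched as follows. This is an illustrative reading, not code from any cited reference; the feature fields (`labels_key`, `cpu`, `mem`, `scheduling_rule`) and the function names are invented stand-ins for the claimed general attribute information, resource specifications information, and scheduling rule.

```python
from collections import defaultdict

def classification_index(pod: dict) -> tuple:
    """Derive a classification index from the pod's feature data."""
    return (
        pod.get("labels_key"),           # general attribute information
        pod.get("cpu"), pod.get("mem"),  # resource specifications information
        pod.get("scheduling_rule"),      # scheduling rule (e.g. an affinity)
    )

def partition_pods(pods: list) -> dict:
    """Partition to-be-scheduled pods into equivalence-class pod sets."""
    pod_sets = defaultdict(list)
    for pod in pods:                     # successively take each target pod
        idx = classification_index(pod)  # compute its classification index
        pod_sets[idx].append(pod)        # create or join the matching pod set
    return dict(pod_sets)

pods = [
    {"name": "a", "cpu": 2, "mem": 4, "labels_key": "web", "scheduling_rule": "spread"},
    {"name": "b", "cpu": 2, "mem": 4, "labels_key": "web", "scheduling_rule": "spread"},
    {"name": "c", "cpu": 8, "mem": 16, "labels_key": "batch", "scheduling_rule": None},
]
pod_sets = partition_pods(pods)  # "a" and "b" share one set; "c" is alone
```

Pods that share an index land in one pod set, so the downstream node filtering and scoring can be done once per set rather than once per pod.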
As to claim 8, Shi as modified by Wang and Mani teaches the computer-implemented method of claim 1, wherein determining a target schedulable node set corresponding to the target pod set, comprises: determining a master pod from the target pod set (see Shi: scheduling a first container group of a first size; paragraph [0076]); determining a schedulable node set corresponding to the master pod (see Shi: may involve identifying one or more of the node groups that are designated to host container groups of the first size (e.g., by determining that the first node group is associated with the first plurality of container sizes and that the first plurality of container sizes includes the first size); paragraph [0076]); and determining the schedulable node set as the target schedulable node set corresponding to the target pod set (see Shi: The processing device may further identify a node of the first node group that is unfilled and may schedule the first container group on the identified node; paragraph [0076]).

As to claim 9, Shi as modified by Wang and Mani teaches the computer-implemented method of claim 8, wherein the master pod is a first pod added to the target pod set (a first container group among the container groups; paragraph [0076]).

As to claim 10, Shi as modified by Wang and Mani teaches the computer-implemented method of claim 8, wherein the determining a schedulable node set corresponding to the master pod, comprises: filtering out, from the nodes comprised in the container management cluster, a node that is incapable of running the master pod (see Shi: the node groups may include a first node group designated to host container groups of a first plurality of container sizes, a second node group designated to host container groups of a second plurality of container sizes; paragraph [0074].
Thus, nodes associated with the second node group are incapable of running the container group of the first container size; paragraph [0074]); and determining remaining nodes as nodes in the schedulable node set corresponding to the master pod (see Shi: “the nodes may be classified based on computing capacities of the nodes” and “a first node group designated to host container groups of a first plurality of container sizes”; paragraph [0074]. Thus, nodes in the first node group can be used to schedule the first set of container groups of the first container size.).

As to claim 11, Shi as modified by Wang and Mani teaches the computer-implemented method of claim 10, wherein the determining remaining nodes as nodes in the schedulable node set corresponding to the master pod, comprises: performing running scoring on remaining nodes with respect to the master pod (see Shi: the processing device may determine whether an amount of the available computing resources is equal to or greater than the size of the container group to be scheduled. More particularly, for example, the processing device may determine that the spare capacity of the current node is sufficient to host the container group in response to determining that the amount of the available computing resources of the current node is equal to or greater than the size of the container group to be scheduled; paragraph [0096]); and sorting the remaining nodes in order of values of running scores (see Shi: a processing device may rank a plurality of nodes in a node group based on spare capacities of the plurality of nodes. For example, the processing device may rank a plurality of nodes based on the available computing resources of the plurality of nodes in descending order or ascending order. In some embodiments, the available computing resource may be and/or include storage, processing power, databases, networking, or any other computing resources allocated to the plurality of nodes in the node group.
For example, the available computing resources may be and/or include CPU resource; paragraph [0101]).

As to claim 12, Shi as modified by Wang and Mani teaches the computer-implemented method of claim 11, comprising: determining, based on a sorting result, a predetermined quantity of nodes with a highest running score; and determining the predetermined quantity of nodes as the nodes in the schedulable node set corresponding to the master pod (see Wang: As shown in FIG. 5 in the two-dimensional space, there is a resource equalization area 300. Namely the node to be matched in the resource equalization area 300 is in the resource balance state, specifically is the use of the CPU resource is close to the allocation rate of the memory resource, the difference between the two is less than the specific threshold value. Therefore, as shown in FIG. 5, A, C, D is located above the resource equalization area 300, representing the memory allocation rate of A, C, D is higher than the allocation rate of the CPU resource, namely the to-be-matched node A, C, D is in the CPU resource imbalance. Therefore, the electronic device takes the CPU resource as the target resource, the A, C, D are divided into the preferred node set corresponding to the CPU resource imbalance. If desired A, C, D reach the equilibrium state, need to be distributed to the CPU resource requirement of the node to be allocated; page 16, 5th – 6th paragraphs.).
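Claims 10-12 together describe a filter, score, sort, and truncate pipeline over candidate nodes. A minimal sketch, assuming a spare-capacity scoring metric (the claims do not fix the metric, and `running_score`, `schedulable_node_set`, and the node fields are invented for illustration):

```python
def running_score(node, pod):
    """Score a node by its spare capacity after hosting the pod.

    An assumed stand-in metric; a negative score means the node
    cannot host the pod at all.
    """
    return min(node["cpu_free"] - pod["cpu"], node["mem_free"] - pod["mem"])

def schedulable_node_set(nodes, master_pod, top_n=2):
    # filter out nodes incapable of running the master pod (claim 10)
    feasible = [n for n in nodes if running_score(n, master_pod) >= 0]
    # sort the remaining nodes in order of running-score values (claim 11)
    feasible.sort(key=lambda n: running_score(n, master_pod), reverse=True)
    # keep the predetermined quantity of highest-scoring nodes (claim 12)
    return feasible[:top_n]

nodes = [
    {"name": "n1", "cpu_free": 1, "mem_free": 8},
    {"name": "n2", "cpu_free": 6, "mem_free": 12},
    {"name": "n3", "cpu_free": 4, "mem_free": 4},
]
# n1 lacks CPU for the master pod and is filtered out; n2 outscores n3
best = schedulable_node_set(nodes, {"cpu": 2, "mem": 4})
```

Because the set is computed once for the master pod, every pod in the same equivalence class can reuse it, which is what distinguishes this from per-pod filtering.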
As to claim 13, Shi as modified by Wang and Mani teaches the computer-implemented method of claim 11, wherein binding each pod in the target pod set to the node corresponding to each pod in the target pod set, comprises: successively determining each pod in the target pod set as a target pod; and performing binding processing on the target pod to bind the target pod to a node corresponding to the target pod (see Wang: the distribution module 2103, according to the corresponding relation between the preferred node set and the to-be-distributed container set, the to-be-allocated POD is bound with the node to be matched; based on the binding relation, distributing the POD to be distributed to the corresponding node to be matched; page 26, 2nd paragraph).

As to claim 14, Shi as modified by Wang and Mani teaches the computer-implemented method of claim 13, wherein the binding processing, comprises: determining a node that has a highest running score in the target schedulable node set as the node corresponding to the target pod; and binding the target pod to the node corresponding to the target pod (see Shi: The processing device may further identify a node of the first node group that is unfilled and may schedule the first container group on the identified node; paragraph [0076]) and (see Wang: the distribution module 2103, according to the corresponding relation between the preferred node set and the to-be-distributed container set, the to-be-allocated POD is bound with the node to be matched; based on the binding relation, distributing the POD to be distributed to the corresponding node to be matched; page 26, 2nd paragraph).
As to claim 15, Shi as modified by Wang and Mani teaches the computer-implemented method of claim 13, wherein determining, from the target schedulable node set, the node corresponding to each pod in the target pod set, comprises: successively determining each pod in the target pod set as a target pod (see Shi: At block 420, the processing device may schedule deployment of a plurality of container groups on one or more of the classified nodes. The container groups may be scheduled based on respective sizes of the plurality of container groups and the container sizes associated with the node groups. For example, scheduling a first container group of a first size may involve identifying one or more of the node groups that are designated to host container groups of the first size (e.g., by determining that the first node group is associated with the first plurality of container sizes and that the first plurality of container sizes includes the first size). The processing device may further identify a node of the first node group that is unfilled and may schedule the first container group on the identified node; paragraph [0087], [0089] and [0094]).

As to claim 16, Shi as modified by Wang and Mani teaches the computer-implemented method of claim 15, comprising: performing binding processing on the target pod to bind the target pod to a node corresponding to the target pod (see Shi: The processing device may then schedule the first container group on the first node in response to determining that the first node is unfilled; paragraph [0089]).
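The binding processing of claims 13-17 can be read as a greedy loop: each pod in the target pod set is taken in turn and bound to the highest-scoring node in the target schedulable node set that still satisfies the pod's resource requirement, with the node's remaining capacity deducted as pods are committed. A hedged sketch with invented names:

```python
def bind_pod_set(target_pods, ranked_nodes):
    """ranked_nodes is assumed pre-sorted by descending running score."""
    bindings = {}
    for pod in target_pods:        # successively determine each target pod
        for node in ranked_nodes:  # try the highest-scoring node first
            if (node["cpu_free"] >= pod["cpu"]
                    and node["mem_free"] >= pod["mem"]):
                bindings[pod["name"]] = node["name"]
                node["cpu_free"] -= pod["cpu"]  # commit the pod's resources
                node["mem_free"] -= pod["mem"]
                break
    return bindings

ranked = [
    {"name": "n2", "cpu_free": 6, "mem_free": 12},  # highest running score
    {"name": "n3", "cpu_free": 4, "mem_free": 4},
]
result = bind_pod_set(
    [{"name": "a", "cpu": 2, "mem": 4}, {"name": "b", "cpu": 3, "mem": 4}],
    ranked,
)
# both pods fit on n2, whose free capacity is reduced after each binding
```

Deducting capacity inside the loop is one way to respect the claim-17 resource requirement; without it, two pods could both be bound to a node that only fits one.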
As to claim 17, Shi as modified by Wang and Mani teaches the computer-implemented method of claim 14, wherein the binding processing, comprises: determining a node that has a highest running score in the target schedulable node set and that satisfies a resource requirement of the target pod as the node corresponding to the target pod; and binding the target pod to the node corresponding to the target pod (see Shi: in view of a determination that a first size of a first container group matches a first range of container group sizes associated with the first node group, the processing device may determine whether the first node is unfilled. The processing device may then schedule the first container group on the first node in response to determining that the first node is unfilled. In some embodiments, determining whether the first node is unfilled comprises determining whether a threshold number of container groups are running on the first node; paragraph [0089]) and (see Wang: the distribution module 2103, according to the corresponding relation between the preferred node set and the to-be-distributed container set, the to-be-allocated POD is bound with the node to be matched; based on the binding relation, distributing the POD to be distributed to the corresponding node to be matched; page 26, 2nd paragraph).

As to claim 18, Shi as modified by Wang and Mani teaches the computer-implemented method of claim 1, wherein the container management cluster comprises a Kubernetes cluster or a Kubernetes-based container management cluster (see Wang: Kubernetes cluster; page 8, 2nd paragraph).

As to claim 19, Shi teaches a non-transitory, computer-readable medium storing one or more instructions executable by a computer system (The non-transitory machine-readable storage medium includes instructions
that, when accessed by a processing device, cause the processing device to: run a plurality of container groups on one or more node groups of a computing system; paragraph [0028]) to perform one or more operations for container group scheduling applied to a scheduler running on a master node in a container management cluster, wherein the container management cluster comprises multiple nodes configured to run pods created in the container management cluster (running a plurality of container groups on one or more node groups of a computing system, wherein each of the container groups comprises one or more containers configured to execute one of a plurality of jobs. The one or more node groups may include a first node group designated to host container groups of a first plurality of sizes and a second node group designated to host container groups of a second plurality of sizes. The plurality of container groups includes a first plurality of container groups running on the first node group; paragraph [0014]), comprising: obtaining multiple to-be-scheduled pods (the mechanisms may schedule deployment of a plurality of container groups for executing the jobs; paragraph [0031]); performing equivalence class partitioning on the multiple to-be-scheduled pods to obtain at least one pod set (the mechanisms may classify the container groups into a plurality of categories based on their container sizes. Each of the categories may correspond to one or more particular container sizes (e.g., a range of container sizes); paragraph [0032], [0052] and [0075]); successively determining each of the at least one pod set as a target pod set (to schedule a container group of a given size, scheduler component 142 may identify one or more node groups that are designated to host container groups of the given size. 
In some embodiments, to schedule a first container group of a first size; paragraph [0054] and [0076]); and performing scheduling processing on the target pod set to bind each pod in the target pod set to a node configured to run the pod, wherein the scheduling processing comprises (scheduler component 142 may identify a first node group as a node group that may host the first container group in view of a determination that the first node group is associated with a first plurality of container sizes and that the first plurality of container sizes includes the given size. In some embodiments, scheduler component 142 may identify the first node group in view of a determination that the first container group is classified into the first category corresponding to the first class; paragraph [0054]): determining a target schedulable node set corresponding to the target pod set (scheduler component 142 may identify a first node group as a node group that may host the first container group in view of a determination that the first node group is associated with a first plurality of container sizes and that the first plurality of container sizes includes the given size; paragraph [0054]); determining, from the target schedulable node set, a node corresponding to each pod in the target pod set (The processing device may further identify a node of the first node group that is unfilled; paragraph [0076]); and binding each pod in the target pod set to the node corresponding to each pod in the target pod set (and may schedule the first container group on the identified node; paragraph [0076]). Shi does not teach a pod scheduling queue, caching, as cached correspondence, a correspondence between the target pod set and the target schedulable node set; and deleting the cached correspondence. However, Wang teaches a pod scheduling queue that stores pods to be scheduled (wherein the to-be-allocated container set comprises a plurality of to-be-allocated POD. 
As a possible implementation manner, the embodiment of the invention provides a cache queue, for caching to-be-allocated POD; page 11, 4th paragraph).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the teaching of Wang to the system of Shi because both Shi and Wang are in the same field of endeavor, and Wang teaches a method that stores multiple pods that are waiting for scheduling while selecting a group of pods to be scheduled based on available resources, thereby improving distribution efficiency and reducing resource imbalance on the matching nodes (abstract).

Mani teaches caching, as cached correspondence, a correspondence between the clients and their resources to be assigned (the apparatuses and methods disclosed herein may store a record of a determined resource allocation in a records store (e.g., an escrow) prior to the determined resource allocation being committed to a placement store; col. 2, lines 4-7 and “The processor 102 may fetch, decode, and execute the instructions 114 to determine a resource allocation for the received allocation request 218. Particularly, the processor 102 may send the allocation request 218 or may otherwise access the allocator function 212 to determine a resource allocation for the allocation request 218 … The parameters may also include records stored in the records store 214, such as records of resources allocations that have been determined but that have not been committed to the placement store 216”; col. 5, lines 7-28); and deleting the cached correspondence after allocating the resources to the clients (the processor 102 may, based on receipt of the acknowledgment from the allocator client 204 that the determined resource allocation was successfully placed, delete or clear the record of the determined resource allocation in the records store 214; col. 6, lines 56-60).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the teaching of Mani to the system of Shi as modified by Wang because Mani teaches that, by storing the correspondence between the to-be-allocated resources and the clients, the allocator function 212 may prevent allocation of the same resource to multiple workloads, e.g., overprovisioning of the resources (col. 5, lines 43-45).

As to claim 20, Shi teaches a computer-implemented system (an apparatus; paragraph [0127]), comprising: one or more computers (a general purpose computer; paragraph [0127]); and one or more computer memory devices (storage medium; paragraph [0127]) interoperably coupled with the one or more computers and having tangible, non-transitory, machine-readable media storing one or more instructions that, when executed by the one or more computers (paragraph [0129]), perform one or more operations for container group scheduling applied to a scheduler running on a master node in a container management cluster, wherein the container management cluster comprises multiple nodes configured to run pods created in the container management cluster (running a plurality of container groups on one or more node groups of a computing system, wherein each of the container groups comprises one or more containers configured to execute one of a plurality of jobs. The one or more node groups may include a first node group designated to host container groups of a first plurality of sizes and a second node group designated to host container groups of a second plurality of sizes.
The plurality of container groups includes a first plurality of container groups running on the first node group; paragraph [0014]), comprising: obtaining multiple to-be-scheduled pods (the mechanisms may schedule deployment of a plurality of container groups for executing the jobs; paragraph [0031]); performing equivalence class partitioning on the multiple to-be-scheduled pods to obtain at least one pod set (the mechanisms may classify the container groups into a plurality of categories based on their container sizes. Each of the categories may correspond to one or more particular container sizes (e.g., a range of container sizes); paragraph [0032], [0052] and [0075]); successively determining each of the at least one pod set as a target pod set (to schedule a container group of a given size, scheduler component 142 may identify one or more node groups that are designated to host container groups of the given size. In some embodiments, to schedule a first container group of a first size; paragraph [0054] and [0076]); and performing scheduling processing on the target pod set to bind each pod in the target pod set to a node configured to run the pod, wherein the scheduling processing comprises: (scheduler component 142 may identify a first node group as a node group that may host the first container group in view of a determination that the first node group is associated with a first plurality of container sizes and that the first plurality of container sizes includes the given size. 
In some embodiments, scheduler component 142 may identify the first node group in view of a determination that the first container group is classified into the first category corresponding to the first class; paragraph [0054]) determining a target schedulable node set corresponding to the target pod set (scheduler component 142 may identify a first node group as a node group that may host the first container group in view of a determination that the first node group is associated with a first plurality of container sizes and that the first plurality of container sizes includes the given size; paragraph [0054]); determining, from the target schedulable node set, a node corresponding to each pod in the target pod set (The processing device may further identify a node of the first node group that is unfilled; paragraph [0076]); and binding each pod in the target pod set to the node corresponding to each pod in the target pod set (and may schedule the first container group on the identified node; paragraph [0076]).

Shi does not teach a pod scheduling queue; caching, as cached correspondence, a correspondence between the target pod set and the target schedulable node set; and deleting the cached correspondence. However, Wang teaches a pod scheduling queue that stores pods to be scheduled (wherein the to-be-allocated container set comprises a plurality of to-be-allocated POD. As a possible implementation manner, the embodiment of the invention provides a cache queue, for caching to-be-allocated POD; page 11, 4th paragraph).
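As a rough illustration of the equivalence-class partitioning recited in claim 20 (which the Office Action maps to Shi's size-based classification of container groups), pods whose scheduling-relevant features match can be grouped so that a schedulable-node set is computed once per group rather than once per pod. The field names below are hypothetical, not drawn from the application or the cited art:

```python
from collections import defaultdict

def partition_pods(pods):
    """Partition to-be-scheduled pods into equivalence classes (pod sets).

    Pods whose scheduling-relevant features match are interchangeable to
    the scheduler, so a schedulable-node set computed for one member of a
    class holds for every member of that class.
    """
    classes = defaultdict(list)
    for pod in pods:
        # The class key collects every feature the scheduler inspects.
        key = (pod["cpu"], pod["mem"], pod["rule"])
        classes[key].append(pod)
    return list(classes.values())

pods = [
    {"name": "a", "cpu": 2, "mem": 4, "rule": "spread"},
    {"name": "b", "cpu": 2, "mem": 4, "rule": "spread"},
    {"name": "c", "cpu": 1, "mem": 2, "rule": "pack"},
]
pod_sets = partition_pods(pods)  # two sets: {a, b} and {c}
```

Each resulting set can then be taken in turn as the target pod set for scheduling, as the claim recites.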
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the teaching of Wang to the system of Shi, because both Shi and Wang are in the same field of endeavor, and Wang teaches a method that stores multiple pods that are waiting for scheduling while selecting a group of pods to be scheduled based on available resources, which can improve distribution efficiency and reduce the resource imbalance condition of the matched nodes (abstract).

Mani teaches caching, as cached correspondence, a correspondence between the clients and the resources to be assigned to them (the apparatuses and methods disclosed herein may store a record of a determined resource allocation in a records store (e.g., an escrow) prior to the determined resource allocation being committed to a placement store; col. 2, lines 4-7 and “The processor 102 may fetch, decode, and execute the instructions 114 to determine a resource allocation for the received allocation request 218. Particularly, the processor 102 may send the allocation request 218 or may otherwise access the allocator function 212 to determine a resource allocation for the allocation request 218 … The parameters may also include records stored in the records store 214, such as records of resources allocations that have been determined but that have not been committed to the placement store 216; col. 5, lines 7-28); and deleting the cached correspondence after allocating the resources to the clients (the processor 102 may, based on receipt of the acknowledgment from the allocator client 204 that the determined resource allocation was successfully placed, delete or clear the record of the determined resource allocation in the records store 214; col. 6, lines 56-60).
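Mani's escrow-style record store (record a determined allocation before it is committed; delete the record once placement is acknowledged) follows a common reservation pattern that can be sketched as below. Class and method names are illustrative assumptions, not Mani's:

```python
class AllocationRecords:
    """Escrow-style store of determined-but-uncommitted allocations.

    Recording the resource/requester correspondence before commit lets the
    allocator refuse to hand the same resource to a second workload
    (overprovisioning); the record is deleted once the placement is
    acknowledged, freeing the resource for later requests.
    """

    def __init__(self):
        self._pending = {}  # resource id -> requester id

    def reserve(self, resource, requester):
        if resource in self._pending:
            # Same resource already promised elsewhere: reject.
            raise RuntimeError(f"resource {resource!r} already reserved")
        self._pending[resource] = requester

    def acknowledge(self, resource):
        # Placement committed successfully: delete the cached correspondence.
        del self._pending[resource]

records = AllocationRecords()
records.reserve("node-1/gpu-0", "pod-a")
try:
    records.reserve("node-1/gpu-0", "pod-b")  # second claim is rejected
except RuntimeError:
    pass
records.acknowledge("node-1/gpu-0")           # record deleted after commit
records.reserve("node-1/gpu-0", "pod-b")      # resource can now be reassigned
```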
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the teaching of Mani to the system of Shi as modified by Wang, because Mani teaches that by storing the correspondence between the to-be-allocated resources and the clients, the allocator function 212 may prevent allocation of the same resource to multiple workloads, e.g., overprovisioning of the resources (col. 5, lines 43-45).

Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Shi et al. (US 2023/0259409 A1) in view of Wang et al. (CN 112559130 A – English translation provided by USPTO) and Mani et al. (US 10,733,020 B2) further in view of Xu et al. (CN 112905338 A – English translation provided by USPTO).

As to claim 7, Shi as modified by Wang and Mani does not teach the computer-implemented method of claim 6, wherein calculating, based on the feature data, a classification index corresponding to the target pod, comprises: separately calculating a hash value of each of the general attribute information, the resource specifications information, and the scheduling rule of the target pod; and splicing the hash value of the general attribute information, the hash value of the resource specifications information, and the hash value of the scheduling rule, and determining a hash value obtained through splicing as the classification index corresponding to the target pod. However, Xu teaches calculating a classification index corresponding to the application and application node type, splicing the hash values, and determining a hash value obtained through splicing as the classification index corresponding to the target application (step 210: generating the hash value corresponding to the application and application node type, and taking the hash value as the unique identification corresponding to the application and application node type.
Step 220: determining whether each availability domain of the distributed system already contains a group whose name is the unique identifier of the computing resource group and, if not, creating such a group. For example, the hash value spliced from the analyzed application and application node type is used as the openstack weak anti-affinity group name for that application; all virtual machines of that application node type are classified into one group; the openstack interface is then called to judge whether the group name exists in each availability domain and, if not, a weak anti-affinity group is created; the number of instances distributed to each resource domain is then calculated according to the analyzed target distribution strategy; page 14, 2nd-4th paragraphs).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the concept taught by Xu to the system of Shi as modified by Wang and Mani, because Xu teaches a method that can effectively improve the degree of automation and intelligence of computing-resource distribution, and can effectively improve the reliability and validity of the computing-resource distribution process, so as to improve the operational reliability and stability of the distributed system, reduce its operation and maintenance cost, and improve the user experience for the distributed system's maintenance personnel (page 34, 5th paragraph).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Shemer et al.
(US 2022/0337618 A1) teaches a method that includes performing a filtering process that identifies one or more candidate hosts for scheduling of a pod, wherein the candidacy of a host is determined based in part upon an association rule, generating an overall host score for each of the candidate hosts, and scheduling the pod to one of the candidate hosts based on the overall host score of that candidate host. A host risk score and/or pod risk score may be used in generating the overall host score.

Gamage et al. (US 2022/0075643 A1) teaches a method for unified resource management of containers and virtual machines. A podVM resource configuration for a pod virtual machine (podVM) is determined using container configurations. The podVM comprises a virtual machine (VM) that provides resource isolation for a pod based on the podVM resource configuration. A host selection for the podVM is received from a VM scheduler. The host selection identifies hardware resources for the podVM. A container scheduler is limited to bind the podVM to a node corresponding to the hardware resources of the host selection from the VM scheduler.

Natarajan et al. (US 2023/0266997 A1) teaches methods, systems, and computer program products for distributed scheduling in container orchestration engines are provided herein.
A computer-implemented method includes: configuring a plurality of entities of a container-based computing environment to perform a distributed scoring process, wherein, for a given one of the entities, the distributed scoring process comprises: (i) obtaining information corresponding to a plurality of workloads from a database that is accessible to the other entities, (ii) generating, based on the information, respective scores for at least a portion of the plurality of workloads, and (iii) publishing the generated scores to the database; and selecting, by a centralized scheduler of the container-based computing environment, at least one of the entities to host at least a given one of the workloads based at least in part on the generated scores in the database.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DIEM K CAO, whose telephone number is (571) 272-3760. The examiner can normally be reached Monday-Friday, 8:00am-4:00pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, April Blair, can be reached at 571-270-1014. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/DIEM K CAO/
Primary Examiner, Art Unit 2196
DC
February 20, 2026
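The hash-splicing step recited in claim 7 (separately hashing the general attribute information, the resource specifications information, and the scheduling rule, then splicing the digests into a classification index) can be sketched as follows. The function name, the choice of SHA-256, and the digest truncation are illustrative assumptions, not details from the application or from Xu:

```python
import hashlib

def classification_index(general_attrs, resource_spec, sched_rule):
    """Hash each feature separately, then splice (concatenate) the three
    digests; the spliced value serves as the pod's classification index,
    i.e., the key identifying its equivalence class."""
    def digest(text):
        return hashlib.sha256(text.encode("utf-8")).hexdigest()[:16]
    return digest(general_attrs) + digest(resource_spec) + digest(sched_rule)

# Pods with identical features map to the same index ...
assert classification_index("app=web", "cpu=2,mem=4Gi", "spread") == \
       classification_index("app=web", "cpu=2,mem=4Gi", "spread")
# ... while any differing feature changes the index.
assert classification_index("app=web", "cpu=2,mem=4Gi", "spread") != \
       classification_index("app=web", "cpu=1,mem=2Gi", "spread")
```

Hashing per feature and then splicing (rather than hashing one concatenated string) keeps each feature's contribution at a fixed offset in the index, which is one plausible reading of why the claim recites separate hash values.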

Prosecution Timeline

Sep 14, 2023
Application Filed
Feb 20, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596576
TECHNIQUES TO EXPOSE APPLICATION TELEMETRY IN A VIRTUALIZED EXECUTION ENVIRONMENT
2y 5m to grant Granted Apr 07, 2026
Patent 12596585
DATA PROCESSING AND MANAGEMENT
2y 5m to grant Granted Apr 07, 2026
Patent 12561178
SYSTEM AND METHOD FOR MANAGING DATA RETENTION IN DISTRIBUTED SYSTEMS
2y 5m to grant Granted Feb 24, 2026
Patent 12547445
AUTO TIME OPTIMIZATION FOR MIGRATION OF APPLICATIONS
2y 5m to grant Granted Feb 10, 2026
Patent 12541396
RESOURCE ALLOCATION METHOD AND SYSTEM AFTER SYSTEM RESTART AND RELATED COMPONENT
2y 5m to grant Granted Feb 03, 2026
Based on this examiner's 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
80%
Grant Probability
99%
With Interview (+19.4%)
3y 7m
Median Time to Grant
Low
PTA Risk
Based on 663 resolved cases by this examiner. Grant probability derived from career allow rate.
