DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This action is responsive to Applicant’s Amendment filed on 9/30/2025.
Claims 1-20 are presented for examination. Claim 14 has been amended.
Applicant’s responses and amendment to the claims have overcome the 35 U.S.C. 112 rejections set forth in the non-final Office Action mailed 6/30/2025.
Examiner Notes
Examiner cites particular columns, paragraphs, figures and line numbers in the references as applied to the claims below for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claim, other passages and figures may apply as well. It is respectfully requested that, in preparing responses, the applicant fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the examiner.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-3, 5-17 and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Lin et al. (CN 112199194 A, IDS recorded; hereafter Lin) in view of Maurya et al. (US 20220035651 A1, hereafter Maurya).
Note: Applicant provided an English translation of Lin with the submitted IDS. However, the paragraph numbers in that translation do not match those of the original publication. For example, the paragraph numbered [0002] in the translation is numbered [0001] in the original publication, and the last paragraph of the translation is [0267] while the last paragraph of the original publication is [0262]. Accordingly, all Lin paragraph citations in this Office Action use the paragraph numbers of the original publication rather than those of the attached English translation.
Regarding claim 1, Lin discloses: A computer-implemented resource allocation method (see [0111]; “a resource scheduling method based on a container cluster provided by Embodiment 2 of the present invention. This embodiment uses the foregoing embodiment as a basis to further refine the operation of the scheduling component to allocate CPU resources”), comprising, in a computing environment comprising a resource management unit and a cluster comprising a cluster management node and a cluster node running an application program (see Fig. 2, [0035]-[0046] and [0097]; the dynamic-hybrid-controller discussed at [0044]-[0046] is mapped to the claimed resource management unit; the worker node having a pod online discussed at [0037], [0041]-[0042] is mapped to the claimed cluster, such worker node comprising vscaled discussed at [0097], i.e., the claimed cluster management node, and the pod discussed at [0041], i.e., the claimed cluster node running an application program. Also see [0033] and [0041], which show that a pod is reasonably considered a cluster node running a containerized software application):
receiving a request for allocating one or more system resources to the application program (see [0112]-[0113]; “Receive an update event of the instance … The resource declared by requests is the basis of the container cluster Kubernetes during scheduling”. Also see [0097]-[0103]; “dynamically adjust the CPU bound to the instance pods according to their declared binding core configuration, … Cgroup is a resource restriction mechanism provided by Linux. It can be configured and modified in the form of editing files … determine the resources (such as CPU, memory, etc.) allocated to the instance pod, and obtain resource allocation information … write the resource allocation information into the configuration file … Changes in the annotations”. In this way, the update event of the instance discussed at [0112]-[0113] is, in one reasonable embodiment, an event or request to allocate or update system resources allocated to the application program running on the pod);
retrieving, from the cluster management node, an identifier of the cluster node running the application program (see Fig. 2, [0102]-[0108], [0116]-[0122]. Fig. 2 shows at least one node or cluster running multiple pods, and thus it is required for the vscaled to know which pod of the given node is associated with the received update event/request. In addition, “$pod_id” from [0105] and [0106] also provides evidence that it is required to retrieve the identifier of the pod, i.e., the claimed cluster node running the application program, for the received update event/request in order to modify the correct configuration file for the correct pod to complete the expansion and shrinking operations discussed at [0103]); and
dynamically updating system physical resources allocated to the cluster node by updating a resource allocation file managed by an operating system of a computing machine on which the cluster is running, based on the identifier of the cluster node and the received request (see [0097]-[0108]; “dynamically adjust the CPU bound to the instance pods according to their declared binding core configuration, so that they can Modify the configuration file cgroup … Cgroup is a resource restriction mechanism provided by Linux” and “modifies the configuration file cgroup of the corresponding container to complete the expansion and shrinking operations”. Also see [0116] and [0145]; “the request/limit of the container will be converted to the configuration file linux cgroup, and the purpose of limiting the use of container resources is achieved through the configuration file cgroup” and “The configuration file cgroup is a resource restriction mechanism provided by the Linux operating system”).
Note: the claimed feature of “dynamically updating system physical resources allocated to the cluster node” under BRI is performed by the claimed “updating a resource allocation file managed by an operating system of a computing machine on which the cluster is running, based on the identifier of the cluster node and the received request”, and thus achieving the feature of “updating a resource allocation file … and the received request” would achieve the feature of “dynamically updating system physical resources allocated to the cluster node” (no matter whether updating the resource allocation file updates the system physical resources in the resource allocation file or updates virtual resource information associated with the system physical resources in the resource allocation file). If Applicant intends the claimed updating of a resource allocation file to involve updating system physical resources in the resource allocation file, then Applicant is suggested to amend the claims to further specify that the claimed updating of a resource allocation file includes updating the actual allocated system physical resources mapped to the virtual resources allocated to the cluster node in the resource allocation file.
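For illustration only, the cgroup-file mechanism quoted from Lin may be sketched as follows. The directory layout, identifiers, and function name below are the editor's assumptions for exposition and are not drawn from Lin, Maurya, or the claims:

```python
import os

def write_cpuset(cgroup_root, pod_id, container_id, cpus):
    """Bind a container to specific logical cores by editing the cpuset
    file in its cgroup directory, in the general manner Lin attributes to
    the Linux cgroup resource-restriction mechanism. Paths are illustrative.
    """
    path = os.path.join(cgroup_root, "cpuset", pod_id, container_id)
    os.makedirs(path, exist_ok=True)
    # e.g. cpus = "0-3" restricts the container to logical cores 0 through 3
    with open(os.path.join(path, "cpuset.cpus"), "w") as f:
        f.write(cpus)
    return path
```

Under this sketch, updating the allocation amounts to rewriting `cpuset.cpus` for the pod or container identified in the request, consistent with the expansion and shrinking operations Lin describes.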
Lin does not disclose: the request is received by the resource management unit and the identifier of the cluster node is retrieved by the resource management unit.
However, Maurya discloses:
receiving, by the resource management unit, a request for modifying the cluster node running the application program (see claims 5-6; “the compute deployment agent is a master worker node of a Kubernetes worker node cluster that receives requests to modify a set of machines in the Kubernetes worker node cluster … a request to modify a deployed machine, the set of machines comprising at least one of a container, and a pod that requires a connection to the VPC”. Also see [0068]; “if the request … (2) relates to a pod on the host computer of the DNPA instance to be created, removed, or modified”);
retrieving, by the resource management unit, from the cluster management node, an identifier of the cluster node running the application program (see [0071]-[0072]; “retrieves (at 910) metadata associated with the selected request. In some embodiments, retrieving the metadata includes retrieving at least a portion of the metadata from the request queue. Additional metadata, in some embodiments, is retrieved from a worker or master node of a cluster related to the request”, “identifying, at a cluster manager plugin, network elements that are affected by the cluster-level request. For example, a request to implement a load balancer or firewall requires a generation of data identifying Pods, containers, and/or machines for which to provide the load balancing … the generated data includes at least one port identifier for a requested Pod-level construct”).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the resource reallocation or modification operations for a requesting pod executing on a worker node from Lin by including a manager component of a master node that receives requests for associated worker nodes and identifies associated pods or containers for the received requests, as taught by Maurya; the combination of Lin and Maurya would thus disclose the limitations missing from Lin, since it would provide a central management mechanism to manage requests from all associated worker nodes.
Regarding Claim 2, the rejection of Claim 1 is incorporated and further the combination of Lin and Maurya discloses: wherein the cluster node is comprised in a cluster computing node of the cluster (see Fig. 2, [0037]-[0041] from Lin; “A node is a physical machine … The node can run the following components: … Pod (instance)”).
Regarding Claim 3, the rejection of Claim 1 is incorporated and further the combination of Lin and Maurya discloses: wherein the system physical resources allocated to the cluster node were allocated by the cluster management node to the cluster node (see [0097] from Lin; “vscaled (scheduling component … dynamically adjust the CPU bound to the instance pods according to their declared binding core configuration)”).
Regarding Claim 5, the rejection of Claim 1 is incorporated and further the combination of Lin and Maurya discloses: wherein the request comprises system resources of the computing machine to be allocated to the application program (see [0097], [0121] and [0137] from Lin; “dynamically adjust the CPU bound to the instance pods according to their declared binding core configuration”, “If the two instance pods declare the bound processors in turn, 4 logical cores and 2 logical cores are required, respectively”).
Regarding Claim 6, the rejection of Claim 1 is incorporated and further the combination of Lin and Maurya discloses: wherein the resource allocation file is used by the operating system to allocate resources of the computing machine to the cluster node (see [0097]-[0099], [0145]; “dynamically adjust the CPU bound to the instance pods according to their declared binding core configuration, so that they can Modify the configuration file cgroup … Cgroup is a resource restriction mechanism provided by Linux”, “modifies the configuration file cgroup of the corresponding container to complete the expansion and shrinking operations” and “The configuration file cgroup is a resource restriction mechanism provided by the Linux operating system”).
Regarding Claim 7, the rejection of Claim 1 is incorporated and further the combination of Lin and Maurya discloses: wherein the cluster node comprises one or more container nodes, wherein the method further comprises: retrieving, by the resource management unit, from the cluster management node, respective identifiers of the one or more container nodes, and wherein the resource allocation file is updated based on the identifiers of the one or more container nodes (see Fig. 2, [0102]-[0108], [0116]-[0122] from Lin. Fig. 2 shows at least one node or cluster running multiple pods, and thus it is required for the vscaled to know which pod of the given node is associated with the received update event/request. In addition, “$pod_id” and “$container_id” from [0105] and [0106] also provide evidence that it is required to retrieve the identifier of the pod and the claimed identifiers of the one or more container nodes for the received update event/request in order to modify the correct configuration file for the correct pod/container to complete the expansion and shrinking operations discussed at [0103]. Also see [0141]-[0145] from Lin; “the first configuration file cgroup of the instance pod … the second configuration file cgroup of the container in the instance pod”).
Regarding Claim 8, the rejection of Claim 1 is incorporated and further the combination of Lin and Maurya discloses:
creating, by the resource management unit, a resource allocation process running on the operating system in a cluster computing node of the cluster (see [0069], [0074] from Maurya and [0097] from Lin; “If the selected request is determined (at 815) to be related to the DNPA instance that received the notification, the DNPA instance stores (at 820) the request to a request queue 486 of the DNPA”, “the request, the retrieved metadata, and any generated data are sent (at 930) to the network manager (e.g., by the CMP 484 or PMP 483 using communication agent 482)”, “vscaled (scheduling component … dynamically adjust the CPU bound to the instance pods according to their declared binding core configuration)”. In the combination system, after the dynamic-hybrid-controller, as the claimed resource management unit or central management component of the Kubernetes system, receives a resource allocation or update request for a particular pod running on a particular worker node and retrieves the corresponding pod or container identifiers, such dynamic-hybrid-controller would forward the resource allocation or update request to the corresponding worker node for actual execution, i.e., the vscaled of the particular worker node is started to perform the resource allocation or update. Note: the vscaled already existed at the worker node; however, the actual execution of the resource allocation or update operation/process is started after the dynamic-hybrid-controller forwards the request to the corresponding worker node, and thus it is reasonable to conclude that the dynamic-hybrid-controller, i.e., the claimed resource management unit, creates/commences the resource allocation or update operation, i.e., the claimed resource allocation process, at the particular worker node associated with the request);
receiving, from the resource allocation process, a first resource allocation status of system resources currently allocated to the cluster node (see [0127]-[0128] from Lin; “the scheduling component vscaled can read the topology of the processor …. and maintain the state of the processor in the memory, where the state includes an idle state, Bound state, idle state means unbound, and bound state means bound”); and
determining a system resource allocation update based on the first resource allocation status and the received request; and wherein the resource allocation file is updated based on the system resource allocation update (see [0097], [0139]-[0145] from Lin; “vscaled (scheduling component … dynamically adjust the CPU bound to the instance pods according to their declared binding core configuration” and “if the scheduling component vscaled finds an idle state (that is, unbound) and a logical core that satisfies the configuration information as the target core, the instance pod can be bound to the target core … the sequence number of the target core can be set to the bind mode cpuset in the first configuration file cgroup of the instance pod … to update the serial number of the target core to the binding mode cpuset in the second configuration file cgroup of the container in the pod of this instance”).
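The idle/bound processor bookkeeping that Lin attributes to vscaled can be sketched as follows; the data structure, function name, and return conventions below are the editor's assumptions for illustration, not elements of Lin's disclosure:

```python
def find_target_cores(core_states, requested):
    """Select idle logical cores to bind, mirroring Lin's description of
    vscaled maintaining idle/bound processor states in memory.  Returns
    the chosen core numbers, or None when too few cores are idle (the
    case where Lin [0138] describes generating a binding-failure event).
    """
    idle = [c for c, state in sorted(core_states.items()) if state == "idle"]
    if len(idle) < requested:
        return None  # not enough idle cores: binding fails
    chosen = idle[:requested]
    for c in chosen:
        # mark bound to prevent subsequent binding by other instance pods
        core_states[c] = "bound"
    return chosen
```

The chosen core numbers would then be written into the cpuset entries of the relevant cgroup configuration files, per the passages of Lin cited above.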
Regarding Claim 9, the rejection of Claim 8 is incorporated and further the combination of Lin and Maurya discloses: transmitting to the resource allocation process a request for the first resource allocation status, wherein the first resource allocation status is received in response to the request for the first resource allocation status (see [0124], [0128] from Lin; “list is the list API that calls resources to list resource” and “the scheduling component vscaled can read the topology of the processor …. and maintain the state of the processor in the memory, where the state includes an idle state, Bound state, idle state means unbound, and bound state means bound”. That is, the list API is requested to be performed and the resources’ allocation states are then received).
Regarding Claim 10, the rejection of Claim 8 is incorporated and further the combination of Lin and Maurya discloses: receiving, from the resource allocation process, a second resource allocation status of system resources that are not currently allocated to the cluster node, wherein the system resource allocation update is further determined based on the second resource allocation status (see [0127]-[0128] and [0139]-[0145] from Lin; “the scheduling component vscaled can read the topology of the processor …. and maintain the state of the processor in the memory, where the state includes an idle state, Bound state, idle state means unbound, and bound state means bound” and “if the scheduling component vscaled finds an idle state (that is, unbound) and a logical core that satisfies the configuration information as the target core, the instance pod can be bound to the target core … the sequence number of the target core can be set to the bind mode cpuset in the first configuration file cgroup of the instance pod … to update the serial number of the target core to the binding mode cpuset in the second configuration file cgroup of the container in the pod of this instance”).
Regarding Claim 11, the rejection of Claim 1 is incorporated and further the combination of Lin and Maurya discloses: wherein dynamically updating resource allocation files managed by the operating system comprises: updating respective values of one or more resource allocation parameters configured in the resource allocation file for the cluster node (see [0097], [0141]-[0145] from Lin; “dynamically adjust the CPU bound to the instance pods according to their declared binding core configuration”, “the sequence number of the target core can be set to the binding mode cpuset in the first configuration file cgroup of the instance pod … to update the serial number of the target core to the binding mode cpuset in the second configuration file cgroup of the container in the pod of this instance”).
Regarding Claim 12, the rejection of Claim 1 is incorporated and further the combination of Lin and Maurya discloses: wherein the one or more system resources comprise CPU resources which comprise a CPU quota parameter defining a number of CPU cores, wherein the updating the resource allocation file comprises setting a value of the CPU quota parameter to a value representing a number of CPU cores allocated to the cluster node (see [0105], [0117]-[0119] from Lin; “cpuset.huya.com:4”).
Regarding Claim 13, the rejection of Claim 12 is incorporated and further the combination of Lin and Maurya discloses: wherein the value represents an integer number of CPU cores (see [0105], [0117]-[0119] from Lin; “cpuset.huya.com:4”).
Regarding Claim 14, the rejection of Claim 12 is incorporated and further the combination of Lin and Maurya discloses: wherein the cluster node is comprised in a cluster computing node of the cluster, wherein the cluster computing node is executed on a physical machine (see [0037]-[0041] from Lin; “A node is a physical machine … The node can run the following components: … Pod (instance)”), and wherein the value is determined such that a cumulative number of CPU cores allocated to cluster nodes of the cluster computing node does not exceed CPU resources that are available on the physical machine (see [0127]-[0128] and [0137] from Lin; “search for a logical core that is in an idle state and meets the configuration information as a target core”, “it maintains a list of logical cores … If the two instance pods declare the bound processors in turn, 4 logical cores and 2 logical cores are required, respectively … and other instance pods are bound to the logical cores with the sequence number [0, 25]”).
Regarding Claim 15, the rejection of Claim 12 is incorporated and further the combination of Lin and Maurya discloses: in case a cumulative number of CPU cores allocated to cluster nodes of the cluster computing node exceeds CPU resources that are available on a physical machine, responding to the request for allocating one or more system resources with a message informing that the request cannot be served (see [0138] from Lin; “in the case of many logical cores that have been bound, the logical cores in the idle state … a binding failure event can be generated, Bind the binding failure event to the current instance pod, and notify the upper application of the container cluster Kubernetes”).
Regarding Claim 16, the rejection of Claim 1 is incorporated and further the combination of Lin and Maurya discloses: wherein the one or more system resources comprise CPU resources which comprise CPU cores (see [0097] and [0131] from Lin; “dynamically adjust the CPU bound to the instance pods according to their declared binding core configuration”),
wherein the updating the resource allocation file comprises assigning all software threads of the cluster node to one or more CPU cores among the CPU cores (see [0132]-[0138] from Lin; “search the processor in the current node node to find the logic that is in the ideal state (that is, not bound) and meets the configuration information Nuclear, as the target core”, “in the case of many logical cores that have been bound, the logical cores in the idle state … a binding failure event can be generated”. All of the logical cores assigned/bound to the pod, i.e., claimed software threads, are found among the physical CPU cores of the node, and thus the logical cores or the software threads are assigned to the one or more physical CPU cores).
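For context only, the thread-to-core assignment addressed by claim 16 can also be illustrated with the standard Linux CPU-affinity interface; this is the editor's sketch of an analogous mechanism, whereas Lin achieves the binding through cgroup cpuset configuration files rather than this call:

```python
import os

def pin_process(cores):
    """Restrict the calling process to the given logical cores via the
    Linux CPU-affinity interface; threads spawned afterwards inherit the
    mask.  Analogous in effect to writing a cpuset.cpus cgroup entry."""
    os.sched_setaffinity(0, set(cores))  # pid 0 = the calling process
    return os.sched_getaffinity(0)
```

Binding every software thread of a pod to a chosen subset of physical cores, as claimed, corresponds to applying such a restriction to the pod's entire process tree.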
Regarding Claim 17, the rejection of Claim 16 is incorporated and further the combination of Lin and Maurya discloses: assigning a maximum execution priority to the execution of the cluster node on the one or more CPU cores (see [0143]-[0144] from Lin; “the state of the target core is changed from the idle state to the bound state to prevent subsequent calls by other instance pods”).
Regarding Claim 19, Claim 19 is a system claim that corresponds to method Claim 1 and is rejected for the same reasons set forth in the rejection of Claim 1 above.
Regarding Claim 20, Claim 20 is a product claim that corresponds to method Claim 1 and is rejected for the same reasons set forth in the rejection of Claim 1 above.
Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Lin et al. (CN 112199194 A, IDS recorded; hereafter Lin) in view of Maurya et al. (US 20220035651 A1, hereafter Maurya) and further in view of Merwaday et al. (US 20220038554 A1, hereafter Merwaday).
Regarding Claim 4, the rejection of Claim 1 is incorporated. The combination of Lin and Maurya does not disclose: wherein the application program is a video processing application program.
However, Merwaday discloses: receiving a request for allocating one or more system resources to the application program, wherein the application program is a video processing application program (see [0074] and [0172]; “Some additional or alternative microservices 211 a, 211 b include … Video Transcode Service (e.g., an application and/or microservice that exposes a REST API for transcoding on the edge platform HW 241)” and “a tenant specific pod controller, there will be a shared pod controller that consolidates resource allocation requests”. Also see [0071], [0146]; “The OVN/OVS-DPDK is a high-performance Data Plane microservice(s) supporting a Container Network Interface (CNI) that can be managed by a standard software-defined network (SDN) controller” and “Applications that have been adapted for edge computing include but are not limited to virtualization of traditional network functions (e.g., to operate telecommunications or Internet services) and the introduction of next-generation features and services (e.g., to support 5G network services). Use-cases which are projected to extensively utilize edge computing include connected self-driving cars, surveillance, Internet of Things (IoT) device data analytics, video encoding and analytics”).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the generic application software running on a pod of a worker node from the combination of Lin and Maurya by including running a particular type of video processing service or program on pods or containers, as taught by Merwaday; the combination of Lin, Maurya and Merwaday would thus disclose the limitations missing from the combination of Lin and Maurya, since it would provide a specific type of service to be executed on pods, containers or microservices.
Claim 18 is rejected under 35 U.S.C. 103 as being unpatentable over Lin et al. (CN 112199194 A, IDS recorded; hereafter Lin) in view of Maurya et al. (US 20220035651 A1, hereafter Maurya) and further in view of Dong et al. (US 20130167146 A1, hereafter Dong).
Regarding Claim 18, the rejection of Claim 1 is incorporated and further the combination of Lin and Maurya discloses: wherein the one or more system resources comprise CPU resources which comprise CPU cores (see [0097] and [0131] from Lin; “dynamically adjust the CPU bound to the instance pods according to their declared binding core configuration”),
wherein the updating the resource allocation file comprises assigning all software threads of the cluster node to CPU cores of [a same] physical CPU node of the cluster computing node (see [0132]-[0138] from Lin; “search the processor in the current node node to find the logic that is in the ideal state (that is, not bound) and meets the configuration information Nuclear, as the target core”, “in the case of many logical cores that have been bound, the logical cores in the idle state … a binding failure event can be generated”. All of the logical cores assigned/bound to the pod, i.e., claimed software threads, are found among the physical CPU cores of the node, and thus the logical cores or the software threads are assigned to the one or more physical CPU cores).
The combination of Lin and Maurya does not disclose: assigning all software threads of the cluster node to CPU cores of a same physical CPU node of the cluster computing node.
However, Dong discloses: assigning all software threads of the cluster node to CPU cores of a same physical CPU node of the physical computing node (see [0025], [0037]; “if virtual central processing units of virtual machine A and virtual machine B are scheduled on the same physical processing unit, due to the cache contents loaded by the virtual central processing unit of virtual machine A at execution time being reusable by the virtual central processing unit of virtual machine B” and “schedule virtual central processing units of virtual machines of a group on the same physical processing unit”. Note: although Dong uses terms/objects like “virtual central processing units” instead of the claimed “software threads”, it is reasonable to consider the threads of such virtual central processing units as the claimed software threads, because it is understood that threads are the smallest execution components in computing and the threads of such virtual central processing units are software-implemented due to virtualization technology).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the allocations of virtual processors of a requesting pod or container from the combination of Lin and Maurya by including allocating or assigning all virtual processors of a group of virtual machines on the same physical processor, as taught by Dong; the combination of Lin, Maurya and Dong would thus disclose the limitations missing from the combination of Lin and Maurya, since it would provide a mechanism for allocating processor resources efficiently by placing related processes on the same processor (see [0025] from Dong; “the cache contents loaded by the virtual central processing unit of virtual machine A at execution time being reusable by the virtual central processing unit of virtual machine B”).
Response to Arguments
Applicant’s arguments, filed 9/30/2025, with respect to the rejections of claims 1-20 under 35 U.S.C. 103 have been fully considered but they are not persuasive.
Applicant’s arguments at pages 7-13 are summarized as the following:
For independent claim 1, “on page 6 the Office Action alleges that the limitation reciting ‘receiving, (…), a request for allocating one or more system resources to the application program’ is disclosed by paragraphs [0112]-[0113] of Lin” (see 5th paragraph of page 7 of the Remarks), and “Applicant notes that paragraphs [0115]-[0118] of Lin does not provide any disclosure as to what is an ‘update event of an instance,’ and which node of Lin receives such ‘update event of an instance’.” Examiner appears to connect the two disclosures of Lin (i.e., [0097]-[0103] and [0115]-[0118]) because both disclosures use the same word “instance”, and then Examiner “assume[s] that the ‘update event of an instance’ is received by a scheduling component ‘vscaled’” (see 2nd-3rd paragraphs of page 8 of the Remarks).
For independent claim 1, “it does not necessarily mean that Lin further teaches that the ‘vscaled’ scheduling component having received the ‘update event’ would retrieve an identifier of the pod” (see 2nd paragraph of page 9 of the Remarks); “the Office Action considers an alleged correspondence between the claimed ‘cluster management node’ and the same ‘vscaled’ scheduling component. As a consequence, the correspondences alleged by the Office Action, which would require that the ‘vscaled’ component retrieves a pod identifier from itself, cannot hold” (see 4th paragraph of page 9 of the Remarks). In addition, “$pod_id” from [0105]-[0106] of Lin “merely teaches using a pod identifier, without any link to the actual retrieval of such identifier from a cluster management node as required by claim 1” (see second-to-last paragraph of page 9 of the Remarks). Furthermore, “$pod_id” from [0108]-[0113] of Lin “is merely cited as part of cgroup paths to be modified for updating different types of resources. As a consequence, Applicant submits that Lin is silent with respect to retrieving a ‘pod_id’ of a pod that would allegedly correspond to the claimed ‘cluster node running the application.’” (see 1st paragraph of page 10 of the Remarks).
For independent claim 1, Applicant argues that “Maurya fails to provide that which Lin lacks”. More specifically, Applicant argues that “claim 5 of Maurya merely teaches ‘a master worker node of a Kubernetes worker node cluster’ receiving requests to modify a set of machines in a Kubernetes worker node cluster, which cannot correspond to a ‘request for allocating one or more system resources to the application program’ as recited in claim 1” (see first three paragraphs of page 11 of the Remarks), and that “paragraph [0068] of Lin also does not provide any teaching as to which node receives a request that would correspond to the claimed ‘request for allocating one or more system resource to the application program’” (see 4th paragraph of page 11 of the Remarks). In addition, Applicant argues that “paragraphs [0071]-[0072] of Lin teach” (note: in the Remarks, Applicant cites [0071]-[0072] of Lin; however, this should be [0071]-[0072] of Maurya) “that further to selecting a request, ‘metadata associated with the selected request’ be retrieved, and that after retrieving the metadata, ‘data related to the request’ may be generated. There is therefore no teaching in these paragraphs of an identifier being retrieved. As a consequence, these paragraphs are also silent with respect to a node performing such retrieval” (see 2nd paragraph of page 12 of the Remarks).
The additional references, Merwaday and Dong, fail to provide that which Lin or Maurya lacks (see pages 13-14 of the Remarks).
The examiner respectfully disagrees.
The paragraphs from Lin that Applicant quoted in the Remarks include the statement “This embodiment uses the foregoing embodiment as a basis to further redefine the operation of the scheduling component to allocate CPU resources. The method specifically includes the following steps: Step 301: Receive an update event of an instance” (emphasis added by Examiner, see page 7 of the Remarks). Based on this statement, one of ordinary skill in the art would understand that the two disclosures that Applicant identified are connected or linked, which supports the “assumption” by Examiner that Applicant argued against. In addition, [0121] of Lin expressly states that “the scheduling component vscaled in the node where the instance pod is located may receive an update event for the instance pod, the update event including configuration information for the binding processor”. Examiner did cite [0121] (the further explanation/description of “Step 301: Receive an update event of an instance”) from Lin and explained in the rejection that this disclosure allows “the vscaled to know which pod of the given node is associated with the received update event/request”.
Furthermore, in the rejection of claim 1, Examiner clearly explained that Lin is not relied upon to teach that the claimed resource management unit (i.e., the dynamic-hybrid-controller of Lin, which Examiner mapped to the claimed resource management unit) receives the claimed request or retrieves the claimed identifier of the cluster node (see “Lin does not disclose: the request is received by the resource management unit and the identifier of cluster node is retrieved by the resource management unit” on page 7 of the non-Final Office Action). Examiner relied on Lin only to teach that similar claimed “receiving” and “retrieving” steps/actions exist, not that these “receiving” and “retrieving” steps/actions are performed by the claimed resource management unit or by the dynamic-hybrid-controller of Lin.
First, [0103] of Lin (or [0108] of the English translation; Applicant quoted this paragraph at page 9 of the Remarks) does discuss the scheduling component vscaled, which “modifies the configuration file cgroup of the container component docker daemonset to complete the expansion and reduction operations”, and states that “the cgroup paths that need to be modified for different resources are … /$pod_id/ …”. Based on this description, one of ordinary skill in the art would understand that the vscaled is required to know the pod_id of the associated cgroup file in order to modify the correct cgroup file. In addition, Examiner also explained that multiple pods are running on the cluster, and thus the vscaled is required to know which pod is associated with the update event or request in order to modify the correct cgroup file. Accordingly, the logic of Applicant’s argument that “it does not necessarily mean that Lin further teaches that the ‘vscaled’ scheduling component having received the ‘update event’ would retrieve an identifier of the pod” is not clear; otherwise, the vscaled would not know which pod instance is requesting an update event and which pod instance is not. Note: Examiner does not consider or assume that “such update event includes a pod identifier” as Applicant argued in the Remarks. The feature considered and explained in the rejection is that the vscaled is required to know the pod identifier in order to know which pod instance is related to the update event and then modify the corresponding cgroup file at the correct cgroup path containing the $pod_id information.
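As a minimal illustration of the mechanism discussed above (this is illustrative code only, not code from Lin; the cgroup mount point, directory layout, and file names below are assumptions, and real cgroup hierarchies vary by container runtime and cgroup version), a component receiving an update event can only modify the correct cgroup file if it first knows the pod identifier embedded in the path:

```python
# Illustrative sketch: a scheduling component that knows a pod identifier
# can construct the pod-specific cgroup path and rewrite the CPU quota
# file there. Path layout and file names are assumed for illustration.
from pathlib import Path

CGROUP_ROOT = Path("/sys/fs/cgroup/cpu/kubepods")  # assumed mount point

def cgroup_quota_path(pod_id: str) -> Path:
    # The path embeds the pod identifier, so the component must know
    # which pod the update event refers to before it can locate the file.
    return CGROUP_ROOT / f"pod{pod_id}" / "cpu.cfs_quota_us"

def update_cpu_quota(pod_id: str, quota_us: int) -> None:
    # Writing a new quota value performs the "expansion and reduction"
    # of the pod's CPU share by editing its cgroup configuration file.
    cgroup_quota_path(pod_id).write_text(str(quota_us))
```

Without the pod identifier, the component could not distinguish the cgroup directory of the pod requesting the update from those of the other pods running on the same node.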
Once again, as explained in response a) above, Examiner relied on Lin only to teach that similar receiving and similar retrieving steps exist, not that the claimed resource management unit or the dynamic-hybrid-controller of Lin receives the claimed request and retrieves the claimed identifier. The feature of the vscaled retrieving the pod identifier shows that some later component is able to retrieve that pod identifier from the vscaled, since the vscaled knows which pod is associated with the update event and which cgroup of the pod is modified (i.e., it knows the pod identifier, which is the claimed identifier of the cluster node running the application program).
For reference Maurya, claim 5 clearly states that it is “a master worker node of a Kubernetes worker node cluster” that receives requests. Paragraph [0068] is cited as supplemental support, in addition to claims 5-6, for the position that one of ordinary skill in the art would reasonably consider the requests received by the master worker node in claim 5 of Maurya to be the claimed “request for allocating one or more system resources to the application program”. Both claim 6 and [0068] state that the requests received by the master worker node can be pod-level requests, and claim 6 further specifies such a request as “a request to modify a deployed machine, the set of machines comprising at least one of a container, and a pod that requires a connection to the VPC”. Furthermore, according to [0004] of Maurya, particularly “The machines, in some embodiments, are connected to the VPC by the network manager by assigning a set of network addresses (e.g., IP (internet protocol) addresses, MAC (media access control) addresses, ports, etc.) to the machines and updating a set of network elements (e.g. forwarding elements) of the VPC to use at least one network address in the set of network addresses to connect to the machines”, such a request from claim 6 or [0068] is at least a request for allocating system resources (such as network addresses or MAC addresses) to the application of a pod instance (note: it is understood that a pod instance is used to run a software application). In this way, Maurya does teach a master worker node of a Kubernetes worker node cluster receiving a request for allocating system resources to an application running at a cluster node. In addition, the dynamic-hybrid-controller of Lin, according to [0044], [0048], [0100] and [0200], is reasonably considered a master worker node that manages or controls the resources of the other worker nodes in the Kubernetes worker node cluster.
In this way, the combination of Lin and Maurya reasonably teaches the limitation “receiving, by the resource management unit, a request for allocating one or more system resources to the application program” as required by current claim 1.
For the feature related to the claimed retrieving of an identifier, [0072] of Maurya expressly recites “data identifying Pods” and “one port identifier for a requested Pod-level construct”. The “data identifying Pods” is understood to be a pod identifier, i.e., the claimed identifier of the cluster node running the application program. Based on claims 5-6 and [0068] of Maurya, and the responses above explaining how claims 5-6 and [0068] teach receiving a request for allocating system resources to an application program running on a pod or cluster node, the “one port identifier for a requested Pod-level construct” is understood as a port identifier associated with the particular pod instance making the modify request. It is then also required to retrieve the pod identifier to identify which pod instance made the modify request for the port identifier, since multiple pod instances are running (see “The host computers (i.e., nodes) execute a set of Pods … A cluster, in some embodiments, is partitioned into a set of namespaces into which different Pods or containers are deployed” from [0030], and “data identifying Pods” from [0072] of Maurya).
In addition, according to [0071], particularly “a process 900 for processing queued requests at a DNPA instance. In some embodiments, the process is performed by both a Pod manager plugin (e.g., PMP 483) and a cluster manager plugin (e.g., CMP 484) to process pod-level and cluster-level requests, respectively” and “retrieving the metadata includes retrieving at least a portion of the metadata from the request queue. Additional metadata, in some embodiments, is retrieved from a worker or master node of a cluster related to the request”, and according to Fig. 4 of Maurya, it is also the master worker node or master DNPA instance (particularly the PMP 483 and CMP 484 running on the master worker node or master DNPA instance) that generates or retrieves the pod identifier based on metadata from the worker node running the pod instance that made the modify request.
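The queue-and-metadata pattern described above can be sketched as follows. The class and field names (PodRequest, pod_id) are hypothetical illustrations of the described pattern, not code or identifiers from Maurya:

```python
# Hypothetical sketch: a manager dequeues a pod-level request, retrieves
# its metadata, and obtains from that metadata the identifier of the pod
# the request concerns. Only then can it act on the correct pod instance
# among the many pods running in the cluster.
from dataclasses import dataclass
from collections import deque

@dataclass
class PodRequest:
    metadata: dict  # assumed fields, e.g. {"pod_id": ..., "port": ...}

class PodManager:
    def __init__(self) -> None:
        self.queue: deque = deque()

    def submit(self, request: PodRequest) -> None:
        self.queue.append(request)

    def process_next(self) -> str:
        request = self.queue.popleft()
        # Retrieving the pod identifier from the request metadata is what
        # tells the manager *which* of the running pods is addressed.
        return request.metadata["pod_id"]

mgr = PodManager()
mgr.submit(PodRequest(metadata={"pod_id": "pod-42", "port": 8080}))
pod_id = mgr.process_next()
```

The point the sketch makes is structural: processing a queued pod-level request necessarily involves recovering the identifier of the pod it targets.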
As explained in the response above, it is reasonable to consider the dynamic-hybrid-controller of Lin a master worker node that manages or controls the resources of the other worker nodes in the Kubernetes worker node cluster. In this way, the combination of Lin and Maurya reasonably teaches the limitation “retrieving, by the resource management unit, from the cluster management node, an identifier of the cluster node running the application program” as required by current claim 1.
See responses a)-c) above for how the combination of Lin and Maurya teaches the limitations that Applicant argued.
Therefore, Claims 1-20 are rejected.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Fan et al. (WO2019128540A1, English translation provided by Google Patents) discloses: Cgroup is a mechanism that can be used to limit, record, and isolate the physical resources (such as CPU, memory, I/O, and other resources) used by applications, processes, or threads, including the time or proportion of resources used. When a new resource group needs to be added, or the resource usage priority and resource scheduling policy of a resource group need to be modified, the Cgroup configuration file can be modified (see lines 25-37 of page 4).
Kuang (CN 112162827 A-English translation provided by Google Patents) discloses: The load condition of the container affects the utilization of the physical resources, and quota reduction (allocation reduction) is to reduce the upper limit of the physical resources which can be used by the container by modifying the cgroup (control groups) configuration to which the container belongs (see [0003]).
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ZHI CHEN whose telephone number is (571)272-0805. The examiner can normally be reached on M-F from 9:30AM to 5:30PM.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, April Y Blair can be reached on 571-270-1014. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from Patent Center and the Private Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from Patent Center or Private PAIR. Status information for unpublished applications is available through Patent Center and Private PAIR to authorized users only. Should you have questions about access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) Form at https://www.uspto.gov/patents/uspto-automated-interview-request-air-form.
/Zhi Chen/
Patent Examiner, AU2196
/APRIL Y BLAIR/Supervisory Patent Examiner, Art Unit 2196