DETAILED ACTION
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant's arguments filed July 10, 2025 have been fully considered but they are not persuasive. Applicant's arguments with respect to the 35 USC § 112 rejections are moot in light of Applicant's amendments. With respect to the 35 USC § 103 rejections, Applicant asserts that on June 30, 2025, the examiner indicated that the claims, with the proposed amendments, are allowable over the prior art. (Applicant's Remarks, Pgs. 10-11). Examiner respectfully disagrees. Although Applicant's amendments overcome the prior rejection, the prior art applied in the prior rejection is still applicable to the current set of claims.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 3-4, 6, 8, 10-11, 13, 15, 17-18, 20, and 22-24 are rejected under 35 U.S.C. 103 as being unpatentable over Krishnan et al. (US 2019/0324820) in view of Tsai et al. (US 11461123).
As per claim 1, Krishnan teaches the invention substantially as claimed including a network interface device ([0188], The processor platform 1100 can be, for example, a server, a personal computer, a workstation, a self-learning machine (e.g., a neural network), or any other type of computing device) comprising:
interface circuitry ([0191], The processor platform 1100 of the illustrated example also includes an interface circuit 1120. The interface circuit 1120 may be implemented by any type of interface standard, such as an Ethernet interface, a universal serial bus (USB), a Bluetooth® interface, a near field communication (NFC) interface, and/or a PCI express interface);
machine-readable instructions ([0105], machine readable instructions may be one or more executable program(s) or portion(s) of one or more executable program(s) for execution by a computer processor such as the processor 1112 shown in the example processor platform 1100); and
at least one processor circuit ([0109], the processor 1112 can be implemented by one or more integrated circuits, logic circuits, microprocessors, GPUs, DSPs, or controllers from any desired family or manufacturer) to utilize the machine-readable instructions to:
process telemetry data associated with a first resource of the network interface device ([0091], resource status analyzer 420 may obtain a CPU utilization of each of the workload domain servers 312 included in the first workload domain 129);
transfer the first resource from a main pool of resources of the network interface device to an available pool of resources of the network interface device ([0109], workload domain manager 208 generates a free pool of resources based on the policy. For example, the resource pool handler 460 (FIG. 4) may instruct the resource allocator 430 (FIG. 4) to compose and add the plurality of the free pool servers 310 of FIG. 3 to the free pool 302…the resource pool handler 460 determines a quantity of the free pool servers 310 and/or a configuration, a type, etc., of each of the free pool servers 310 to be composed based on a computation cost, historical information, a health status, etc., and/or a combination thereof) after a determination that the first resource meets a resource inactivity threshold of time that the first resource is not utilized ([0091], the resource status analyzer 420 right-sizes a workload domain based on information associated with the workload domain. For example, the resource status analyzer 420 may obtain a CPU utilization of each of the workload domain servers 312 included in the first workload domain 129. The example resource status analyzer 420 may determine that one or more of the workload domain servers 312 can be contracted based on a surplus or an overprovisioning of CPU resources to the first workload domain 129 based on the CPU utilization of each of the workload domain servers 312. The example resource status analyzer 420 may transfer one or more workloads from an underutilized one of the workload domain servers 312 to one or more of the other workload domain servers 312. In response to the transfer(s), the example resource status analyzer 420 may direct the example resource deallocator 440 to move the underutilized one of the workload domain servers 312 to the free pool 302); and
utilizing the available pool of resources, instantiate a virtual platform at the network interface device after the available pool of resources includes a threshold number of resources ([0070], the policy 304 may include one or more availability requirements, capacity requirements, network requirements, etc., associated with an operation of the workload domains 129, 131, 133 of FIG. 1; and [0115], the example workload domain manager 208 determines that one or more of the health statuses satisfy the respective thresholds based on the policy, then, at block 516, the workload domain manager 208 allocates resource(s) in the free pool to the workload domain(s). For example, the resource allocator 430 may allocate one or more of the free pool servers 310 to the first workload domain 129; Examiner Note: Krishnan’s workload domains comprise virtual platforms: [0027], the term “workload domain” refers to virtual hardware policies or subsets of virtual resources of a VM), at least one of the main pool of resources to communicate network traffic associated with at least one device in communication with the network interface device ([0029], Cloud computing allows ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., a pool of hardware resources, etc.). A cloud computing customer can request allocations of such resources to support services required by those customers. For example, when a customer requests to run one or more services in the cloud computing environment, one or more workload domains may be created based on resources in the shared pool of configurable computing resources).
Krishnan fails to specifically teach, the virtual platform to execute an edge service workload.
However, Tsai teaches, the virtual platform to execute an edge service workload (Column 8, Lines 59-64, the network orchestrator 132 may migrate virtualized resources throughout the environment 100 in order to balance a load of virtualized resources across various hosts within the environment 100, which may be in the cloud provider substrate(s) 102, the first edge location 108, or the second edge location 110; and Column 57, Lines 35-40, an edge location can be an extension of the substrate of the cloud provider network 1602 including a limited quantity of capacity provided outside of an availability zone (e.g., in an edge location or other facility of the cloud provider that is located close to a customer workload and that may be distant from any availability zones)).
Krishnan and Tsai are analogous because they are each related to resource allocation. Krishnan teaches a method for resource allocation based on resource policies including resource utilization thresholds. (Abstract, a resource status analyzer to determine a health status of a first virtualized server of a workload domain, compare the health status to a decomposition threshold based on a policy, and transfer a workload of the first virtualized server to a second virtualized server of the workload domain when the health status satisfies the decomposition threshold). Tsai teaches a method of resource allocation including virtual machine migration based on resource capabilities in an edge computing environment and virtual machine migration to reduce latency (Abstract, devices, and techniques for live migrating virtualized resources between the main region and edge locations… The disclosed technology provides various techniques for dynamically selecting which virtualized resource data, and how much virtualized resource data, is transferred over each of the pre-copy and post-copy stages; and Column 5, Line 64-Column 6, Line 3, Various implementations of the present disclosure can be used to improve the technological field of cloud-based networking by enabling secure upscaling of cloud-based resources using edge locations. In addition, by enabling the use of edge locations near user devices to host cloud-based services, communication latency between user devices and hosted services can be reduced and possibly eliminated). 
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify Krishnan’s policy-based resource allocation mechanism with Tsai’s edge-based virtual machine deployment mechanism in order to provide policy-based resource allocation to edge locations, resulting in a system that can create/migrate virtual machines based on various considerations including resource availability and distance. Therefore, it would have been obvious to combine the teachings of Krishnan and Tsai.
As per claim 3, Tsai teaches, wherein one or more of the at least one processor circuit is to instantiate the virtual platform based on a resource requirement of an edge orchestrator (Column 8, Lines 11-14, a network orchestrator 132 may be configured to orchestrate migrations of virtualized resources between hosts located throughout the cloud provider substrate(s) 102, the first edge location 108, and the second edge location 110; and Column 8, Lines 40-45, the network orchestrator 132 may determine that a virtualized resource should be migrated from a source host to a target host. Initiation of the migration of the virtualized resource and an optimized placement of the virtualized resource may be based on network use patterns).
As per claim 4, Krishnan teaches, wherein one or more of the at least one processor circuit is to instantiate the virtual platform responsive to an indication that the threshold number of resources have been identified in the available pool of resources ([0070], the policy 304 may include one or more availability requirements, capacity requirements, network requirements, etc., associated with an operation of the workload domains 129, 131, 133 of FIG. 1; and [0115], the example workload domain manager 208 determines that one or more of the health statuses satisfy the respective thresholds based on the policy, then, at block 516, the workload domain manager 208 allocates resource(s) in the free pool to the workload domain(s). For example, the resource allocator 430 may allocate one or more of the free pool servers 310 to the first workload domain 129).
As per claim 6, Krishnan teaches, wherein one or more of the at least one processor circuit is to identify the threshold number of resources for instantiation of the virtual platform based on the … workload ([0086], the policy analyzer 410 maps requirements, specifications, etc., from the data center operator 306, the external client 308, etc., into one or more policy rules used to manage the free pool 302; and [0115], the example workload domain manager 208 determines that one or more of the health statuses satisfy the respective thresholds based on the policy, then, at block 516, the workload domain manager 208 allocates resource(s) in the free pool to the workload domain(s). For example, the resource allocator 430 may allocate one or more of the free pool servers 310 to the first workload domain 129).
Krishnan fails to specifically teach, an edge service workload.
However, Tsai teaches an edge service workload (Column 8, Lines 11-14, a network orchestrator 132 may be configured to orchestrate migrations of virtualized resources between hosts located throughout the cloud provider substrate(s) 102, the first edge location 108, and the second edge location 110; and Column 51, Lines 61-65, a customer may specify the desired resources of an instance type and/or requirements of a workload that the instance will run, and the instance type selection functionality may select an instance type based on such a specification).
The same motivation used in the rejection of claim 1 is applicable to the instant claim.
As per claim 8, this is a “non-transitory computer readable medium claim” that is encompassed in the scope of claim 1 and is rejected for the same reasons. The same motivation used in the rejection of claim 1 is applicable to the instant claim.
As per claim 10, this claim is similar to claim 3 and is rejected for the same reasons.
As per claim 11, this claim is similar to claim 4 and is rejected for the same reasons.
As per claim 13, this claim is similar to claim 6 and is rejected for the same reasons.
As per claim 15, this is the “method claim” corresponding to claim 8 and is rejected for the same reasons. The same motivation used in the rejection of claim 8 is applicable to the instant claim.
As per claim 17, this claim is similar to claim 3 and is rejected for the same reasons.
As per claim 18, this claim is similar to claim 4 and is rejected for the same reasons.
As per claim 20, this claim is similar to claim 6 and is rejected for the same reasons.
As per claim 22, Krishnan teaches, wherein the at least one processor circuit includes:
at least one of a central processing unit ([0105], computer processor such as the processor 1112 shown in the example processor platform 1100 discussed below in connection with FIG. 11), a graphic processing unit, or a digital signal processor, the at least one of the central processing unit, the graphic processing unit, or the digital signal processor having control circuitry to control data movement within the at least one processor circuit ([0189], processor 1112 can be implemented by one or more integrated circuits, logic circuits, microprocessors, GPUs, DSPs, or controllers from any desired family or manufacturer. The hardware processor may be a semiconductor based (e.g., silicon based) device. In this example, the processor 1112 implements the example policy analyzer 410, the example resource status analyzer 420, the example resource allocator 430, the example resource deallocator 440, the example firmware handler 450, the example resource pool handler 460, and the example resource discoverer 470 of FIG. 4), arithmetic and logic circuitry to perform one or more first operations corresponding to the machine-readable instructions ([0189], processor 1112 implements the example policy analyzer 410, the example resource status analyzer 420, the example resource allocator 430, the example resource deallocator 440, the example firmware handler 450, the example resource pool handler 460, and the example resource discoverer 470 of FIG. 4.), and one or more registers to store a first result of the one or more first operations ([0104], the example resource discoverer 470 is/are hereby expressly defined to include a non-transitory computer readable storage device or storage disk such as a memory; and [0106], the example processes of FIGS. 5-10 may be implemented using executable instructions (e.g., computer and/or machine readable instructions) stored on a non-transitory computer and/or machine readable medium such as a hard disk drive, a flash memory, a read-only memory, a compact disk, a digital versatile disk, a cache, a random-access memory, and/or any other storage device or storage disk in which information is stored for any duration (e.g., for extended time periods, permanently, for brief instances, for temporarily buffering, and/or for caching of the information));
a Field Programmable Gate Array (FPGA), the FPGA including first logic gate circuitry, a plurality of configurable interconnections, and storage circuitry, the first logic gate circuitry and interconnections to perform one or more second operations, the storage circuitry to store a result of the one or more second operations; or
Application Specific Integrated Circuitry (ASIC) including second logic gate circuitry to perform one or more third operations ([0105], any or all of the blocks may be implemented by one or more hardware circuits (e.g., … an ASIC, …) structured to perform the corresponding operation without executing software or firmware).
As per claim 23, Tsai teaches, wherein the instructions cause one or more of the at least one processor circuit to register the virtual platform with an edge orchestrator (Column 11, Lines 47-49, A controller 150 may indicate, to the network orchestrator 132, utilization data associated with various virtualized resources).
As per claim 24, Tsai teaches, including registering the virtual platform with an edge orchestrator (Column 11, Lines 47-49, A controller 150 may indicate, to the network orchestrator 132, utilization data associated with various virtualized resources).
Claims 5, 12, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Krishnan-Tsai as applied to independent claims 1, 8, and 15 above, and further in view of Abraham et al. (US 2020/0059420).
As per claim 5, the combination of Krishnan-Tsai fails to specifically teach, wherein one or more of the at least one processor circuit is to perform zero-touch provisioning of the virtual platform utilizing the available pool of resources.
However, Abraham teaches wherein one or more of the at least one processor circuit is to perform zero-touch provisioning of the virtual platform utilizing the available pool of resources ([0011], Automatically creating each template needed for each computing infrastructure based on an intended multi-cloud topology expressed using a high-level language, such as YAML Ain′t Markup Language (YAML), may reduce the time to provision the topology and permit, at least in some cases, zero-touch provisioning).
The combination of Krishnan-Tsai and Abraham are analogous because they are each related to resource allocation. Krishnan teaches a method for resource allocation based on resource policies including resource utilization thresholds. Tsai teaches a method of resource allocation including virtual machine migration based on resource capabilities in an edge computing environment and virtual machine migration to reduce latency. Abraham teaches a method of resource allocation during virtual machine provisioning using zero-touch provisioning. (Abstract, obtaining, by a computing device, a high-level topology description for a virtual computing environment to be provisioned in a plurality of computing infrastructures; and [0011], Automatically creating each template needed for each computing infrastructure based on an intended multi-cloud topology expressed using a high-level language, such as YAML Ain′t Markup Language (YAML), may reduce the time to provision the topology and permit, at least in some cases, zero-touch provisioning). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Krishnan-Tsai with Abraham’s zero-touch provisioning mechanism in order to provision virtual machines, resulting in a system that can create/migrate virtual machines based on various considerations including resource availability and distance. Therefore, it would have been obvious to combine the teachings of the combination of Krishnan-Tsai and Abraham.
As per claim 12, this claim is similar to claim 5 and is rejected for the same reasons. The same motivation used in the rejection of claim 5 is applicable to the instant claim.
As per claim 19, this claim is similar to claim 5 and is rejected for the same reasons. The same motivation used in the rejection of claim 5 is applicable to the instant claim.
Claims 7, 14, and 21 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Krishnan-Tsai as applied to independent claims 1, 8, and 15 above, and further in view of Abodunrin et al. (US 2019/0042741).
As per claim 7, Krishnan teaches, wherein the determination is a first determination ([0091], the resource status analyzer 420 right-sizes a workload domain based on information associated with the workload domain. For example, the resource status analyzer 420 may obtain a CPU utilization of each of the workload domain servers 312 included in the first workload domain 129. The example resource status analyzer 420 may determine that one or more of the workload domain servers 312 can be contracted based on a surplus or an overprovisioning of CPU resources to the first workload domain 129 based on the CPU utilization of each of the workload domain servers 312. The example resource status analyzer 420 may transfer one or more workloads from an underutilized one of the workload domain servers 312 to one or more of the other workload domain servers 312. In response to the transfer(s), the example resource status analyzer 420 may direct the example resource deallocator 440 to move the underutilized one of the workload domain servers 312 to the free pool 302).
The combination of Krishnan-Tsai fails to specifically teach, one or more of the at least one processor circuit is to: determine if the network interface device includes resource management capabilities; and in response to a second determination that the network interface device includes the resource management capabilities, enable the resource management capabilities.
However, Abodunrin teaches, one or more of the at least one processor circuit is to:
determine if the network interface device includes resource management capabilities ([0048], The physical function 308 is configured to be discovered, managed, and manipulated like any other peripheral device (e.g., a PCIe device) …the physical function 308 is configured to have full configuration access to resources such that the physical function 308 can configure, assign, or otherwise control a physical resource of the NIC 120); and
in response to a second determination that the network interface device includes the resource management capabilities, enable the resource management capabilities ([0048], the physical function 308 is configured to have full configuration access to resources such that the physical function 308 can configure, assign, or otherwise control a physical resource of the NIC 120. As such, depending on embodiment, the NIC 120 can present multiple virtual instances of itself to multiple hosts (e.g., to a VM 302, a container, a hypervisor, a processor core, etc.)).
The combination of Krishnan-Tsai and Abodunrin are analogous because they are each related to resource allocation. Krishnan teaches a method for resource allocation based on resource policies including resource utilization thresholds. Tsai teaches a method of resource allocation including virtual machine migration based on resource capabilities in an edge computing environment and virtual machine migration to reduce latency. Abodunrin teaches a method of resource allocation in an edge computing environment including a network card capable of resource provisioning and virtual machine management. ([0018], the physical functions may be used by a virtual machine manager (e.g., the VMM 202 of FIGS. 2 and 3) to manage the NIC 120 and any virtual functions (see, e.g., the virtual functions 306 of FIG. 3) associated therewith; [0040], The virtual machine manager 202, which may be embodied as hardware, firmware, software, virtualized hardware, emulated architecture, and/or a combination thereof as discussed above, is configured to create and run virtual machines (VMs); and [0048], The physical function 308 is configured to be discovered, managed, and manipulated like any other peripheral device (e.g., a PCIe device). For example, the physical function 308 may be embodied as a virtualized PCI function that is capable of performing a given functionality of the NIC 120…the physical function 308 is configured to have full configuration access to resources such that the physical function 308 can configure, assign, or otherwise control a physical resource of the NIC 120). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Krishnan-Tsai with Abodunrin’s NIC “physical function 308” in order to discover and implement available management capabilities, resulting in a system that can create/migrate virtual machines using a NIC.
Therefore, it would have been obvious to combine the teachings of the combination of Krishnan-Tsai and Abodunrin.
As per claim 14, this claim is similar to claim 7 and is rejected for the same reasons. The same motivation used in the rejection of claim 7 is applicable to the instant claim.
As per claim 21, this claim is similar to claim 7 and is rejected for the same reasons. The same motivation used in the rejection of claim 7 is applicable to the instant claim.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure and is as follows:
Bockelmann et al. (US 11171834): Discusses registering and monitoring virtual computing infrastructure (Column 23, Lines 43-47, Scheduler 322 monitors for newly created or requested virtual execution elements (e.g., virtual machines) and selects a host on which the virtual execution elements are to run. Scheduler 322 may select a host based on resource requirements, hardware constraints, software constraints, policy constraints, locality, etc.; and Claim 6, the computing device receives an indication that a virtual machine is registered and notifies the orchestration agent that the virtual machine is registered, wherein the orchestration agent is configured to receive network configuration data for a virtual network interface of the virtual machine).
Applicant's amendment necessitated the new grounds of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MELISSA A HEADLY whose telephone number is (571)272-1972. The examiner can normally be reached Monday-Friday, 9:00 am-5:30 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Bradley Teets can be reached at 571-272-3338. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MELISSA A. HEADLY/
Examiner Art Unit 2197
/BRADLEY A TEETS/Supervisory Patent Examiner, Art Unit 2197