DETAILED ACTION
This Office action is in response to the application filed on 9/30/2023. Claims 1–20 are pending.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 4–8, 10, 11, 13, 14, 17, 19 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Wang et al. (US 20190173841, hereinafter Wang), in view of Gombert et al. (US 20150143363, hereinafter Gombert), and further in view of Tsirkin (US 20160124763).
As per claim 1, Wang discloses: In a computing device having a hypervisor for managing configurations of at least one virtual machine, a method for detecting pinning misconfigurations, the method comprising: receiving a plurality of packets to be processed by computing resources of the computing device, the computing resources including a plurality of physical cores; assigning the plurality of packets to a virtual core of a virtual machine, the virtual core being associated with a physical core of the plurality of physical cores on the computing device; (Wang [0028]: “Each of the VNIC RSS queues 227 may be associated with a virtual CPU (e.g., a different virtual CPU) from one or more virtual CPUs 225… a virtual CPU may correspond to different resources (e.g., physical CPU or execution core, time slots, compute cycles, etc.) of one or more physical CPUs 203 of host machine 200. When receiving incoming packets (e.g., not including encapsulated ESP encrypted packets) VNIC 226 may compute a hash value based on header attributes of the incoming packets and distribute the incoming packets among the VNIC RSS queues 227 associated with VNIC 226… The incoming packets stored in each VNIC RSS queue 227 are then processed by the corresponding virtual CPU 225 associated with the VNIC RSS queue 227.”; [0030]: “for an incoming packet, the RSS may select a VNIC RSS queue 227 and, consequently, a corresponding virtual CPU 225 that begins processing the hardware interrupt handler, while the RPS may select the same or another virtual CPU for performing protocol processing. 
In some cases, RPS provides an additional software mechanism to ensure load balancing in the event that the hashing functions used by the RSS are not fairly distributing the incoming traffic among all virtual CPUs 225.”) Wang did not explicitly disclose: tracking a duration of time associated with processing the plurality of packets by running a machine loop on the plurality of packets; comparing the duration of time to an estimated duration of time of processing the plurality of packets; determining, based on a difference between the duration of time and the estimated duration of time exceeding a threshold difference, that a pinning misconfiguration exists between the virtual core and the computing resources; and performing a mediation action for resolving the pinning misconfiguration between the virtual core and the computing resources. However, Gombert teaches: tracking a duration of time associated with processing the plurality of [jobs] by running a machine loop on the plurality of [jobs]; comparing the duration of time to an estimated duration of time of processing the plurality of [jobs]; determining, based on a difference between the duration of time and the estimated duration of time exceeding a threshold difference; (Gombert [0065]: “the forecast module 216 also determines a tolerance value for each job type… the forecast module 216 determines the tolerance value based on the retrieved historical statistics data 240.” [0076]: “the job statistics module 222 may identify a virtual machine as an under-performing virtual machine if the standard deviations of the actual execution times for jobs of a particular type executed on the virtual machine exceeds the tolerance value for the job type. 
For example, if the standard deviation of actual execution time for jobs of job type T.sub.1 is 9 ms, and the tolerance value for the job type T.sub.1 is 8.08 ms, the virtual machine being used to execute the jobs of job type T.sub.1 may be identified as under-performing.”; [0079]: “a calibration is performed on the one or more under-performing virtual machines.”; [0082]: “the one or more under-performing virtual machines are released.”) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Gombert into that of Wang in order to track a duration of time associated with processing the plurality of [jobs] by running a machine loop on the plurality of [jobs], compare the duration of time to an estimated duration of time of processing the plurality of [jobs], and determine that a difference between the duration of time and the estimated duration of time exceeds a threshold difference. Wang [0030] further teaches distributing incoming network traffic to a number of different vCPUs for processing, for load balancing purposes. Gombert figure 3B shows a commonly known method for load balancing and for optimizing the scheduling and processing of tasks in a VM environment: identifying underperforming VMs by comparing the actual execution times for jobs against a precomputed tolerance value, and performing optimization on the underperforming VMs. One of ordinary skill would readily see that combining the teaching of Gombert into that of Wang merely combines known parts in the field to achieve the predictable result of a more accurate load balancing mechanism; the claim is therefore rejected under 35 USC 103. Tsirkin teaches: that a pinning misconfiguration exists between the virtual core and the computing resources; and performing a mediation action for resolving the pinning misconfiguration between the virtual core and the computing resources.
(Tsirkin [0015]: “the predefined threshold condition may be set such that for a VM with two virtual processors, any virtual processor that exerts a load of less than 25% of the total capacity of the physical processor may be reassigned. Thus, if the hypervisor determines that each of the virtual processors of the VM only places a load of 10% of the capacity of the physical processor, the hypervisor may determine that that the threshold condition for reassignment has been met. The hypervisor may then identify a single host processor capable of processing the load for the plurality of virtual processors of the virtual machine and assign the virtual processors to the identified host processor. Subsequently, the hypervisor can enable polling of the shared device and notify the guest OS that polling has been enabled.”) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Tsirkin into that of Wang and Gombert in order to determine that a pinning misconfiguration exists between the virtual core and the computing resources, and to perform a mediation action for resolving the pinning misconfiguration between the virtual core and the computing resources. Gombert figure 3B shows a commonly known method for load balancing and for optimizing the scheduling and processing of tasks in a VM environment: identifying underperforming VMs by comparing the actual execution times for jobs against a precomputed tolerance value, and performing optimization on the underperforming VMs. Although Gombert uses releasing the underperforming VM as one example of optimization, one of ordinary skill in the art would readily recognize that other variations of load balancing and optimization can be adapted here without deviating from the established teachings of the prior art.
Tsirkin teaches reassigning a virtual core to a different physical core in response to detecting that the virtual core is underperforming; one of ordinary skill would readily see that combining the teaching of Tsirkin into that of Wang and Gombert merely combines known parts in the field to achieve the predictable result of load balancing by remapping virtual cores to physical cores; the claim is therefore rejected under 35 USC 103. As per claim 4, the combination of Wang, Gombert and Tsirkin further teaches: The method of claim 1, further comprising polling a source of the plurality of packets at polling cycles to determine availability of the plurality of packets, wherein assigning the plurality of packets to the virtual core is based on a response to the polling. (Wang [0028]) As per claim 5, the combination of Wang, Gombert and Tsirkin further teaches: The method of claim 1, further comprising receiving a burst of packets at the computing device, wherein the plurality of packets is a first subset of packets from the burst of packets, and wherein assigning the plurality of packets to the virtual core includes causing a load balancing entity to provide the first subset of packets from the burst of packets to the virtual core while one or more additional subsets of packets from the burst of packets are assigned to one or more additional virtual cores of the virtual machine. (Wang [0028] – [0030]) As per claim 6, the combination of Wang, Gombert and Tsirkin further teaches: The method of claim 5, wherein the load balancing entity is a network interface controller (NIC) configured to selectively deliver packets to requesting virtual cores on the virtual machine. (Wang [0028] – [0030]) As per claim 7, the combination of Wang, Gombert and Tsirkin further teaches: The method of claim 5, wherein the load balancing entity is a load balancer that receives the burst of packets from one or more input/output (I/O) virtual cores.
(Wang figure 2 and [0028] – [0030]) As per claim 8, the combination of Wang, Gombert and Tsirkin further teaches: The method of claim 1, wherein the estimated duration of time is based at least in part on a number of packets within the plurality of packets. (Gombert [0075] – [0076]) As per claim 10, the combination of Wang, Gombert and Tsirkin further teaches: The method of claim 1, wherein the computing device is a server node hosting the virtual machine on a cloud computing system. (Wang figure 2.) As per claim 11, Wang discloses: A computing system having a hypervisor for managing configurations of at least one virtual machine, the computing system comprising: at least one processor; memory in electronic communication with the at least one processor; and instructions stored in the memory (Wang [0056]), the instructions being executable by the at least one processor to: receive a plurality of packets to be processed by computing resources of the computing device, the computing resources including a plurality of physical cores; assign the plurality of packets to a virtual core of a virtual machine, the virtual core being associated with a physical core of the plurality of physical cores on the computing device; (Wang [0028]: “Each of the VNIC RSS queues 227 may be associated with a virtual CPU (e.g., a different virtual CPU) from one or more virtual CPUs 225… a virtual CPU may correspond to different resources (e.g., physical CPU or execution core, time slots, compute cycles, etc.) of one or more physical CPUs 203 of host machine 200.
When receiving incoming packets (e.g., not including encapsulated ESP encrypted packets) VNIC 226 may compute a hash value based on header attributes of the incoming packets and distribute the incoming packets among the VNIC RSS queues 227 associated with VNIC 226… The incoming packets stored in each VNIC RSS queue 227 are then processed by the corresponding virtual CPU 225 associated with the VNIC RSS queue 227.”; [0030]: “for an incoming packet, the RSS may select a VNIC RSS queue 227 and, consequently, a corresponding virtual CPU 225 that begins processing the hardware interrupt handler, while the RPS may select the same or another virtual CPU for performing protocol processing. In some cases, RPS provides an additional software mechanism to ensure load balancing in the event that the hashing functions used by the RSS are not fairly distributing the incoming traffic among all virtual CPUs 225.”) Wang did not explicitly disclose: track a duration of time associated with processing the plurality of packets by running a machine loop on the plurality of packets; compare the duration of time to an estimated duration of time of processing the plurality of packets; determine, based on a difference between the duration of time and the estimated duration of time exceeding a threshold difference, that a pinning misconfiguration exists between the virtual core and the computing resources; and cause the hypervisor to be reconfigured to prevent pinning misconfigurations between virtual cores on the computing device and the plurality of physical cores.
However, Gombert teaches: tracking a duration of time associated with processing the plurality of [jobs] by running a machine loop on the plurality of [jobs]; comparing the duration of time to an estimated duration of time of processing the plurality of [jobs]; determining, based on a difference between the duration of time and the estimated duration of time exceeding a threshold difference; (Gombert [0065]: “the forecast module 216 also determines a tolerance value for each job type… the forecast module 216 determines the tolerance value based on the retrieved historical statistics data 240.” [0076]: “the job statistics module 222 may identify a virtual machine as an under-performing virtual machine if the standard deviations of the actual execution times for jobs of a particular type executed on the virtual machine exceeds the tolerance value for the job type. For example, if the standard deviation of actual execution time for jobs of job type T.sub.1 is 9 ms, and the tolerance value for the job type T.sub.1 is 8.08 ms, the virtual machine being used to execute the jobs of job type T.sub.1 may be identified as under-performing.”; [0079]: “a calibration is performed on the one or more under-performing virtual machines.”; [0082]: “the one or more under-performing virtual machines are released.”) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Gombert into that of Wang in order to track a duration of time associated with processing the plurality of [jobs] by running a machine loop on the plurality of [jobs], compare the duration of time to an estimated duration of time of processing the plurality of [jobs], and determine that a difference between the duration of time and the estimated duration of time exceeds a threshold difference.
Wang [0030] further teaches distributing incoming network traffic to a number of different vCPUs for processing, for load balancing purposes. Gombert figure 3B shows a commonly known method for load balancing and for optimizing the scheduling and processing of tasks in a VM environment: identifying underperforming VMs by comparing the actual execution times for jobs against a precomputed tolerance value, and performing optimization on the underperforming VMs. One of ordinary skill would readily see that combining the teaching of Gombert into that of Wang merely combines known parts in the field to achieve the predictable result of a more accurate load balancing mechanism; the claim is therefore rejected under 35 USC 103. Tsirkin teaches: that a pinning misconfiguration exists between the virtual core and the computing resources; causing the hypervisor to be reconfigured to prevent pinning misconfigurations between virtual cores on the computing device and the plurality of physical cores. (Tsirkin [0015]: “the predefined threshold condition may be set such that for a VM with two virtual processors, any virtual processor that exerts a load of less than 25% of the total capacity of the physical processor may be reassigned. Thus, if the hypervisor determines that each of the virtual processors of the VM only places a load of 10% of the capacity of the physical processor, the hypervisor may determine that that the threshold condition for reassignment has been met. The hypervisor may then identify a single host processor capable of processing the load for the plurality of virtual processors of the virtual machine and assign the virtual processors to the identified host processor.
Subsequently, the hypervisor can enable polling of the shared device and notify the guest OS that polling has been enabled.”) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Tsirkin into that of Wang and Gombert in order to determine that a pinning misconfiguration exists between the virtual core and the computing resources, and to cause the hypervisor to be reconfigured to prevent pinning misconfigurations between virtual cores on the computing device and the plurality of physical cores. Gombert figure 3B shows a commonly known method for load balancing and for optimizing the scheduling and processing of tasks in a VM environment: identifying underperforming VMs by comparing the actual execution times for jobs against a precomputed tolerance value, and performing optimization on the underperforming VMs. Although Gombert uses releasing the underperforming VM as one example of optimization, one of ordinary skill in the art would readily recognize that other variations of load balancing and optimization can be adapted here without deviating from the established teachings of the prior art. Tsirkin teaches reassigning a virtual core to a different physical core in response to detecting that the virtual core is underperforming; one of ordinary skill would readily see that combining the teaching of Tsirkin into that of Wang and Gombert merely combines known parts in the field to achieve the predictable result of load balancing by remapping virtual cores to physical cores; the claim is therefore rejected under 35 USC 103.
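For illustration only, the duration comparison mapped above can be sketched as follows. All function names, parameters, and values here are hypothetical and are not drawn from the claims or from Wang, Gombert, or Tsirkin; the sketch merely models comparing an actual processing duration against an estimated duration and flagging a pinning misconfiguration when the difference exceeds a threshold.

```python
# Hypothetical sketch only: models the mapped logic of comparing an actual
# processing duration against an estimated duration and flagging a pinning
# misconfiguration when the difference exceeds a threshold. Names and
# constants are illustrative assumptions, not drawn from the cited references.

def estimate_duration_ms(num_packets: int, per_packet_ms: float = 0.05) -> float:
    """Estimated processing time, assumed here to scale with packet count
    (cf. claim 8, where the estimate is based on the number of packets)."""
    return num_packets * per_packet_ms

def pinning_misconfigured(actual_ms: float, num_packets: int,
                          threshold_ms: float = 10.0) -> bool:
    """True when the actual duration exceeds the estimate by more than the
    threshold difference, indicating a possible pinning misconfiguration."""
    return (actual_ms - estimate_duration_ms(num_packets)) > threshold_ms
```

Under these assumed values, an actual time of 75 ms for 1,000 packets exceeds the 50 ms estimate by 25 ms, more than the 10 ms threshold, so a misconfiguration would be flagged, whereas 55 ms would not be.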
As per claim 13, the combination of Wang, Gombert and Tsirkin further teaches: The computing system of claim 11, wherein the instructions are further executable to cause the at least one processor to receive a burst of packets at the computing device, wherein assigning the plurality of packets to the virtual core includes causing a load balancing entity to provide a first subset of packets from the burst of packets to the virtual core while one or more additional subsets of packets from the burst of packets are assigned to one or more additional virtual cores of the virtual machine. (Wang [0028] – [0030]) As per claim 14, the combination of Wang, Gombert and Tsirkin further teaches: The computing system of claim 13, wherein the load balancing entity is a network interface controller (NIC) configured to selectively deliver packets to requesting virtual cores on the virtual machine, and wherein the load balancing entity is a load balancer that receives the burst of packets from one or more input/output (I/O) virtual cores.
(Wang [0028] – [0030]) As per claim 17, Wang discloses: In a computing device having a hypervisor for managing configurations of at least one virtual machine, a method for detecting pinning misconfigurations, the method comprising: polling, by a virtual machine, a source of packets to determine if any packets are available for processing; receiving a plurality of packets to be processed by computing resources of a computing device, the computing resources including a plurality of physical cores, and wherein the plurality of packets are assigned to a virtual core of the virtual machine, the virtual core being associated with a physical core of the plurality of physical cores on the computing device; (Wang [0028]: “Each of the VNIC RSS queues 227 may be associated with a virtual CPU (e.g., a different virtual CPU) from one or more virtual CPUs 225… a virtual CPU may correspond to different resources (e.g., physical CPU or execution core, time slots, compute cycles, etc.) of one or more physical CPUs 203 of host machine 200. When receiving incoming packets (e.g., not including encapsulated ESP encrypted packets) VNIC 226 may compute a hash value based on header attributes of the incoming packets and distribute the incoming packets among the VNIC RSS queues 227 associated with VNIC 226… The incoming packets stored in each VNIC RSS queue 227 are then processed by the corresponding virtual CPU 225 associated with the VNIC RSS queue 227.”; [0030]: “for an incoming packet, the RSS may select a VNIC RSS queue 227 and, consequently, a corresponding virtual CPU 225 that begins processing the hardware interrupt handler, while the RPS may select the same or another virtual CPU for performing protocol processing. 
In some cases, RPS provides an additional software mechanism to ensure load balancing in the event that the hashing functions used by the RSS are not fairly distributing the incoming traffic among all virtual CPUs 225.”) Wang did not explicitly disclose: tracking a duration of time associated with processing the plurality of packets by running a machine loop on the plurality of packets; comparing the duration of time to an estimated duration of time of processing the plurality of packets; determining, based on a difference between the duration of time and the estimated duration of time exceeding a threshold difference, that a pinning misconfiguration exists between the virtual core and the computing resources; and performing a mediation action for resolving the pinning misconfiguration between the virtual core and the computing resources. However, Gombert teaches: tracking a duration of time associated with processing the plurality of [jobs] by running a machine loop on the plurality of [jobs]; comparing the duration of time to an estimated duration of time of processing the plurality of [jobs]; determining, based on a difference between the duration of time and the estimated duration of time exceeding a threshold difference; (Gombert [0065]: “the forecast module 216 also determines a tolerance value for each job type… the forecast module 216 determines the tolerance value based on the retrieved historical statistics data 240.” [0076]: “the job statistics module 222 may identify a virtual machine as an under-performing virtual machine if the standard deviations of the actual execution times for jobs of a particular type executed on the virtual machine exceeds the tolerance value for the job type. 
For example, if the standard deviation of actual execution time for jobs of job type T.sub.1 is 9 ms, and the tolerance value for the job type T.sub.1 is 8.08 ms, the virtual machine being used to execute the jobs of job type T.sub.1 may be identified as under-performing.”; [0079]: “a calibration is performed on the one or more under-performing virtual machines.”; [0082]: “the one or more under-performing virtual machines are released.”) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Gombert into that of Wang in order to track a duration of time associated with processing the plurality of [jobs] by running a machine loop on the plurality of [jobs], compare the duration of time to an estimated duration of time of processing the plurality of [jobs], and determine that a difference between the duration of time and the estimated duration of time exceeds a threshold difference. Wang [0030] further teaches distributing incoming network traffic to a number of different vCPUs for processing, for load balancing purposes. Gombert figure 3B shows a commonly known method for load balancing and for optimizing the scheduling and processing of tasks in a VM environment: identifying underperforming VMs by comparing the actual execution times for jobs against a precomputed tolerance value, and performing optimization on the underperforming VMs. One of ordinary skill would readily see that combining the teaching of Gombert into that of Wang merely combines known parts in the field to achieve the predictable result of a more accurate load balancing mechanism; the claim is therefore rejected under 35 USC 103. Tsirkin teaches: that a pinning misconfiguration exists between the virtual core and the computing resources; and performing a mediation action for resolving the pinning misconfiguration between the virtual core and the computing resources.
(Tsirkin [0015]: “the predefined threshold condition may be set such that for a VM with two virtual processors, any virtual processor that exerts a load of less than 25% of the total capacity of the physical processor may be reassigned. Thus, if the hypervisor determines that each of the virtual processors of the VM only places a load of 10% of the capacity of the physical processor, the hypervisor may determine that that the threshold condition for reassignment has been met. The hypervisor may then identify a single host processor capable of processing the load for the plurality of virtual processors of the virtual machine and assign the virtual processors to the identified host processor. Subsequently, the hypervisor can enable polling of the shared device and notify the guest OS that polling has been enabled.”) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Tsirkin into that of Wang and Gombert in order to determine that a pinning misconfiguration exists between the virtual core and the computing resources, and to perform a mediation action for resolving the pinning misconfiguration between the virtual core and the computing resources. Gombert figure 3B shows a commonly known method for load balancing and for optimizing the scheduling and processing of tasks in a VM environment: identifying underperforming VMs by comparing the actual execution times for jobs against a precomputed tolerance value, and performing optimization on the underperforming VMs. Although Gombert uses releasing the underperforming VM as one example of optimization, one of ordinary skill in the art would readily recognize that other variations of load balancing and optimization can be adapted here without deviating from the established teachings of the prior art.
Tsirkin teaches reassigning a virtual core to a different physical core in response to detecting that the virtual core is underperforming; one of ordinary skill would readily see that combining the teaching of Tsirkin into that of Wang and Gombert merely combines known parts in the field to achieve the predictable result of load balancing by remapping virtual cores to physical cores; the claim is therefore rejected under 35 USC 103. As per claim 19, the combination of Wang, Gombert and Tsirkin further teaches: The method of claim 17, further comprising: receiving a burst of packets at the computing device, wherein the plurality of packets is a first subset of packets from the burst of packets; and causing a load balancing entity to provide the first subset of packets from the burst of packets to the virtual core while one or more additional subsets of packets from the burst of packets are assigned to one or more additional virtual cores of the virtual machine. (Wang [0028] – [0030]) As per claim 20, the combination of Wang, Gombert and Tsirkin further teaches: The method of claim 17, wherein the computing device is a server node hosting the virtual machine on a cloud computing system. (Wang figure 2.) Claims 2, 12 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Wang, Gombert and Tsirkin, and further in view of Chari et al. (US 20210360319, hereinafter Chari). As per claim 2, the combination of Wang, Gombert and Tsirkin did not teach: The method of claim 1, wherein the virtual machine is configured to run a data plane development kit (DPDK) application in which each virtual core of the virtual machine is assumed to have access to a dedicated physical core of the plurality of physical cores.
However, Chari teaches: The method of claim 1, wherein the virtual machine is configured to run a data plane development kit (DPDK) application in which each virtual core of the virtual machine is assumed to have access to a dedicated physical core of the plurality of physical cores. (Chari [0088].) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Chari into that of Wang, Gombert and Tsirkin in order to have the virtual machine configured to run a data plane development kit (DPDK) application in which each virtual core of the virtual machine is assumed to have access to a dedicated physical core of the plurality of physical cores. The claimed VM running a DPDK application with a specific core mapping merely recites an intended use for the VM; it would be obvious for an applicant to try to apply the teachings of the prior art to their specific requirements without deviating from the general teachings of the prior art, and the claim is therefore rejected under 35 USC 103. As per claim 12, the combination of Wang, Gombert and Tsirkin did not teach: The computing system of claim 11, wherein the virtual machine is configured to run a data plane development kit (DPDK) application in which each virtual core of the virtual machine is assumed to have access to a dedicated physical core of the plurality of physical cores. However, Chari teaches: The computing system of claim 11, wherein the virtual machine is configured to run a data plane development kit (DPDK) application in which each virtual core of the virtual machine is assumed to have access to a dedicated physical core of the plurality of physical cores. (Chari [0088].)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Chari into that of Wang, Gombert and Tsirkin in order to have the virtual machine configured to run a data plane development kit (DPDK) application in which each virtual core of the virtual machine is assumed to have access to a dedicated physical core of the plurality of physical cores. The claimed VM running a DPDK application with a specific core mapping merely recites an intended use for the VM; it would be obvious for an applicant to try to apply the teachings of the prior art to their specific requirements without deviating from the general teachings of the prior art, and the claim is therefore rejected under 35 USC 103. As per claim 18, the combination of Wang, Gombert and Tsirkin did not teach: The method of claim 17, wherein the virtual machine is configured to run a data plane development kit (DPDK) application in which each virtual core of the virtual machine is assumed to have access to a dedicated physical core of the plurality of physical cores. However, Chari teaches: The method of claim 17, wherein the virtual machine is configured to run a data plane development kit (DPDK) application in which each virtual core of the virtual machine is assumed to have access to a dedicated physical core of the plurality of physical cores. (Chari [0088].) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Chari into that of Wang, Gombert and Tsirkin in order to have the virtual machine configured to run a data plane development kit (DPDK) application in which each virtual core of the virtual machine is assumed to have access to a dedicated physical core of the plurality of physical cores.
The claimed VM running a DPDK application with a specific core mapping merely recites an intended use for the VM; it would have been obvious for an applicant to apply the teachings of the prior art to their specific requirements without deviating from the general teachings of the prior art. The claim is therefore rejected under 35 U.S.C. 103.

Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Wang, Gombert and Tsirkin, and further in view of Choi et al (US 20150277954, hereinafter Choi).

As per claim 3, the combination of Wang, Gombert and Tsirkin did not teach: The method of claim 1, wherein assigning the plurality of packets to the virtual core includes assigning a number of packets to the virtual core based on an upper limit of packets that the virtual core is configured to process within an iteration of the machine loop. However, Choi teaches: The method of claim 1, wherein assigning the plurality of packets to the virtual core includes assigning a number of packets to the virtual core based on an upper limit of packets that the virtual core is configured to process within an iteration of the machine loop. (Choi [0051].) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Choi into that of Wang, Gombert and Tsirkin in order to assign the plurality of packets to the virtual core by assigning a number of packets to the virtual core based on an upper limit of packets that the virtual core is configured to process within an iteration of the machine loop.
Choi [0051] teaches that whether a virtual processor has reached a packet-traffic threshold is a determining factor for load-balancing purposes. Applicant has thus merely claimed a combination of known parts in the field to achieve the predictable result of scheduling and executing packets and performing appropriate load balancing to ensure the performance of the system; the claim is therefore rejected under 35 U.S.C. 103.

Allowable Subject Matter

Claims 9 and 15-16 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Warner et al (US 20220070277) teaches “a head end is connected to a plurality of customer devices through a transmission network includes a remote fiber node that converts received data to analog data suitable to be provided on a coaxial cable for the plurality of customer devices. The head end includes vCore instantiated on one of the servers of the head end configured to provide services to the plurality of customer devices through the transmission network.” Kim et al (US 20160085571) teaches “Examples perform selection of non-uniform memory access (NUMA) nodes for mapping of virtual central processing unit (vCPU) operations to physical processors. A CPU scheduler evaluates the latency between various candidate processors and the memory associated with the vCPU, and the size of the working set of the associated memory, and the vCPU scheduler selects an optimal processor for execution of a vCPU based on the expected memory access latency and the characteristics of the vCPU and the processors. Some examples contemplate monitoring system characteristics and rescheduling the vCPUs when other placements may provide improved performance and/or efficiency.
” Chen et al (US 7802073) teaches “The methods and systems include a virtual core management component adapted to map one or more virtual cores to at least one of the physical cores to enable execution of one or more programs by the at least one physical core. The one or more virtual cores include one or more logical states associated with the execution of the one or more programs. The methods and systems may include a memory component adapted to store the one or more virtual cores. The virtual core management component may be adapted to transfer the one or more virtual cores from the memory component to the at least one physical core.”

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHARLES M SWIFT whose telephone number is (571) 270-7756. The examiner can normally be reached Monday - Friday, 9:30 AM - 7 PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, April Blair, can be reached at (571) 270-1014. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/CHARLES M SWIFT/
Primary Examiner, Art Unit 2196