Prosecution Insights
Last updated: April 18, 2026
Application No. 18/156,584

DYNAMIC MIGRATION BETWEEN RECEIVE SIDE SCALING (RSS) ENGINE STATES

Non-Final OA: §103, §112
Filed: Jan 19, 2023
Examiner: DASCOMB, JACOB D
Art Unit: 2198
Tech Center: 2100 — Computer Architecture & Software
Assignee: VMware, Inc.
OA Round: 3 (Non-Final)
Grant Probability: 86% (Favorable)
OA Rounds: 3-4
To Grant: 2y 12m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 86% (379 granted / 440 resolved; +31.1% vs TC avg), above average
Interview Lift: +20.5% for resolved cases with interview (strong)
Avg Prosecution: 2y 12m typical timeline; 43 applications currently pending
Total Applications: 483 across all art units (career history)

Statute-Specific Performance

§101: 11.8% (-28.2% vs TC avg)
§102: 3.5% (-36.5% vs TC avg)
§103: 55.0% (+15.0% vs TC avg)
§112: 18.2% (-21.8% vs TC avg)
Comparisons are against the Tech Center average estimate; based on career data from 440 resolved cases.

Office Action

Rejections under §103 and §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 9 March 2026 has been entered.

Response to Arguments

Applicant's arguments filed 9 March 2026 have been fully considered, but they are not persuasive. Applicant contends that:

The Office Action has not established that "remapping data flows between hardware queues," as described in the cited passage of Holla, teaches or suggests migrating a virtual machine from a shared RSS engine state to a dedicated RSS engine state in which a dedicated RSS engine is dedicated to the virtual machine, or to a second shared RSS engine state associated with a second shared RSS engine, as recited in amended claim 1. Remarks at 11.

The Examiner respectfully disagrees that the currently cited prior art does not teach the features of claim 1. Holla (US 2019/0334829) teaches that RSS hardware queues are used for managing traffic of VMs (¶ 3, "Filters belonging to VM kernel NICs (management or infrastructure traffic) are applied to this RSS queue," and ¶ 51, "The data packets that include a MAC address of a VTEP flow, which is destined to a VTEP 204 and subsequently to VM1 120 and/or VM2 122, may be mapped onto logical queue 230"). The RSS hardware queues in Holla are disclosed as being dedicated and/or shared (¶¶ 23 and 24, "VM could request for RSS pool upfront for its filter . . . or VM could reserve an exclusive RSS pool"). Holla teaches that, in response to an RSS hardware queue exceeding a threshold of CPU usage (FIG. 6, step 620), an indirection table is remapped (step 660), which causes "remapping some the data flows from the particular hardware queue to other hardware queues that are underutilized and/or experience light loads" (¶ 96). Remapping data flows of a VM from one hardware queue to another is functionally equivalent to the disclosed "migrating a first VM . . . from a shared RSS engine state . . . to either a dedicated RSS engine . . . or to a . . . shared RSS engine," as recited in claim 1. Accordingly, the Examiner maintains that claim 1 would have been obvious to a person having ordinary skill in the art in view of Holla, Agarwal, and Luo.
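Illustrative aside: the mechanism the Examiner relies on from Holla (FIG. 6, steps 610-660; ¶ 96) amounts to threshold-triggered remapping of an indirection table. A minimal Python sketch follows; the threshold value, queue counts, and function name are illustrative assumptions, not details from Holla.

# Sketch: when a hardware queue's CPU usage exceeds a threshold, rewrite the
# indirection table (hash bucket -> hardware queue) so that buckets pointing at
# the hot queue are spread over underutilized queues.
CPU_USAGE_THRESHOLD = 0.80  # assumed value; Holla does not specify one

def rebalance(indirection_table, cpu_usage):
    """indirection_table: list of hardware-queue ids, one entry per hash bucket.
    cpu_usage: dict mapping hardware-queue id -> fractional CPU usage."""
    hot = {q for q, u in cpu_usage.items() if u > CPU_USAGE_THRESHOLD}   # steps 620/630
    cold = sorted((q for q in cpu_usage if q not in hot), key=cpu_usage.get)
    if not hot or not cold:
        return indirection_table  # nothing overloaded, or nowhere to move load
    remapped = list(indirection_table)
    for bucket, queue in enumerate(remapped):                            # step 660
        if queue in hot:
            remapped[bucket] = cold[bucket % len(cold)]  # spread over light queues
    return remapped

# Example: buckets 0-3 all hash to queue 0, which is running hot.
table = [0, 0, 0, 0, 1, 1, 2, 2]
usage = {0: 0.95, 1: 0.30, 2: 0.10, 3: 0.05}
print(rebalance(table, usage))  # -> [3, 2, 1, 3, 1, 1, 2, 2]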
Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 5, 12, and 19 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claim 5 recites the limitation "the shared RSS engine" in line 4. There is insufficient antecedent basis for this limitation in the claim: "a shared RSS engine" has not been defined; only "a first shared RSS engine" and a "second shared RSS engine" have been defined. Appropriate correction is required. Claims 12 and 19 are indefinite for the same reason as claim 5.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Holla (US 2019/0334829) in view of Agarwal (US 9,571,426) and further in view of Luo (US 11,025,547).
Regarding claim 1, Holla teaches:

A method comprising: monitoring a traffic load of a first shared Receive Side Scaling (RSS) engine of a physical network interface card (PNIC) of a host machine (¶ 91, "In step 610, a CPU usage of hardware queues in RSS pools implemented in a PNIC are monitored"), the first shared RSS engine being shared among a first plurality of virtual machines (VMs) running on the host machine (¶ 33, "By following above mentioned process, isolation is provided to different infrastructure traffics, VMs, so that each is guaranteed with minimum number of hardware queues and its own indirection table in the RSS pool"); determining the traffic load of the first shared RSS engine exceeds a threshold (¶ 92, "In step 620, it is determined whether a CPU usage of any RSS hardware queue has increased above a threshold value"); and in response to determining that the traffic load of the first shared RSS engine exceeds the threshold (¶ 93, "If in step 630, it is determined that a CPU usage of a particular hardware queue in a particular RSS pool has increased above the threshold value, then step 640 is performed"), migrating a first VM of the first plurality of VMs from a shared RSS engine state (¶ 26, "VM can also share same RSS pool for its filters") associated with the first shared RSS engine to either a dedicated RSS engine state (¶ 24, "VM could reserve an exclusive RSS pool") in which a dedicated RSS engine of the PNIC is dedicated to the first VM (¶ 33, "isolation is provided to different infrastructure traffics, VMs, so that each is guaranteed with minimum number of hardware queues and its own indirection table in the RSS pool") or to a second shared RSS engine state associated with a second shared RSS engine of the PNIC (¶ 96, "In step 660, an indirection table associated with the particular logical queue is modified to reduce the load carried by the particular hardware queue. The indirection table may be modified by for example, remapping some the data flows from the particular hardware queue to other hardware queues that are underutilized and/or experience light loads").

Holla does not teach as clearly as Agarwal teaches: the first shared RSS engine being shared among a first plurality of virtual machines (VMs) (col. 4:61-67 and col. 5:1-3, "some or all of the VMs upon their initialization (in other embodiments, the default pool includes . . . (3) hardware-feature pool that includes queues associated with a particular hardware feature, such as LRO and RSS;"). It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to have applied the known technique of the first shared RSS engine being shared among a first plurality of virtual machines (VMs), as taught by Agarwal, in the same way to the first shared RSS engine, as taught by Holla. Both inventions are in the field of processing VM transmissions using RSS engines, and combining them would have predictably resulted in a method that "efficiently and dynamically manages multiple queues that process traffic to and from multiple virtual machines (VMs) executing on a host," as indicated by Agarwal (abstract).

Holla and Agarwal do not teach, but Luo discloses: monitoring the traffic load (col. 4:5-9, "the dedicated kernel threads execute on dedicated CPU cores (e.g., one kernel thread per CPU core) to proactively poll physical NICs (PNICs) of the host computer and virtual NICs (VNICs) of the machines (e.g., VMs)") comprises monitoring kernel thread activity (col. 4:1-3, "one or more dedicated kernel threads to process network traffic on a host computer executing multiple machines (such as virtual machines or containers)") indicating the traffic load (col. 6:5-9, "to achieve a high throughput across the enhanced network stack, the SFE in some embodiments uses the load balancer 200 to distribute the polled network devices across multiple Lcores in a load balanced manner"). It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to have applied the known technique of monitoring the traffic load comprising monitoring kernel thread activity indicating the traffic load, as taught by Luo, in the same way to the monitoring of the traffic load, as taught by Holla and Agarwal. Both inventions are in the field of load balancing kernel threads, and combining them would have predictably resulted in avoiding the problem of leaving resources underutilized, as indicated by Luo (col. 1:43-48).
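Illustrative aside: as mapped above, the claim 1 method monitors a shared RSS engine's aggregate load and, when a threshold is exceeded, migrates one VM either to a dedicated RSS engine or to a second shared RSS engine. A rough Python sketch under stated assumptions follows; the class, the threshold value, and the rule of picking the heaviest VM are illustrative choices, not limitations of the claim or disclosures of the references.

from dataclasses import dataclass, field

@dataclass
class RssEngine:
    name: str
    dedicated_to: str | None = None           # VM name once the engine is dedicated
    vms: dict = field(default_factory=dict)   # VM name -> traffic load

    @property
    def load(self):
        return sum(self.vms.values())

LOAD_THRESHOLD = 100.0  # assumed threshold on the shared engine's aggregate load

def migrate_if_overloaded(shared, spare_engines, other_shared):
    """If the first shared engine is overloaded, move its heaviest VM either to a
    spare engine (dedicated RSS engine state) or to the least-loaded other shared
    engine (second shared RSS engine state). Assumes at least one target exists."""
    if shared.load <= LOAD_THRESHOLD:
        return None                            # threshold not exceeded; no migration
    vm = max(shared.vms, key=shared.vms.get)   # one possible selection rule
    load = shared.vms.pop(vm)
    if spare_engines:
        target = spare_engines.pop()
        target.dedicated_to = vm               # dedicated RSS engine state
    else:
        target = min(other_shared, key=lambda e: e.load)  # second shared RSS engine state
    target.vms[vm] = load
    return vm, target.name

# Example: rss0 is shared by vm1 and vm2 and exceeds the threshold; rss1 is free.
rss0 = RssEngine("rss0", vms={"vm1": 80.0, "vm2": 40.0})
print(migrate_if_overloaded(rss0, [RssEngine("rss1")], []))  # -> ('vm1', 'rss1')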
Regarding claim 2, Agarwal teaches:

The method of claim 1, further comprising: determining a traffic load of a second VM of the first plurality of VMs drops below a second threshold (col. 5:44, "When the VM's traffic falls below a threshold"); and in response to determining that the traffic load of the second VM of the first plurality of VMs drops below the second threshold, migrating the second VM to a no RSS engine state (col. 5:44-46, "When the VM's traffic falls below a threshold, the queue management system of some embodiments moves the VM back to a default queue," and col. 4:58-67 and col. 5:1-3, "the queue management system groups the queues into four types of pools. These are: (1) a default pool that includes in some embodiments one default queue . . . (3) hardware-feature pool that includes queues associated with a particular hardware feature, such as LRO and RSS").

Regarding claim 3, Holla teaches:

The method of claim 1, further comprising, in response to determining that the traffic load of the first shared RSS engine exceeds the threshold: selecting the first VM for migration (¶ 96, "In step 660, an indirection table associated with the particular logical queue is modified to reduce the load carried by the particular hardware queue. The indirection table may be modified by for example, remapping some the data flows from the particular hardware queue to other hardware queues that are underutilized and/or experience light loads."). Agarwal teaches: selecting the first VM for migration based on a traffic load of the first VM (col. 5:14-18, "When a VM's traffic exceeds a pre-set threshold, the system determines if there is a pool matching VM's traffic requirement (e.g., if there is an LLR pool for an LLR VM that is exceeding its threshold), and if so, the system assigns the VM to that pool").

Regarding claim 4, Holla teaches:

The method of claim 1, further comprising, in response to determining that the traffic load of the first shared RSS engine exceeds the threshold: selecting the first VM for migration (¶ 96, "In step 660, an indirection table associated with the particular logical queue is modified to reduce the load carried by the particular hardware queue. The indirection table may be modified by for example, remapping some the data flows from the particular hardware queue to other hardware queues that are underutilized and/or experience light loads."). Agarwal teaches: selecting the first VM for migration based on a static criteria (col. 5:14-18, "When a VM's traffic exceeds a pre-set threshold, the system determines if there is a pool matching VM's traffic requirement (e.g., if there is an LLR pool for an LLR VM that is exceeding its threshold), and if so, the system assigns the VM to that pool"; the pre-set threshold corresponds to the static criteria).
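Illustrative aside: claims 2-4 add the complementary behavior cited from Agarwal, namely moving a VM back to a default (no RSS engine) queue when its own traffic drops below a second threshold, and selecting the VM to migrate by a traffic-based or static criterion. A short, self-contained Python sketch; the second threshold value and the dict-based representation are assumptions for illustration.

NO_RSS_THRESHOLD = 10.0  # assumed "second threshold" for scaling a VM back down

def scale_down_idle_vms(shared_vms, default_queue):
    """shared_vms: dict of VM name -> traffic load on the shared RSS engine.
    Moves every VM whose traffic is below the second threshold to the default
    queue (no RSS engine state) and returns the list of VMs moved."""
    moved = []
    for vm, load in list(shared_vms.items()):
        if load < NO_RSS_THRESHOLD:
            default_queue[vm] = shared_vms.pop(vm)
            moved.append(vm)
    return moved

# Example: vm2's traffic has fallen to 3.0, so it leaves the shared RSS engine.
shared = {"vm1": 80.0, "vm2": 3.0}
default = {}
print(scale_down_idle_vms(shared, default))  # -> ['vm2']; default is now {'vm2': 3.0}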
Regarding claim 5, Holla teaches:

The method of claim 1, further comprising: determining a traffic load of a second VM exceeds a second threshold (¶ 4, "The isolation may be implemented by dynamically assigning different traffic flows to separate hardware queues, and dynamically reassigning the traffic flows from some queues to other queues if loads of some flows increase above a threshold"); and in response to determining that the traffic load of the second VM exceeds the second threshold, migrating the second VM to use the shared RSS engine (¶ 5, "if loads computed for some data flows have exceeded a particular threshold, then the mapping table and the corresponding assignments may be dynamically modified to rebalance the loads"; a particular VM (data flow) mapping being dynamically modified to another RSS engine corresponds to the recited migrating the second VM to the shared RSS engine).

Regarding claim 6, Holla teaches:

The method of claim 1, wherein monitoring the traffic load of the first shared RSS engine comprises monitoring a traffic load of a plurality of PNIC receive queues of the first shared RSS engine (¶ 48, "FIG. 2 is a block diagram depicting an example PNIC 182 that is configured to implement logical queues and RSS engines. The logical queues are also referred to as RSS queues. Each logical queue is associated with its own RSS engine, also referred to as an RSS pool").

Regarding claim 7, Luo teaches:

The method of claim 1, wherein monitoring kernel thread activity (col. 2:34-37, "the load balancing process collects the dispatch statistics from the ENS, and constructs a communication graph among the ports") comprises monitoring utilization of a first one or more kernel threads at a PNIC-kernel interface (col. 4:5-9, "the dedicated kernel threads execute on dedicated CPU cores (e.g., one kernel thread per CPU core) to proactively poll physical NICs (PNICs) of the host computer") and monitoring utilization of a second one or more kernel threads at a virtual network interface (VNIC)-kernel interface (col. 4:5-9, "the dedicated kernel threads execute on dedicated CPU cores (e.g., one kernel thread per CPU core) to proactively poll . . . virtual NICs (VNICs) of the machines (e.g., VMs)").

Claims 8-20 recite subject matter commensurate with claims 1-7 and are therefore rejected for the same reasons.
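Illustrative aside: for claims 6 and 7, the monitoring step is read on per-queue load in Holla and on the utilization of dedicated kernel polling threads at the PNIC-kernel and VNIC-kernel interfaces in Luo. A rough Python sketch of deriving a traffic-load signal from per-thread utilization; the thread-naming convention and the averaging rule are illustrative assumptions.

def engine_load_from_threads(thread_utilization):
    """thread_utilization: dict of kernel-thread name -> fractional CPU utilization,
    e.g. {"pnic-poll-0": 0.9, "vnic-poll-vm1": 0.4}. Returns the average utilization
    of the PNIC-side and VNIC-side polling threads separately."""
    def avg(values):
        return sum(values) / len(values) if values else 0.0
    pnic = [u for name, u in thread_utilization.items() if name.startswith("pnic-")]
    vnic = [u for name, u in thread_utilization.items() if name.startswith("vnic-")]
    return {"pnic_kernel_interface": avg(pnic), "vnic_kernel_interface": avg(vnic)}

# Example: two PNIC polling threads and one VNIC polling thread.
print(engine_load_from_threads({"pnic-poll-0": 0.9, "pnic-poll-1": 0.7,
                                "vnic-poll-vm1": 0.4}))
# -> approximately {'pnic_kernel_interface': 0.8, 'vnic_kernel_interface': 0.4}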
Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Tsirkin (US 8,745,237).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JACOB D DASCOMB, whose telephone number is (571) 272-9993. The examiner can normally be reached M-F 9:00-5:00. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Pierre Vital, can be reached at (571) 272-4215. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JACOB D DASCOMB/
Primary Examiner, Art Unit 2198

Prosecution Timeline

Jan 19, 2023
Application Filed
Jul 31, 2025
Non-Final Rejection — §103, §112
Nov 05, 2025
Response Filed
Dec 05, 2025
Final Rejection — §103, §112
Mar 09, 2026
Request for Continued Examination
Mar 15, 2026
Response after Non-Final Action
Apr 02, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591462
INFERENCE SERVICE DEPLOYMENT METHOD, DEVICE, AND STORAGE MEDIUM
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12585487
CANCELLATION OF A MIGRATION-BASED UPGRADE USING A NETWORK SWAP WORKFLOW
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12578906
STORAGE VIRTUALIZATION DEVICE SUPPORTING VIRTUAL MACHINE, OPERATION METHOD THEREOF, AND OPERATION METHOD OF SYSTEM HAVING THE SAME
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12578985
HYBRID VIRTUAL MACHINE ALLOCATION OPTIMIZATION SYSTEM AND METHOD
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12566645
PREDICTED-TEMPERATURE-BASED VIRTUAL MACHINE MANAGEMENT SYSTEM
Granted Mar 03, 2026 (2y 5m to grant)
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 86%
With Interview: 99% (+20.5%)
Median Time to Grant: 2y 12m
PTA Risk: High
Based on 440 resolved cases by this examiner. Grant probability derived from career allow rate.
