Prosecution Insights
Last updated: April 19, 2026
Application No. 17/930,740

NETWORK INTERFACE CARD HAVING VARIABLE SIZED VIRTUAL FUNCTIONS

Final Rejection §103
Filed: Sep 09, 2022
Examiner: KAMRAN, MEHRAN
Art Unit: 2196
Tech Center: 2100 — Computer Architecture & Software
Assignee: International Business Machines Corporation
OA Round: 2 (Final)
Grant Probability: 90% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 10m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 90% — above average (434 granted / 484 resolved; +34.7% vs TC avg)
Interview Lift: +14.3% — moderate lift, based on resolved cases with interview
Avg Prosecution: 2y 10m typical timeline (26 currently pending)
Career History: 510 total applications across all art units

Statute-Specific Performance

§101: 8.8% (-31.2% vs TC avg)
§103: 58.2% (+18.2% vs TC avg)
§102: 9.9% (-30.1% vs TC avg)
§112: 13.2% (-26.8% vs TC avg)
Based on career data from 484 resolved cases; Tech Center averages are estimates.

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

This Office Action is in response to the amendment filed 02/13/2026. Claims 1-20 are pending in this application. Claims 1, 8 and 14 are independent claims. Claims 1, 8 and 14 are currently amended. This Office Action is made final.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-5, 8-9, and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Gasparakis (US 2019/0044828 A1) in view of Kim (US 2016/0328342 A1) in further view of Skerry (US 2017/0093677 A1).

As per claim 1, Gasparakis teaches A network interface card comprising: a processor; (Gasparakis [0032] In some embodiments, the NIC 126 may be embodied as part of a SoC that includes one or more processors, or included on a multichip package that also contains one or more processors. In some embodiments, the NIC 126 may include a local processor (not shown) and/or a local memory (not shown) that are both local to the NIC 126.
In such embodiments, the local processor of the NIC 126 may be capable of performing one or more of the functions of a processor 108 described herein)

a set of resources; (Gasparakis [0042] The kernel 210 is configured to handle start-up of the compute node 106, as well as I/O requests (e.g., from the NIC 216, from software applications executing on the compute node 106, etc.) and translate the received I/O requests into data-processing instructions for a processor core. The resource management daemon 212 is configured to respond to network requests, hardware activity, or other programs by performing some task. In particular, the resource management daemon 212 is configured to perform resource allocation, including cache (e.g., the cache memory 112 of FIG. 1) of the compute node 106. For example, resource management daemon 212 is configured to determine the allocation of cache resources for each processor core of the (e.g., each of the processor cores 110 of FIG. 1)).

a plurality of virtual functions, each virtual function configured to provide network access to a workload (Gasparakis [0047] In an illustrative embodiment in which the NIC 216 is embodied as an SR-IOV enabled NIC, as network packets arrive at virtual functions (VFs), the KPI monitor 220 may keep track of pre-programmed KPIs, such as packet per second for each destination of the respective VFs. In another illustrative embodiment in which the NIC 216 is embodied as a smart NIC, wherein processor cores or an accelerator would have offloaded components of a virtual switch which could keep track of destination addresses of workloads, the KPI monitor 220 could track the statistics of KPIs, such as packets per second received for each destination. [0056] In block 416, the NIC 216 calculates a recommended amount of cache ways for a workload associated with the received network packet based on the updated KPI values.
To do so, in block 418, the NIC 216 may perform the calculation based on data received in regard to shared resources (i.e., shared resource data). Additionally or alternatively, in block 420, the NIC 216 may calculate the recommended amount of cache ways based on received heuristic data. In block 422, the NIC 216 may additionally or alternatively perform the calculation based on the total amount of available shared cache ways).

wherein the processor is configured to allocate the set of resources among the plurality of virtual functions, and wherein the allocation of the set of resources is non-uniform across the plurality of virtual functions. (Gasparakis [0049] The cache ways predictor 224, which may be embodied as hardware, firmware, software, virtualized hardware, emulated architecture, and/or a combination thereof as discussed above, is configured to provide proactive low latency recommendations of cache way associations and direct to hardware I/O cache way scale for particular destination addresses associated with a particular workload. To do so, the cache ways predictor 224 is configured to determine the recommendations, or hints, and update the cache QoS register (e.g., via the cache QoS register manager 222) to reflect the determined recommendations. [0050] Additionally, depending on the embodiment, the cache ways predictor 224 may be configured to use heuristics to determine the cache requirement recommendations for a particular workload. For example, a particular night of the week may see more video streaming workloads than other nights of the week. As such, network traffic characteristics, such as time of the day, packet payload type, destination headers, etc., could be used by the cache ways predictor 224 for determining heuristics that help suggest the cache requirements (e.g., the amount of direct to hardware I/O cache ways) for that workload type.) The fact that the resource is allocated based on workload type implies that the resource allocation is not equal across the virtual functions.
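The workload-driven, non-uniform allocation the examiner reads into Gasparakis [0049]-[0050] can be sketched as follows. This is a hypothetical illustration only, not code from any cited reference: the VF names, workload types, and weights are invented for the example.

```python
# Hypothetical sketch: split a pool of resource units (e.g., cache ways)
# among virtual functions in proportion to a per-workload-type weight,
# yielding a non-uniform allocation across VFs.

def allocate_resources(total_units: int, vf_workloads: dict[str, str],
                       weights: dict[str, int]) -> dict[str, int]:
    """Assign `total_units` to VFs proportionally to workload weight."""
    total_weight = sum(weights[w] for w in vf_workloads.values())
    alloc, remaining = {}, total_units
    items = list(vf_workloads.items())
    for i, (vf, workload) in enumerate(items):
        if i == len(items) - 1:
            alloc[vf] = remaining  # last VF absorbs integer-rounding slack
        else:
            share = total_units * weights[workload] // total_weight
            alloc[vf] = share
            remaining -= share
    return alloc

# Illustrative inputs (assumptions, not from the record): streaming
# workloads receive a heavier share than telemetry workloads.
alloc = allocate_resources(
    total_units=16,
    vf_workloads={"vf0": "video_stream", "vf1": "telemetry", "vf2": "video_stream"},
    weights={"video_stream": 4, "telemetry": 1},
)
```

The point of the sketch is only the claim-mapping logic: because shares follow workload type, two VFs serving different workload types end up with unequal sizes.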
Gasparakis does not teach wherein each virtual function functions as a virtual network interface card for the associated virtual machine and is allocated ring buffer slots. However, Kim teaches wherein each virtual function functions as a virtual network interface card for the associated virtual machine and is allocated ring buffer slots (Kim [0043] According to an exemplary embodiment of the present disclosure, a memory space having a special structure is allocated to the memory 162 of the computer in order to utilize the space and structure as virtual NICs. In order to utilize the space and structure as multiple virtual NICs, different MAC addresses are assigned to the space allocated as the virtual NIC space, so that the space can be divided into multiple virtual NICs; and Fig. 4, S310 (Copy Packet Data in output ring buffer of corresponding virtual NIC onto output buffer)).

It would have been obvious to a person of ordinary skill in the art before the filing date of the claimed invention to combine Kim with the system of Gasparakis to use ring buffer memory for vNICs. One of ordinary skill in the art would have been motivated to incorporate Kim into the system of Gasparakis for the purpose of updating output bandwidth information of the virtual NICs (Kim paragraph 9).

Gasparakis and Kim do not teach flow table entries of the NIC. However, Skerry teaches flow table entries of the NIC (Skerry [0051] The flow IDs are used as lookups into flow table 448, which is depicted as being part of virtual switch 409. In one embodiment, the flow table contains a column of flow IDs and a column of vNIC Rx port IDs such that given an input flow ID, the lookup will return a corresponding vNIC Rx port ID). It would have been obvious to a person of ordinary skill in the art before the filing date of the claimed invention to combine Skerry with the system of Gasparakis to use flow tables.
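The two teachings mapped above — a Skerry-style flow-table lookup (flow ID to vNIC Rx port ID, Skerry [0051]) feeding a Kim-style per-vNIC ring buffer ([0043], Fig. 4) — can be sketched together. All names, IDs, and sizes here are hypothetical and illustrate only the data structures, not any cited implementation.

```python
# Hypothetical sketch: flow table maps a flow ID to a vNIC Rx port ID;
# each vNIC is allocated a fixed number of ring-buffer slots.
from collections import deque

# Flow table: a column of flow IDs paired with a column of vNIC Rx port IDs.
flow_table = {0x1A: "vnic0_rx", 0x2B: "vnic1_rx"}

RING_SLOTS = 4  # slots allocated to each vNIC's ring buffer (assumed size)
rings = {port: deque(maxlen=RING_SLOTS) for port in flow_table.values()}

def deliver(flow_id: int, packet: bytes) -> str:
    """Look up the destination vNIC for a flow and enqueue the packet."""
    port = flow_table[flow_id]  # lookup returns the vNIC Rx port ID
    rings[port].append(packet)  # copy into that vNIC's ring buffer
    return port

deliver(0x1A, b"pkt-a")
deliver(0x2B, b"pkt-b")
```

Under this combination, a different `RING_SLOTS` value per vNIC would give the variable-sized virtual functions the claims recite.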
One of ordinary skill in the art would have been motivated to incorporate Skerry into the system of Gasparakis for the purpose of measuring quality of service end to end in a network (Skerry paragraph 16).

As per claim 3, Gasparakis teaches wherein the workload requirement associated with each workload is obtained from a connection request received from a corresponding workload. (Gasparakis Fig. 4, Blocks 402, 414 and 416; [0054] updating a cache QoS register is shown which may be executed by a NIC (e.g., the NIC 216 of FIGS. 1 and 2) of a compute device (e.g., the compute node 106 of FIGS. 1 and 2). The method 400 begins with block 402, in which the NIC 216 determines whether a network packet has been received; and [0055] For example, in block 410, the NIC 216 may read a total amount of available shared cache ways per NUMA node on the host platform. Additionally or alternatively, in block 412, the NIC 216 reads the available shared cache ways using a corresponding identifier of a respective processor (e.g., via a CPUID) to identify an amount of available shared cache memory. In block 414, the NIC 216 identifies a destination address associated with the received network packet. [0056] In block 416, the NIC 216 calculates a recommended amount of cache ways for a workload associated with the received network packet based on the updated KPI values. To do so, in block 418, the NIC 216 may perform the calculation based on data received in regard to shared resources (i.e., shared resource data). Additionally or alternatively, in block 420, the NIC 216 may calculate the recommended amount of cache ways based on received heuristic data. In block 422, the NIC 216 may additionally or alternatively perform the calculation based on the total amount of available shared cache ways. In block 424, the NIC 216 updates the cache QoS register to include the calculated amount of cache ways for the workloads and the identified destination address.
In block 426, the NIC 216 generates an interrupt for a kernel (e.g., the kernel 210 of FIG. 2) that is usable to indicate that the cache QoS register has been updated.)

As per claim 5, Gasparakis teaches wherein adjusting the allocation of the set of resources among the plurality of virtual functions is further based on one or more of an amount of unallocated resources of the set of resources and on a usage of resources among the plurality of virtual functions. (Gasparakis Fig. 4; [0055] In block 406, the NIC 216 updates a value corresponding to each of the identified set of KPIs based on data associated with the received network packet. In block 408, the NIC reads a total amount of available shared cache ways on the host platform (e.g., the compute and storage resources of the compute node 106). For example, in block 410, the NIC 216 may read a total amount of available shared cache ways per NUMA node on the host platform. Additionally or alternatively, in block 412, the NIC 216 reads the available shared cache ways using a corresponding identifier of a respective processor (e.g., via a CPUID) to identify an amount of available shared cache memory. In block 414, the NIC 216 identifies a destination address associated with the received network packet. [0056] In block 416, the NIC 216 calculates a recommended amount of cache ways for a workload associated with the received network packet based on the updated KPI values. To do so, in block 418, the NIC 216 may perform the calculation based on data received in regard to shared resources (i.e., shared resource data). Additionally or alternatively, in block 420, the NIC 216 may calculate the recommended amount of cache ways based on received heuristic data. In block 422, the NIC 216 may additionally or alternatively perform the calculation based on the total amount of available shared cache ways.
In block 424, the NIC 216 updates the cache QoS register to include the calculated amount of cache ways for the workloads and the identified destination address. In block 426, the NIC 216 generates an interrupt for a kernel (e.g., the kernel 210 of FIG. 2) that is usable to indicate that the cache QoS register has been updated.)

As per claim 9, Gasparakis teaches wherein the network interface card includes a set of resources and wherein the size of the virtual function corresponds to an amount of the set of resources allocated to the virtual function. (Gasparakis Fig. 4; [0055] In block 406, the NIC 216 updates a value corresponding to each of the identified set of KPIs based on data associated with the received network packet. In block 408, the NIC reads a total amount of available shared cache ways on the host platform (e.g., the compute and storage resources of the compute node 106). For example, in block 410, the NIC 216 may read a total amount of available shared cache ways per NUMA node on the host platform. Additionally or alternatively, in block 412, the NIC 216 reads the available shared cache ways using a corresponding identifier of a respective processor (e.g., via a CPUID) to identify an amount of available shared cache memory. In block 414, the NIC 216 identifies a destination address associated with the received network packet. [0056] In block 416, the NIC 216 calculates a recommended amount of cache ways for a workload associated with the received network packet based on the updated KPI values. To do so, in block 418, the NIC 216 may perform the calculation based on data received in regard to shared resources (i.e., shared resource data). Additionally or alternatively, in block 420, the NIC 216 may calculate the recommended amount of cache ways based on received heuristic data. In block 422, the NIC 216 may additionally or alternatively perform the calculation based on the total amount of available shared cache ways.
In block 424, the NIC 216 updates the cache QoS register to include the calculated amount of cache ways for the workloads and the identified destination address. In block 426, the NIC 216 generates an interrupt for a kernel (e.g., the kernel 210 of FIG. 2) that is usable to indicate that the cache QoS register has been updated.)

As per claim 11, Gasparakis teaches wherein assigning the virtual function to the workload includes selecting the virtual function from a plurality of virtual functions, wherein at least two of the plurality of virtual functions have unequal sizes. (Gasparakis paragraph 50 and [0072] Example 3 includes the subject matter of any of Examples 1 and 2, and wherein the kernel is to read, subsequent to having received the generated interrupt, a state of the cache QoS register on the NIC to retrieve the recommended amount of cache ways for each workload type; and determine, based on the read state of the cache QoS register, an optimal allocation set of cache ways, wherein the optimal allocation set of cache ways includes an amount of hardware I/O LLC cache ways that are to be allocated to each of the plurality of VMs and an amount of isolated LLC cache ways that are to be allocated to each of the plurality of VMs.)

As to claim 4, it is rejected based on the same reason as claim 5. As to claims 2 and 8, they are rejected based on the same reason as claim 1.

Claims 6 and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Gasparakis (US 2019/0044828 A1) in view of Kim (US 2016/0328342 A1) in further view of Skerry (US 2017/0093677 A1) and Bernat (US 2022/0114032 A1).

As per claim 6, Gasparakis and Kim and Skerry do not teach wherein the set of resources includes one or more of a transmission buffer, a flow table entry, and a bandwidth of the network interface card. However, Bernat teaches wherein the set of resources includes one or more of a transmission buffer, a flow table entry, and a bandwidth of the network interface card.
(Bernat [0024] The telemetry circuitry 160 is configured to receive monitoring of available resources in system A 105. Such resources may include hardware (e.g., bus, memory bandwidth per channel, available power, etc.), software (e.g., applications, operating systems, queues, scheduling priorities, etc.), or infrastructure policy enforcement rules (e.g., security, quality of service (QoS), SLA, etc.). Thus, the IPU 140 constantly estimates local resources, SLAs, or recovery mechanisms for failure scenarios.) It would have been obvious to a person of ordinary skill in the art before the filing date of the claimed invention to combine Bernat with the system of Gasparakis and Kim and Skerry to handle a set of resources. One of ordinary skill in the art would have been motivated to incorporate Bernat into the system of Gasparakis and Kim and Skerry for the purpose of enforcing separation of concerns between different applications (Bernat paragraph 3).

As to claim 10, it is rejected based on the same reason as claim 6.

Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Gasparakis (US 2019/0044828 A1) in view of Kim (US 2016/0328342 A1) in further view of Skerry (US 2017/0093677 A1) and Valancius (US 2022/0182328 A1).

As per claim 7, Gasparakis and Kim and Skerry do not teach wherein the set of resources are divided into a plurality of discrete units and wherein allocation the set of resources among the plurality of virtual functions comprising assigning one or more of the plurality of discrete units to each of the plurality of virtual functions. However, Valancius teaches wherein the set of resources are divided into a plurality of discrete units and wherein allocation the set of resources among the plurality of virtual functions comprising assigning one or more of the plurality of discrete units to each of the plurality of virtual functions.
(Valancius [0028] A network node can include one or more discrete units of physical or virtual computing resources assigned to perform a service or application. The discrete units can be specified on the computing platform as physical and/or virtual computing resources. As a physical computing resource, a node can include one or more processors and/or one or more storage devices across the one or more datacenters 115A-N of the computing platform 105. As a virtual computing resource, a node can include one or more virtual machines (VM) 155A-N, each virtual machine operating using physical processors and storage devices of the computing platform 105.) It would have been obvious to a person of ordinary skill in the art before the filing date of the claimed invention to combine Valancius with the system of Gasparakis and Kim and Skerry to divide the resources. One of ordinary skill in the art would have been motivated to incorporate Valancius into the system of Gasparakis and Kim and Skerry for the purpose of reducing memory consumption on the NIC (Valancius paragraph 5).

Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Gasparakis (US 2019/0044828 A1) in view of Kim (US 2016/0328342 A1) in further view of Skerry (US 2017/0093677 A1) and Yao (US 2020/0044919 A1).

As per claim 12, Gasparakis and Kim and Skerry do not teach wherein the plurality of virtual functions includes a first group of virtual functions having a first size, a second group of virtual functions having a second size, a third group of virtual functions having a third size, and a fourth group of virtual functions having a fourth size. However, Yao teaches wherein the plurality of virtual functions includes a first group of virtual functions having a first size, a second group of virtual functions having a second size, a third group of virtual functions having a third size, and a fourth group of virtual functions having a fourth size.
(Yao [0043] The attributes of a DF may include information related to: description of the DF; additional instantiation data for the VDUs used in this flavour; internal virtual link descriptor along with additional data which is used in this DF; various levels of resources that can be used to instantiate the VNF using this flavour (for example, small, medium, large); default instantiation level for this DF if multiple instantiation levels are present; operations are available for this DF via the VNF LCM interface; configuration parameters for the VNF LCM operations.) The fourth group is just another flavor. It would have been obvious to a person of ordinary skill in the art before the filing date of the claimed invention to combine Yao with the system of Gasparakis and Kim and Skerry to use virtual functions of different sizes. One of ordinary skill in the art would have been motivated to incorporate Yao into the system of Gasparakis and Kim and Skerry for the purpose of implementing lifecycle management parameter modeling for virtual network functions (Yao paragraph 2).

Allowable Subject Matter

Claim 13 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. Claims 14-20 are allowed.

Response to Arguments

Applicant's arguments filed on 02/13/2026 have been fully considered but they are not persuasive. Applicant's arguments with respect to claims 1 and 8 have been considered but are moot because they do not apply in view of the newly introduced art of Gasparakis, Kim and Skerry.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MEHRAN KAMRAN whose telephone number is (571) 272-3401. The examiner can normally be reached 9-5. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, April Blair, can be reached at (571) 270-1014. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MEHRAN KAMRAN/
Primary Examiner, Art Unit 2196

Prosecution Timeline

Sep 09, 2022: Application Filed
Nov 02, 2023: Response after Non-Final Action
Nov 05, 2025: Non-Final Rejection — §103
Feb 05, 2026: Interview Requested
Feb 11, 2026: Examiner Interview Summary
Feb 11, 2026: Response Filed
Feb 11, 2026: Applicant Interview (Telephonic)
Mar 23, 2026: Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591444: Hardware Virtual Machine for Controlling Access to Physical Memory Space — 2y 5m to grant; granted Mar 31, 2026
Patent 12585486: SYSTEMS AND METHODS FOR DEPLOYING A CONTAINERIZED NETWORK FUNCTION (CNF) BASED ON INFORMATION REGARDING THE CNF — 2y 5m to grant; granted Mar 24, 2026
Patent 12585497: AMBIENT COOPERATIVE CANCELLATION WITH GREEN THREADS — 2y 5m to grant; granted Mar 24, 2026
Patent 12572394: METHODS, SYSTEMS AND APPARATUS TO DYNAMICALLY FACILITATE BOUNDARYLESS, HIGH AVAILABILITY SYSTEM MANAGEMENT — 2y 5m to grant; granted Mar 10, 2026
Patent 12561158: DEPLOYMENT OF A VIRTUALIZED SERVICE ON A CLOUD INFRASTRUCTURE BASED ON INTEROPERABILITY REQUIREMENTS BETWEEN SERVICE FUNCTIONS — 2y 5m to grant; granted Feb 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 90%
With Interview: 99% (+14.3%)
Median Time to Grant: 2y 10m
PTA Risk: Moderate
Based on 484 resolved cases by this examiner. Grant probability derived from career allow rate.
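The headline figure follows directly from the stated career data; a quick check of the arithmetic is below. The interview-adjusted 99% is taken as reported, since the dashboard does not disclose its adjustment formula, so it is not recomputed here.

```python
# Reproduce the 90% grant probability from the examiner's career data
# stated above: 434 granted out of 484 resolved cases.
granted, resolved = 434, 484
allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")  # 89.7%, displayed as 90%
```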
