Prosecution Insights
Last updated: April 19, 2026
Application No. 17/937,358

APPARATUS WITH DYNAMIC ARBITRATION MECHANISM AND METHODS FOR OPERATING THE SAME

Status: Non-Final OA (§103)
Filed: Sep 30, 2022
Examiner: YUN, CARINA
Art Unit: 2194
Tech Center: 2100 — Computer Architecture & Software
Assignee: Micron Technology, Inc.
OA Round: 1 (Non-Final)

Grant Probability: 50% (Moderate)
Predicted OA Rounds: 1-2
Time to Grant: 4y 7m
Grant Probability With Interview: 83%

Examiner Intelligence

Career Allow Rate: 50% (grants 50% of resolved cases; 160 granted / 322 resolved; -5.3% vs TC avg)
Interview Lift: strong, +33.5% for resolved cases with an interview
Avg Prosecution: 4y 7m (typical timeline)
Total Applications: 347 (career history, across all art units; 25 currently pending)

Statute-Specific Performance

§101: 17.8% (-22.2% vs TC avg)
§103: 47.5% (+7.5% vs TC avg)
§102: 8.6% (-31.4% vs TC avg)
§112: 21.4% (-18.6% vs TC avg)
Deltas are vs. the Tech Center average estimate • Based on career data from 322 resolved cases

Office Action

§103
DETAILED ACTION

Authorization for Internet Communications

The examiner encourages Applicant to submit an authorization to communicate with the examiner via the Internet by making the following statement (from MPEP 502.03): “Recognizing that Internet communications are not secure, I hereby authorize the USPTO to communicate with the undersigned and practitioners in accordance with 37 CFR 1.33 and 37 CFR 1.34 concerning any subject matter of this application by video conferencing, instant messaging, or electronic mail. I understand that a copy of these communications will be made of record in the application file.” Please note that the above statement can only be submitted via Central Fax, Regular postal mail, or EFS Web (PTO/SB/439).

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Examiner Notes

Examiner cites particular columns and line numbers in the references as applied to the claims below for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claim, other passages and figures may apply as well. It is respectfully requested that, in preparing responses, the applicant fully consider the references in entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the examiner.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claim(s) are rejected under 35 U.S.C. 103 as being unpatentable over Benisty (U.S. PG PUB 2018/0217951). Regarding claim 1, Benisty teaches a memory device (see ¶[0008] memory device), comprising: a set of buffers for receiving commands and/or data from a host (see Fig. 8b, 112 and 114 are sets of submission queue that store commands, see, ¶ [0075] and ¶ [0078]), wherein the set of buffers are configured to queue the commands and/or data associated with multiple virtual machines (VMs) implemented by the host with each VM corresponding to a function implemented at the memory device to write the data and/or provide previously stored read data (see ¶[0075] “Each of the virtual machines VM_1-N of the processes 809B-N may be configured to issue commands directly to the submission queues 112 associated with the secondary IOV function(s) 830B-N provisioned thereto, bypassing the host manager 801 (e.g., virtual machine VM_1 may be configured to submit commands directly to the submission queue(s) 112 of the secondary IOV function 830B, and so on, with VM_N being configured to submit commands directly to the submission queue(s) 112 associated with the secondary IOV function 830N)” and see ¶[0078] “Each submission queue 112 may correspond to a respective IOV function 830A-N implemented by the device controller 108 and/or may have a respective priority, which may, inter alio, 
determine the type(s) and/or priority classification(s) of the commands to be queued therein.”); and a queue arbiter configured to control flows or execution timings of the commands according to corresponding functions, wherein controlling the flows or execution timings include identifying one or more functions associated with the queued commands (see ¶[0078] “FIG. 8B is a schematic block diagram illustrating further embodiments of a nonvolatile storage device 106 configured to arbitrate command fetching between a plurality of IOV functions 830, as disclosed herein. The command fetch logic 840 of the device controller 108 may be configured to fetch commands for execution by the command processing logic 410. The command processing logic 410 may execute the fetched commands by use of, inter alio, the memory 109”); determining a host policy for each of the identified functions, wherein the host policy describes one or more targeted bandwidths (BWs) for the corresponding function (see ¶[0089] “The arbitration criteria 849 may define data size and/or bandwidth threshold(s) pertaining to the specified submission queues 112 (data and/or bandwidth criteria). 
The arbitration manager 844 may implement a data criteria for a currently-selected submission queue 841 (queue-level criteria) by, inter alio, monitoring command(s) fetched from the currently-selected submission queue 841 (by use of the command monitor 846), estimating an amount of data and/or bandwidth to be consumed during execution of the monitored commands (e.g., estimating an amount of data to be transferred to/from the nonvolatile storage device 106 via the interconnect 806 during execution of the monitored commands), preventing interruption until an estimated amount of data and/or bandwidth satisfies a minimum data size and/or bandwidth threshold, enabling interruption in response to determining that the minimum threshold has been satisfied, and/or interrupting command fetching in response to exceeding a maximum data size and/or bandwidth threshold.”); receiving a feedback for each function, wherein the feedback represents a resource consumption measurement associated with executions of preceding commands for the corresponding function (see ¶ [0064] “In steps 702 and 704, the device collects submission queue command statistics and monitors storage device resource state. Steps 702 and 704 may be performed continually, whether static command fetching, dynamic command fetching, or a combination of static and dynamic command fetching is being implemented. In step 706, it is determined whether to switch to dynamic mode. Switching to dynamic mode may be implemented, for example, when storage device resource state information indicates that one or more storage device resources are over- or under-utilized. 
In another example, dynamic mode may be implemented continually and step 706 may be omitted.”); and controlling a release timing for releasing each of the commands from the set of buffers for backend storage and/or access operations, wherein the release timing is controlled per each function by comparing the corresponding host policy and the corresponding feedback (see ¶[0088] “The arbitration criteria 849 may comprise time-based criteria, which may designate the time for which the command fetch logic 840 is to fetch commands from specified submission queue(s) 112 and/or submission queues 112 of specified IOV functions 830 (e.g., the time for which the specified submission queue(s) 112 and/or IOV function(s) 830 are to remain selected). The arbitration manager 844 may implement a time-based criteria for a currently-selected submission queue 841 (queue-level arbitration criteria) by, inter alio, monitoring the time for which the currently-selected submission queue 841 has remained selected (e.g., the time for which the command fetch logic 840 has been fetching commands therefrom), preventing interruption until a minimum time threshold has been reached, enabling interruption after the minimum time threshold has been satisfied, and/or interrupting fetching in response to exceeding a maximum time threshold.”). Because Benisty discloses multiple embodiments and implementations, and all the findings may be disclosed in different embodiments/implementations, obviousness rejection is made. One of ordinary skill in the art at the time of the invention would have been able to combine different embodiments adjacent to each other in the prior art, and doing so does not require a leap of inventiveness. 
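The queue-level data and bandwidth criteria quoted from ¶[0089] can be sketched in a few lines. This is an illustrative reading only, not Benisty's implementation; the function name and threshold values are hypothetical:

```python
# Queue-level data criteria per the passage above: fetching from the
# currently-selected queue cannot be interrupted until a minimum data
# threshold is met, and must be interrupted past a maximum threshold.

def data_criteria_state(bytes_fetched: int, min_bytes: int, max_bytes: int) -> str:
    """Classify the interruption state of the currently-selected queue."""
    if bytes_fetched < min_bytes:
        return "locked"          # prevent interruption until the minimum is met
    if bytes_fetched > max_bytes:
        return "interrupt"       # force a queue switch past the maximum
    return "interruptible"       # arbitration may now select another queue
```

For example, with hypothetical thresholds of 4 KiB and 1 MiB, a queue whose monitored commands imply 64 KiB of transfer is "interruptible", while one still under 4 KiB remains "locked".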
Benisty discloses that these embodiments/implementations are used in order to perform command arbitration which comprises allocating credits to each of a plurality of virtual functions associated with a nonvolatile storage device, such that each of the plurality of virtual functions comprises a respective number of credits; and fetching commands from submission queues of selected virtual functions, of the plurality of virtual functions (see ¶ [0013] of Benisty). Regarding claim 2, Benisty teaches wherein the queue arbiter is configured to control the flows or execution timings based on: identifying an initial credit for each function (see ¶[0090] “The arbitration criteria 849 may be configured to selectively interrupt command fetching from specified submission queues 112 based on, inter alio, an amount of remaining fetch credits allocated to the specified submission queues 112 (credit-based criteria)”), wherein the initial credit represents an initial value for the release timing as estimated or classified according to characteristics of received commands associated with the corresponding function (see ¶ [0122] “Fetching a command from a submission queue 112 of an IOV function 830 may consume a determined amount of the credits. 
The amount of credits consumed by a command may be determined based on one or more determined credit characteristics of the command, which may include, but are not limited to: an opcode of the command, the command type (e.g., read, write, admin), the command priority classification, estimated amount of data to be transferred to/from the nonvolatile storage device 106 during execution of the command, estimated amount of bandwidth to be consumed during execution of the command, command attributes (e.g., cache enabled, fused operation, metadata parameters, data set management, etc.), a namespace associated with the command, a stream identifier associated with the command, an address range of the command (e.g., a logical address range, physical address range, and/or the like), a host buffer method used by the command, a buffer location for data pertaining to the command, and/or the like.”); controlling the release timing includes -- increasing or decreasing the initial credit according to the feedback (see ¶[0126] “The arbitration manager 844 may use the determined command characteristics of the fetched commands to determine the amount of credits 847 consumed by each fetched command, and to decrement the remaining credits 847 of the corresponding IOV function 830 accordingly. For example, if the currently-selected submission queue 841 corresponds to IOV function 830B, the arbitration manager 844 may decrement the remaining credits 847 of IOV function 830B in response to monitoring commands fetched therefrom.”); incrementally updating a timer according to the increased or decreased credit; and releasing each of the queued commands for implementation when the timer reaches an end). 
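The credit accounting quoted from ¶[0122] and ¶[0126] (each fetched command consumes credits according to its characteristics, and the owning function's remaining credits are decremented) can be sketched as follows. The cost table and names are hypothetical, not from the reference:

```python
# Hypothetical per-type credit costs; Benisty's characteristics also include
# opcode, priority, estimated transfer size, namespace, etc.
CREDIT_COST = {"read": 1, "write": 2, "admin": 4}

def decrement_credits(remaining: int, fetched_commands: list) -> int:
    """Return a function's remaining credits after fetching the given commands."""
    for cmd_type in fetched_commands:
        remaining -= CREDIT_COST[cmd_type]   # decrement per fetched command
    return remaining
```

For instance, a function starting with 10 credits that has one read, one write, and one admin command fetched is left with 3 credits.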
Regarding claim 3, Benisty teaches further comprising a traffic classification engine configured to: receive policies associated with the queued commands (see ¶[0123] “The credits 847 may be allocated to respective IOV functions 830A-N by the primary IOV function 830A and/or in accordance with QoS and/or arbitration policy settings of the respective IOV functions 830A-N (as defined in the arbitration metadata 845A-N). Credits 847 may be periodically provisioned to the IOV functions 830A-N (e.g., in accordance with a time-based refresh scheme, when an average number of credits of the IOV functions 830 falls below a threshold, and/or the like).”); and generate the initial credit for each of the queued commands according to one or more characteristics of the queued commands and/or the policies, wherein the initial credit is a default value for the release timing according to the characteristics of the incoming commands and without adjusting for actual backend data flows for the corresponding function (see ¶[0088] “The arbitration manager 844 may implement a time-based criteria for a currently-selected IOV function 830 (IOV-level arbitration criteria) by, inter alio, monitoring the time for which the currently-selected IOV function 830 has been selected (e.g., the time for which the command fetch logic 840 has been fetching commands from submission queue(s) 112 of the currently-selected IOV function 830), preventing interruption until a minimum time threshold has been reached (e.g., preventing selection of a next IOV function 830), enabling interruption after the minimum time threshold has been satisfied, and/or interrupting fetching in response to exceeding a maximum time threshold (e.g., triggering selection of a next IOV function 830). 
The arbitration manager 844 may be configured to record a timestamp when an IOV function 830 is selected by the arbitration logic 842, and may determine the time for which the IOV function has remained selected by comparing a current time to the recorded timestamp.”). Regarding claim 4, Benisty teaches wherein the queue arbiter is configured to adjust the initial credit according to one or more rules for prioritizing (1) read operations over write operations, (2) random transfers over sequential transfers, wherein the random and sequential transfers are distinguished according to transfer sizes and/or received timings for the queued commands (see ¶[0124] “Alternatively, the credit-based arbiter 882 may be configured to implement a weighted and/or prioritized arbitration scheme. The credit-based arbiter 882 may assign weights and/or priorities of the respective IOV functions 830, as disclosed herein (e.g., in accordance with weights and/or priorities assigned to the IOV functions 830 in the arbitration metadata 845 thereof). In some embodiments, the credit-based arbiter 882 may weight and/or prioritize respective IOV functions 830A-N based on, inter alio, the amount of the remaining credits 847 of the respective IOV functions 830A-N. The credit-based arbiter 882 may be configured to adjust predetermined weights and/or priorities of the respective IOV functions 830A-N in accordance with the amount of credits 847 of the respective IOV functions 830A-N. The credits 847 of one or more IOV functions 830 may be compared to an average or a mean of the credits 847 held by the IOV functions 830. The weight and/or priority of IOV functions 830 having less than the average and/or mean may be reduced, and the weight and/or priority of IOV functions 830 having more than the average and/or mean may be increased. In some embodiments, the amount of adjustment may be proportional to deviation from the average and/or mean. 
The weight and/or priority (P) of an IOV function 830 having the remaining credits (Cr) 847 may be adjusted as follows: P.sub.adj=P.sub.orig*Cr/C.sub.av, where P.sub.orig is the original weight and/or priority of the IOV function 830, P.sub.adj is the adjusted weight and/or priority, which is calculated by scaling P.sub.orig by a ratio of the remaining credits (Cr) 847 of the IOV function 830 to the average amount of the remaining credits (Cr) 847 of the IOV functions 830A-N. Although particular examples for determining and/or adjusting the weights and/or priorities of IOV functions 830 are described, the disclosure is not limited in this regard and could be adapted to use any suitable mechanism and/or technique.”), and/or (3) transfers with smaller block sizes over transfers with larger block sizes as defined according to one or more block size thresholds. Regarding claim 5, Benisty teaches wherein the queue arbiter is configured to increase or decrease the initial credit according to one or more comparisons between the resource consumption measurement and one or more thresholds associated with the one or more prioritization rules (see ¶[0124] “Alternatively, the credit-based arbiter 882 may be configured to implement a weighted and/or prioritized arbitration scheme. The credit-based arbiter 882 may assign weights and/or priorities of the respective IOV functions 830, as disclosed herein (e.g., in accordance with weights and/or priorities assigned to the IOV functions 830 in the arbitration metadata 845 thereof). In some embodiments, the credit-based arbiter 882 may weight and/or prioritize respective IOV functions 830A-N based on, inter alio, the amount of the remaining credits 847 of the respective IOV functions 830A-N. The credit-based arbiter 882 may be configured to adjust predetermined weights and/or priorities of the respective IOV functions 830A-N in accordance with the amount of credits 847 of the respective IOV functions 830A-N. 
The credits 847 of one or more IOV functions 830 may be compared to an average or a mean of the credits 847 held by the IOV functions 830. The weight and/or priority of IOV functions 830 having less than the average and/or mean may be reduced, and the weight and/or priority of IOV functions 830 having more than the average and/or mean may be increased. In some embodiments, the amount of adjustment may be proportional to deviation from the average and/or mean. The weight and/or priority (P) of an IOV function 830 having the remaining credits (Cr) 847 may be adjusted as follows: P.sub.adj=P.sub.orig*Cr/C.sub.av, where P.sub.orig is the original weight and/or priority of the IOV function 830, P.sub.adj is the adjusted weight and/or priority, which is calculated by scaling P.sub.orig by a ratio of the remaining credits (Cr) 847 of the IOV function 830 to the average amount of the remaining credits (Cr) 847 of the IOV functions 830A-N. Although particular examples for determining and/or adjusting the weights and/or priorities of IOV functions 830 are described, the disclosure is not limited in this regard and could be adapted to use any suitable mechanism and/or technique.”). Regarding claim 6, Benisty teaches further comprising: a data placement engine coupled downstream from the queue arbiter and configured to provide an interface with a memory array for implementing the queued commands (see ¶[0320] “By way of non-limiting example, flash memory devices in a NAND configuration (NAND memory) typically contain memory elements connected in series. A NAND memory array may be configured so that the array is composed of multiple strings of memory in which a string is composed of multiple memory elements sharing a single bit line and accessed as a group. Alternatively, memory elements may be configured so that each element is individually accessible, e.g., a NOR memory array. 
NAND and NOR memory configurations are exemplary, and memory elements may be otherwise configured.”), wherein the data placement engine generates the feedback based on actual implementation of preceding commands associated with the function represented by the feedback (see ¶[0057] “In FIG. 4, device controller 108 includes a command monitor 400 that collects submission queue statistics and a storage device resource monitor 402 that monitors storage device resource state. Examples of submission queue statistics that may be collected are illustrated in FIG. 5. In FIG. 5, the submission queue statistics include, for each submission queue, the number of pending commands, the number of commands fetched from the queue, the number of read commands fetched from the queue, the ratio of read commands to write commands fetched from the queue, the average command size, the smallest command size, and the largest command size.”). Regarding claim 7, Benisty teaches further comprising: a traffic classification engine coupled to and between the set of buffers and the queue arbiter and configured to classify the queued commands and generate the initial credits accordingly, wherein the queue arbiter is implemented as a hardware state machine that is configured to control the release timing for passing the queued commands to the data placement engine for implementation (see ¶[0088] “The arbitration criteria 849 may comprise time-based criteria, which may designate the time for which the command fetch logic 840 is to fetch commands from specified submission queue(s) 112 and/or submission queues 112 of specified IOV functions 830 (e.g., the time for which the specified submission queue(s) 112 and/or IOV function(s) 830 are to remain selected). 
The arbitration manager 844 may implement a time-based criteria for a currently-selected submission queue 841 (queue-level arbitration criteria) by, inter alio, monitoring the time for which the currently-selected submission queue 841 has remained selected (e.g., the time for which the command fetch logic 840 has been fetching commands therefrom), preventing interruption until a minimum time threshold has been reached, enabling interruption after the minimum time threshold has been satisfied, and/or interrupting fetching in response to exceeding a maximum time threshold. The arbitration manager 844 may be configured to record a timestamp when a submission queue 112 is selected by the arbitration logic 842 (e.g., designated as the currently-selected submission queue 841), and may determine the time for which the submission queue 112 has remained selected by comparing a current time to the recorded timestamp. The arbitration manager 844 may implement a time-based criteria for a currently-selected IOV function 830 (IOV-level arbitration criteria) by, inter alio, monitoring the time for which the currently-selected IOV function 830 has been selected (e.g., the time for which the command fetch logic 840 has been fetching commands from submission queue(s) 112 of the currently-selected IOV function 830), preventing interruption until a minimum time threshold has been reached (e.g., preventing selection of a next IOV function 830), enabling interruption after the minimum time threshold has been satisfied, and/or interrupting fetching in response to exceeding a maximum time threshold (e.g., triggering selection of a next IOV function 830). The arbitration manager 844 may be configured to record a timestamp when an IOV function 830 is selected by the arbitration logic 842, and may determine the time for which the IOV function has remained selected by comparing a current time to the recorded timestamp.”). 
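The time-based criteria quoted from ¶[0088] (record a timestamp at selection, then compare elapsed time against minimum and maximum thresholds) might be modeled as below. The class and its names are illustrative assumptions, not the reference's own code:

```python
import time

class SelectionTimer:
    """Tracks how long a submission queue or IOV function has been selected
    and evaluates min/max time thresholds, as in the passage above."""

    def __init__(self, t_min: float, t_max: float, clock=time.monotonic):
        self.t_min, self.t_max = t_min, t_max
        self.clock = clock               # injectable for testing
        self.selected_at = 0.0

    def select(self) -> None:
        self.selected_at = self.clock()  # record timestamp on selection

    def state(self) -> str:
        elapsed = self.clock() - self.selected_at
        if elapsed < self.t_min:
            return "locked"              # prevent interruption
        if elapsed > self.t_max:
            return "interrupt"           # trigger selection of the next queue/function
        return "interruptible"
```

Injecting the clock makes the threshold logic testable without real waiting; a production sketch would simply use the default monotonic clock.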
Regarding claim 10, Benisty teaches wherein the queue arbiter is configured to limit a resource consumption of the corresponding function according to the host policy for reducing or preventing the function from consuming an uneven majority of command implementation resources (see ¶[0013] “Fetching a command from a submission queue of a selected virtual function may comprise reducing the number of credits allocated to the selected virtual function. In some embodiments, selecting the virtual function comprises arbitrating between the plurality of virtual functions based on the number of credits allocated to each of the respective virtual functions. Alternatively, or in addition, arbitrating between the plurality of virtual functions may comprise assigning a respective weight to each of the virtual functions, wherein the weight assigned to a virtual function corresponds to the number of credits allocated to the virtual function.”). Regarding claim 11, Benisty teaches a method of operating a memory device (see ¶[0008] memory device) configured to implement multiple functions that each correspond to a virtual machine (VM) implemented at a host, the method comprising: using a set of buffers, receiving commands and/or data provided by multiple VMs at the host; identifying the functions associated with the received commands (see Fig. 2, command queue, see ¶ [0055]); implementing an initial portion of the commands for each function according to a timing value initially assigned to the corresponding function, wherein implementing the commands include writing data to a backend storage or reading data from the backend storage according to the timing value (see ¶[0078] “FIG. 8B is a schematic block diagram illustrating further embodiments of a nonvolatile storage device 106 configured to arbitrate command fetching between a plurality of IOV functions 830, as disclosed herein. 
The command fetch logic 840 of the device controller 108 may be configured to fetch commands for execution by the command processing logic 410. The command processing logic 410 may execute the fetched commands by use of, inter alio, the memory 109”); determining a feedback for each function based on implementing the initial portion of the commands, wherein the feedback represents an amount of resource consumed by the corresponding function; and adjusting the timing value to a new value based on the feedback (see ¶ [0064] “In steps 702 and 704, the device collects submission queue command statistics and monitors storage device resource state. Steps 702 and 704 may be performed continually, whether static command fetching, dynamic command fetching, or a combination of static and dynamic command fetching is being implemented. In step 706, it is determined whether to switch to dynamic mode. Switching to dynamic mode may be implemented, for example, when storage device resource state information indicates that one or more storage device resources are over- or under-utilized. In another example, dynamic mode may be implemented continually and step 706 may be omitted.”), wherein the timing value is independently adjusted for one or more or each of the functions for providing function- specific quality of service (QoS) control (see ¶[0248] “. Step 1210 may comprise allocating command fetching credits to each of a plurality of virtual functions of a nonvolatile storage device 106 (e.g., IOV functions 830A-N). The amount of credits allocated to each IOV function 830A-N may correspond to a priority and/or QoS assigned to the respective IOV functions 830A-N. The credits may be allocated by, inter alio, primary or physical virtual functions (e.g., primary IOV function 830A).”). Because Benisty discloses multiple embodiments and implementations, and all the findings may be disclosed in different embodiments/implementations, obviousness rejection is made. 
One of ordinary skill in the art at the time of the invention would have been able to combine different embodiments adjacent to each other in the prior art, and doing so does not require a leap of inventiveness. Benisty discloses that these embodiments/implementations are used in order to perform command arbitration which comprises allocating credits to each of a plurality of virtual functions associated with a nonvolatile storage device, such that each of the plurality of virtual functions comprises a respective number of credits; and fetching commands from submission queues of selected virtual functions, of the plurality of virtual functions (see ¶ [0013] of Benisty). Regarding claim 12, Benisty teaches further comprising: generating a classification for each of the queued commands according to the identified function, a command type, a timestamp, or a combination thereof (see ¶ [0013] “The amount of credits consumed by the command may be based on one or more of: a type of the command; an opcode of the command; an estimated data transfer size; an attribute of the command; a namespace of the command; a stream identifier associated with the command; an address of the command; a host buffer method used by the command; a buffer location for data pertaining to the command; and/or the like.”); determining the initial timing value based on the classification; and wherein the timing value is subsequently adjusted according to a policy provided by the host for establishing an overall performance for the corresponding function (see ¶[0124] “The credit-based arbiter 882 may be configured to adjust predetermined weights and/or priorities of the respective IOV functions 830A-N in accordance with the amount of credits 847 of the respective IOV functions 830A-N. The credits 847 of one or more IOV functions 830 may be compared to an average or a mean of the credits 847 held by the IOV functions 830. 
The weight and/or priority of IOV functions 830 having less than the average and/or mean may be reduced, and the weight and/or priority of IOV functions 830 having more than the average and/or mean may be increased. In some embodiments, the amount of adjustment may be proportional to deviation from the average and/or mean. The weight and/or priority (P) of an IOV function 830 having the remaining credits (Cr) 847 may be adjusted as follows: P.sub.adj=P.sub.orig*Cr/C.sub.av, where P.sub.orig is the original weight and/or priority of the IOV function 830, P.sub.adj is the adjusted weight and/or priority, which is calculated by scaling P.sub.orig by a ratio of the remaining credits (Cr) 847 of the IOV function 830 to the average amount of the remaining credits (Cr) 847 of the IOV functions 830A-N. Although particular examples for determining and/or adjusting the weights and/or priorities of IOV functions 830 are described, the disclosure is not limited in this regard and could be adapted to use any suitable mechanism and/or technique.”). 
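The adjustment rule quoted from ¶[0124], P_adj = P_orig * Cr / C_av, scales each function's weight by the ratio of its remaining credits to the average remaining credits across all IOV functions. A minimal sketch (function and key names are illustrative):

```python
def adjust_weights(weights: dict, credits: dict) -> dict:
    """Scale each IOV function's weight by its credits relative to the average,
    i.e. P_adj = P_orig * Cr / C_av from the passage above."""
    c_av = sum(credits.values()) / len(credits)   # average remaining credits
    return {f: w * credits[f] / c_av for f, w in weights.items()}
```

With equal original weights and remaining credits of 30 and 10, the average is 20, so the first function's weight is scaled by 1.5 and the second's by 0.5, matching the quoted proportionality (functions below the average are reduced, those above are increased).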
Regarding claim 13, Benisty teaches wherein the timing value is adjusted based on prioritizing (1) read operations over write operations, (2) random transfers over sequential transfers, wherein the random and sequential transfers are distinguished according to transfer sizes and/or received timings for the queued commands, and/or (3) transfers with smaller block sizes over transfers with larger block sizes as defined according to one or more block size thresholds (see ¶[0138] “The arbitration manager 844 may be configured to capture any suitable statistical information pertaining to the fetched commands, including statistical information pertaining to one or more characteristics of the fetched commands, which may include, but are not limited to: command type (e.g., admin, I/O, read, write, ratio(s) of different command types, and/or the like), priority classification (e.g., admin, urgent, high, medium, low, ratio(s) of different priority classifications, and/or the like), command address (e.g., logical address range(s) of fetched commands), command sequentially (e.g., random I/O commands, sequential I/O commands, ratio(s) of random and/or sequential I/O commands, and/or the like), command size, command bandwidth, and so on. In some embodiments, the arbitration manager 844 may be further configured to monitor credit characteristics pertaining to the fetched commands (e.g., the amount(s) of credits consumed by the fetched commands, and/or the like).”). 
Regarding claim 14, Benisty teaches wherein the timing value is adjusted to limit a resource consumption of the corresponding function according to the host policy for reducing or preventing the function from consuming an uneven majority of command implementation resources (see ¶[0123] “The credits 847 may be allocated to respective IOV functions 830A-N by the primary IOV function 830A and/or in accordance with QoS and/or arbitration policy settings of the respective IOV functions 830A-N (as defined in the arbitration metadata 845A-N). Credits 847 may be periodically provisioned to the IOV functions 830A-N (e.g., in accordance with a time-based refresh scheme, when an average number of credits of the IOV functions 830 falls below a threshold, and/or the like).”).

Regarding claim 15, Benisty teaches wherein the timing value is adjusted using a hardware state machine (see ¶[0110] “The arbitration logic 842 may be configured to maintain arbitration state metadata 867, which may comprise, inter alia, information pertaining to arbitration between the submission queues 112 of each IOV function 830 (e.g., information pertaining to each arbitration scheme 866 used to arbitrate between the submission queues 112 of the respective IOV functions 830). The arbitration state metadata 867 may indicate, for example, the arbitration scheme 866 used for each IOV function 830, a last winner of the arbitration scheme 866 of each IOV function 830, and so on.”).

Claim 16 is an independent memory claim corresponding to claim 1 above; further, Benisty teaches a memory array configured to store write data and read stored data. Claim 17 is a memory claim corresponding to claim 2 above and is rejected for the same reasons. Claim 18 is a memory claim corresponding to claim 6 above and is rejected for the same reasons. Claim 19 is a memory claim corresponding to claim 7 above and is rejected for the same reasons. 
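The periodic credit provisioning cited for claim 14 (Benisty ¶[0123]) names two possible triggers: a time-based refresh and a refresh when the average remaining credits fall below a threshold. The threshold-triggered variant can be sketched as follows; this is one reading of the passage under stated assumptions, and all names and the uniform refresh amount are hypothetical:

```python
def maybe_refresh_credits(credits, refresh_amount, low_average_threshold):
    """Re-provision credits to all IOV functions when the average number
    of remaining credits falls below a threshold (one of the triggers
    the quoted paragraph describes alongside a time-based scheme)."""
    average = sum(credits) / len(credits)
    if average < low_average_threshold:
        # Top up every function; a real controller could instead allocate
        # per-function amounts from its QoS / arbitration policy settings.
        return [c + refresh_amount for c in credits]
    return credits

maybe_refresh_credits([2, 4, 0], refresh_amount=10, low_average_threshold=5)
# average is 2, below 5 → [12, 14, 10]
maybe_refresh_credits([8, 9, 10], refresh_amount=10, low_average_threshold=5)
# average is 9, not below 5 → unchanged
```

This also shows why such a trigger limits resource consumption as claim 14 recites: a function that burns through its credits cannot force a refresh on its own while the pool-wide average stays healthy.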
Claim 20 is a memory claim corresponding to claim 10 above and is rejected for the same reasons.

Claim(s) 8 and 9 are rejected under 35 U.S.C. 103 as being unpatentable over Benisty (U.S. PG PUB 2018/0217951) in view of Tremblay et al. (U.S. PG PUB 2015/0334045).

Regarding claim 8, Benisty does not expressly disclose, however Tremblay teaches, wherein: the memory device has an architecture that includes a centralized port for implementing functions that correspond to VMs implemented by a centralized module at the host; and the queue arbiter is configured to control quality of service (QoS) for implementing the functions associated with the centralized port (see ¶[0058] “The packets entering the flow network (e.g. from the VM nics) can be tagged to identify the source port of the flow network and forwarded to the centralized switch where the flow rules are applied.”). Hence, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify the teachings of Benisty by adapting Tremblay for distributing flow rules (see ¶[0015] of Tremblay).

Regarding claim 9, Benisty teaches wherein: the queue arbiter is configured to control quality of service (QoS) for implementing the functions associated with the multiple ports (see ¶[0123] “The credits 847 may be allocated to respective IOV functions 830A-N by the primary IOV function 830A and/or in accordance with QoS and/or arbitration policy settings of the respective IOV functions 830A-N (as defined in the arbitration metadata 845A-N).”). 
Benisty does not expressly disclose, however Tremblay teaches, the memory device has an architecture that includes multiple ports each configured for implementing a set of functions, the architecture reflective of the host having multiple host modules each configured for implementing a set of VMs (see ¶[0057] “In an alternative embodiment, the approach is to redirect all traffic from the tenant VMs to a centralized node that applies all of the rules. The virtual switches on the physical servers add the information about the ingress port (e.g. the vnic) on which the packet is received into the packet header and forwards the packet to the centralized switch. The centralized switch can apply the flow rules as if the all the ports were connected to that switch. The centralized switch can either be a physical switch or a virtual switch including software running on a physical server.” See ¶[0047] “It is noted that a virtual inline service can be hosted in a VM, or multiple VMs, hosted on a server on top of a hypervisor. As shown in FIG. 2, multiple virtual inline services S1-Sn can share the resources of the same physical server 114a-114d. For example, S1 is deployed on VM 112b and S4 is deployed on VM 112a which are both hosted by server 114a. An instance of a virtual software switch (vSwitch) 116a-116d can be included on each server 114a-114d for handling the communication between multiple VMs running on the same server. Other methods can be similarly used for this purpose, for example, remote port is a method to offload the switching to a hardware switch. A particular virtual inline service can be running on a dedicated server, either bare metal or using an operating system.”). Hence, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify the teachings of Benisty by adapting Tremblay for distributing flow rules (see ¶[0015] of Tremblay). 
Interview Requests

In accordance with 37 CFR 1.133(a)(3), requests for interview must be made in advance. Interview requests are to be made by telephone (571-270-7848) or fax (571-270-8848). Applicants must provide a detailed agenda as to what will be discussed (generic statements such as “discuss §102 rejection” or “discuss rejections of claims 1-3” may result in the interview being denied). The detailed agenda, along with any proposed amendments, is to be written on a PTOL-413A or a custom form and should be faxed (or emailed, subject to MPEP 713.01.I / MPEP 502.03) to the Examiner at least 5 business days prior to the scheduled interview. Interview requests submitted within amendments may be denied because the Examiner was not notified of the Applicant-Initiated Interview Request in advance and, due to time constraints, may not be able to review the interview request prior to the mailing of the next Office Action.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Hussain et al. (U.S. PG PUB 2015/0317088) teaches to virtualize a physical NVMe controller associated with a computing device or host so that every virtual machine running on the host can have its own dedicated virtual NVMe controller. First, a plurality of virtual NVMe controllers are created on a single physical NVMe controller, which is associated with one or more storage devices. Once created, the plurality of virtual NVMe controllers are provided to VMs running on the host in place of the single physical NVMe controller attached to the host, and each of the virtual NVMe controllers organizes the storage units to be accessed by its corresponding VM as a logical volume. As a result, each of the VMs running on the host has its own namespace(s) and can access its storage devices directly through its own virtual NVMe controller. 
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CARINA YUN whose telephone number is (571) 270-7848. The examiner can normally be reached Mon, Tues, Thurs, 9-4 (EST). Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to call. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kevin Young, can be reached at (571) 270-3180. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

Carina Yun
Patent Examiner
Art Unit 2194

/CARINA YUN/
Examiner, Art Unit 2194
Prosecution Timeline

Sep 30, 2022
Application Filed
Jan 20, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12578996
ADAPTIVE HIGH-PERFORMANCE TASK DISTRIBUTION FOR MANAGING COMPUTING RESOURCES ON CLOUD
2y 5m to grant Granted Mar 17, 2026
Patent 12572398
CONSOLE COMMAND COMPOSITION
2y 5m to grant Granted Mar 10, 2026
Patent 12554562
INTERSYSTEM PROCESSING EMPLOYING BUFFER SUMMARY GROUPS
2y 5m to grant Granted Feb 17, 2026
Patent 12498996
HYBRID PAGINATION FOR RETRIEVING DATA
2y 5m to grant Granted Dec 16, 2025
Patent 12474974
SYSTEMS AND METHODS FOR POWER MANAGEMENT FOR MODERN WORKSPACES
2y 5m to grant Granted Nov 18, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

1-2
Expected OA Rounds
50%
Grant Probability
83%
With Interview (+33.5%)
4y 7m
Median Time to Grant
Low
PTA Risk
Based on 322 resolved cases by this examiner. Grant probability derived from career allow rate.
