Prosecution Insights
Last updated: April 19, 2026
Application No. 18/087,575

QUEUE UTILIZATION FOR OPTIMIZED STORAGE ACCESS

Non-Final OA — §103
Filed: Dec 22, 2022
Examiner: HUYNH, KIM T
Art Unit: 2184
Tech Center: 2100 — Computer Architecture & Software
Assignee: Inc.
OA Round: 3 (Non-Final)
Grant Probability: 82% (Favorable)
Expected OA Rounds: 3-4
Median Time to Grant: 2y 10m
With Interview: 91%

Examiner Intelligence

Career Allow Rate: 82% — above average (580 granted / 703 resolved; +27.5% vs TC avg)
Interview Lift: +8.2% — moderate lift across resolved cases with interview
Typical Timeline: 2y 10m average prosecution; 24 applications currently pending
Career History: 727 total applications across all art units

Statute-Specific Performance

§101: 3.1% (-36.9% vs TC avg)
§103: 47.5% (+7.5% vs TC avg)
§102: 37.1% (-2.9% vs TC avg)
§112: 4.0% (-36.0% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 703 resolved cases
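The four rows are internally consistent: subtracting each statute's "vs TC avg" delta from its share recovers the same 40.0% baseline in every row, which suggests the chart draws a single Tech Center baseline rather than per-statute averages. A quick arithmetic check (figures taken directly from the table above; the single-baseline reading is an inference, not documented):

```python
# Share of rejections per statute and the stated delta vs the TC average.
stats = {
    "§101": (3.1, -36.9),
    "§103": (47.5, +7.5),
    "§102": (37.1, -2.9),
    "§112": (4.0, -36.0),
}

# share - delta should recover the baseline each delta was measured against.
baselines = {k: round(share - delta, 1) for k, (share, delta) in stats.items()}
print(baselines)  # every statute implies the same 40.0% baseline
```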

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

1. A request for continued examination under 37 CFR 1.114 was filed in this application after a decision by the Patent Trial and Appeal Board, but before the filing of a Notice of Appeal to the Court of Appeals for the Federal Circuit or the commencement of a civil action. Since this application is eligible for continued examination under 37 CFR 1.114 and the fee set forth in 37 CFR 1.17(e) has been timely paid, the appeal has been withdrawn pursuant to 37 CFR 1.114 and prosecution in this application has been reopened pursuant to 37 CFR 1.114. Applicant's submission filed on 12/10/2025 has been entered.

Claim Rejections - 35 USC § 103

2. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

3. Claims 8 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Vyshetsky et al. (Pub. No. US 2017/0308298) in view of Ravindran (US Patent No. 9,880,750).

As per claim 8, Vyshetsky discloses a method comprising: transmitting a command directly (paragraph 28, read and write command to the set of queues directly via user) from a user application (fig. 1, user application process 131) to a submission queue (fig. 1, I/O submission queue ISQ1), the submission queue reserved for direct access by an initiator device via the user application (paragraph 28, read and write to the second set of queues 132 directly via user mode access and thereby perform I/O operations on the NVMe device 200 while bypassing kernel processes). Vyshetsky discloses all the limitations above but does not explicitly disclose the command bypassing a kernel space. However, Ravindran discloses this (paragraph 74, command is issued directly from user space to the SPU 520 to move data, again bypassing kernel context switches). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Ravindran with those of Vyshetsky so as to make the system more efficient, achieving higher performance and lower latency and thus enhancing system performance.

As per claim 15, Vyshetsky discloses a non-transitory computer readable storage medium storing instructions, which when executed, cause a processing device (fig. 1, host 100) of a storage controller to: transmit a command directly (paragraph 28, read and write command to the set of queues directly via user) from a user application (fig. 1, user application process 131) to a submission queue (fig. 1, I/O submission queue ISQ1), the submission queue reserved for direct access by an initiator device via the user application (paragraph 34, memory device driver creates more queues by reserving a second range of memory addresses mapped for use by the user application process in response to receiving the request for user mode access). Vyshetsky discloses all the limitations above but does not explicitly disclose the command bypassing a kernel space. However, Ravindran discloses this (paragraph 74, command is issued directly from user space to the SPU 520 to move data, again bypassing kernel context switches). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Ravindran with those of Vyshetsky so as to make the system more efficient, achieving higher performance and lower latency and thus enhancing system performance.

4. Claims 1-7, 9-14, and 16-20 are rejected under 35 U.S.C. 103 as being unpatentable over Vyshetsky et al. (Pub. No. US 2017/0308298) in view of Ravindran (US Patent No. 9,880,750) and further in view of Chou et al. (Pub. No. US 2015/0254088).

As per claim 1, Vyshetsky discloses a system comprising: a storage controller (fig. 2, NVMe controller 201) comprising a processing device (fig. 1, host 100) to: send a Non-Volatile Memory Express command (fig. 1, NVMe device 200; paragraph 38, the communication device 340 allows for access to other computers (e.g., servers or clients) via a network (e.g., Ethernet)) from a user application (fig. 1, user application process 131) to a submission queue (fig. 1, I/O submission queue ISQ1), the NVMe/FC command bypassing (paragraph 28, bypassing kernel processes in the block I/O layer) a kernel space (fig. 1, kernel space 110), wherein the submission queue is reserved for direct access by an initiator device via the user application (paragraph 34, memory device driver creates more queues by reserving a second range of memory addresses mapped for use by the user application process in response to receiving the request for user mode access). Vyshetsky discloses all the limitations above but does not explicitly disclose sending a Non-Volatile Memory Express over Fibre Channel (NVMe/FC) command. However, Chou discloses this (paragraph 28, wherein the storage methods include providing NVMe over Ethernet as the protocol used to implement the network storage). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Chou with those of Vyshetsky so as to utilize NVMe-over-Ethernet protocols to implement storage with NVMe-compliant commands, yielding the predictable result of efficient control and thus enhancing system performance. In addition, Vyshetsky in view of Chou discloses all the limitations above but does not explicitly disclose the command bypassing a kernel space. However, Ravindran discloses this (paragraph 74, command is issued directly from user space to the SPU 520 to move data, again bypassing kernel context switches). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Ravindran with those of Vyshetsky in view of Chou so as to make the system more efficient, achieving higher performance and lower latency and thus enhancing system performance.

As per claim 2, Vyshetsky discloses wherein the processing device is to instruct a Fibre Channel Host Bus Adapter (FC HBA) to route the NVMe/FC command to the submission queue (paragraph 35, lines 6-10, write operation of an I/O command to the I/O submission queue in the first set of queues, and update the value of a submission tail doorbell register corresponding to the I/O submission queue in the first set of queues).

As per claim 3, Vyshetsky discloses wherein the NVMe/FC command is received by the FC HBA via a Fibre Channel non-volatile memory express (FC NVMe) interface of a storage controller (paragraph 4, lines 29-32, network functions may be offloaded into a network interface controller 118 (or NIC) or the network fabric switch, such as via an Ethernet connection 120, in turn leading to a network (with various switches, routers and the like)).

As per claim 4, Vyshetsky discloses wherein the processing device is further to: send a second NVMe/FC command that is to be routed through a kernel space to a second submission queue, the second submission queue being reserved for use by a kernel space (fig. 1, kernel space 110) of a storage controller (paragraph 34, memory device driver creates more queues by reserving a second range of memory addresses mapped for use by the user application process in response to receiving the request for user mode access).

As per claim 5, Vyshetsky discloses wherein the submission queue and the second submission queue are to communicate concurrently with the initiator device (paragraph 35, lines 10-14, the user application and the kernel mode process may perform, respectively and in parallel, the virtual memory write operation to the I/O submission queue in the second set of queues and the virtual memory write operation to the I/O submission queue in the first set of queues).

As per claim 6, Vyshetsky discloses wherein the NVMe/FC command is to be routed through an NVMe/FC stack (paragraph 80, lines 14-17, provides high performance by bypassing the software stacks used in conventional systems, while avoiding the need to translate from NVMe (as used by the OS stack 108) and the traffic tunneled over Ethernet to other devices).

As per claim 7, Vyshetsky discloses wherein the processing device is further to: transmit a command to a kernel space device driver to allocate the submission queue for the direct access to user space (paragraph 8, the memory device driver is configured to create a first set of one or more queues by at least reserving a first range of memory addresses in the kernel space, and provide a location address and size of the first set of queues to a controller of the NVMe device).

As per claim 9, Vyshetsky discloses all the limitations above but does not explicitly disclose wherein the command corresponds to a Non-Volatile Memory Express over Fibre Channel (NVMe/FC) command. However, Chou discloses this (paragraph 28, wherein the storage methods include providing NVMe over Ethernet as the protocol used to implement the network storage). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Chou with those of Vyshetsky so as to utilize NVMe-over-Ethernet protocols to implement storage with NVMe-compliant commands, yielding the predictable result of efficient control and thus enhancing system performance.

As per claim 10, Vyshetsky discloses the method further comprising: instructing a Fibre Channel Host Bus Adapter (FC HBA) to route the NVMe/FC command to the submission queue (paragraph 35, lines 6-10, write operation of an I/O command to the I/O submission queue in the first set of queues, and update the value of a submission tail doorbell register corresponding to the I/O submission queue in the first set of queues).

As per claim 11, Vyshetsky discloses wherein the NVMe/FC command is to be routed through an NVMe/FC stack in user space (paragraph 80, lines 14-17, provides high performance by bypassing the software stacks used in conventional systems, while avoiding the need to translate from NVMe (as used by the OS stack 108) and the traffic tunneled over Ethernet to other devices).

As per claim 12, Vyshetsky discloses wherein the NVMe/FC command is received by the FC HBA via a Fibre Channel non-volatile memory express (FC NVMe) interface of a storage controller (paragraph 4, lines 29-32, network functions may be offloaded into a network interface controller 118 (or NIC) or the network fabric switch, such as via an Ethernet connection 120, in turn leading to a network (with various switches, routers and the like)).

As per claim 13, Vyshetsky discloses the method further comprising: sending a second NVMe/FC command that is to be routed through a kernel space to a second submission queue, the second submission queue being reserved for use by a kernel space of a storage controller (paragraph 34, memory device driver creates more queues by reserving a second range of memory addresses mapped for use by the user application process in response to receiving the request for user mode access).

As per claim 14, Vyshetsky discloses wherein the submission queue and the second submission queue are to communicate concurrently with the initiator device (paragraph 35, lines 10-14, the user application and the kernel mode process may perform, respectively and in parallel, the virtual memory write operation to the I/O submission queue in the second set of queues and the virtual memory write operation to the I/O submission queue in the first set of queues).

As per claim 16, Vyshetsky discloses all the limitations above but does not explicitly disclose wherein the command corresponds to a Non-Volatile Memory Express over Fibre Channel (NVMe/FC) command. However, Chou discloses this (paragraph 28, wherein the storage methods include providing NVMe over Ethernet as the protocol used to implement the network storage). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Chou with those of Vyshetsky so as to utilize NVMe-over-Ethernet protocols to implement storage with NVMe-compliant commands, yielding the predictable result of efficient control and thus enhancing system performance.

As per claim 17, Vyshetsky discloses wherein the processing device is to instruct a Fibre Channel Host Bus Adapter (FC HBA) to route the NVMe/FC command to the submission queue (paragraph 35, lines 6-10, write operation of an I/O command to the I/O submission queue in the first set of queues, and update the value of a submission tail doorbell register corresponding to the I/O submission queue in the first set of queues).

As per claim 18, Vyshetsky discloses wherein the NVMe/FC command is to be routed through an NVMe/FC stack (paragraph 80, lines 14-17, provides high performance by bypassing the software stacks used in conventional systems, while avoiding the need to translate from NVMe (as used by the OS stack 108) and the traffic tunneled over Ethernet to other devices).

As per claim 19, Vyshetsky discloses wherein the NVMe/FC command is received by the FC HBA via a Fibre Channel non-volatile memory express (FC NVMe) interface of the storage controller (paragraph 4, lines 29-32, network functions may be offloaded into a network interface controller 118 (or NIC) or the network fabric switch, such as via an Ethernet connection 120, in turn leading to a network (with various switches, routers and the like)).

As per claim 20, Vyshetsky discloses wherein the processing device is further to: send a second NVMe/FC command that is to be routed through a kernel space to a second submission queue, the second submission queue being reserved for use by a kernel space of a storage controller (paragraph 34, memory device driver creates more queues by reserving a second range of memory addresses mapped for use by the user application process in response to receiving the request for user mode access).

5. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Bandic et al. (Pub. No. US 2016/0098227) discloses that the Non-Volatile Memory Express (NVMe) Specification defines a register interface, a command set, and memory structures including a single set of administrative command and completion queues. Hahn et al. (Pub. No. US 2015/0134857) discloses matching read/write requests with the appropriate queue based on usage, so that the IOKit driver (or Storport driver in Windows) can route the corresponding NVMe commands into the appropriate submission queue.

Conclusion

6. Any inquiry concerning this communication or earlier communications from the examiner should be directed to KIM T HUYNH, whose telephone number is (571) 272-3635, or via e-mail addressed to kim.huynh3@uspto.gov. The examiner can normally be reached M-F, 7:00 AM-4:00 PM. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Henry Tsai, can be reached at (571) 272-4176 or via e-mail addressed to Henry.Tsai@USPTO.GOV.

The fax phone numbers for the organization where this application or proceeding is assigned are (571) 273-8300 for regular communications and After Final communications. Any inquiry of a general nature or relating to the status of this application or proceeding should be directed to the receptionist, whose telephone number is (571) 272-2100.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/K. T. H./
Examiner, Art Unit 2184

/HENRY TSAI/
Supervisory Patent Examiner, Art Unit 2184
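The rejection turns on submission-queue mechanics: a host writes a command into a circular queue and publishes the new tail index (the doorbell) so the controller knows a fresh entry is ready, with one queue reserved for direct user-mode access and another for kernel-routed I/O. A minimal Python sketch of those mechanics (the class, field names, and command dictionaries are illustrative, not drawn from the cited references):

```python
# Toy model of an NVMe-style submission queue: a fixed-depth ring buffer
# where the producer advances the tail ("rings the doorbell") and the
# controller consumes entries from the head.
class SubmissionQueue:
    def __init__(self, depth):
        self.depth = depth
        self.slots = [None] * depth
        self.head = 0  # consume index, advanced by the controller
        self.tail = 0  # produce index; publishing it models the doorbell write

    def submit(self, command):
        next_tail = (self.tail + 1) % self.depth
        if next_tail == self.head:
            raise RuntimeError("submission queue full")
        self.slots[self.tail] = command
        self.tail = next_tail  # doorbell: controller now sees the new entry

    def consume(self):
        if self.head == self.tail:
            return None  # queue empty
        command = self.slots[self.head]
        self.head = (self.head + 1) % self.depth
        return command


# Two queues, mirroring the cited art: one reserved for direct user-mode
# submissions (bypassing kernel processes) and one owned by the kernel driver.
user_sq = SubmissionQueue(depth=8)
kernel_sq = SubmissionQueue(depth=8)

user_sq.submit({"opcode": "WRITE", "lba": 0})    # user app, no kernel hop
kernel_sq.submit({"opcode": "READ", "lba": 64})  # kernel-routed command

print(user_sq.consume())    # {'opcode': 'WRITE', 'lba': 0}
print(kernel_sq.consume())  # {'opcode': 'READ', 'lba': 64}
```

Because each queue has its own head/tail pair, a user application and a kernel-mode process can submit to their respective queues in parallel without coordinating, which is the concurrency claims 5 and 14 describe.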

Prosecution Timeline

Dec 22, 2022 — Application Filed
Aug 03, 2023 — Non-Final Rejection — §103
Nov 10, 2023 — Response Filed
Feb 11, 2024 — Final Rejection — §103
May 13, 2024 — Notice of Allowance
Jul 10, 2024 — Response after Non-Final Action
Jul 15, 2024 — Response after Non-Final Action
Nov 11, 2024 — Response after Non-Final Action
Jan 22, 2025 — Response after Non-Final Action
Jan 23, 2025 — Response after Non-Final Action
Oct 15, 2025 — Response after Non-Final Action
Dec 10, 2025 — Request for Continued Examination
Dec 21, 2025 — Response after Non-Final Action
Jan 17, 2026 — Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602342 — SMALL FORM FACTOR PC WITH BMC AND EXTENDED FUNCTIONALITY (2y 5m to grant; granted Apr 14, 2026)
Patent 12591490 — SEMICONDUCTOR DEVICE AND LINK CONFIGURING METHOD (2y 5m to grant; granted Mar 31, 2026)
Patent 12585608 — ARCHITECTURE TO ACHIEVE HIGHER THROUGHPUT IN SYMBOL TO WIRE STATE CONVERSION (2y 5m to grant; granted Mar 24, 2026)
Patent 12585607 — BUS MODULE AND SERVER (2y 5m to grant; granted Mar 24, 2026)
Patent 12579087 — IN-BAND INTERRUPT SIGNAL FOR A COMMUNICATION INTERFACE (2y 5m to grant; granted Mar 17, 2026)
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 82%
With Interview: 91% (+8.2%)
Median Time to Grant: 2y 10m
PTA Risk: High
Based on 703 resolved cases by this examiner. Grant probability derived from career allow rate.
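The headline figures are reproducible from the career counts above: 580 granted out of 703 resolved is about 82.5%, which the dashboard appears to truncate to 82%, and adding the +8.2% interview lift and rounding yields the 91% with-interview figure. A quick check (the truncation/rounding behavior is an inference from the displayed numbers, not documented):

```python
# Career counts from the Examiner Intelligence section.
granted, resolved = 580, 703

allow_rate = granted / resolved * 100
print(f"{allow_rate:.1f}%")     # 82.5%
print(int(allow_rate))          # 82  (the displayed grant probability)
print(round(allow_rate + 8.2))  # 91  (the displayed with-interview figure)
```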

Free tier: 3 strategy analyses per month