Prosecution Insights
Last updated: April 19, 2026
Application No. 19/034,138

SEPARATE COMMAND ADDRESS (SCA) BASED MEMORY CONTROLLER

Status: Non-Final OA (§103)
Filed: Jan 22, 2025
Examiner: MACKALL, LARRY T
Art Unit: 2139
Tech Center: 2100 — Computer Architecture & Software
Assignee: Micron Technology, Inc.
OA Round: 1 (Non-Final)

Grant Probability: 85% (Favorable)
Expected OA Rounds: 1-2
Estimated Time to Grant: 2y 9m
Grant Probability With Interview: 93%

Examiner Intelligence

Career Allow Rate: 85% (661 granted / 779 resolved; +29.9% vs Tech Center average) — above average
Interview Lift: +8.1% (moderate), comparing resolved cases with an interview to those without
Typical Timeline: 2y 9m average prosecution; 31 applications currently pending
Career History: 810 total applications across all art units
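The headline figures above are internally consistent and can be reproduced from the raw counts; a minimal sketch (the rounding convention used for display is an assumption):

```python
granted, resolved = 661, 779

# Career allow rate from the raw counts
career_allow_rate = granted / resolved
print(f"{career_allow_rate:.1%}")  # 84.9%, displayed as 85%

# Interview-adjusted probability: career rate plus the reported +8.1% lift
with_interview = career_allow_rate + 0.081
print(f"{with_interview:.0%}")     # 93%, matching the "With Interview" figure
```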

Statute-Specific Performance

§101: 7.0% (-33.0% vs TC avg)
§103: 50.3% (+10.3% vs TC avg)
§102: 24.8% (-15.2% vs TC avg)
§112: 7.6% (-32.4% vs TC avg)
Tech Center averages are estimates • Based on career data from 779 resolved cases
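The per-statute deltas can be sanity-checked with simple arithmetic: subtracting each delta from its rate recovers the implied Tech Center baseline, which works out to the same 40.0% for every statute. (That 40.0% figure is an inference from the displayed numbers, not a value stated above.)

```python
# Figures from the table above: statute -> (rate %, delta vs TC average %)
stats = {
    "§101": (7.0, -33.0),
    "§103": (50.3, +10.3),
    "§102": (24.8, -15.2),
    "§112": (7.6, -32.4),
}

# Implied Tech Center baseline per statute: rate minus delta
implied_tc_avg = {s: round(rate - delta, 1) for s, (rate, delta) in stats.items()}
print(implied_tc_avg)  # every statute implies the same 40.0 baseline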

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Information Disclosure Statement

The Information Disclosure Statement filed on 22 Jan 2025 has been considered by the examiner.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains.
Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claim(s) 1, 2, 5-8, 11, 12, 17, and 18 is/are rejected under 35 U.S.C. 103 as being unpatentable over Kavala et al. (Pub. No. US 2024/0295989) in view of Benisty (Pub. No. US 2018/0321864).

Claim 1: Kavala et al. disclose a method, comprising: receiving, by a separate command address (SCA)-based memory controller, a plurality of commands [pars. 0113-0118 – Commands are received. (“The command interface converter 1111 may receive a legacy protocol command (DQ/ALE/CLE/CE) from the controller 1200, may convert the received legacy protocol command (DQ/ALE/CLE/CE) to an SCA protocol command, and may output the SCA protocol command corresponding to the legacy protocol command to the NAND flash memory chip 1120 through CA_CE and CA pins.”)]; converting the scheduled commands into SCA-based commands [pars. 0113-0118 – Commands are converted. (“The command interface converter 1111 may receive a legacy protocol command (DQ/ALE/CLE/CE) from the controller 1200, may convert the received legacy protocol command (DQ/ALE/CLE/CE) to an SCA protocol command, and may output the SCA protocol command corresponding to the legacy protocol command to the NAND flash memory chip 1120 through CA_CE and CA pins.”)]; selecting, from the scheduled SCA-based commands, one or more scheduled SCA-based commands for a sequence execution [pars. 0113-0118 – Commands are selected.
(“The command interface converter 1111 may receive a legacy protocol command (DQ/ALE/CLE/CE) from the controller 1200, may convert the received legacy protocol command (DQ/ALE/CLE/CE) to an SCA protocol command, and may output the SCA protocol command corresponding to the legacy protocol command to the NAND flash memory chip 1120 through CA_CE and CA pins.”)]; and executing, in sequence, the selected one or more SCA-based commands using one or more logical units (LUNs) of a channel in a memory component [pars. 0006-0007, 0044-0050, 0113-0119 – The commands are executed. Per the ONFI specification, a LUN is the minimum unit that can independently execute commands and report status. There are one or more LUNs per NAND Target. (“A nonvolatile memory package, a storage device including the same, and a method of operating the same may include an interface chip (or a buffer chip) having a bidirectional command address (CA) pin and a chip enable (CE) pin between different protocols (e.g. legacy protocol and new protocol; joint electron device engineering council (JEDEC) protocol and open NAND flash interface (ONFI) protocol), thereby improving protocol compatibility.” … “The command interface converter 1111 may receive a legacy protocol command (DQ/ALE/CLE/CE) from the controller 1200, may convert the received legacy protocol command (DQ/ALE/CLE/CE) to an SCA protocol command, and may output the SCA protocol command corresponding to the legacy protocol command to the NAND flash memory chip 1120 through CA_CE and CA pins.” … “The buffer chip 1110 may communicate with other NAND flash memory chips 1120 connected to other channels.”)]. However, Kavala et al. 
do not specifically disclose, scheduling, by the SCA-based memory controller, the plurality of commands using two or more data path scheduler (DPS) request queues; In the same field of endeavor, Benisty discloses, scheduling, by the SCA-based memory controller, the plurality of commands using two or more data path scheduler (DPS) request queues [fig. 4; par. 0069 – “In addition, command queuing 432 is configured to queue part or all of the fetched NVMe commands for further processing. Command scheduler 434 is configured to select the next pending command for further execution from command queuing 432. As shown in FIG. 4, there may be several queues from which to select from. Data path scheduler 438 is configured to schedule one or more types of data transfers. As one example, read data may arrive from different memory arrays in parallel. Data path scheduler 438 may arbitrate from amongst the different data transfers.”]; It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Kavala et al. to include command queueing, as taught by Benisty, in order to improve performance.

Claim 2 (as applied to claim 1 above): Benisty discloses, wherein the scheduling of the plurality of commands further comprises: scheduling a first command that is associated with a first operation using a first DPS request queue [pars. 0069-0071 – “As one example, read data may arrive from different memory arrays in parallel. Data path scheduler 438 may arbitrate from amongst the different data transfers.” … “NVMe may include support for parallel operation by supporting up to 65,535 I/O Queues with up to 64K outstanding commands per I/O Queue.”]; and scheduling, in parallel, a second command that is associated with a second operation using a second DPS request queue, wherein the first operation is independent from the second operation [pars.
0069-0071 – “As one example, read data may arrive from different memory arrays in parallel. Data path scheduler 438 may arbitrate from amongst the different data transfers.” … “NVMe may include support for parallel operation by supporting up to 65,535 I/O Queues with up to 64K outstanding commands per I/O Queue.”].

Claim 5 (as applied to claim 1 above): Benisty discloses the method, further comprising: executing in sequence the selected one or more scheduled SCA-based commands via a plurality of NAND flash controller (NFC) queues [pars. 0069-0071 – “NVMe may include support for parallel operation by supporting up to 65,535 I/O Queues with up to 64K outstanding commands per I/O Queue.”].

Claim 6 (as applied to claim 1 above): Benisty discloses, the method further comprising: sending notification to a DPS, wherein the notification includes status of execution in the one or more LUNs [pars. 0069-0070 – “As shown in FIG. 4, there may be several queues from which to select from. Data path scheduler 438 is configured to schedule one or more types of data transfers. As one example, read data may arrive from different memory arrays in parallel. Data path scheduler 438 may arbitrate from amongst the different data transfers.” … “Completion queue manager 440 is configured to post completion entries to the completion queues 406, while also handling the relevant pointers. Error correction 442 is configured to correct the data that is fetched from the memory arrays 450. Flash interface module 446 is configured to control and access the memory arrays 450.”].

Claim 7 (as applied to claim 1 above): Kavala et al. disclose, wherein the memory component includes a NAND flash memory [fig. 1; pars. 0044-0050 – “A nonvolatile memory package, a storage device including the same, and a method of operating the same may include an interface chip (or a buffer chip) having a bidirectional command address (CA) pin and a chip enable (CE) pin between different protocols (e.g. legacy protocol and new protocol; joint electron device engineering council (JEDEC) protocol and open NAND flash interface (ONFI) protocol), thereby improving protocol compatibility.”].

Claim 8 (as applied to claim 1 above): Kavala et al. disclose, wherein the received plurality of commands includes high-level commands that are translated to observe an SCA protocol [par. 0115 – “The command interface converter 1111 may receive a legacy protocol command (DQ/ALE/CLE/CE) from the controller 1200, may convert the received legacy protocol command (DQ/ALE/CLE/CE) to an SCA protocol command, and may output the SCA protocol command corresponding to the legacy protocol command to the NAND flash memory chip 1120 through CA_CE and CA pins.”].

Claim 11: Kavala et al. disclose an apparatus, comprising: a memory component [fig. 1; pars. 0044-0050 – “A nonvolatile memory package, a storage device including the same, and a method of operating the same may include an interface chip (or a buffer chip) having a bidirectional command address (CA) pin and a chip enable (CE) pin between different protocols (e.g. legacy protocol and new protocol; joint electron device engineering council (JEDEC) protocol and open NAND flash interface (ONFI) protocol), thereby improving protocol compatibility.”]; and a separate command address (SCA)-based memory controller coupled to the memory component and configured to: receive a plurality of commands [pars. 0113-0118 – Commands are received. (“The command interface converter 1111 may receive a legacy protocol command (DQ/ALE/CLE/CE) from the controller 1200, may convert the received legacy protocol command (DQ/ALE/CLE/CE) to an SCA protocol command, and may output the SCA protocol command corresponding to the legacy protocol command to the NAND flash memory chip 1120 through CA_CE and CA pins.”)]; convert the scheduled plurality of commands into SCA-based commands [pars. 0113-0118 – Commands are converted.
(“The command interface converter 1111 may receive a legacy protocol command (DQ/ALE/CLE/CE) from the controller 1200, may convert the received legacy protocol command (DQ/ALE/CLE/CE) to an SCA protocol command, and may output the SCA protocol command corresponding to the legacy protocol command to the NAND flash memory chip 1120 through CA_CE and CA pins.”)]; select from the scheduled SCA-based commands one or more scheduled SCA-based commands for a sequence execution [pars. 0113-0118 – Commands are selected. (“The command interface converter 1111 may receive a legacy protocol command (DQ/ALE/CLE/CE) from the controller 1200, may convert the received legacy protocol command (DQ/ALE/CLE/CE) to an SCA protocol command, and may output the SCA protocol command corresponding to the legacy protocol command to the NAND flash memory chip 1120 through CA_CE and CA pins.”)]; and execute in sequence the selected one or more SCA-based commands using one or more logical units (LUNs) of a channel in the memory component [pars. 0006-0007, 0044-0050, 0113-0119 – The commands are executed. Per the ONFI specification, a LUN is the minimum unit that can independently execute commands and report status. There are one or more LUNs per NAND Target. (“A nonvolatile memory package, a storage device including the same, and a method of operating the same may include an interface chip (or a buffer chip) having a bidirectional command address (CA) pin and a chip enable (CE) pin between different protocols (e.g. 
legacy protocol and new protocol; joint electron device engineering council (JEDEC) protocol and open NAND flash interface (ONFI) protocol), thereby improving protocol compatibility.” … “The command interface converter 1111 may receive a legacy protocol command (DQ/ALE/CLE/CE) from the controller 1200, may convert the received legacy protocol command (DQ/ALE/CLE/CE) to an SCA protocol command, and may output the SCA protocol command corresponding to the legacy protocol command to the NAND flash memory chip 1120 through CA_CE and CA pins.” … “The buffer chip 1110 may communicate with other NAND flash memory chips 1120 connected to other channels.”)]. However, Kavala et al. do not specifically disclose, the memory controller configured to: schedule the plurality of commands using two or more data path scheduler (DPS) request queues; In the same field of endeavor, Benisty discloses, the memory controller configured to: schedule the plurality of commands using two or more data path scheduler (DPS) request queues [fig. 4; par. 0069 – “In addition, command queuing 432 is configured to queue part or all of the fetched NVMe commands for further processing. Command scheduler 434 is configured to select the next pending command for further execution from command queuing 432. As shown in FIG. 4, there may be several queues from which to select from. Data path scheduler 438 is configured to schedule one or more types of data transfers. As one example, read data may arrive from different memory arrays in parallel. Data path scheduler 438 may arbitrate from amongst the different data transfers.”]; It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Kavala et al. to include command queueing, as taught by Benisty, in order to improve performance. 
Claim 12 (as applied to claim 11 above): Benisty discloses, wherein the SCA-based memory controller is further configured to: schedule in parallel the commands using corresponding DPS request queues, wherein the commands are associated with operations that are independent from one another [pars. 0069-0071 – “As one example, read data may arrive from different memory arrays in parallel. Data path scheduler 438 may arbitrate from amongst the different data transfers.” … “NVMe may include support for parallel operation by supporting up to 65,535 I/O Queues with up to 64K outstanding commands per I/O Queue.”].

Claim 17 (as applied to claim 11 above): Kavala et al. disclose, wherein the SCA-based memory controller is further configured to: generate a sequence of command executions for an ONFI-based memory component [pars. 0006-0007, 0044-0050, 0113-0119 – “A nonvolatile memory package, a storage device including the same, and a method of operating the same may include an interface chip (or a buffer chip) having a bidirectional command address (CA) pin and a chip enable (CE) pin between different protocols (e.g. legacy protocol and new protocol; joint electron device engineering council (JEDEC) protocol and open NAND flash interface (ONFI) protocol), thereby improving protocol compatibility.” … “The command interface converter 1111 may receive a legacy protocol command (DQ/ALE/CLE/CE) from the controller 1200, may convert the received legacy protocol command (DQ/ALE/CLE/CE) to an SCA protocol command, and may output the SCA protocol command corresponding to the legacy protocol command to the NAND flash memory chip 1120 through CA_CE and CA pins.” … “The buffer chip 1110 may communicate with other NAND flash memory chips 1120 connected to other channels.”].

Claim 18: Kavala et al. disclose an apparatus, comprising: a storage device comprising [fig. 1; par. 0045]: a memory component [fig. 1; pars.
0044-0050 – “A nonvolatile memory package, a storage device including the same, and a method of operating the same may include an interface chip (or a buffer chip) having a bidirectional command address (CA) pin and a chip enable (CE) pin between different protocols (e.g. legacy protocol and new protocol; joint electron device engineering council (JEDEC) protocol and open NAND flash interface (ONFI) protocol), thereby improving protocol compatibility.”]; and a memory controller that is further configured to: receive a plurality of commands [pars. 0113-0118 – Commands are received. (“The command interface converter 1111 may receive a legacy protocol command (DQ/ALE/CLE/CE) from the controller 1200, may convert the received legacy protocol command (DQ/ALE/CLE/CE) to an SCA protocol command, and may output the SCA protocol command corresponding to the legacy protocol command to the NAND flash memory chip 1120 through CA_CE and CA pins.”)]; select from the scheduled commands one or more scheduled commands for a sequence execution [pars. 0113-0118 – Commands are selected. (“The command interface converter 1111 may receive a legacy protocol command (DQ/ALE/CLE/CE) from the controller 1200, may convert the received legacy protocol command (DQ/ALE/CLE/CE) to an SCA protocol command, and may output the SCA protocol command corresponding to the legacy protocol command to the NAND flash memory chip 1120 through CA_CE and CA pins.”)]; and execute in sequence the selected one or more using one or more logical units (LUNs) of a channel in the memory component [pars. 0006-0007, 0044-0050, 0113-0119 – The commands are executed. Per the ONFI specification, a LUN is the minimum unit that can independently execute commands and report status. There are one or more LUNs per NAND Target. 
(“A nonvolatile memory package, a storage device including the same, and a method of operating the same may include an interface chip (or a buffer chip) having a bidirectional command address (CA) pin and a chip enable (CE) pin between different protocols (e.g. legacy protocol and new protocol; joint electron device engineering council (JEDEC) protocol and open NAND flash interface (ONFI) protocol), thereby improving protocol compatibility.” … “The command interface converter 1111 may receive a legacy protocol command (DQ/ALE/CLE/CE) from the controller 1200, may convert the received legacy protocol command (DQ/ALE/CLE/CE) to an SCA protocol command, and may output the SCA protocol command corresponding to the legacy protocol command to the NAND flash memory chip 1120 through CA_CE and CA pins.” … “The buffer chip 1110 may communicate with other NAND flash memory chips 1120 connected to other channels.”)]. However, Kavala et al. do not specifically disclose, the memory controller configured to: schedule the plurality of commands using two or more data path scheduler (DPS) request queues; In the same field of endeavor, Benisty discloses, the memory controller configured to: schedule the plurality of commands using two or more data path scheduler (DPS) request queues [fig. 4; par. 0069 – “In addition, command queuing 432 is configured to queue part or all of the fetched NVMe commands for further processing. Command scheduler 434 is configured to select the next pending command for further execution from command queuing 432. As shown in FIG. 4, there may be several queues from which to select from. Data path scheduler 438 is configured to schedule one or more types of data transfers. As one example, read data may arrive from different memory arrays in parallel. 
Data path scheduler 438 may arbitrate from amongst the different data transfers.”]; It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Kavala et al. to include command queueing, as taught by Benisty, in order to improve performance.

Claim(s) 3, 4, and 16 is/are rejected under 35 U.S.C. 103 as being unpatentable over Kavala et al. (Pub. No. US 2024/0295989) in view of Benisty (Pub. No. US 2018/0321864) as applied to claims 1 and 11 above, respectively, and further in view of Bellows et al. (Pub. No. US 2006/0129764).

Claim 3 (as applied to claim 1 above): Kavala et al. and Benisty disclose all the limitations above but do not specifically disclose, wherein the scheduling of the plurality of commands further comprises: tracking a DPS request queue that is associated with a processing of a first command; and scheduling a second command in the tracked DPS request queue, wherein the second command is associated with an operation that is dependent upon an operation of the first command. In the same field of endeavor, Bellows et al. disclose, tracking a DPS request queue that is associated with a processing of a first command [pars. 0011-0016 – “In contrast, according to the present methods and apparatus, an entry of a read or write queue may include a bit (e.g., a dependency valid bit) or an encoding of a state indicating whether the command stored in the entry is dependent on another command (e.g., whether such command requires another command to complete execution before the command may execute), and a pointer for storing the number of the queue entry storing such other command. Further, the entry of the read or write queue may include a bit (e.g., a linked bit) for indicating whether another command depends on the command stored in the entry.
According to the present methods and apparatus, when a new command is received, logic may be employed to set such bits and pointer. For example, when a new command is received and the read and/or write queues include multiple previously-received commands on which the new command depends, the logic may set (e.g., assert) the dependency valid bit and set the pointer to the number of the queue entry storing the most-recently received command of such previously-received commands on which the new command depends. The logic may employ linked bits associated with respective commands already stored in the queue to identify the most-recently received command on which the new command depends. In this manner, command dependency may be tracked by forming a single linked list.”]; and scheduling a second command in the tracked DPS request queue, wherein the second command is associated with an operation that is dependent upon an operation of the first command [pars. 0011-0016 – “In contrast, according to the present methods and apparatus, an entry of a read or write queue may include a bit (e.g., a dependency valid bit) or an encoding of a state indicating whether the command stored in the entry is dependent on another command (e.g., whether such command requires another command to complete execution before the command may execute), and a pointer for storing the number of the queue entry storing such other command. Further, the entry of the read or write queue may include a bit (e.g., a linked bit) for indicating whether another command depends on the command stored in the entry. According to the present methods and apparatus, when a new command is received, logic may be employed to set such bits and pointer. 
For example, when a new command is received and the read and/or write queues include multiple previously-received commands on which the new command depends, the logic may set (e.g., assert) the dependency valid bit and set the pointer to the number of the queue entry storing the most-recently received command of such previously-received commands on which the new command depends. The logic may employ linked bits associated with respective commands already stored in the queue to identify the most-recently received command on which the new command depends. In this manner, command dependency may be tracked by forming a single linked list.”]. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined teachings of Kavala et al. and Benisty to include tracking dependent operations, as taught by Bellows et al., in order to maintain data integrity.

Claim 4 (as applied to claim 3 above): Bellows et al. disclose the method, further comprising: using a look-up table (LUT) to track the operations between the first command and the second command [pars. 0011-0016 – “The logic may employ linked bits associated with respective commands already stored in the queue to identify the most-recently received command on which the new command depends. In this manner, command dependency may be tracked by forming a single linked list.”].

Claim 16 (as applied to claim 11 above): Claim 16, directed to an apparatus, is rejected for the same reasons set forth in the rejection of claim 4 above, mutatis mutandis.

Claim(s) 19 is/are rejected under 35 U.S.C. 103 as being unpatentable over Kavala et al. (Pub. No. US 2024/0295989) in view of Benisty (Pub. No. US 2018/0321864) as applied to claim 18 above, and further in view of Ellis et al. (Pub. No. US 2016/0306553).

Claim 19 (as applied to claim 18 above): Kavala et al. and Benisty disclose all the limitations above but do not specifically disclose, wherein the memory controller selects the one or more scheduled commands for the sequence execution based on a request priority associated with each of the scheduled commands. In the same field of endeavor, Ellis et al. disclose, wherein the memory controller selects the one or more scheduled commands for the sequence execution based on a request priority associated with each of the scheduled commands [pars. 0014-0030 – “(A4) In some embodiments of the method of any one of A1 to A3, the method further includes: at the first die, in response to receiving the memory operation command corresponding to the first memory operation, suspending performance of the blocking low-priority memory operation. After the suspending, the method further includes performing the first memory operation. After performing the first memory operation, the method further includes resuming performance of the blocking low-priority memory operation.”]. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined teachings of Kavala et al. and Benisty to include selecting commands according to priority, as taught by Ellis et al., in order to improve performance by allowing high priority commands to bypass waiting for a low priority command to complete.

Claim(s) 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Kavala et al. (Pub. No. US 2024/0295989) in view of Benisty (Pub. No. US 2018/0321864) as applied to claim 18 above, and further in view of Hashimoto (Pub. No. US 2017/0180477).

Claim 20 (as applied to claim 18 above): Kavala et al. and Benisty disclose all the limitations above but do not specifically disclose, wherein the memory controller selects the one or more scheduled commands for the sequence execution based on a workload in the one or more LUNs. In the same field of endeavor, Hashimoto discloses, wherein the memory controller selects the one or more scheduled commands for the sequence execution based on a workload in the one or more LUNs [par. 0058 – “According to this sharing of the bus, a plurality of flash memory chips 17 that belong to the same bank group can be accessed in parallel through driving of the plurality of channels. Also, the plurality of banks can be operated in parallel through an interleave access. The controller 14 fetches, from the submission queue 50, a command to access a bank in an idle state in priority to a command to access a busy bank, in order to perform a more efficient parallel operation. Physical blocks 36 that belong to the same bank and are associated with the same physical block address belong to the same physical block group 36G, and assigned a physical block group address corresponding to the physical block address.”]. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined teachings of Kavala et al. and Benisty to include selecting commands based on a workload, as taught by Hashimoto, in order to improve performance.

Allowable Subject Matter

Claims 9-10 and 13-15 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. The following is a statement of reasons for the indication of allowable subject matter: The prior art does not disclose the limitations of the listed claims in conjunction with the limitations of the base claim and intervening claims.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Richter et al. (Pub. No.
US 2024/0402927) disclose, “The controller 206 is configured to receive read commands (and write commands) from the host device 202 via the PCIe/MAC/PHY interface 208. The NVMe inbound controller 210 may be configured to pass commands received at the PCIe/MAC/PHY interface 208 to the one or more queues 212. The scheduler 214 retrieves the commands queued in the one or more queues 212 and schedules the commands to be executed to the NVM 216. The scheduler 214 may arbitrate between the new commands placed in each input queue, effectively creating an arbitration between all of the tenants working with the data storage device 204. In other words, the scheduler 214 may retrieve and execute commands from the one or more queues 212 in a round robin order or any other applicable order. Data read from the NVM 216 as part of a read command received from one of the tenants of the host device 202 is provided back to the host device 202. While the scheduler 214 is arbitrating between the commands from the different tenants to enable some bandwidth sharing, receiving a large command from a tenant may cause a bottleneck in the data storage device 204.” [par. 0031]

Any inquiry concerning this communication or earlier communications from the examiner should be directed to LARRY T MACKALL whose telephone number is (571)270-1172. The examiner can normally be reached Monday - Friday, 9am-5pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Reginald G Bragdon can be reached at (571) 272-4204. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

LARRY T. MACKALL
Primary Examiner, Art Unit 2139
4 March 2026

/LARRY T MACKALL/
Primary Examiner, Art Unit 2139
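As an editorial aid (not part of the Office Action itself), the claim 1 flow as characterized in the rejection can be sketched: commands are scheduled into multiple DPS request queues, converted to SCA-based commands, selected, and executed in sequence on a LUN. Every name below is hypothetical and purely illustrative, not drawn from the application or the cited references.

```python
from collections import deque

class SCAController:
    """Illustrative sketch only; class and method names are hypothetical."""

    def __init__(self, num_queues=2):
        # Two or more DPS request queues, per the claim language
        self.dps_queues = [deque() for _ in range(num_queues)]

    def schedule(self, commands):
        # Independent operations may land in different queues (cf. claim 2);
        # round-robin placement stands in for a real scheduling policy.
        for i, cmd in enumerate(commands):
            self.dps_queues[i % len(self.dps_queues)].append(cmd)

    def convert(self, cmd):
        # Stand-in for legacy (DQ/ALE/CLE/CE) to SCA protocol conversion
        return f"SCA:{cmd}"

    def select_and_execute(self, lun):
        # Select scheduled commands and execute them in sequence on one LUN
        executed = []
        for q in self.dps_queues:
            while q:
                executed.append((lun, self.convert(q.popleft())))
        return executed

ctrl = SCAController()
ctrl.schedule(["READ p0", "WRITE p1", "READ p2"])
print(ctrl.select_and_execute(lun=0))
```

A real controller would convert legacy command signals to the SCA protocol and arbitrate among queues based on dependencies, priority, or LUN workload (the points the secondary references are cited for); the string prefix and round-robin placement here only stand in for those steps.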

Prosecution Timeline

Jan 22, 2025: Application Filed
Mar 05, 2026: Non-Final Rejection (§103)
Mar 23, 2026: Interview Requested
Mar 30, 2026: Examiner Interview Summary
Mar 30, 2026: Applicant Interview (Telephonic)
Apr 01, 2026: Response Filed

Precedent Cases

Applications involving similar technology granted by this examiner

Patent 12591389: MEMORY CONTROLLER AND OPERATION METHOD THEREOF FOR PERFORMING AN INTERLEAVING READ OPERATION (granted Mar 31, 2026; 2y 5m to grant)
Patent 12572308: STORAGE DEVICE SUPPORTING REAL-TIME PROCESSING AND METHOD OF OPERATING THE SAME (granted Mar 10, 2026; 2y 5m to grant)
Patent 12561065: PROVIDING ENDURANCE TO SOLID STATE DEVICE STORAGE VIA QUERYING AND GARBAGE COLLECTION (granted Feb 24, 2026; 2y 5m to grant)
Patent 12555170: TRANSFORMER STATE EVALUATION METHOD BASED ON ECHO STATE NETWORK AND DEEP RESIDUAL NEURAL NETWORK (granted Feb 17, 2026; 2y 5m to grant)
Patent 12554400: METHOD OF OPERATING STORAGE DEVICE USING HOST REQUEST BYPASS AND STORAGE DEVICE PERFORMING THE SAME (granted Feb 17, 2026; 2y 5m to grant)
Study what changed in these applications to get past this examiner. Based on the examiner's 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 85%
With Interview: 93% (+8.1%)
Median Time to Grant: 2y 9m
PTA Risk: Low

Based on 779 resolved cases by this examiner. Grant probability is derived from the career allow rate.
