DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Note
It is noted that any citations to specific pages, columns, lines, or figures in the prior art references, and any interpretation of the references, should not be considered limiting in any way. A reference is relevant for all it contains and may be relied upon for all that it would have reasonably suggested to one having ordinary skill in the art. See MPEP § 2123.
Claim Status
Claims 1-20 are currently pending. Claim 1 is amended as per Applicant’s amendment filed on 17 February 2026. This office action is in response to a request for continued examination and amendments filed on 17 February 2026.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 17 February 2026 has been entered.
Information Disclosure Statement
An information disclosure statement (IDS) was submitted on 27 February 2026. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Response to Arguments
Applicant's arguments filed 17 February 2026 with regard to claim 1 have been fully considered and are persuasive.
As no arguments or amendments were made regarding the remaining claims, their rejections are maintained.
Allowable Subject Matter
Claims 1-9 are allowed.
The following is an examiner’s statement of reasons for allowance: the prior art made of record teaches a method for scalable queue processing but fails to teach the combination including the limitations of:
(Claim(s) 1) “determining, by the memory sub-system, priorities of the submission queues based on the analyzing the queue statuses that are obtained without accessing the submission queues and configured to provide information about commands to be retrieved from the submission queues for execution; selecting, by the memory sub-system, one or more submission queues based the priorities determined based on the analyzing of the queue statuses; retrieving, by the memory sub-system and from the one or more submission queues selected based on the priorities determined based on the analyzing of the queue statuses, a subset of storage access commands in the plurality of submission queues; and executing, by the memory sub-system, the subset of storage access commands”
As dependent claims 2-9 depend from an allowable base claim, they are allowable at least for the same reasons as noted supra. Support for the above-noted limitations can be found in at least paragraphs [0028]-[0034] and [0064]-[0076] and Figs. 6 and 12 of Applicant's specification.
The prior art made of record, Horspool (US 20220261183 A1), neither anticipates nor renders obvious the above-recited combination for at least the reasons specified.
The prior art made of record, cited on the 892 and/or 1449 forms in the case, does not fairly teach or suggest the claimed limitations, nor does it render the claimed invention obvious.
Any comments considered necessary by applicant must be submitted no later than the payment of the issue fee and, to avoid processing delays, should preferably accompany the issue fee. Such submissions should be clearly labeled “Comments on Statement of Reasons for Allowance.”
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 10-20 are rejected under 35 U.S.C. 102(a)(1)/(a)(2) as being anticipated by Horspool (US 20220261183 A1).
Referring to claims 10 and 18, taking claim 10 as exemplary, Horspool teaches
A memory sub-system, comprising: non-volatile memory cells configured to provide a storage capacity of the memory sub-system; ([Horspool abstract, 0005-0007, 0025-0027, Fig. 1] controller of an SSD, the controller coupled to a non-volatile semiconductor memory device. SoC controller 130 is communicatively coupled to nonvolatile semiconductor-based storage devices 140 (such as NAND-based flash memory devices) as the storage medium.) and at least one processor ([Horspool 0024, 0027, Fig. 1] computing system that comprises processors or cores, a controller, a memory, and other components as is generally known in the art. Each of the NVMe controller 132, the memory controller 136 and the NAND controller 138 may have a processor/core to perform dedicated functions.) configured via instructions to: perform an analysis of queue statuses of a plurality of submission queues without accessing the submission queues; ([Horspool 0003, 0029-0030, 0036, 0042-0043] The NVMe controller 132 comprises a core (or processor) that utilizes a high priority command queue 412 (hereinafter “HPQ”) and a low priority command queue 414 (hereinafter “LPQ”) for further processing of commands received from the host 110. The read performance and read data latency (i.e. the time from the command being added to the submission queue to the time the data is returned to the host memory and the command status is added to the completion queue) are critical elements for the operation of the SSD. A queue group may comprise one or more submission queues 221-225 and a completion queue 226-229. Associated with each queue 221-229 there is a hardware doorbell register to facilitate the signaling of incoming commands and outgoing completion notifications from and to the host 110. 
When the SSD 120 completes processing of a command, it inserts an entry on a completion queue of a specific queue group 211-214 (that has been previously associated with the submission queue of the same queue group from which the command was initially retrieved by the SSD 120), and generates an interrupt signal MSI-X. After the driver completes processing a group of completion queue entries, it signals this to the SSD 120 by writing the completion queue's updated head pointer to the respective completion queue 226-229. According to embodiments of the present disclosure, the core of the NVMe controller 132 manages the submission and completion queues of the host 110 by tracking the number of in-flight commands in each of the HPQ 412 and LPQ 414.) determine priorities of the submission queues based on a result of the analysis; ([Horspool 0030-0034, 0036-0037, Figs. 3A, 3B, 4] As previously mentioned, when there is a plurality of submission queues from a host 110 to the SSD 120, the NVMe controller 132 adopts an arbitration scheme to determine the order in which a submission queue is to be selected for processing of the host commands contained therein. Arbitration is the method used to determine the submission queue from which the controller starts processing the next command(s). The NVMe controller 132 comprises a core (or processor) that utilizes a high priority command queue 412 (hereinafter “HPQ”) and a low priority command queue 414 (hereinafter “LPQ”) for further processing of commands received from the host 110.) select one or more submission queues based the priorities; ([Horspool 0030-0031] The NVMe controller 132 of the SSD 120 then selects a command from a submission queue of the plurality of submission queues 221-225 for processing using an arbitration scheme, which will be detailed below.) retrieve, from the one or more submission queues, a subset of storage access commands in the plurality of submission queues; ([Horspool 0038, 0062, Figs. 4, 6] After receiving commands from a submission queue of the host 110, the NVMe controller 132 core may place the received commands into the HPQ 412 or the LPQ 414 for further processing. If the number of existing in-flight commands from the selected submission queue in the HPQ 412 does exceed a threshold in the HPQ 412, i.e. ‘Y’ in step 620, the at least one command received from the host 110 is added to a low priority command queue (LPQ) 414 of the NVMe controller 132 (step 630). However if the number of in-flight commands from the selected submission queue in the HPQ 412 does not exceed a threshold in the HPQ 412, i.e. ‘N’ in step 620, the at least one command received from the host 110 is added to the HPQ 412 of the NVMe controller 132, as shown in step 640. The commands in the HPQ 412 and LPQ 414 of the NVMe controller 132 are then relayed to the firmware control and configuration circuit 134 which maps the logical block addresses contained in the commands to physical NAND addresses.) and execute the subset of storage access commands ([Horspool 0062, Fig. 6] The commands are then processed by the remaining sub-controllers 136, 138 of the SoC controller 130, and the respective actions are then performed on the NAND devices 140 (step 650).).
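Examiner's illustrative note (non-limiting): the HPQ/LPQ routing described in Horspool at Fig. 6, steps 620-640, may be sketched as follows. The threshold value, the command representation, and all identifiers are hypothetical assumptions for illustration only and are not taken from the reference.

```python
from collections import deque

HPQ_THRESHOLD = 4  # hypothetical per-submission-queue in-flight limit

hpq = deque()  # high priority command queue (cf. HPQ 412)
lpq = deque()  # low priority command queue (cf. LPQ 414)

def inflight_in_hpq(sq_id):
    """Count in-flight commands from submission queue sq_id already in the HPQ."""
    return sum(1 for cmd in hpq if cmd["sq_id"] == sq_id)

def place_command(cmd):
    """Steps 620-640: route a fetched command to the HPQ or the LPQ based on
    the number of in-flight commands from its originating submission queue."""
    if inflight_in_hpq(cmd["sq_id"]) > HPQ_THRESHOLD:  # 'Y' at step 620
        lpq.append(cmd)                                # step 630: demote to LPQ
    else:                                              # 'N' at step 620
        hpq.append(cmd)                                # step 640: accept into HPQ
```

In this sketch, a queue that floods the device with commands eventually exceeds the threshold and has its further commands demoted to the LPQ, while lightly loaded queues keep high-priority service.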
As per the non-exemplary claim 18, that claim has similar limitations and is rejected based on the reasons given above.
Referring to claim 11, Horspool teaches
The memory sub-system of claim 10, wherein the at least one processor is further configured to: retrieve the queue statuses from a status array configured in a random access memory accessible to a host system that provides storage access commands in the plurality of submission queues ([Horspool 0029-0030, Fig. 2] FIG. 2 shows further details of the host 110 and the NVMe controller 132 of the SSD 120 according to setup 200, according to an embodiment of the present disclosure. The host 110 may comprise a plurality of submission and completion queue groups 211-214 (i.e. status array) where some queue groups may be associated with a specific processor core such as Core 0, Core 1, . . . Core N, respectively, as shown in FIG. 2, where N is an integer. The controller management queue group 211 may be associated to any core, or may share the core with another core group. In general this is done to provide a form of isolation between the activities of different queue groups, which may be allocated by different application programs running on the host. One of the queue groups 211 may be allocated specifically for controller management and the handling of administrative commands. A queue group may comprise one or more submission queues 221-225 and a completion queue 226-229. Associated with each queue 221-229 there is a hardware doorbell register to facilitate the signaling of incoming commands and outgoing completion notifications from and to the host 110).
Referring to claim 12, Horspool teaches
The memory sub-system of claim 11, wherein the plurality of submission queues are also configured in the random access memory ([Horspool 0003, 0025, 0030] SSD 120 also includes a memory external to the SoC controller 130, such as a dynamic random access memory (“DRAM”) 150. DRAM 150 comprises several buffers (not shown) used to buffer data during read and write operations between the host 110 and the storage elements 140 upon receipt of commands from the host 110. In some implementations, the whole or a part of the external memory DRAM 150 may be located within the SoC controller 130. When located within the SoC controller 130, at least a portion of the external memory may be implemented using a fast memory technology, such as static random access memory (RAM).).
Referring to claim 13, Horspool teaches
The memory sub-system of claim 12, wherein the status array is configured in a cyclic buffer allocated from the random access memory and has a plurality of slots; and each respective slot among the slots is configured to store data configured to identify: a particular submission queue among the plurality of submission queues; and a status of the particular submission queue ([Horspool 0003, 0029-0030, Fig. 2] A queue group may comprise one or more submission queues 221-225 and a completion queue 226-229. Associated with each queue 221-229 there is a hardware doorbell register to facilitate the signaling of incoming commands and outgoing completion notifications from and to the host 110. The SSDs support multiple requests or commands using submission and completion queues of the host so that multiple applications and host programs can access the data stored in the NAND devices of the SSD. The read performance and read data latency (i.e. the time from the command being added to the submission queue to the time the data is returned to the host memory and the command status is added to the completion queue) are critical elements for the operation of the SSD. Examiner notes that, as seen in Fig. 2, the submission queues 221-225 and completion queues 226-229 are circular buffers with pointers and slots for entries.).
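Examiner's illustrative note (non-limiting): a cyclic status array of the kind recited in claim 13, with slots that each identify a submission queue and carry its status, may be sketched as follows. The slot format and all identifiers are hypothetical assumptions for illustration only.

```python
class StatusArray:
    """Cyclic buffer of (submission queue id, status) slots, written by a
    host and read by a device without accessing the queues themselves."""

    def __init__(self, num_slots):
        self.slots = [None] * num_slots  # cyclic buffer of (sq_id, status)
        self.tail = 0                    # next slot to write

    def publish(self, sq_id, status):
        """Host side: write a (queue id, status) entry into the next slot,
        wrapping around when the end of the buffer is reached."""
        self.slots[self.tail] = (sq_id, status)
        self.tail = (self.tail + 1) % len(self.slots)

    def snapshot(self):
        """Device side: collect the published statuses, keeping the most
        recently written entry per queue (later slots overwrite earlier ones)."""
        latest = {}
        for entry in self.slots:
            if entry is not None:
                sq_id, status = entry
                latest[sq_id] = status
        return latest
```

Under this sketch, the device learns every queue's status by scanning one small array rather than touching each submission queue.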
Referring to claim 14, Horspool teaches
The memory sub-system of claim 11, wherein the status array includes a plurality of slots corresponding to the plurality of submission queues respectively; and each respective slot among the slots is configured to store data indicative of a status of a corresponding submission queue among the plurality of submission queues ([Horspool 0029-0030, Fig.2] A queue group may comprise one or more submission queues 221-225 and a completion queue 226-229. Associated with each queue 221-229 there is a hardware doorbell register to facilitate the signaling of incoming commands and outgoing completion notifications from and to the host 110. The host 110 comprises a driver which sets up at least one submission queue and a completion queue for each queue group 211-214. Once the submission queues and completion queues are configured, they are used for almost all communication between the host 110 and the SSD 120. When new host commands are placed on a submission queue of a respective queue group 211-214, the driver informs the SSD 120 about the new commands by writing a new tail pointer to the submission queue 221-225 of the respective queue group 211-214. After the driver completes processing a group of completion queue entries, it signals this to the SSD 120 by writing the completion queue's updated head pointer to the respective completion queue 226-229.).
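Examiner's illustrative note (non-limiting): the doorbell signaling described in Horspool at [0030], where the host writes a new tail pointer and the device reads queue occupancy from the pointers alone, may be sketched as follows. All identifiers and the fixed queue depth are hypothetical assumptions for illustration only.

```python
class SubmissionQueueDoorbell:
    """Model of one hardware doorbell register pair for a circular
    submission queue of fixed depth."""

    def __init__(self, depth):
        self.depth = depth
        self.tail = 0  # written by the host when commands are enqueued
        self.head = 0  # advanced by the device as commands are fetched

    def ring(self, new_tail):
        """Host side: write the updated tail pointer to the doorbell register."""
        self.tail = new_tail % self.depth

    def pending(self):
        """Device side: number of commands waiting, computed from the head
        and tail pointers alone, without reading any queue entries."""
        return (self.tail - self.head) % self.depth
```

This illustrates how a status (here, occupancy) is obtainable without accessing the submission queue entries themselves.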
Referring to claim 15, Horspool teaches
The memory sub-system of claim 14, wherein the status of the corresponding submission queue includes a count of commands in the corresponding submission queue ([Horspool 0023, 0031] the controller which takes into account the number of in-flight commands (and the size of the data being processed for those commands, also known as the in-flight data), that are currently being processed by the SSD controller. Once a submission queue is selected using arbitration, an Arbitration Burst setting determines the maximum number of commands that the controller may start processing from that submission queue before arbitration shall again take place.).
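Examiner's illustrative note (non-limiting): the Arbitration Burst behavior described in Horspool at [0031], where at most a fixed number of commands may be fetched from a selected submission queue before arbitration takes place again, may be sketched as a round-robin scheme. The round-robin ordering and all identifiers are hypothetical assumptions for illustration only.

```python
def arbitrate(queues, burst):
    """Fetch commands round-robin across submission queues, taking at most
    `burst` commands from a queue per arbitration turn.

    `queues` maps queue id -> list of pending commands; returns the fetch
    order as (queue id, command) pairs."""
    order = []
    while any(queues.values()):
        for sq_id in sorted(queues):
            taken = queues[sq_id][:burst]   # up to `burst` commands this turn
            del queues[sq_id][:burst]
            order.extend((sq_id, cmd) for cmd in taken)
    return order
```

For example, with queue 1 holding three commands, queue 2 holding one, and a burst of two, queue 2 is served after only two commands from queue 1 rather than waiting for all three.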
Referring to claim 16, Horspool teaches
The memory sub-system of claim 15, wherein the slots are configured to allow the host system to write queue status data to the slots directly without going through the memory sub-system ([Horspool 0030] The host 110 comprises a driver which sets up at least one submission queue and a completion queue for each queue group 211-214. Once the submission queues and completion queues are configured, they are used for almost all communication between the host 110 and the SSD 120.).
Referring to claim 17, Horspool teaches
The memory sub-system of claim 11, wherein the at least one processor is further configured to: receive a request to write to a register at a predetermined address, wherein the request is configured to identify: a particular submission queue among the plurality of submission queues; and a status of the particular submission queue; and update the status array based on the request ([Horspool 0029, 0035, Fig. 2] FIG. 2 shows further details of the host 110 and the NVMe controller 132 of the SSD 120 according to setup 200, according to an embodiment of the present disclosure. The host 110 may comprise a plurality of submission and completion queue groups 211-214 where some queue groups may be associated with a specific processor core such as Core 0, Core 1, . . . Core N, respectively, as shown in FIG. 2, where N is an integer. The controller management queue group 211 may be associated to any core, or may share the core with another core group. In general this is done to provide a form of isolation between the activities of different queue groups, which may be allocated by different application programs running on the host. One of the queue groups 211 may be allocated specifically for controller management and the handling of administrative commands. A queue group may comprise one or more submission queues 221-225 and a completion queue 226-229. Associated with each queue 221-229 there is a hardware doorbell register to facilitate the signaling of incoming commands and outgoing completion notifications from and to the host 110. Incoming commands from the host 110 to the SSD 120 comprise controller administrative and management commands and data I/O commands containing instructions for the SSD 120 and logical block address information for the target data in the NAND devices 140.).
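Examiner's illustrative note (non-limiting): the limitation of claim 17, a register write at a predetermined address that identifies a submission queue and its status and triggers an update of the status array, may be sketched as follows. The register address, the bit layout of the written value, and all identifiers are hypothetical assumptions for illustration only.

```python
STATUS_REGISTER_ADDR = 0x1000  # hypothetical predetermined register address

status_array = {}  # sq_id -> status, updated in response to register writes

def handle_register_write(addr, value):
    """Decode a register write: here the high bits are assumed to carry the
    submission queue id and the low 16 bits its status; update the status
    array only when the write targets the predetermined address."""
    if addr != STATUS_REGISTER_ADDR:
        return False  # not the status doorbell; ignore
    sq_id = value >> 16
    status = value & 0xFFFF
    status_array[sq_id] = status
    return True
```

A single write thus both identifies the particular submission queue and conveys its status to the memory sub-system.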
Referring to claim 19, Horspool teaches
The non-transitory computer storage medium of claim 18, wherein the priorities are first priorities; the one or more submission queues are one or more first submission queues; and the subset of storage access commands is a first subset of storage access commands; and the method further comprises, after the retrieving of the subset of storage access commands: performing, by the memory sub-system, an analysis of current queue statuses of the plurality of submission queues without accessing the plurality of submission queues; ([Horspool 0003, 0029-0030, 0036, 0042-0043] The NVMe controller 132 comprises a core (or processor) that utilizes a high priority command queue 412 (hereinafter “HPQ”) and a low priority command queue 414 (hereinafter “LPQ”) for further processing of commands received from the host 110. The read performance and read data latency (i.e. the time from the command being added to the submission queue to the time the data is returned to the host memory and the command status is added to the completion queue) are critical elements for the operation of the SSD. A queue group may comprise one or more submission queues 221-225 and a completion queue 226-229. Associated with each queue 221-229 there is a hardware doorbell register to facilitate the signaling of incoming commands and outgoing completion notifications from and to the host 110. When the SSD 120 completes processing of a command, it inserts an entry on a completion queue of a specific queue group 211-214 (that has been previously associated with the submission queue of the same queue group from which the command was initially retrieved by the SSD 120), and generates an interrupt signal MSI-X. After the driver completes processing a group of completion queue entries, it signals this to the SSD 120 by writing the completion queue's updated head pointer to the respective completion queue 226-229. 
According to embodiments of the present disclosure, the core of the NVMe controller 132 manages the submission and completion queues of the host 110 by tracking the number of in-flight commands in each of the HPQ 412 and LPQ 414.) determining, by the memory sub-system, second priorities of the plurality of submission queues based on a result of the analysis of the current queue statuses; ([Horspool 0030-0034, 0036-0037, Figs. 3A, 3B, 4] As previously mentioned, when there is a plurality of submission queues from a host 110 to the SSD 120, the NVMe controller 132 adopts an arbitration scheme to determine the order in which a submission queue is to be selected for processing of the host commands contained therein. Arbitration is the method used to determine the submission queue from which the controller starts processing the next command(s). The NVMe controller 132 comprises a core (or processor) that utilizes a high priority command queue 412 (hereinafter “HPQ”) and a low priority command queue 414 (hereinafter “LPQ”) for further processing of commands received from the host 110.) selecting, by the memory sub-system, one or more second submission queues based the second priorities; ([Horspool 0030-0031] The NVMe controller 132 of the SSD 120 then selects a command from a submission queue of the plurality of submission queues 221-225 for processing using an arbitration scheme, which will be detailed below.) retrieving, by the memory sub-system and from the one or more second submission queues, a second subset of storage access commands in the plurality of submission queues; ([Horspool 0038, 0062, Figs. 4, 6] After receiving commands from a submission queue of the host 110, the NVMe controller 132 core may place the received commands into the HPQ 412 or the LPQ 414 for further processing. If the number of existing in-flight commands from the selected submission queue in the HPQ 412 does exceed a threshold in the HPQ 412, i.e. ‘Y’ in step 620, the at least one command received from the host 110 is added to a low priority command queue (LPQ) 414 of the NVMe controller 132 (step 630). However if the number of in-flight commands from the selected submission queue in the HPQ 412 does not exceed a threshold in the HPQ 412, i.e. ‘N’ in step 620, the at least one command received from the host 110 is added to the HPQ 412 of the NVMe controller 132, as shown in step 640. The commands in the HPQ 412 and LPQ 414 of the NVMe controller 132 are then relayed to the firmware control and configuration circuit 134 which maps the logical block addresses contained in the commands to physical NAND addresses.) and executing, by the memory sub-system, the second subset of storage access commands after execution of the first subset of storage access commands ([Horspool 0062, Fig. 6] The commands are then processed by the remaining sub-controllers 136, 138 of the SoC controller 130, and the respective actions are then performed on the NAND devices 140 (step 650).).
Referring to claim 20, Horspool teaches
The non-transitory computer storage medium of claim 18, wherein the method further comprises: retrieving, by the memory sub-system, the queue statuses from a status array configured in a random access memory accessible to a host system that provides storage access commands in the plurality of submission queues; ([Horspool 0029-0030, Fig. 2] FIG. 2 shows further details of the host 110 and the NVMe controller 132 of the SSD 120 according to setup 200, according to an embodiment of the present disclosure. The host 110 may comprise a plurality of submission and completion queue groups 211-214 (i.e. status array) where some queue groups may be associated with a specific processor core such as Core 0, Core 1, . . . Core N, respectively, as shown in FIG. 2, where N is an integer. The controller management queue group 211 may be associated to any core, or may share the core with another core group. In general this is done to provide a form of isolation between the activities of different queue groups, which may be allocated by different application programs running on the host. One of the queue groups 211 may be allocated specifically for controller management and the handling of administrative commands. A queue group may comprise one or more submission queues 221-225 and a completion queue 226-229. Associated with each queue 221-229 there is a hardware doorbell register to facilitate the signaling of incoming commands and outgoing completion notifications from and to the host 110) wherein the plurality of submission queues are also configured in the random access memory ([Horspool 0003, 0025, 0030] SSD 120 also includes a memory external to the SoC controller 130, such as a dynamic random access memory (“DRAM”) 150. DRAM 150 comprises several buffers (not shown) used to buffer data during read and write operations between the host 110 and the storage elements 140 upon receipt of commands from the host 110. 
In some implementations, the whole or a part of the external memory DRAM 150 may be located within the SoC controller 130. When located within the SoC controller 130, at least a portion of the external memory may be implemented using a fast memory technology, such as static random access memory (RAM).).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to FRANCISCO A GRULLON whose telephone number is (571)272-8318. The examiner can normally be reached Monday - Friday, 9-5.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Hosain Alam can be reached at (571)272-3978. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/FRANCISCO A GRULLON/Primary Examiner, Art Unit 2132