Prosecution Insights
Last updated: April 19, 2026
Application No. 18/659,266

SLICE-BASED MEMORY CHANNEL POWER CONTROL

Status: Non-Final Office Action (§103)
Filed: May 09, 2024
Examiner: DUDEK JR, EDWARD J
Art Unit: 2132
Tech Center: 2100 — Computer Architecture & Software
Assignee: Qualcomm Incorporated
OA Round: 3 (Non-Final)

Grant Probability: 89% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 6m
Grant Probability with Interview: 94%

Examiner Intelligence

Career Allow Rate: 89% (983 granted / 1102 resolved; +34.2% vs TC avg) — above average
Interview Lift: +5.1% (moderate), based on resolved cases with interview
Typical Timeline: 2y 6m average prosecution; 32 applications currently pending
Career History: 1134 total applications across all art units

Statute-Specific Performance

§101: 5.8% (-34.2% vs TC avg)
§103: 45.2% (+5.2% vs TC avg)
§102: 24.3% (-15.7% vs TC avg)
§112: 12.7% (-27.3% vs TC avg)

Tech Center averages are estimates. Based on career data from 1102 resolved cases.
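The "vs TC avg" figures read as absolute percentage-point deltas (rate minus Tech Center average). A quick consistency check on the table's own numbers, back-solving the implied baselines — an illustration of how the deltas fit together, not the tool's documented model:

```python
# Statute-specific rates and their "vs TC avg" deltas, as listed above.
rates = {"101": 5.8, "103": 45.2, "102": 24.3, "112": 12.7}
deltas = {"101": -34.2, "103": 5.2, "102": -15.7, "112": -27.3}

# If delta = rate - tc_avg, the implied Tech Center baseline is rate - delta.
implied_tc_avg = {k: round(rates[k] - deltas[k], 1) for k in rates}
print(implied_tc_avg)  # every statute back-solves to the same 40.0 baseline
```

Notably, all four statutes back-solve to the same 40.0% baseline, which suggests the dashboard compares against a single Tech Center average estimate rather than per-statute averages.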

Office Action — §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 28 October 2025 has been entered.

Response to Arguments

Applicant's arguments with respect to claim(s) 1, 10 and 19 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claim(s) 1, 2, 5, 6, 10, 11, 14, 15, 19, 20 and 21 is/are rejected under 35 U.S.C. 103 as being unpatentable over DAI (U.S. Patent Application Publication #2022/0188016) in view of BRANDL (U.S. Patent Application Publication #2018/0019006).

1. DAI discloses An apparatus comprising: a plurality of memory channels to a memory (see [0046]: first through eighth memory channels), the plurality of memory channels including a first slice and at least a second slice that is distinct from the first slice (see [0053]-[0054]: address hashing policy creates separate groups of memory channels); and power control circuitry coupled to the plurality of memory channels (see [0058]: power control circuitry to define power states for the memory channels) and configured to adjust, in accordance with a power collapse trigger condition associated with the first slice, operation of the first slice from a first mode of operation to a second mode of operation independently of the second slice (see [0060]: in response to the address hashing policy, some of the memory channels will operate in a low power state, while others will operate in a high power state), wherein the first mode of operation is associated with a first power consumption level that is greater than a second power consumption level associated with the second mode of operation (see [0058]: multiple power states including a high power state and a low power state); and circuitry coupled to the plurality of memory channels and configured to perform interleaving of memory access operations to the second slice on a per-slice basis that excludes the first slice (see BRANDL below).
BRANDL discloses the following limitations that are not disclosed by DAI: circuitry coupled to the plurality of memory channels (see [0066]-[0067]: memory controller that contains an address decoder to implement chip select interleaving) and configured to perform interleaving of memory access operations to the second slice on a per-slice basis that excludes the first slice (see [0068]: chip select interleaving that interleaves the address space over multiple DIMM ranks on a channel). Interleaving access over multiple ranks on a channel is done using chip select interleaving. This type of interleaving reduces page conflicts and makes more DRAM banks available (see [0068]). Since the interleaving is done within a channel, interleaving on one channel would exclude the other channel. The interleaving disclosed by BRANDL would be compatible with the system disclosed by DAI as both systems are accessing DRAM memory (see DAI [0040]; BRANDL [0068]). When one channel in the system of DAI is in a low power state, another channel that is being accessed could perform the chip select interleaving disclosed by BRANDL. It would have been obvious, before the effective filing date of the claimed invention, to a person having ordinary skill in the art to which said subject matter pertains to modify DAI to perform interleaving on one slice and exclude the other, as disclosed by BRANDL. One of ordinary skill in the art would have been motivated to make such a modification to reduce page conflicts in a memory channel, as taught by BRANDL. DAI and BRANDL are analogous/in the same field of endeavor as both references are directed to accessing DRAM memory systems.

2. The apparatus of claim 1, wherein the power control circuitry is further configured to: detect a power resume trigger condition associated with the first slice (see [0061]: exceeding a frequency threshold); and based on detecting the power resume trigger condition, adjust operation of the first slice from the second mode to the first mode independently of the second slice (see [0061]: after a threshold number of times exceeding a frequency threshold, the power state of the channel will be adjusted from the low power state to the high power state).

5. The apparatus of claim 1, further comprising a power collapse manager that is coupled to the power control circuitry and that is configured to detect the power collapse trigger condition (see [0062]: power control circuitry detecting an idle period threshold and transitioning the channel from the high power state to the low power state; [0056]: a change in the address hashing policy will also cause certain memory channels to transition to a low power state).

6. The apparatus of claim 5, wherein the power collapse manager is further configured to detect the power collapse trigger condition based on one or more of a software workload of a processor that is associated with the first slice, a hardware usage level associated with the first slice, or a vote metric associated with one or more processors including the processor (see [0049]: telemetry data includes bandwidth consumption {hardware or software load}, memory capacity usage data {software workload or hardware usage level}, etc.).

10. DAI discloses A method comprising: accessing a memory using a plurality of memory channels (see [0046]: first through eighth memory channels), the plurality of memory channels including a first slice and at least a second slice that is distinct from the first slice (see [0053]-[0054]: address hashing policy creates separate groups of memory channels), wherein accessing the memory includes performing interleaving of memory access operations to the second slice on a per-slice basis that excludes the first slice (see BRANDL below); and in accordance with a power collapse trigger condition associated with the first slice, adjusting operation of the first slice from a first mode of operation to a second mode of operation independently of the second slice (see [0060]: in response to the address hashing policy, some of the memory channels will operate in a low power state, while others will operate in a high power state), wherein the first mode of operation is associated with a first power consumption level that is greater than a second power consumption level associated with the second mode of operation (see [0058]: multiple power states including a high power state and a low power state). BRANDL discloses the following limitations that are not disclosed by DAI: performing interleaving of memory access operations to the first slice and the second slice on a per-slice basis (see [0068]: chip select interleaving that interleaves the address space over multiple DIMM ranks on a channel). Interleaving access over multiple ranks on a channel is done using chip select interleaving. This type of interleaving reduces page conflicts and makes more DRAM banks available (see [0068]). Since the interleaving is done within a channel, interleaving on one channel would exclude the other channel. The interleaving disclosed by BRANDL would be compatible with the system disclosed by DAI as both systems are accessing DRAM memory (see DAI [0040]; BRANDL [0068]).
When one channel in the system of DAI is in a low power state, another channel that is being accessed could perform the chip select interleaving disclosed by BRANDL. It would have been obvious, before the effective filing date of the claimed invention, to a person having ordinary skill in the art to which said subject matter pertains to modify DAI to perform interleaving on one slice and exclude the other, as disclosed by BRANDL. One of ordinary skill in the art would have been motivated to make such a modification to reduce page conflicts in a memory channel, as taught by BRANDL. DAI and BRANDL are analogous/in the same field of endeavor as both references are directed to accessing DRAM memory systems.

11. The method of claim 10, further comprising: detecting a power resume trigger condition associated with the first slice (see [0061]: exceeding a frequency threshold); and based on detecting the power resume trigger condition, adjusting operation of the first slice from the second mode to the first mode independently of the second slice (see [0061]: after a threshold number of times exceeding a frequency threshold, the power state of the channel will be adjusted from the low power state to the high power state).

14. The method of claim 10, further comprising detecting the power collapse trigger condition using a power collapse manager (see [0062]: power control circuitry detecting an idle period threshold and transitioning the channel from the high power state to the low power state; [0056]: a change in the address hashing policy will also cause certain memory channels to transition to a low power state).

15. The method of claim 14, wherein the power collapse trigger condition is detected based on one or more of a software workload of a processor that is associated with the first slice, a hardware usage level associated with the first slice, or a vote metric associated with one or more processors including the processor (see [0049]: telemetry data includes bandwidth consumption {hardware or software load}, memory capacity usage data {software workload or hardware usage level}, etc.).

19. DAI discloses A non-transitory computer-readable medium storing instructions executable by one or more processors to initiate (see [0106]: programs embodied in software stored on one or more computer readable media), perform, or control operations, the operations comprising: accessing a memory using a plurality of memory channels (see [0046]: first through eighth memory channels), the plurality of memory channels including a first slice and at least a second slice that is distinct from the first slice (see [0053]-[0054]: address hashing policy creates separate groups of memory channels), wherein accessing the memory includes performing interleaving of memory access operations to the second slice on a per-slice basis that excludes the first slice (see BRANDL below); and in accordance with a power collapse trigger condition associated with the first slice, adjusting operation of the first slice from a first mode of operation to a second mode of operation independently of the second slice (see [0060]: in response to the address hashing policy, some of the memory channels will operate in a low power state, while others will operate in a high power state), wherein the first mode of operation is associated with a first power consumption level that is greater than a second power consumption level associated with the second mode of operation (see [0058]: multiple power states including a high power state and a low power state).
BRANDL discloses the following limitations that are not disclosed by DAI: performing interleaving of memory access operations to the first slice and the second slice on a per-slice basis (see [0068]: chip select interleaving that interleaves the address space over multiple DIMM ranks on a channel). Interleaving access over multiple ranks on a channel is done using chip select interleaving. This type of interleaving reduces page conflicts and makes more DRAM banks available (see [0068]). Since the interleaving is done within a channel, interleaving on one channel would exclude the other channel. The interleaving disclosed by BRANDL would be compatible with the system disclosed by DAI as both systems are accessing DRAM memory (see DAI [0040]; BRANDL [0068]). When one channel in the system of DAI is in a low power state, another channel that is being accessed could perform the chip select interleaving disclosed by BRANDL. It would have been obvious, before the effective filing date of the claimed invention, to a person having ordinary skill in the art to which said subject matter pertains to modify DAI to perform interleaving on one slice and exclude the other, as disclosed by BRANDL. One of ordinary skill in the art would have been motivated to make such a modification to reduce page conflicts in a memory channel, as taught by BRANDL. DAI and BRANDL are analogous/in the same field of endeavor as both references are directed to accessing DRAM memory systems.

20. The non-transitory computer-readable medium of claim 19, wherein the operations further comprise: detecting a power resume trigger condition associated with the first slice (see [0061]: exceeding a frequency threshold); and based on detecting the power resume trigger condition, adjusting operation of the first slice from the second mode to the first mode independently of the second slice (see [0061]: after a threshold number of times exceeding a frequency threshold, the power state of the channel will be adjusted from the low power state to the high power state).

21. The apparatus of claim 1, wherein the circuitry is further configured to perform the interleaving of the memory access operations to the second slice on the per-slice basis to enable selective deactivation of the first slice independently of the second slice (see DAI [0053]: the address hashing policy determines which channels will be used for data storage and retrieval, the channels are selectively disabled based on the hashing policy; BRANDL [0068]: the interleaving is performed on a single channel, and therefore would not be affected by having another channel deactivated – this enables the system to deactivate a channel while still being able to perform the interleaving).

Claim(s) 3 and 12 is/are rejected under 35 U.S.C. 103 as being unpatentable over DAI (U.S. Patent Application Publication #2022/0188016) and BRANDL (U.S. Patent Application Publication #2018/0019006) as applied to claims 1, 2, 5, 6, 10, 11, 14, 15, 19, 20 and 21 above, and further in view of NACHIMUTHU (U.S. Patent Application Publication #2014/0143577).

3. The apparatus of claim 1 (see DAI above), wherein the first slice and the second slice each include a system level cache (SLC) controller, a memory controller coupled to the SLC controller (see NACHIMUTHU below), and a physical interface between the memory controller and the memory (see DAI [0046]: memory channels are a physical interface between a controller and the memory).
NACHIMUTHU discloses the following limitations that are not disclosed by DAI: wherein the first slice and the second slice each include a system level cache (SLC) controller (see [0048]-[0051]: various levels of memory including a memory side cache {system level cache}; [0053]: high speed memory that is part of the system memory address range to be used as a write buffer or scratchpad memory), a memory controller coupled to the SLC controller (see DAI [0105]: circuitry is already present for controlling access to the memory, a combination of DAI and NACHIMUTHU would result in this circuitry being coupled to the SLC controller disclosed by NACHIMUTHU). Having a system level cache helps solve the challenges regarding memory power and cost (see [0021]-[0022]). It would have been obvious, before the effective filing date of the claimed invention, to a person having ordinary skill in the art to which said subject matter pertains to modify DAI to include an SLC controller, as disclosed by NACHIMUTHU. One of ordinary skill in the art would have been motivated to make such a modification to incorporate high speed memory while limiting the increase in memory power and cost, as taught by NACHIMUTHU. DAI and NACHIMUTHU are analogous/in the same field of endeavor as both references are directed to accessing multiple memories.

12. The method of claim 10 (see DAI above), wherein the first slice and the second slice each include a system level cache (SLC) controller, a memory controller coupled to the SLC controller, and a physical interface between the memory controller and the memory (see NACHIMUTHU below).
NACHIMUTHU discloses the following limitations that are not disclosed by DAI: wherein the first slice and the second slice each include a system level cache (SLC) controller (see [0048]-[0051]: various levels of memory including a memory side cache {system level cache}; [0053]: high speed memory that is part of the system memory address range to be used as a write buffer or scratchpad memory), a memory controller coupled to the SLC controller (see DAI [0105]: circuitry is already present for controlling access to the memory, a combination of DAI and NACHIMUTHU would result in this circuitry being coupled to the SLC controller disclosed by NACHIMUTHU). Having a system level cache helps solve the challenges regarding memory power and cost (see [0021]-[0022]). It would have been obvious, before the effective filing date of the claimed invention, to a person having ordinary skill in the art to which said subject matter pertains to modify DAI to include an SLC controller, as disclosed by NACHIMUTHU. One of ordinary skill in the art would have been motivated to make such a modification to incorporate high speed memory while limiting the increase in memory power and cost, as taught by NACHIMUTHU. DAI and NACHIMUTHU are analogous/in the same field of endeavor as both references are directed to accessing multiple memories.

Claim(s) 4 and 13 is/are rejected under 35 U.S.C. 103 as being unpatentable over DAI (U.S. Patent Application Publication #2022/0188016) and BRANDL (U.S. Patent Application Publication #2018/0019006) as applied to claims 1, 2, 5, 6, 10, 11, 14, 15, 19, 20 and 21 above, and further in view of BEKERMAN (U.S. Patent Application Publication #2025/0104742).

4. The apparatus of claim 1 (see DAI above), wherein the plurality of memory channels are coupled to one or more power supply nodes, and wherein the power control circuitry includes, for each slice of the plurality of memory channels, a power gating circuit that is coupled to the one or more power supply nodes and that is configured to selectively disconnect the slice from the one or more power supply nodes (see BEKERMAN below). BEKERMAN discloses the following limitations that are not disclosed by DAI: wherein the plurality of memory channels are coupled to one or more power supply nodes, and wherein the power control circuitry includes, for each slice of the plurality of memory channels, a power gating circuit that is coupled to the one or more power supply nodes and that is configured to selectively disconnect the slice from the one or more power supply nodes (see [0025]: power management circuitry to allow for power gating; [0033]: maintain power state for a memory resource using power gating operations). BEKERMAN discloses a power gating operation to regulate power supplied to a memory resource. It is implicit that there is a power supply node since the memory is receiving power. DAI already discloses the ability to increase or decrease the power supplied to a memory channel, but DAI does not disclose the specifics of how that is accomplished. BEKERMAN discloses a well-known solution to managing power for a memory resource, which is power gating. It would have been obvious, before the effective filing date of the claimed invention, to a person having ordinary skill in the art to which said subject matter pertains to modify DAI to include a power gating circuit, as disclosed by BEKERMAN. One of ordinary skill in the art would have been motivated to make such a modification since a power gating circuit is a well-known solution for managing power to a memory resource, as taught by BEKERMAN.
DAI and BEKERMAN are analogous/in the same field of endeavor as both references are directed to management of power for a memory resource.

13. The method of claim 10 (see DAI above), further comprising, for each slice of the plurality of memory channels, selectively disconnecting the slice from one or more power supply nodes using power control circuitry (see BEKERMAN below). BEKERMAN discloses the following limitations that are not disclosed by DAI: for each slice of the plurality of memory channels, selectively disconnecting the slice from one or more power supply nodes using power control circuitry (see [0025]: power management circuitry to allow for power gating; [0033]: maintain power state for a memory resource using power gating operations). BEKERMAN discloses a power gating operation to regulate power supplied to a memory resource. It is implicit that there is a power supply node since the memory is receiving power. DAI already discloses the ability to increase or decrease the power supplied to a memory channel, but DAI does not disclose the specifics of how that is accomplished. BEKERMAN discloses a well-known solution to managing power for a memory resource, which is power gating. It would have been obvious, before the effective filing date of the claimed invention, to a person having ordinary skill in the art to which said subject matter pertains to modify DAI to include a power gating circuit, as disclosed by BEKERMAN. One of ordinary skill in the art would have been motivated to make such a modification since a power gating circuit is a well-known solution for managing power to a memory resource, as taught by BEKERMAN. DAI and BEKERMAN are analogous/in the same field of endeavor as both references are directed to management of power for a memory resource.

Claim(s) 7, 8, 16 and 17 is/are rejected under 35 U.S.C. 103 as being unpatentable over DAI (U.S. Patent Application Publication #2022/0188016) and BRANDL (U.S. Patent Application Publication #2018/0019006) as applied to claims 1, 2, 5, 6, 10, 11, 14, 15, 19, 20 and 21 above, and further in view of YANG (U.S. Patent Application Publication #2023/0041476).

7. The apparatus of claim 1 (see DAI above), wherein the circuitry includes a memory network-on-chip (NoC) coupled to the plurality of memory channels and to the memory, wherein the memory NoC is configured to operate in accordance with a slice-based interleaving scheme (see YANG below). YANG discloses the following limitations that are not disclosed by DAI: a memory network-on-chip (NoC) coupled to the plurality of memory channels and to the memory (see [0045]: network-on-chip subsystem coupled to a host processor), wherein the memory NoC is configured to operate in accordance with a slice-based interleaving scheme (see DAI [0053]: address hashing policy will determine the number of memory channels that are used, when multiple channels are used the access to the memory is interleaved over the multiple channels to provide the necessary capacity and bandwidth needed). An NoC brings notable improvements over conventional bus and crossbar architectures (see [0045]). DAI already discloses the use of processing circuitry for data access (see [0105]). The NoC disclosed by YANG would improve the scalability and communication of the host processor disclosed by DAI. It would have been obvious, before the effective filing date of the claimed invention, to a person having ordinary skill in the art to which said subject matter pertains to modify DAI to include an NoC, as disclosed by YANG. One of ordinary skill in the art would have been motivated to make such a modification to improve scalability and communication, as taught by YANG. DAI and YANG are analogous/in the same field of endeavor as both references are directed to communication with memory systems.

8. The apparatus of claim 7, wherein the slice-based interleaving scheme enables an intra-slice interleaving of the memory access operations within the first slice (see DAI [0081]-[0083]: the address hashing policy will determine how many channels are used, data access would be interleaved among the channels that are selected to be used) and disables an inter-slice interleaving of the memory access operations between the first slice and the second slice (see DAI [0095]: data movement between channels can be enabled or disabled based on the address hashing policy).

16. The method of claim 10 (see DAI above), wherein the memory is accessed via a memory network-on-chip (NoC) and in accordance with a slice-based interleaving scheme (see YANG below). YANG discloses the following limitations that are not disclosed by DAI: wherein the memory is accessed via a memory network-on-chip (NoC) (see [0045]: network-on-chip subsystem coupled to a host processor) and in accordance with a slice-based interleaving scheme (see DAI [0053]: address hashing policy will determine the number of memory channels that are used, when multiple channels are used the access to the memory is interleaved over the multiple channels to provide the necessary capacity and bandwidth needed). An NoC brings notable improvements over conventional bus and crossbar architectures (see [0045]). DAI already discloses the use of processing circuitry for data access (see [0105]). The NoC disclosed by YANG would improve the scalability and communication of the host processor disclosed by DAI. It would have been obvious, before the effective filing date of the claimed invention, to a person having ordinary skill in the art to which said subject matter pertains to modify DAI to include an NoC, as disclosed by YANG. One of ordinary skill in the art would have been motivated to make such a modification to improve scalability and communication, as taught by YANG.
DAI and YANG are analogous/in the same field of endeavor as both references are directed to communication with memory systems.

17. The method of claim 16, wherein the slice-based interleaving scheme enables an intra-slice interleaving of the memory access operations within the first slice (see DAI [0081]-[0083]: the address hashing policy will determine how many channels are used, data access would be interleaved among the channels that are selected to be used) and disables an inter-slice interleaving of the memory access operations between the first slice and the second slice (see DAI [0095]: data movement between channels can be enabled or disabled based on the address hashing policy).

Claim(s) 9 is/are rejected under 35 U.S.C. 103 as being unpatentable over DAI (U.S. Patent Application Publication #2022/0188016) and BRANDL (U.S. Patent Application Publication #2018/0019006) as applied to claims 1, 2, 5, 6, 10, 11, 14, 15, 19, 20 and 21 above, and further in view of CHEN (U.S. Patent Application Publication #2020/0133903).

9. The apparatus of claim 1 (see DAI above), wherein the memory corresponds to a hybrid memory including a first memory of a first memory type and a second memory of a second memory type different than the first memory type, wherein the first slice is associated with the first memory, and wherein the second slice is associated with the second memory (see CHEN below).
CHEN discloses the following limitations that are not disclosed by DAI: wherein the memory corresponds to a hybrid memory including a first memory of a first memory type and a second memory of a second memory type different than the first memory type (see [0015]: multi-channel interface where each channel can be coupled to a different type of memory; [0020]-[0021]: each channel controller can be configured with a different protocol for interfacing with a particular type of memory), wherein the first slice is associated with the first memory, and wherein the second slice is associated with the second memory (see [0018]-[0019]: each channel is associated with memory). The use of a multi-channel memory configuration that allows for different protocols to be used creates a more flexible memory system through the ability to utilize advanced new media features (see [0003], [0016]). It would have been obvious, before the effective filing date of the claimed invention, to a person having ordinary skill in the art to which said subject matter pertains to modify DAI to incorporate a hybrid memory, as disclosed by CHEN. One of ordinary skill in the art would have been motivated to make such a modification to create a flexible memory system that is able to make use of advanced new media features, as taught by CHEN. DAI and CHEN are analogous/in the same field of endeavor as both references are directed to multi-channel memory systems.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. ZHANG [Heterogeneous Multi-Channel…] discloses DRAM power control by dividing the DRAM into multiple types of channels. Section 3.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to EDWARD J DUDEK JR whose telephone number is (571)270-1030. The examiner can normally be reached Monday - Friday, 8:00A-4:00P.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Hosain T Alam, can be reached at 571-272-3978. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/EDWARD J DUDEK JR/
Primary Examiner, Art Unit 2132
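The rejected claims describe a concrete mechanism: memory channels grouped into slices, a power-collapse trigger that drops one slice to a low-power mode independently of the others, and interleaving performed per slice so that accesses never target a collapsed slice. A minimal Python sketch of that behavior follows; it is purely illustrative (the claims cover hardware circuitry, and every name here — `SlicedMemory`, `route`, the two-slice layout — is hypothetical):

```python
# Illustrative model of slice-based memory channel power control:
# channels are grouped into slices; collapsing one slice does not
# affect the others, and interleaving excludes collapsed slices.

HIGH, LOW = "high_power", "low_power"

class SlicedMemory:
    def __init__(self, slices):
        # slices: list of channel-ID lists, e.g. [[0, 1, 2, 3], [4, 5, 6, 7]]
        self.slices = slices
        self.mode = [HIGH] * len(slices)

    def power_collapse(self, slice_idx):
        # Power collapse trigger condition detected for this slice:
        # move it to the low-power mode, independently of other slices.
        self.mode[slice_idx] = LOW

    def power_resume(self, slice_idx):
        # Power resume trigger condition: restore the high-power mode.
        self.mode[slice_idx] = HIGH

    def route(self, address):
        # Per-slice interleaving: choose among active (high-power)
        # slices only, then interleave across that slice's channels.
        active = [i for i, m in enumerate(self.mode) if m == HIGH]
        if not active:
            raise RuntimeError("no active slice available")
        channels = self.slices[active[address % len(active)]]
        return channels[(address // len(active)) % len(channels)]

mem = SlicedMemory([[0, 1, 2, 3], [4, 5, 6, 7]])
mem.power_collapse(0)  # slice 0 enters the low-power mode
targets = {mem.route(a) for a in range(32)}
print(sorted(targets))  # [4, 5, 6, 7] -- slice 0's channels are excluded
```

After `power_resume(0)`, the same routing would again spread accesses across all eight channels, matching the claim 2/11/20 resume behavior.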

Prosecution Timeline

May 09, 2024: Application Filed
May 01, 2025: Non-Final Rejection — §103
Jul 07, 2025: Response Filed
Sep 05, 2025: Final Rejection — §103
Oct 28, 2025: Request for Continued Examination
Nov 01, 2025: Response after Non-Final Action
Jan 16, 2026: Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596477: MEMORY DEVICE LOG DATA STORAGE (granted Apr 07, 2026; 2y 5m to grant)
Patent 12596504: SYSTEMS, METHODS, AND APPARATUS FOR COMPUTATIONAL STORAGE FUNCTIONS (granted Apr 07, 2026; 2y 5m to grant)
Patent 12578891: ASSIGNING BLOCKS OF MEMORY SYSTEMS (granted Mar 17, 2026; 2y 5m to grant)
Patent 12572280: MEMORY CONTROLLER AND NEAR-MEMORY SUPPORT FOR SPARSE ACCESSES (granted Mar 10, 2026; 2y 5m to grant)
Patent 12572302: PARTITIONED TRANSFERRING FOR WRITE BOOSTER (granted Mar 10, 2026; 2y 5m to grant)
Study what changed in these cases to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 89% (94% with interview, +5.1%)
Median Time to Grant: 2y 6m
PTA Risk: High
Based on 1102 resolved cases by this examiner. Grant probability derived from career allow rate.
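Since the grant probability is stated to derive directly from the career allow rate, the headline figures can be reproduced from the career data. A quick check, under the assumption (not documented by the tool) that the interview lift is simply added to the base rate:

```python
# Career data from the examiner profile above.
granted, resolved = 983, 1102
allow_rate = granted / resolved      # career allow rate
interview_lift = 0.051               # +5.1 percentage points (assumed additive)

base = round(allow_rate * 100)                     # -> 89
with_interview = round((allow_rate + interview_lift) * 100)  # -> 94
print(base, with_interview)  # 89 94, matching the projections shown
```

Both displayed figures (89% and 94%) fall out of this simple derivation, which supports the footnote's statement that the grant probability is taken from the career allow rate.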
