Prosecution Insights
Last updated: April 19, 2026
Application No. 17/313,668

DEVICE-INITIATED INPUT/OUTPUT ASSISTANCE FOR COMPUTATIONAL NON-VOLATILE MEMORY ON DISK-CACHED AND TIERED SYSTEMS

Final Rejection §103
Filed: May 06, 2021
Examiner: SAIN, GAUTAM
Art Unit: 2135
Tech Center: 2100 — Computer Architecture & Software
Assignee: SK Hynix NAND Product Solutions Corp. (dba Solidigm)
OA Round: 6 (Final)
Grant Probability: 67% (Favorable)
Expected OA Rounds: 7-8
Time to Grant: 3y 5m
Grant Probability with Interview: 92%

Examiner Intelligence

Career Allow Rate: 67% (277 granted / 415 resolved; +11.7% vs TC avg) — above average
Interview Lift: +25.1% for resolved cases with interview
Typical Timeline: 3y 5m average prosecution; 40 applications currently pending
Career History: 455 total applications across all art units

Statute-Specific Performance

§101: 5.9% (-34.1% vs TC avg)
§103: 65.1% (+25.1% vs TC avg)
§102: 1.4% (-38.6% vs TC avg)
§112: 25.2% (-14.8% vs TC avg)
Tech Center averages are estimates. Based on career data from 415 resolved cases.
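As a quick consistency check on the statute rows above (a sketch only; the dashboard's actual methodology is not disclosed), subtracting each reported delta from the examiner's rate recovers the implied Tech Center baseline, which is the same for every statute:

```python
# Each row reports (examiner rate, delta vs TC avg) in percentage points.
rows = {
    "101": (5.9, -34.1),
    "103": (65.1, +25.1),
    "102": (1.4, -38.6),
    "112": (25.2, -14.8),
}

# rate - delta gives the implied Tech Center baseline for each statute.
baselines = {statute: round(rate - delta, 1) for statute, (rate, delta) in rows.items()}
print(baselines)  # every statute implies the same 40.0% TC baseline
```

All four rows imply a 40.0% baseline, which suggests the tool compares each statute against a single Tech Center figure rather than per-statute averages.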

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Other references: Nago (US 20220091778) – NAND flash memory, read.

Claim Rejections - 35 USC § 103

3. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

4. Claims 1, 2, 4, 9, 10, 12, 21, and 22 are rejected under 35 U.S.C. 103 as being unpatentable over Burridge (US 20190042441) in view of Ouyang (US 20180136877), and further in view of Walker (US 20220027270 A1) and Flynn (US 20110022801) (hereinafter “Flynn801”).

Claim 1. Burridge discloses A controller (e.g., a semiconductor apparatus 76 (e.g., chip, die), 0025 Fig. 5) comprising: one or more substrates (e.g., one or more substrates 78, 0025 Fig.
5); and logic coupled to the one or more substrates, wherein the logic is at least partly implemented in one or more of configurable or fixed-functionality hardware, and the logic coupled to the one or more substrates is to: (e.g., logic 80 (e.g., transistor array and other integrated circuit/IC components) coupled to the substrate(s) 78, 0025 Fig. 5). Burridge does not disclose, but Ouyang discloses detect an application function to be executed on a computational storage device (e.g., a non-volatile memory element 123 is configured to receive storage requests from a device driver or other executable application, 0035; [0059] In the depicted embodiment, the on-die controller 250 uses the command/address decoder 240 to receive command and address information for storage operations, via the command/address path 152. In certain embodiments, command and address information may include commands, such as read commands, write commands, program commands, erase commands, status query commands, and any other commands supported by the cores 200); select a target computational storage device from a plurality of computational storage devices in a storage node (e.g., device controller 126 may select a die 202 (such as the depicted die 202 or another die 202) as a target for a storage operation, and may communicate with the on-die controller 250 for the selected die 202 to send command and address information and to transfer data for storage operations on the selected die 202. Data for a storage operation may include data to be written to a core 200, data to be read from a core 200, or the like, and transferring data may include sending or receiving the data., 0052, Fig. 1; a multiple-core memory die 202 may include a plurality of non-volatile memory cores 200.
For example, in the depicted embodiment, the memory die 202 includes two cores 200a-b, referred to as “core 0” 200a, and “core 1” 200b., 0054); and issue the application function to the target computational storage device (e.g., send command and address information and to transfer data for storage operations on the selected die 202, 0052). It would have been obvious to one of ordinary skill in the art prior to the filing date of the claimed invention to modify the semiconductor apparatus of Burridge, with Ouyang, providing the benefit of controlling commands and data for multi-core non-volatile memory … for a plurality of storage operations for one or more non-volatile memory cores (see Ouyang, 0004); servicing storage operations independently or in parallel allows a non-volatile memory die 202 to execute, process, or service storage operations faster than a single-core memory die (0054). Burridge in view of Ouyang does not disclose, but Walker discloses the target computational storage device having a front-end device to cache data for the target computational storage device; move dirty data from the front-end device to the target computational storage device (e.g., 0039, Fig. 2 - cache controller 232 can write dirty cache lines of the cache memory 234 back to the backend memory 236A-N to ensure the data maintained at the backend memory 236A-N is valid data); and device for execution of the application function by the target computational storage device with data already stored on the target computational storage device, including the dirty data (e.g., [0030] In general, the memory sub-system controller 115 can receive commands or operations from the host system 120 and can convert the commands or operations into instructions or appropriate commands to achieve the desired access to the memory devices 130.
). It would have been obvious to one of ordinary skill in the art prior to the filing date of the claimed invention to modify the semiconductor apparatus of Burridge, with Ouyang, with Walker, providing the benefit of ensuring the data maintained at the backend memory is valid (see Walker, 0039). Burridge in view of Ouyang and Walker does not disclose, but Flynn801 discloses wherein both the front-end device and the target computational storage device comprise non-volatile memory, wherein both the front-end device and the target computational storage device are separate storage devices and comprise separate respective processors (e.g., Fig. 1, 0058 - first cache 102 and the second cache 112 are each separate data storage devices. In a further embodiment, the first cache 102 and the second cache 112 may both be part of a single data storage device.; 0059 - first cache 102 and the second cache 112 are each non-volatile, solid-state storage devices, with a solid-state storage controller 104 and non-volatile, solid-state storage media 110). It would have been obvious to one of ordinary skill in the art prior to the filing date of the claimed invention to modify the semiconductor apparatus of Burridge, with Ouyang, with Walker, with Flynn801, providing the benefit of write caching for a storage device (see Flynn801, 0005) in response to the present state of the art, and in particular, in response to the problems and needs in the art that have not yet been fully solved by currently available systems for caching data. Accordingly, the present invention has been developed to provide a method, apparatus, and computer program product that overcome many or all of the above-discussed shortcomings in the art (0006).

Claim 2.
Burridge does not disclose, but Ouyang discloses wherein the application function comprises a search function component of a database application located at a host device (e.g., 0038 - The storage clients 116 include operating systems, file systems, database applications, server applications, kernel-level processes, user-level processes, applications, and the like.). It would have been obvious to one of ordinary skill in the art prior to the filing date of the claimed invention to modify the semiconductor apparatus of Burridge, with Ouyang, providing the benefit of controlling commands and data for multi-core non-volatile memory … for a plurality of storage operations for one or more non-volatile memory cores (see Ouyang, 0004).

Claim 4. Burridge does not disclose, but Ouyang discloses wherein the plurality of computational storage devices comprises a first storage device and a second storage device (e.g., 0044, Fig. 1 - one or more elements 123 of non-volatile memory media 122, in certain embodiments, comprise storage class memory); wherein the logic coupled to the one or more substrates is to: detect a unified data condition in which an entirety of data associated with the application function is located in the first storage device, wherein the first storage device is selected as the target computational storage device; and issue the application function to the first storage device (e.g., device controller 126 may select a die 202 (such as the depicted die 202 or another die 202) as a target for a storage operation, and may communicate with the on-die controller 250 for the selected die 202 to send command and address information and to transfer data for storage operations on the selected die 202, 0052).
It would have been obvious to one of ordinary skill in the art prior to the filing date of the claimed invention to modify the semiconductor apparatus of Burridge, with Ouyang, providing the benefit of controlling commands and data for multi-core non-volatile memory … for a plurality of storage operations for one or more non-volatile memory cores (see Ouyang, 0004).

Claim 9. Burridge discloses A storage node (e.g., a semiconductor apparatus 76 (e.g., chip, die), 0025 Fig. 5) comprising: a plurality of computational storage devices including a first storage device and a second storage device (e.g., cache 70 and mass storage 56, 0021, 0023); a controller coupled to the plurality of computational storage devices, the controller including logic coupled to one or more substrates, wherein the logic is to (e.g., a host processor 60 (e.g., central processing unit/CPU), 0020 Fig. 5; logic 80 (e.g., transistor array and other integrated circuit/IC components) coupled to the substrate(s) 78, 0025 Fig. 5). Burridge does not disclose, but Ouyang discloses detect an application function to be executed on one of the plurality of computational storage devices (e.g., a non-volatile memory element 123 is configured to receive storage requests from a device driver or other executable application, 0035); select a target computational storage device from a plurality of computational storage devices including a first storage device and a second storage device in the storage node (e.g., device controller 126 may select a die 202 (such as the depicted die 202 or another die 202) as a target for a storage operation, and may communicate with the on-die controller 250 for the selected die 202 to send command and address information and to transfer data for storage operations on the selected die 202.
Data for a storage operation may include data to be written to a core 200, data to be read from a core 200, or the like, and transferring data may include sending or receiving the data., 0052, Fig. 1); and issue the application function to the target computational storage device (e.g., send command and address information and to transfer data for storage operations on the selected die 202, 0052; a multiple-core memory die 202 may include a plurality of non-volatile memory cores 200. For example, in the depicted embodiment, the memory die 202 includes two cores 200a-b, referred to as “core 0” 200a, and “core 1” 200b., 0054). It would have been obvious to one of ordinary skill in the art prior to the filing date of the claimed invention to modify the semiconductor apparatus of Burridge, with Ouyang, providing the benefit of controlling commands and data for multi-core non-volatile memory … for a plurality of storage operations for one or more non-volatile memory cores (see Ouyang, 0004). Burridge in view of Ouyang does not disclose, but Walker discloses wherein the target computational storage device has a front-end device to cache data for the target computational storage device, move dirty data from the front-end device to the target computational storage device (e.g., 0039, Fig. 2 - cache controller 232 can write dirty cache lines of the cache memory 234 back to the backend memory 236A-N to ensure the data maintained at the backend memory 236A-N is valid data); and device for execution of the application function by the target computational storage device with data already stored on the target computational storage device, including the dirty data (e.g., [0030] In general, the memory sub-system controller 115 can receive commands or operations from the host system 120 and can convert the commands or operations into instructions or appropriate commands to achieve the desired access to the memory devices 130.
). It would have been obvious to one of ordinary skill in the art prior to the filing date of the claimed invention to modify the semiconductor apparatus of Burridge, with Ouyang, with Walker, providing the benefit of ensuring the data maintained at the backend memory is valid (see Walker, 0039). Burridge in view of Ouyang and Walker does not disclose, but Flynn801 discloses wherein both the front-end device and the target computational storage device comprise non-volatile memory, wherein both the front-end device and the target computational storage device are separate storage devices and comprise separate respective processors (e.g., Fig. 1, 0058 - first cache 102 and the second cache 112 are each separate data storage devices. In a further embodiment, the first cache 102 and the second cache 112 may both be part of a single data storage device.; 0059 - first cache 102 and the second cache 112 are each non-volatile, solid-state storage devices, with a solid-state storage controller 104 and non-volatile, solid-state storage media 110). It would have been obvious to one of ordinary skill in the art prior to the filing date of the claimed invention to modify the semiconductor apparatus of Burridge, with Ouyang, with Walker, with Flynn801, providing the benefit of write caching for a storage device (see Flynn801, 0005) in response to the present state of the art, and in particular, in response to the problems and needs in the art that have not yet been fully solved by currently available systems for caching data. Accordingly, the present invention has been developed to provide a method, apparatus, and computer program product that overcome many or all of the above-discussed shortcomings in the art (0006).

Claim 10 is rejected for reasons similar to Claim 2 above. Claim 12 is rejected for reasons similar to Claim 4 above. Claim 21 is rejected for reasons similar to Claims 1 and/or 9 above. Similarity admitted by Applicant in the Remarks.
Additionally, Burridge discloses a host device to execute an application … (e.g., [0020] The system 50 may also include a graphics processor 62 and a host processor 60, Fig. 4). Burridge in view of Ouyang and Walker does not disclose, but Flynn801 discloses wherein both the front-end device and the target computational storage device comprise non-volatile memory, wherein both the front-end device and the target computational storage device are separate storage devices and comprise separate respective processors (e.g., Fig. 1, 0058 - first cache 102 and the second cache 112 are each separate data storage devices. In a further embodiment, the first cache 102 and the second cache 112 may both be part of a single data storage device.; 0059 - first cache 102 and the second cache 112 are each non-volatile, solid-state storage devices, with a solid-state storage controller 104 and non-volatile, solid-state storage media 110). It would have been obvious to one of ordinary skill in the art prior to the filing date of the claimed invention to modify the semiconductor apparatus of Burridge, with Ouyang, with Walker, with Flynn801, providing the benefit of write caching for a storage device (see Flynn801, 0005) in response to the present state of the art, and in particular, in response to the problems and needs in the art that have not yet been fully solved by currently available systems for caching data. Accordingly, the present invention has been developed to provide a method, apparatus, and computer program product that overcome many or all of the above-discussed shortcomings in the art (0006). Claim 22 is rejected for reasons similar to Claim 2 above.

Claims 3, 11, 23 are rejected under 35 U.S.C. 103 as being unpatentable over Burridge (US 20190042441) in view of Ouyang (US 20180136877) and Walker (cited above) and Flynn801 (cited above), and further in view of Flynn (US 20140237159) (hereinafter “Flynn159”) and Thukral (US 9146868).

Claim 3.
Burridge does not disclose, but Ouyang discloses wherein the plurality of computational storage devices comprises a first storage device and a second storage device (e.g., 0044, Fig. 1 - one or more elements 123 of non-volatile memory media 122, in certain embodiments, comprise storage class memory). It would have been obvious to one of ordinary skill in the art prior to the filing date of the claimed invention to modify the semiconductor apparatus of Burridge, with Ouyang, providing the benefit of controlling commands and data for multi-core non-volatile memory … for a plurality of storage operations for one or more non-volatile memory cores (see Ouyang, 0004). Burridge in view of Ouyang and Walker and Flynn801 does not disclose, but Flynn159 discloses wherein the logic coupled to the one or more substrates is to: detect a split data condition in which a first subset of data associated with the application function is located in the first storage device and a second subset of data associated with the application function is located in the second storage device (e.g., track dirty and clean data in a similar manner to distinguish dirty data from clean data when operating as a cache., 0164); and initiate a transfer of the first subset of data to the second storage device in response to the split data condition, wherein the second storage device is selected as the target computational storage device (e.g., during a flush of the write pipeline 108 to fill the remainder of the logical page in order to improve the efficiency of storage within the non-volatile storage media 110 and thereby reduce the frequency of garbage collection., 0101). It would have been obvious to one of ordinary skill in the art prior to the filing date of the claimed invention to modify the semiconductor apparatus of Burridge, with Ouyang, Walker, and Flynn801, with Flynn159, providing the benefit that storage device 102 provides nonvolatile storage for the host computing system 114. FIG.
1 shows the storage device 102 as a non-volatile storage device 102 comprising a storage controller 104, a write data pipeline 106, a read data pipeline 108, and non-volatile storage media 110 (see Flynn159, 0034) and refreshing data stored on the non-volatile storage media (0102) and to allow data to be stored contiguously on physical storage locations of the non-volatile storage (0103). Burridge in view of Ouyang and Walker and Flynn801 and Flynn159 does not disclose, but Thukral discloses wherein the application function is issued to the second storage device when the transfer is complete (e.g., Upon flushing the write operations to the backing store, the file-system application may superimpose a synchronization marker on the last of the write operations stored in the cache's queue and the backing store's queue to represent the most recent synchronization point between the backing store and the cache, col 13:62 – col 14:2). It would have been obvious to one of ordinary skill in the art prior to the filing date of the claimed invention to modify the semiconductor apparatus of Burridge, with Ouyang, Walker, and Flynn801, with Flynn159, with Thukral, providing the benefit that the file-system application may need to synchronize the backing store and the cache in order to prevent the user application from experiencing downtime or input/output errors (see Thukral, col 14:5-8). Claim 11 is rejected for reasons similar to Claim 3 above. Claim 23 is rejected for reasons similar to Claim 3 above.

Claims 6, 14, 24 are rejected under 35 U.S.C. 103 as being unpatentable over Burridge (US 20190042441) in view of Ouyang (US 20180136877) and Walker (cited above) and Flynn801 (cited above), and further in view of Zohar (US 20060031635).

Claim 6. Burridge does not disclose, but Ouyang discloses wherein the plurality of computational storage devices comprises a first storage device and a second storage device (e.g., 0044, Fig.
1 - one or more elements 123 of non-volatile memory media 122, in certain embodiments, comprise storage class memory). It would have been obvious to one of ordinary skill in the art prior to the filing date of the claimed invention to modify the semiconductor apparatus of Burridge, with Ouyang, providing the benefit of controlling commands and data for multi-core non-volatile memory … for a plurality of storage operations for one or more non-volatile memory cores (see Ouyang, 0004). Burridge in view of Ouyang and Walker and Flynn801 does not disclose, but Zohar discloses wherein the first storage device is selected as the target computational storage device, and wherein the logic coupled to the one or more substrates is to: issue a data specifier function to the first storage device (e.g., upon receiving a request for one or a set of data blocks associated with a given logical unit of data, 0035); detect an instruction from the first storage device to transfer a subset of data associated with the application function from the second storage device to the first storage device; and initiate the transfer in response to the instruction before execution of the application function by the first storage device (e.g., 0031 - to "prefetch" data into cache, namely, to bring into cache data that has not yet been requested by the host but that is likely to be requested within a short period of time.; may cause all or part of the logical unit to be retrieved from one or more mass storage devices and placed in cache (i.e. prefetched). If all the data blocks to be prefetched reside on the first mass storage device, which is connected or otherwise functionally associated with the first controller, the controller may retrieve and place in cache all the data blocks to be prefetched, 0035; 0039 - some or all of a data segment or of a logical unit may be retrieved into cache from a mass data storage device or from a plurality of mass data storage devices).
It would have been obvious to one of ordinary skill in the art prior to the filing date of the claimed invention to modify the semiconductor apparatus of Burridge, with Ouyang, Walker, and Flynn801, with Zohar, providing the benefit that a decision is made, based on information about recent IO activity in a storage system, to "prefetch" data into cache, namely, to bring into cache data that has not yet been requested by the host but that is likely to be requested within a short period of time (see Zohar, 0031). Claim 14 is rejected for reasons similar to Claim 6 above. Claim 24 is rejected for reasons similar to Claim 6 above.

Claims 7, 8, 15, 16, 25, 26 are rejected under 35 U.S.C. 103 as being unpatentable over Burridge (US 20190042441) in view of Ouyang (US 20180136877) and Walker (cited above) and Flynn801 (cited above), and further in view of Jakowski (US 20180089082).

Claim 7. Burridge in view of Ouyang, Walker and Flynn801 does not disclose, but Jakowski discloses wherein the front-end device and the target computational storage device are tiered storage devices (e.g., Fig. 2, 0026 - tiered storage, such as a first tier of solid-state drives and a second tier of hard disk drives). It would have been obvious to one of ordinary skill in the art prior to the filing date of the claimed invention to modify the semiconductor apparatus of Burridge, with Ouyang, with Walker and Flynn801, with Jakowski, providing the benefit that, to improve latency for processing I/O requests, some storage devices 120 may include (or may use) a cache 124 in addition to their primary data storage mechanism 126. The cache 124, for example, may be a form of storage with lower latency (e.g., faster access) but less storage capacity than the primary data storage mechanism 126 of the storage device 120 (see Jakowski).

Claim 8.
Burridge in view of Ouyang, Walker and Flynn801 does not disclose, but Jakowski discloses wherein the target computational storage device is to operate more slowly than the front-end storage device (e.g., Fig. 2, 0011 - cache 124, for example, may be a form of storage with lower latency (e.g., faster access) but less storage capacity than the primary data storage mechanism 126). It would have been obvious to one of ordinary skill in the art prior to the filing date of the claimed invention to modify the semiconductor apparatus of Burridge, with Ouyang, with Walker and Flynn801, with Jakowski, providing the benefit that, to improve latency for processing I/O requests, some storage devices 120 may include (or may use) a cache 124 in addition to their primary data storage mechanism 126. The cache 124, for example, may be a form of storage with lower latency (e.g., faster access) but less storage capacity than the primary data storage mechanism 126 of the storage device 120 (see Jakowski). Claim 15 is rejected for reasons similar to Claim 7 above. Claim 16 is rejected for reasons similar to Claim 8 above. Claim 25 is rejected for reasons similar to Claim 7 above. Claim 26 is rejected for reasons similar to Claim 8 above.

Response to Arguments

Applicant's arguments filed 12/11/2025 have been fully considered but they are not persuasive. For Claims 1, 9 and 21, Applicant argues that the cited references do not disclose the amended limitations. The Office disagrees. The present updated Office Action rejects the amended limitations. Specifically, Burridge in view of Ouyang and Walker does not disclose, but Flynn801 discloses wherein both the front-end device and the target computational storage device comprise non-volatile memory, wherein both the front-end device and the target computational storage device are separate storage devices and comprise separate respective processors (e.g., 0058 - first cache 102 and the second cache 112 are each separate data storage devices.
In a further embodiment, the first cache 102 and the second cache 112 may both be part of a single data storage device; 0059 - first cache 102 and the second cache 112 are each non-volatile, solid-state storage devices, with a solid-state storage controller 104 and non-volatile, solid-state storage media 110). It would have been obvious to one of ordinary skill in the art prior to the filing date of the claimed invention to modify the semiconductor apparatus of Burridge, with Ouyang, with Walker, with Flynn801, providing the benefit of write caching for a storage device (see Flynn801, 0005) in response to the present state of the art, and in particular, in response to the problems and needs in the art that have not yet been fully solved by currently available systems for caching data. Accordingly, the present invention has been developed to provide a method, apparatus, and computer program product that overcome many or all of the above-discussed shortcomings in the art (0006). Applicant’s arguments for dependent claims are based on their respective base independent claims 1 and 9, which are addressed above.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to GAUTAM SAIN whose telephone number is (571)270-3555. The examiner can normally be reached M-F 9-5. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jared Rutz can be reached at 571-272-5535. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /GAUTAM SAIN/Primary Examiner, Art Unit 2135
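The claim limitations at the center of this rejection describe a flush-then-execute flow: dirty data is moved from a front-end cache device to the target computational storage device before the application function runs device-side against data already on the device. A minimal sketch of that flow, with all class and method names hypothetical (nothing here is taken from the claims or the cited references):

```python
class FrontEndCache:
    """Hypothetical front-end NVM cache holding possibly-dirty entries."""
    def __init__(self):
        self.lines = {}                       # key -> (value, dirty flag)

    def write(self, key, value):
        self.lines[key] = (value, True)       # writes land dirty in the cache

    def flush_dirty(self, backend):
        """Move dirty entries to the backend device, then mark them clean."""
        for key, (value, dirty) in self.lines.items():
            if dirty:
                backend.store(key, value)
                self.lines[key] = (value, False)


class ComputationalStorageDevice:
    """Hypothetical target device that executes a function on local data."""
    def __init__(self):
        self.data = {}

    def store(self, key, value):
        self.data[key] = value

    def execute(self, fn):
        # Device-side execution over data already stored on the device,
        # including any dirty data just flushed from the front end.
        return fn(self.data)


cache, device = FrontEndCache(), ComputationalStorageDevice()
device.store("a", 1)                  # data already resident on the target device
cache.write("b", 2)                   # dirty data still in the front-end cache
cache.flush_dirty(device)             # move dirty data before issuing the function
result = device.execute(lambda d: sum(d.values()))   # sees both "a" and "b"
```

The ordering is the point of contention in prosecution: the function is issued only after the flush, so the device-side computation operates on a complete, valid copy of the data.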

Prosecution Timeline

May 06, 2021: Application Filed
Sep 07, 2021: Response after Non-Final Action
May 14, 2024: Non-Final Rejection — §103
Jul 05, 2024: Interview Requested
Jul 17, 2024: Examiner Interview Summary
Jul 17, 2024: Applicant Interview (Telephonic)
Jul 19, 2024: Response Filed
Sep 09, 2024: Final Rejection — §103
Sep 20, 2024: Interview Requested
Sep 26, 2024: Applicant Interview (Telephonic)
Sep 26, 2024: Examiner Interview Summary
Oct 17, 2024: Response after Non-Final Action
Oct 29, 2024: Response after Non-Final Action
Oct 29, 2024: Examiner Interview (Telephonic)
Dec 04, 2024: Request for Continued Examination
Dec 12, 2024: Response after Non-Final Action
Feb 11, 2025: Non-Final Rejection — §103
Mar 20, 2025: Response Filed
May 22, 2025: Final Rejection — §103
Jul 22, 2025: Examiner Interview Summary
Jul 22, 2025: Applicant Interview (Telephonic)
Jul 22, 2025: Response after Non-Final Action
Aug 19, 2025: Request for Continued Examination
Aug 28, 2025: Response after Non-Final Action
Sep 16, 2025: Non-Final Rejection — §103
Oct 22, 2025: Applicant Interview (Telephonic)
Oct 22, 2025: Examiner Interview Summary
Dec 12, 2025: Response Filed
Feb 25, 2026: Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602326: STORAGE DEVICE AND OPERATION METHOD THEREOF (granted Apr 14, 2026; 2y 5m to grant)
Patent 12585551: SMART LOAD BALANCING OF CONTAINERS FOR DATA PROTECTION USING SUPERVISED LEARNING (granted Mar 24, 2026; 2y 5m to grant)
Patent 12585386: MEMORY DEVICE WITH COMPUTATION FUNCTION AND OPERATION METHOD THEREOF (granted Mar 24, 2026; 2y 5m to grant)
Patent 12578873: MEMORY SYSTEM AND METHOD (granted Mar 17, 2026; 2y 5m to grant)
Patent 12572303: CACHE MANAGEMENT IN A MEMORY SUBSYSTEM (granted Mar 10, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 7-8
Grant Probability: 67%
With Interview: 92% (+25.1%)
Median Time to Grant: 3y 5m
PTA Risk: High
Based on 415 resolved cases by this examiner. Grant probability derived from career allow rate.
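The tool's exact model is not disclosed, but the headline projections are mutually consistent with the stated derivation (grant probability from career allow rate, plus the reported interview lift). A sketch of that arithmetic:

```python
# Career data reported for this examiner.
granted, resolved = 277, 415

# Base grant probability = career allow rate, rounded to whole percent.
allow_rate = round(100 * granted / resolved)       # -> 67

# Reported interview lift, in percentage points.
interview_lift = 25.1

# Projected probability with an interview.
with_interview = round(allow_rate + interview_lift)  # -> 92

print(allow_rate, with_interview)
```

So 277/415 reproduces the 67% figure, and adding the +25.1-point lift reproduces the 92% with-interview projection.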

Free tier: 3 strategy analyses per month