Prosecution Insights
Last updated: April 19, 2026
Application No. 18/240,778

SYSTEMS AND METHODS FOR PREDICTIVE CACHE MANAGEMENT BASED UPON SYSTEM WORKFLOW

Status: Non-Final Office Action (§103)
Filed: Aug 31, 2023
Examiner: FAAL, BABOUCARR
Art Unit: 2138
Tech Center: 2100 — Computer Architecture & Software
Assignee: Relativity Oda LLC
OA Round: 3 (Non-Final)
Grant Probability: 80% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 10m
Grant Probability with Interview: 95%

Examiner Intelligence

Career Allow Rate: 80% (above average; 423 granted / 527 resolved; +25.3% vs TC average)
Interview Lift: +15.1% among resolved cases with an interview (a strong lift)
Typical Timeline: 2y 10m average prosecution; 34 applications currently pending
Career History: 561 total applications across all art units

Statute-Specific Performance

§101: 6.4% (-33.6% vs TC avg)
§102: 27.2% (-12.8% vs TC avg)
§103: 49.6% (+9.6% vs TC avg)
§112: 8.8% (-31.2% vs TC avg)
Tech Center averages are estimates. Based on career data from 527 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-5, 7, 10-15, 17, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Creed et al. (US 2022/0129379), herein Creed, and Hazel et al., herein Hazel, in view of Chen et al. (US 2023/0214308), herein Chen.
Per claim 1, Creed discloses: identifying, via one or more processors, a workflow configured to interact with a cache paired to a cloud storage system (fig. 2, ¶0057; a method 400 can be executed by a memory management processor (e.g., the cache subsystem 105 of FIG. 1). At 405, the method 400 can include receiving an IO workload. At 410, the method 400 can include analyzing one or more SL workload volumes received within the IO workload); and predicting, via the one or more processors, an expected input output operations (IOPS) pattern for transactions generated by the workflow, wherein the IOPS pattern is indicative of a proportion of read operations to write operations (¶0052; first model may predict that the first anticipated write workload is likely to include, from greatest to lowest in frequency, write IO sizes of 128K, 8K, and 64K. Similarly, the second model may predict that the second anticipated read workload is likely to include, from greatest to lowest in frequency, read IO sizes of 64K, 16K, and 8K. Based on the predicted read/write workloads defined by each of the models, the optimizer 338 can repartition and/or reallocate cache slot bins to each of one or more of the cache segments 225, 230, 235).

Creed discloses configuring cache slots based on IOPS but does not specifically disclose: and configuring, via the one or more processors, one or more cache management workers prior to the transactions being generated by the workflow based upon the expected IOPS pattern. However, Hazel discloses this limitation (col. 29, lines 20-30; based on the notification 812, the queueing component 820 may determine to increase, decrease, or maintain the quantity of memory workers allocated to perform the real-time indexing. For example, based on the workload 822, the queueing component 820 may determine to increase (or "reserve") one or more additional memory workers to facilitate an increase in the amount of ingested data; the examiner notes that the amendment merely requires configuring workers prior to executing the workload).

It would have been obvious to one having ordinary skill in the art at the effective filing date of the invention to combine the teachings of Creed with Hazel's cache memory management to optimize the cache memory. Hazel improves the throughput of the memory system (¶0044; By partitioning the cache segments 225, 230, 235 along one or more dimensions, the cache subsystem 105 can advantageously provide higher memory resolution and optimization, while also avoiding wasted cache segments that mirror read operations).

The combined teachings do not specifically disclose: predicting, via the one or more processors, when the expected IOPS pattern will occur; ...and when the expected IOPS pattern will occur. However, Chen discloses this limitation (¶0005; obtaining the time series data, a forecasting technique can be used to extrapolate the time series data to predict or forecast future demand or usage levels of the computing service. For instance, exponential smoothing is an example forecast technique for predicting a future data point based on historical data points by smoothing time series data using an exponential window function that assigns exponentially decreasing weights over time). It would have been obvious to one having ordinary skill in the art at the effective filing date of the invention to combine the teachings of Creed and Hazel with Chen's time series forecasting technique to predict cloud services usage. Chen improves reliability of the cloud service (¶0070).
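To make the exponential-smoothing technique described in Chen's cited ¶0005 concrete, here is a minimal sketch: each forecast blends the latest observation with the previous forecast, so older data receives exponentially decreasing weight. All names and values below are illustrative assumptions, not drawn from Chen or the application.

```python
def exponential_smoothing(series, alpha):
    """Return one-step-ahead forecasts for a usage time series.

    alpha in (0, 1]: higher alpha weights recent observations more
    heavily; lower alpha smooths more aggressively.
    """
    if not 0.0 < alpha <= 1.0:
        raise ValueError("alpha must be in (0, 1]")
    forecast = series[0]  # seed with the first observation
    forecasts = [forecast]
    for observation in series[1:]:
        # New forecast = weighted blend of newest data and prior forecast.
        forecast = alpha * observation + (1 - alpha) * forecast
        forecasts.append(forecast)
    return forecasts

# Hypothetical hourly IOPS samples; the last value forecasts the
# next, unobserved period.
iops = [1200, 1250, 1400, 1380, 1600]
print(exponential_smoothing(iops, alpha=0.5)[-1])
```

With alpha = 0.5 the forecast tracks the rising trend while damping the dip at the fourth sample, which is the behavior Chen's passage invokes for predicting when future demand levels will occur.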
Per claim 2, Creed discloses: wherein predicting the expected IOPS pattern comprises: analyzing, via a workflow profiler, the workflow to predict whether the expected IOPS pattern is one of a read-heavy IOPS pattern or a write-heavy IOPS pattern (¶0052; first model may predict that the first anticipated write workload is likely to include, from greatest to lowest in frequency, write IO sizes of 128K, 8K, and 64K. Similarly, the second model may predict that the second anticipated read workload is likely to include, from greatest to lowest in frequency, read IO sizes of 64K, 16K, and 8K).

Per claim 3, Creed discloses: wherein the workflow profiler identifies component actions of the workflow to predict the IOPS pattern for the component actions (¶0047; the manager 334 can use a recurring neural network (RNN) to analyze the historical and current IO workloads. The RNN can be a Long Short-Term Memory (LSTM) network that anticipates the workloads based on historical/current IO workload input parameters. Further, the ML techniques can include a time series learning logic to anticipate the workloads. The manager 334 can use parameters such as IO types and sizes, logical block address (LBA), response times, IO data types, IO payloads, and time of any observed IO pattern, amongst other input parameters for ML analysis).

Per claim 4, Creed discloses: further comprising: training, via the one or more processors, a model corresponding to a component action of the workflow based upon a detected IOPS pattern when executing the component action (¶0047; the manager 334 can use a recurring neural network (RNN) to analyze the historical and current IO workloads. The RNN can be a Long Short-Term Memory (LSTM) network that anticipates the workloads based on historical/current IO workload input parameters. Further, the ML techniques can include a time series learning logic to anticipate the workloads. The manager 334 can use parameters such as IO types and sizes, logical block address (LBA), response times, IO data types, IO payloads, and time of any observed IO pattern, amongst other input parameters for ML analysis).

Per claim 5, Creed discloses: wherein predicting the expected IOPS pattern comprises: predicting, via the one or more processors, when the expected IOPS pattern for the component actions of the workflow are to commence (¶0047; The RNN can be a Long Short-Term Memory (LSTM) network that anticipates the workloads based on historical/current IO workload input parameters. Further, the ML techniques can include a time series learning logic to anticipate the workloads. The manager 334 can use parameters such as IO types and sizes, logical block address (LBA), response times, IO data types, IO payloads, and time of any observed IO pattern, amongst other input parameters for ML analysis).

Per claim 7, the combined teachings of Creed and Hazel disclose: wherein configuring the one or more cache management workers comprises: determining, via the one or more processors, that the expected IOPS pattern is a write-heavy IOPS pattern (¶0052; first model may predict that the first anticipated write workload is likely to include, from greatest to lowest in frequency, write IO sizes of 128K, 8K, and 64K.... Based on the predicted read/write workloads defined by each of the models, the optimizer 338 can repartition and/or reallocate cache slot bins to each of one or more of the cache segments 225, 230, 235). Hazel discloses: and performing, via the one or more processors, at least one of increasing a number of data write-back workers, increasing a number of data reaper workers, and decreasing a number of pager workers (col. 29, lines 20-30; based on the notification 812, the queueing component 820 may determine to increase, decrease, or maintain the quantity of memory workers allocated to perform the real-time indexing. For example, based on the workload 822, the queueing component 820 may determine to increase (or "reserve") one or more additional memory workers to facilitate an increase in the amount of ingested data; the examiner notes that the claim only requires one worker in view of the write. The reaper worker is interpreted as an additional memory worker).

Per claim 10, Creed discloses: the workflow is a first workflow; and the method further comprises: identifying, via one or more processors, a second workflow configured to interact with the cache (fig. 2, ¶0057; a method 400 can be executed by a memory management processor (e.g., the cache subsystem 105 of FIG. 1). At 405, the method 400 can include receiving an IO workload. At 410, the method 400 can include analyzing one or more SL workload volumes received within the IO workload); and predicting, via the one or more processors, an expected IOPS pattern for transactions generated by the second workflow; predicting, via the one or more processors, an expected aggregate IOPS pattern for the transactions generated by the first and second workflows (¶0052; first model may predict that the first anticipated write workload is likely to include, from greatest to lowest in frequency, write IO sizes of 128K, 8K, and 64K. Similarly, the second model may predict that the second anticipated read workload is likely to include, from greatest to lowest in frequency, read IO sizes of 64K, 16K, and 8K. Based on the predicted read/write workloads defined by each of the models, the optimizer 338 can repartition and/or reallocate cache slot bins to each of one or more of the cache segments 225, 230, 235; the examiner notes that the aggregate is interpreted as a history of IOPS of the respective workflows/workloads as disclosed in ¶0047; The RNN can be a Long Short-Term Memory (LSTM) network that anticipates the workloads based on historical/current IO workload input parameters).
Creed discloses configuring cache slots based on IOPS but does not specifically disclose: and configuring, via the one or more processors, the one or more cache management workers based upon the expected aggregate IOPS pattern. However, Hazel discloses this limitation (col. 29, lines 20-30; based on the notification 812, the queueing component 820 may determine to increase, decrease, or maintain the quantity of memory workers allocated to perform the real-time indexing. For example, based on the workload 822, the queueing component 820 may determine to increase (or "reserve") one or more additional memory workers to facilitate an increase in the amount of ingested data).

Claims 11-15 are the system claims corresponding to method claims 1-5, 7, and 10 and are rejected for the same reasons set forth in connection with the rejection of claims 1-5, 7, and 10. Claim 17 is the system claim corresponding to method claim 7 and is rejected for the same reasons set forth in connection with the rejection of claim 7. Claim 19 is the system claim corresponding to method claim 10 and is rejected for the same reasons set forth in connection with the rejection of claim 10. Claim 20 is the CRM claim corresponding to method claim 1 and is rejected for the same reasons set forth in connection with the rejection of claim 1.

Claims 6, 9, and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Creed et al. (US 2022/0129379), herein Creed, Hazel et al., herein Hazel, and Chen et al. (US 2023/0214308), herein Chen, in view of Sawdon et al. (US 2020/0104159), herein Sawdon.
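For orientation before the dependent-claim rejections, the independent-claim technique at issue — classifying a predicted IOPS mix as read-heavy or write-heavy and sizing cache-worker pools before any transactions are generated — might be sketched as follows. This is a hedged illustration only; the function, worker names, and thresholds are hypothetical and come from neither the application nor the cited references.

```python
def configure_workers(predicted_read_ratio, base_workers=4):
    """Map a predicted read:write proportion to worker-pool sizes,
    chosen before the workflow's transactions are generated."""
    if predicted_read_ratio >= 0.7:
        # Read-heavy pattern: favor pager workers that load data in.
        return {"pager": base_workers * 2, "write_back": base_workers // 2}
    if predicted_read_ratio <= 0.3:
        # Write-heavy pattern: favor write-back workers that flush
        # dirty cache entries to cloud storage.
        return {"pager": base_workers // 2, "write_back": base_workers * 2}
    # Mixed pattern: keep a balanced pool.
    return {"pager": base_workers, "write_back": base_workers}

print(configure_workers(0.8))  # a read-heavy prediction
```

The point of contention in this action is not the mapping itself but when it runs: the claims require the configuration step to precede transaction generation, whereas the cited Hazel passage adjusts workers in response to observed workload.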
Per claim 6, the combined teachings of Creed and Hazel disclose: wherein configuring the one or more cache management workers comprises: determining, via the one or more processors, that the expected IOPS pattern is a read-heavy IOPS pattern (Creed: ¶0051; During the second time-window, the second model define the second anticipated workload as including a larger percentage of read vs write IO operations. As stated herein, read data is generally stored in unmirrored cache slots because the data is typically read from disk, which inherently includes original copies of the read data. Accordingly, the memory management processor may allocate a greater portion to global memory's unmirrored segment 230 rather than the global memory's mirrored segments 225, 235); and performing, via the one or more processors, at least one of increasing an initial amount of document associated read into the cache by a pager worker in response to a read operation (Hazel: col. 29, lines 20-30; the queueing component 820 may determine to increase, decrease, or maintain the quantity of memory workers allocated to perform the real-time indexing... based on the workload 822, the queueing component 820 may determine to increase (or "reserve") one or more additional memory workers to facilitate an increase in the amount of ingested data).

The combined teachings of Creed, Hazel, and Chen do not specifically disclose: increasing a number of pager workers, and increasing a number of related documents predictively loaded into the cache. However, Sawdon discloses increasing a number of pager workers, and increasing a number of related documents predictively loaded (prefetched) into the cache (¶0052; the workflow scheduler 640 informs the file system of the next files to be accessed. As the file system prefetch reaches the end of current inputs, it automatically begins prefetching the next files. In one embodiment, prefetching works continuously across files without waiting for a first user access to the file. The architecture 600 reuses a computed prefetch distance as the initial distance for the next processing stage. In one embodiment, for architecture 600, the read-ahead cache manager 660 may end up prefetching more than one file for the next stage (and can include preloading the next stage executable)).

It would have been obvious to one having ordinary skill in the art at the effective filing date of the invention to combine the teachings of Creed, Hazel, and Chen with Sawdon's read-ahead for workflow to optimize data processing. Sawdon minimizes read latency (¶0047; Ideal prefetching further involves that the file system continuously monitors/adjusts prefetch distance to achieve ideal trade-off between buffer space used and observed Input/Output (I/O) latency, where prefetch distance is not hard coded and varies).

Per claim 9, the combined teachings of Creed, Hazel, and Chen do not specifically disclose: the workflow includes two or more function blocks; and predicting the expected IOPS pattern comprises predicting, via the one or more processors, the expected IOPS pattern for each function block of the workflow. However, Sawdon discloses this limitation (¶0036; Workloads layer 90 provides examples of functionality for which the cloud computing environment may be utilized. Examples of workloads and functions which may be provided from this layer include: mapping and navigation 91; software development and lifecycle management 92; virtual classroom education delivery 93; data analytics processing 94; transaction processing 95; and optimizing data read-ahead for workflow and analytics processing 96). It would have been obvious to one having ordinary skill in the art at the effective filing date of the invention to combine the teachings of Creed and Hazel with Sawdon's read-ahead for workflow to optimize data processing. Sawdon minimizes read latency (¶0047; Ideal prefetching further involves that the file system continuously monitors/adjusts prefetch distance to achieve ideal trade-off between buffer space used and observed Input/Output (I/O) latency, where prefetch distance is not hard coded and varies).

Claim 16 is the system claim corresponding to method claim 6 and is rejected for the same reasons set forth in connection with the rejection of claim 6.

Claims 8 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Creed et al. (US 2022/0129379), herein Creed, Hazel et al., herein Hazel, and Chen et al. (US 2023/0214308), herein Chen, in view of Gupta et al. (US 2022/0300284), herein Gupta.

Per claim 8, the combined teachings of Creed, Hazel, and Chen do not specifically disclose: detecting, via the one or more processors, that a component workflow action wrote one or more temporary files to the cache; and flagging, via the one or more processors, the one or more temporary files such that the write-back workers do not write the temporary files to the cloud storage system. However, Gupta discloses detecting and flagging the one or more temporary files (single use) such that the write-back workers do not write the temporary files to the cloud storage system (¶0004; The method may include executing one or more instructions of the GPU in a single-use mode associated with the single-use section. The method may include skipping write-back of single-use values to the vector register file based on the single-use values being forwarded either via bypass path or via a register file cache).
It would have been obvious to one having ordinary skill in the art at the effective filing date of the invention to combine the teachings of Creed, Hazel, and Chen with Gupta's opportunistic write-back of a single-use file to eliminate unnecessary write-back. Gupta improves the hit rate of the cache (¶0048). Claim 18 is the system claim corresponding to the method claim 6 and is rejected for the same reasons set forth in connection with the rejection of claim 6.

Response to Arguments

Applicant's arguments filed 2/2/26 have been fully considered but they are not persuasive. The applicant argues:

The action states that Creed does not disclose the prior claim 1 recitation of "configuring... one or more cache management workers based upon the expected IOPS pattern" and cites Hazel for disclosing the same. However, Hazel's queueing component determines to increase or decrease memory workers "based on the notification" and "based on the workload", that is, in response to current workload conditions, not prior to transactions being generated by a workflow. Applicant respectfully submits that Hazel's queueing component 820 determines to "increase, decrease, or maintain the quantity of memory workers allocated to perform the real-time indexing" based on "the notification 812" and "based on the workload 822" (Hazel, Column 29, Lines 18-30). Hazel's worker configuration is responsive to current workload conditions, not made prior to the transactions. Specifically, Hazel discloses worker configuration in response to notifications that data has been placed at the object storage system and the current workload being processed. Hazel does not teach or suggest configuring cache management workers "prior to the transactions being generated by the workflow" as recited by amended claims 1, 11, and 20.

Chen does not teach or suggest the recitations of amended claims 1, 11, and 20, nor is Chen cited for such teaching or suggestion. Instead, the Examiner relies on Chen for teaching predicting when the expected IOPS pattern will occur. Chen discloses a capacity manager 110 that "can receive historical and/or current usage data in the distributed computing system 100 and predict based thereon, future demand or usage levels for the various computing resources" (Chen, paragraph [0047]). Based on predicted future usage levels, Chen's capacity controller 158 determines whether additional computing resources should be allocated and can "trigger various remedial actions" such as generating alerts, expediting installation of hosts, or generating provisioning instructions (Chen, paragraphs [0066]-[0067]). Chen thus teaches reactive capacity provisioning in response to predicted capacity shortages, and allocating and provisioning computing resources to accommodate predicted future usage levels. The "capacity controller" disclosed by Chen generates "provisioning instructions 173 to be transmitted to, for instance, the management controller 102" which then allocates and provisions "additional computing resources to accommodate the predicted future usage levels" (Chen, paragraph [0068]). This is fundamentally different from the claimed approach of configuring cache management workers before the workflow transactions are even generated. In contrast, claim 1 as amended recites configuring cache management workers "prior to the transactions being generated by the workflow," which corresponds to the specification's teaching that "the cache controller 309 may predictively configure the cache manager 120 prior to the read / write transaction are generated and placed into the queue 130" (As-Filed Specification, paragraph [0038]). This predictive configuration before transaction generation is distinct from Chen's reactive provisioning approach.
In response to applicant's arguments against the references individually, one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986).

The examiner respectfully disagrees with the applicant and asserts that the combination of Chen and Hazel discloses that the cache controller 309 may predictively configure the cache manager 120 prior to the read/write transactions being generated by the workflow, based upon the expected IOPS pattern and when the expected IOPS pattern will occur. The examiner notes that the claim amendment merely requires configuring the workers, based on the expected IOPS pattern and when the pattern will occur, prior to executing the reads/writes of the workflow. Further, the examiner notes that the applicant's argument appears to ignore the fact that the configuring is based on a criterion, which is the expected IOPS pattern. The expected IOPS pattern is a prediction of the pattern based on a profile of the workflow; that is, a history of the workflow's transactions is used to predict the likely pattern of the workflow. Further, there is nothing in the claim that excludes using previous or current workflow data. The claim merely requires that the configuring happen before the workflow to be executed. Any data aggregated prior to the workflow to be executed can be used to predict the expected pattern.

Creed is relied upon to teach: "first model may predict that the first anticipated write workload is likely to include, from greatest to lowest in frequency, write IO sizes of 128K, 8K, and 64K. Similarly, the second model may predict that the second anticipated read workload is likely to include, from greatest to lowest in frequency, read IO sizes of 64K, 16K, and 8K. Based on the predicted read/write workloads defined by each of the models, the optimizer 338 can repartition and/or reallocate cache slot bins to each of one or more of the cache segments 225, 230, 235." Clearly, Creed discloses that a prediction of whether the anticipated workload is read-heavy or write-heavy is made prior to that workload being executed. The applicant's characterization of the Creed reference is contradictory to the cited portions; Creed presents an either-or scenario of using the history or current usage data.

Chen is relied upon to teach predicting, via the one or more processors, when the expected IOPS pattern will occur. Chen clearly discloses using time series data to forecast/predict future demand or usage levels. Chen discloses (¶0005): obtaining the time series data, a forecasting technique can be used to extrapolate the time series data to predict or forecast future demand or usage levels of the computing service. For instance, exponential smoothing is an example forecast technique for predicting a future data point based on historical data points by smoothing time series data using an exponential window function that assigns exponentially decreasing weights over time. The combined teachings of Creed's prediction of read- or write-heavy workloads and Chen's use of time series data forecasting to determine when the pattern will occur teach predicting, via the one or more processors, when the expected IOPS pattern will occur; and configuring, via the one or more processors, ... prior to the transactions being generated by the workflow based upon the expected IOPS pattern and when the expected IOPS pattern will occur.

Hazel is relied upon to teach the configuring of one or more cache management workers. Hazel discloses: "based on the notification 812, the queueing component 820 may determine to increase, decrease, or maintain the quantity of memory workers allocated to perform the real-time indexing. For example, based on the workload 822, the queueing component 820 may determine to increase (or 'reserve') one or more additional memory workers to facilitate an increase in the amount of ingested data." The combined teaching of Creed and Chen discloses predicting when an expected IOPS pattern will occur based on the expected IOPS pattern, and Hazel's configuring of queueing components teaches predicting, via the one or more processors, when the expected IOPS pattern will occur; and configuring, via the one or more processors, one or more cache management workers prior to the transactions being generated by the workflow based upon the expected IOPS pattern and when the expected IOPS pattern will occur.

The applicant's arguments regarding the dependent claims are directed to the arguments addressed supra; the examiner notes that the response put forth above applies.

Remark

Examiner respectfully requests that, in response to this Office action, support be shown for language added to any original claims on amendment and for any new claims. That is, indicate support for newly added claim language by specifically pointing to page(s) and line number(s) in the specification and/or drawing figure(s). This will assist Examiner in prosecuting the application.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to BABOUCARR FAAL, whose telephone number is (571) 270-5073. The examiner can normally be reached M-F 8:30-5:30 EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Tim Vo, can be reached at (571) 272-3642.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/BABOUCARR FAAL/
Primary Examiner, Art Unit 2138
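Sawdon's idea, cited repeatedly above, of continuously adjusting prefetch distance to trade buffer space against observed I/O latency could be sketched roughly as follows. This is an illustrative assumption only; the function name, thresholds, and doubling/decrement rule are invented for exposition and are not Sawdon's actual algorithm.

```python
def adjust_prefetch_distance(distance, observed_latency_ms,
                             target_latency_ms=5.0,
                             min_distance=1, max_distance=64):
    """Grow the read-ahead window when reads stall behind demand,
    and shrink it when latency is comfortably under target,
    reclaiming buffer space."""
    if observed_latency_ms > target_latency_ms:
        # Reads are stalling: read further ahead (capped).
        return min(distance * 2, max_distance)
    if observed_latency_ms < target_latency_ms / 2:
        # Well ahead of demand: give back one buffer's worth.
        return max(distance - 1, min_distance)
    return distance  # within the comfortable band: hold steady

# Feed a hypothetical sequence of observed read latencies (ms).
d = 4
for latency in [12.0, 9.0, 1.0, 1.5, 7.0]:
    d = adjust_prefetch_distance(d, latency)
print(d)
```

The key property Sawdon's ¶0047 emphasizes is that the distance is not hard-coded: it varies with the observed latency, which the loop above mimics.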

Prosecution Timeline

Aug 31, 2023
Application Filed
Mar 18, 2025
Non-Final Rejection — §103
Jun 16, 2025
Interview Requested
Jun 25, 2025
Applicant Interview (Telephonic)
Jul 02, 2025
Examiner Interview Summary
Jul 18, 2025
Response Filed
Oct 30, 2025
Final Rejection — §103
Dec 29, 2025
Interview Requested
Feb 02, 2026
Request for Continued Examination
Feb 07, 2026
Response after Non-Final Action
Feb 07, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12572458
MEMORY MANAGEMENT IN CONTAINERS INITIATED WITH CONTIGUOUS MEMORY OF FIXED SIZE
Granted Mar 10, 2026 (2y 5m to grant)
Patent 12566546
SYSTEMS AND METHODS FOR NOR PAGE WRITE EMULATION MODE IN MEMORY DEVICE
Granted Mar 03, 2026 (2y 5m to grant)
Patent 12561077
MULTI-FORMAT DATA OBJECTS IN MEMORY
Granted Feb 24, 2026 (2y 5m to grant)
Patent 12554420
POWER MANAGEMENT IN A MEMORY DEVICE BASED ON A HOST DEVICE CONFIGURATION
Granted Feb 17, 2026 (2y 5m to grant)
Patent 12524161
DATA TRANSMISSION MANAGEMENT
Granted Jan 13, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 80%
With Interview: 95% (+15.1%)
Median Time to Grant: 2y 10m
PTA Risk: High
Based on 527 resolved cases by this examiner. Grant probability derived from career allow rate.
