Prosecution Insights
Last updated: April 19, 2026
Application No. 18/194,332

Write-Back Caching Across Clusters
Non-Final OA (§103, §112)

Filed: Mar 31, 2023
Examiner: HO, AARON D
Art Unit: 2139
Tech Center: 2100 (Computer Architecture & Software)
Assignee: NetApp Inc.
OA Round: 5 (Non-Final)

Grant Probability: 74% (Favorable)
Estimated OA Rounds: 5-6
Estimated Time to Grant: 2y 5m
Grant Probability With Interview: 90%
Examiner Intelligence

Career Allow Rate: 74% (187 granted / 251 resolved), +19.5% vs TC avg
Interview Lift: +15.1% higher allowance on resolved cases with an interview
Typical Timeline: 2y 5m avg prosecution; 11 applications currently pending
Career History: 262 total applications across all art units

Statute-Specific Performance

§101: 3.4% (-36.6% vs TC avg)
§103: 50.9% (+10.9% vs TC avg)
§102: 13.6% (-26.4% vs TC avg)
§112: 23.0% (-17.0% vs TC avg)

Tech Center averages are estimates; based on career data from 251 resolved cases.

Office Action

§103 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on January 30, 2026 has been entered.

Response to Amendment

The amendment filed January 30, 2026 has been entered. Claims 23 and 24 are newly added, leaving claims 1-3, 5-17, and 19-24 pending in this application.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on January 30, 2026 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(d):

(d) REFERENCE IN DEPENDENT FORMS.—Subject to subsection (e), a claim in dependent form shall contain a reference to a claim previously set forth and then specify a further limitation of the subject matter claimed. A claim in dependent form shall be construed to incorporate by reference all the limitations of the claim to which it refers.

The following is a quotation of pre-AIA 35 U.S.C. 112, fourth paragraph:

Subject to the following paragraph [i.e., the fifth paragraph of pre-AIA 35 U.S.C. 112], a claim in dependent form shall contain a reference to a claim previously set forth and then specify a further limitation of the subject matter claimed. A claim in dependent form shall be construed to incorporate by reference all the limitations of the claim to which it refers.

Claim 23 is rejected under 35 U.S.C. 112(d) or pre-AIA 35 U.S.C. 112, 4th paragraph, as being of improper dependent form for failing to further limit the subject matter of the claim upon which it depends, or for failing to include all the limitations of the claim upon which it depends. Claim 23 recites “wherein the write-back is initiated and performed only while the cache retains exclusive write delegation for the selected file, such that concurrent writing is performed only under exclusive write delegation”. Independent claim 1, from which claim 23 depends, has been amended to recite “wherein the write-back is initiated and performed while the cache retains exclusive write delegation for the selected file, such that concurrent writing is performed under exclusive write delegation”. As such, claim 1 already requires that the write-back's initiation is tied to the presence of the exclusive write delegation and that the write-back cannot be initiated if an exclusive write delegation is not present; i.e., for the scope of claim 1, the write-back is only initiated while the cache retains exclusive write delegation. Similarly, claim 1 already requires the exclusive write delegation for the concurrent writing. The limitations of claim 23 do not alter these requirements in any manner, and therefore fail to narrow the subject matter of claim 1, meriting a rejection under 35 U.S.C. 112(d).
Applicant may cancel the claim, amend the claim to place it in proper dependent form, rewrite the claim in independent form, or present a sufficient showing that the dependent claim complies with the statutory requirements. Examiner notes, for contrast, that claim 24 recites the additional limitation of obtaining a write delegation for the selected file, which is not recited in parent claim 11 and therefore sufficiently narrows the scope of parent claim 11.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 11, 23, and 24 are rejected under 35 U.S.C. 103 as being unpatentable over Baergen et al. (US 10,154,090) in view of Aoyagi et al. (US 2014/0297966), Wang (US 2009/0100224), McKenney (US 6,230,241), and Shultz et al. (US 2005/0223005).
Regarding claim 1, Baergen teaches a method comprising:

receiving, within a first node in a first cluster, a write request to write data to a selected file on a volume that is hosted by a second node in a second cluster that is different from the first cluster, the write request originating from a client (Fig. 1 shows two clusters, each with computing nodes 104 and storage nodes 108, see Col. 3, Lines 61-64, where the storage nodes are presented as virtual volumes to hosts, see Col. 4, Lines 11-19, and each computing node is capable of hosting cache memory, see Fig. 2 cache memory 208, where host devices can initially write data to the cache and subsequently destage it to backend storage devices, see Col. 4, Lines 45-56; with this context, Fig. 6 shows a write I/O process, where host 112 sends a write IO request 600 to cluster 1001, which necessitates determining which remote owner in cluster 1002 owns a copy of the data, reading upon the selected file on a volume hosted by a second node in a second cluster, see also Col. 8, Lines 38-62);

obtaining, for a cache that corresponds to the volume and that is hosted by the first node, a write delegation for the selected file to allow processing of the write request (“The local procedures 610 include a speculative lock and invalidation message 630 which is sent from the local meta-directory owner 4031 to the local directory owner 4011. The local directory owner provides the locks and sends an invalidation message 632 to a previous local owner director 634. The invalidation message 632 prompts the previous local owner director to delete corresponding data from cache. The previous local owner 634 responds with an invalidation reply 636. The local directory owner 4011 then sends a speculative lock and invalidation ready message 638 to the local meta-directory owner 4031. The local meta-directory owner then sends an invalidation done message 640 to the remote meta-directory owner 4032 in cluster 1002,” Col. 8, Line 61 – Col. 9, Line 8, where Col. 2, Lines 20-24 describes the acquisition of locks as helping maintain cache coherency);

writing the data to a cache file (“A SCSI write 664 to the local cluster DR1 is then executed,” Col. 9, Lines 22-23, combined with the earlier citation to Col. 4, Lines 45-56 that writes are initially directed to caches);

sending a response to the client after the data is written to the cache file (“Upon completion of the parallel procedures a write done message 668 is sent from the local meta-directory owner to the IO receiving director. The IO receiving director then sends a write acknowledgement message 670 to the host,” Col. 9, Lines 26-29).

Baergen fails to teach the method comprising: determining that at least one of a cache file threshold or a cache threshold is met or will be met by writing the data to a cache file in the cache that corresponds to the selected file; and initiating a write-back of accumulated data in the cache to the volume hosted by the second node in the second cluster based on the determining that the at least one of the cache file threshold or the cache threshold is met or will be met. While Baergen does disclose caches flushing data back to underlying storage, as seen in the Col. 4, Lines 45-56 citation, Baergen's triggers are not disclosed to be a cache file or cache threshold, see also Col. 5, Lines 16-19, and the flushing of data back to underlying storage is not understood to be specifically from the cache of the first node back to the second node in the second cluster.
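For orientation, the two-threshold trigger recited in claim 1 can be sketched in a few lines of Python. Everything here (class name, methods, threshold values) is an illustrative assumption for the reader's benefit, not drawn from Baergen or from the application's actual implementation:

```python
# Hypothetical sketch of the claimed trigger: a per-file threshold and a
# whole-cache threshold, either of which initiates a write-back when it is
# met or will be met by the incoming write.

CACHE_FILE_THRESHOLD = 4 * 1024 * 1024   # illustrative: dirty bytes per cache file
CACHE_THRESHOLD = 256 * 1024 * 1024      # illustrative: dirty bytes across the cache

class WriteBackCache:
    def __init__(self):
        self.files = {}                  # cache file name -> accumulated dirty bytes

    def total_dirty(self):
        return sum(self.files.values())

    def should_write_back(self, name, incoming_len):
        """True if writing incoming_len bytes to `name` meets or will meet
        either the cache file threshold or the cache threshold."""
        file_after = self.files.get(name, 0) + incoming_len
        cache_after = self.total_dirty() + incoming_len
        return file_after >= CACHE_FILE_THRESHOLD or cache_after >= CACHE_THRESHOLD

    def write(self, name, data):
        if self.should_write_back(name, len(data)):
            self.initiate_write_back()   # send accumulated data to the origin volume
        self.files[name] = self.files.get(name, 0) + len(data)

    def initiate_write_back(self):
        # In the claim this ships accumulated data to the volume hosted by the
        # second node in the second cluster; here it just clears the counters.
        self.files.clear()
```

Note that `write` checks the thresholds before appending the new data, matching the claim's "is met or will be met" language.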
As a consequence of the failure to teach the write-back triggers, Baergen also fails to teach where the writing of the data to the cache file occurs specifically after initiating the write-back, wherein the writing of the data occurs at least partially concurrently with the write-back sending the accumulated data to the volume hosted by the second node, and wherein the write-back is initiated and performed while the cache retains exclusive write delegation for the selected file, such that concurrent writing is performed under exclusive write delegation. Regarding the newly amended limitation, while Baergen does teach invalidating previous owners for cache coherency (as cited above in Col. 8, Line 61 – Col. 9, Line 8 and Col. 2, Lines 20-24), i.e., ensuring that a current owner has exclusive write access, this write access is not disclosed in relation to the cache write-backs.

Aoyagi's disclosure relates to managing data operations across multi-nodal systems, and as such comprises analogous art. As part of this disclosure, Aoyagi shows a process similar to Baergen's in Fig. 5, where a local processor node may request data from a processor node that owns the data, loading it into its respective L2 cache. More specifically, Aoyagi Fig. 7 shows a process where, when a local cluster flushes data, the data is also flushed back to the original home cluster, see also [0060]. An obvious modification can be identified: incorporating Aoyagi's process of flushing data from a local cache back to a remote cluster. Such a modification reads upon the initiation of a write-back of accumulated data in the first cache to the volume hosted by the second node in the second cluster.
It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to incorporate Aoyagi's remote cluster flushing process, as this can reduce Baergen's complexity by ensuring that only the local cache is dirtied, and therefore written back to the remote cluster to synchronize the data across all clusters.

The combination of Baergen and Aoyagi still fails to teach the determining limitation or where this write-back is performed in response to the determining, as Aoyagi does not provide any disclosure modifying when a flush occurs. In addition, Aoyagi is not relied upon to teach where the writing of the data to the cache file occurs at least partially concurrently with the write-back sending the accumulated data to the volume hosted by the second node, or wherein the write-back is initiated and performed while the cache retains exclusive write delegation for the selected file, such that concurrent writing is performed under exclusive write delegation.

Wang's disclosure relates to a cache management system, and as such comprises analogous art. As part of this disclosure, Wang depicts a system in Fig. 2 with data being moved between cache and optical storage media based on a cache management module, see also [0020, 0021], where Fig. 3 depicts the data as being organized into cache files. Wang discloses that “the cache management module 210 can determine whether the cache 170 is above a threshold usage level before moving files to an optical storage media. In other examples, the cache management module 210 can determine whether the cache 170 is above a threshold usage level before determining whether to stop moving files from the cache 170 to the optical storage media. In various examples, the threshold usage level may be a percentage (e.g., 85%, 90%, or 95%) of the total storage size of the cache 170,” [0038].
Wang further discloses the ability to track and identify file scores for identifying files for eviction, see [0023, 0045-0048]. An obvious modification can be identified: incorporating Wang's cache usage threshold to determine when to move data to the underlying storage, i.e., flushing the data. Such a modification reads upon the determining limitation, as Wang teaches identifying whether the threshold usage level is met, as well as where the write-back occurs in response to the cache threshold being met. It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to incorporate Wang's cache usage threshold for determining when to flush data into Baergen's system, as this ensures that the cache does not run out of space during operation, which would negatively affect system performance.

The combination of Baergen, Aoyagi, and Wang still fails to teach where the writing of the data occurs after initiating the write-back and at least partially concurrently with the write-back sending the accumulated data to the volume hosted by the second node, and wherein the write-back is initiated and performed while the cache retains exclusive write delegation for the selected file, such that concurrent writing is performed under exclusive write delegation.

McKenney's disclosure relates to cache management, and particularly to transferring data out of and into caches, and as such comprises analogous art. As part of this disclosure, McKenney Fig. 3 depicts a series of operations, with reading into cache locations in step 202 and writing into cache locations in step 204, as well as flushes in steps 201 and 203. The overall cache operation is generally disclosed in Col. 6, Line 32 – Col. 9, Line 50, but of particular note, McKenney provides that “While the processing of steps 201 through 205 has been described in a serialized manner, in a preferred embodiment certain steps may be performed concurrently with other steps to further speed up the entire data transfer process. More specifically, in a preferred embodiment, steps 201, 203 and 205 are performed while steps 202 and/or 204 are being executed. Thus, the steps of emptying the cache memory 104 locations at block 210 (Step 201) and block 211 (step 203) may be performed concurrently as data is loaded from I/O memory A 101 (Step 202) and as data is copied (Step 204) from block 210 to block 211, respectively. Thus, step 201 and 202 may be started together, and before the data from I/O memory 101 is returned from the read request in step 202, step 201 will have completed clearing and synchronizing the cache 201. Likewise, step 203 and 204 may be started concurrently, and step 203 completes is clearing of the destination locations in block 211 just before the data is transferred to those locations in step 204,” Col. 8, Lines 46-64. An obvious modification can be identified: incorporating McKenney's disclosure of the ability to concurrently flush locations in cache memory while performing writes into the cache locations. Such a modification reads upon the amended limitation where writing of the data occurs concurrently with the write-back, as well as where this occurs after the write-back is initiated, as the flushing operation is started prior to performing the writes, and the writes are now specifically being started concurrently with the emptying of the cache, i.e., emptying and flushing the data back to the underlying storage.
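The overlap McKenney is cited for, initiating the write-back first and then writing new data while that write-back is still in flight, can be sketched as a minimal Python illustration. The function names, the event, and the sleep are all hypothetical stand-ins, not McKenney's mechanism:

```python
import threading
import time

log = []
flush_started = threading.Event()

def write_back(accumulated):
    # Stand-in for shipping accumulated dirty data to the origin volume.
    log.append("write-back started")
    flush_started.set()
    time.sleep(0.05)
    log.append("write-back done")

def write_to_cache(data):
    log.append(f"cache write of {len(data)} bytes")

# 1. Initiate the write-back of previously accumulated data.
flusher = threading.Thread(target=write_back, args=(b"old dirty data",))
flusher.start()
flush_started.wait()

# 2. Write new data while the write-back is still in flight, i.e. the
#    cache write happens after initiation but before completion.
write_to_cache(b"new data")
flusher.join()

assert log.index("cache write of 8 bytes") < log.index("write-back done")
```

The final assertion captures the claimed ordering: the cache write lands between "write-back started" and "write-back done".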
It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to incorporate McKenney's disclosure of concurrent writes and write-backs into Baergen's system, as this provides for the ability to speed up data transfer processing, see Col. 8, Lines 47-50.

The combination of Baergen, Aoyagi, Wang, and McKenney still fails to teach wherein the write-back is initiated and performed while the cache retains exclusive write delegation for the selected file, such that concurrent writing is performed under exclusive write delegation.

Shultz's disclosure relates to a file system cache and managing coherency, and as such comprises analogous art. As part of this disclosure, Shultz provides for a lock structure that records whether there is an exclusive write lock, see [0016]. Shultz provides a process in Fig. 2 where lock access functions (LAF), data access functions (DAF), and cache access functions (CAF) for a virtual machine can process write requests to a file system. In particular, after acquiring a lock for the cache at step 202, the virtual machine process writes to the cache in step 214 and possesses the ability to determine whether or not to flush the cache, and if necessary flushes the cache, see steps 216 and 220. Only after all this is the write lock given up in step 222. An obvious modification can be identified: incorporating into Baergen Shultz's disclosure of maintaining write locks until writes and flushes are finished. The combination of Baergen, Aoyagi, Wang, and McKenney as disclosed so far provides for processing writes with acquisition of a lock and concurrent write-backs and writing of data. Shultz now provides a modification that explicitly provides a nexus between cache flushes, writes, and acquisition/surrender of a write lock, namely that writing and flushing caches occur while the write lock is maintained.
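The lock discipline Shultz is cited for, acquire the exclusive lock, write and (if needed) flush while it is held, and release only afterwards, can be sketched as follows. The class, the threshold, and the trace strings are illustrative assumptions, not Shultz's implementation:

```python
import threading

class DelegatedCache:
    def __init__(self):
        self.delegation = threading.Lock()   # stand-in for the exclusive write delegation
        self.dirty = b""
        self.trace = []

    def write_with_delegation(self, data, flush_threshold=16):
        # The delegation is held across the cache write AND any write-back,
        # and is released only after both have completed.
        with self.delegation:
            self.trace.append("delegation acquired")
            self.dirty += data
            self.trace.append("cache write")
            if len(self.dirty) >= flush_threshold:
                self.trace.append("write-back under delegation")
                self.dirty = b""             # flushed to the origin volume
        self.trace.append("delegation released")

cache = DelegatedCache()
cache.write_with_delegation(b"0123456789abcdef")   # 16 bytes: triggers the flush
assert cache.trace == ["delegation acquired", "cache write",
                       "write-back under delegation", "delegation released"]
```

The trace order mirrors Shultz's Fig. 2 sequence as characterized above: lock at step 202, write at step 214, conditional flush at steps 216/220, release at step 222.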
Such a modification reads upon the limitation of the claim, as the write lock providing exclusive write delegation is retained while the write-back and writes are initiated and performed. It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to incorporate Shultz's disclosure of maintaining locks during writes and flushes into Baergen's system, as Shultz provides for an explicit lock structure to ensure cache coherency during operations that modify the cache, i.e., writing to the cache and flushing from the cache.

Regarding claim 11, Baergen teaches a computing device comprising: a memory containing a machine-readable medium comprising machine executable code having instructions stored thereon; and a processor coupled to the memory, the processor configured to execute the machine executable code (“Some aspects, features and implementations may comprise computer components and computer-implemented steps or processes that will be apparent to those skilled in the art. For example, it should be understood by one of skill in the art that the computer-implemented steps or processes may be stored as computer-executable instructions on a non-transitory computer-readable medium. Furthermore, it should be understood by one of skill in the art that the computer-executable instructions may be executed on a variety of physical processor devices,” Col. 11, Lines 54-62, teaching a device that includes a non-transitory readable medium reading on the memory, and a processor device executing the instructions reading on the processor) to perform the method of claim 1, and claim 11 is rejected according to the same rationale. Examiner notes for clarity of record that while all steps of claim 11 are found in claim 1, not all steps of claim 1 are recited in the scope of claim 11, as discussed below in the context of claim 24.

Claim 23 is rejected under 35 U.S.C. 112(d) for failure to further narrow the subject matter of claim 1, and therefore can be rejected according to the same rationale as claim 1.

Regarding claim 24, the obtaining limitation is recited in claim 1, and the new wherein limitation is recited in claim 23, which is noted to fail to further narrow the subject matter of claim 1, see the rejection under 35 U.S.C. 112(d). Therefore, claim 24 is rejected according to the same rationale as claim 1.

Claims 2 and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Baergen in view of Aoyagi, Wang, McKenney, and Shultz, and further in view of Foster et al. (US 2011/0219349).

Regarding claim 2, the combination of Baergen, Aoyagi, Wang, McKenney, and Shultz teaches the method of claim 1, but fails to teach the method further comprising: setting the cache file threshold for the amount of accumulated data in the cache file to allow multiple write requests for the selected file to be processed before a write-back of the accumulated data in the cache file is initiated. Foster's disclosure relates to managing cache data, and as such comprises analogous art. As part of this disclosure, Foster manages a cache file for accumulating circuit design evaluation results, see [0047]. Of particular note, the cache file accumulates multiple results over time, see [0011], where a flushing mechanism is also provided to flush the cache file if the size of the cache file reaches a file size threshold, see [0057]. An obvious modification can be identified: providing for a cache file capable of accumulating multiple write operations, with the ability to flush a particular cache file if the size threshold is reached. Such a modification reads upon the limitation of the claim, as Foster's cache file is clearly shown in Fig. 3 accumulating multiple data sets before any consideration of a file size threshold is contemplated.
It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to incorporate Foster's cache file and cache file size threshold into Baergen's disclosure, as the cache file provides for a file system to access related data, and setting a separate cache file size threshold ensures that no single file can dominate the use of Baergen's caching resources.

Regarding claim 21, the combination of Baergen, Aoyagi, Wang, McKenney, and Shultz teaches the method of claim 1, and the combination further teaches wherein the determining that at least one of the cache file threshold or cache threshold is met or will be met comprises: writing the data to the cache file after all accumulated data of the cache file has been flushed but before all of the accumulated data in the cache has been flushed (as disclosed in the claim 1 rationale, McKenney provides that “Likewise, step 203 and 204 may be started concurrently, and step 203 completes is clearing of the destination locations in block 211 just before the data is transferred to those locations in step 204,” Col. 8, Lines 61-64, where step 204 of writing data into cache locations is specifically performed right after the step of writing back the data completes in step 203 for those locations; McKenney therefore teaches that performing the writing and flushing concurrently provides for writing immediately after a location is flushed, instead of waiting until the entire cache's accumulated data is flushed).
The combination as disclosed in claim 1 fails to teach wherein the determining that at least one of the cache file threshold or cache threshold is met or will be met comprises: first determining that the cache file threshold is met; and second determining that the cache threshold for an amount of accumulated data in the cache is met. As identified as part of Wang's disclosure on cache management, Wang provides that “The statistics collection module 220 can collect statistics for each of the files in the cache 170. In some implementations, the statistics collection module 220 maintains statistics related to file sizes for the files, most recent access times for the files, and file write frequencies for the files. In one implementation, the statistics collection module 220 may maintain a counter value associated with each of the files stored in the cache 170. For example, the counter value may be the number of pages in the cache 170 that is used to store content of the associated file,” [0023]. Wang further provides that cache statistics and scores are assigned to identify the highest scored files, see Fig. 5 steps 500-520 and see also [0045-0047], where the file scores can include the file size, see [0034]. Notably, Fig. 5 steps 500-520 are identified as occurring before the cache management module determines that the cache is full, see [0048]. An obvious modification can be identified: incorporating Wang's disclosure of maintaining cache statistics and scoring files into Baergen's cache flushing process. Such a modification reads upon first determining that a cache file metric is met, and second determining that the cache threshold is met. It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to incorporate Wang's maintaining of cache statistics into Baergen's cache flushing process, as this helps expedite the cache flushing process by already having eviction candidates identified.
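The statistics-and-scoring step Wang is cited for, keeping per-file statistics and ranking files before the cache fills so eviction candidates are already identified, can be sketched as follows. The field names and the scoring formula are illustrative assumptions; Wang's actual scoring is not reproduced here:

```python
from dataclasses import dataclass

@dataclass
class FileStats:
    # Per-file statistics of the kind Wang's statistics collection module keeps.
    name: str
    size_pages: int      # pages of cache used by the file
    last_access: float   # most recent access time
    write_freq: int      # writes observed

def score(stats, now=100.0):
    # Illustrative score: larger, colder, less frequently written files
    # score higher, i.e. are better flush/eviction candidates.
    return stats.size_pages * (now - stats.last_access) / (1 + stats.write_freq)

stats = [
    FileStats("a.dat", size_pages=40, last_access=10.0, write_freq=1),
    FileStats("b.dat", size_pages=5,  last_access=99.0, write_freq=9),
    FileStats("c.dat", size_pages=40, last_access=90.0, write_freq=0),
]

# Ranking happens before the cache is full, so candidates are ready at flush time.
candidates = sorted(stats, key=score, reverse=True)
assert candidates[0].name == "a.dat"   # big and cold: first flush candidate
```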
The combination of Baergen, Aoyagi, Wang, McKenney, and Shultz still fails to teach where the cache file metric is specifically the cache file threshold, as Wang's disclosure provides for a scoring mechanism, although the cache file size is close to the threshold. Foster's disclosure relates to managing cache data, and as such comprises analogous art. As part of this disclosure, Foster manages a cache file for accumulating circuit design evaluation results, see [0047]. Of particular note, the cache file accumulates multiple results over time, see [0011], where a flushing mechanism is also provided to flush the cache file if the size of the cache file reaches a file size threshold, see [0057]. An obvious modification can be identified: providing for the ability to identify and flush a particular cache file if the size threshold is reached. Such a modification reads upon the limitation of the claim, as Foster provides for a threshold to immediately identify candidate cache files to flush instead of just using the cache file size ranking of Wang. It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to incorporate Foster's cache file and cache file size threshold into Baergen's disclosure, as the cache file provides for a file system to access related data, and setting a separate cache file size threshold ensures that no single file can dominate the use of Baergen's caching resources.

Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Baergen in view of Aoyagi, Wang, McKenney, and Shultz, and further in view of Yun (US 2016/0004408) and El Kaissi (US 10,365,798).

The combination of Baergen, Aoyagi, Wang, McKenney, and Shultz teaches the method of claim 1, but fails to teach the method further comprising: setting the cache file threshold for the amount of accumulated data in the cache file to a value between 64 kilobytes and 10 gigabytes; and setting the cache threshold for the amount of accumulated data in the cache to a value between 1 megabyte and 10 terabytes. Yun's disclosure relates to providing an optimization function for mobile device memory. As such, while not in the same field of endeavor, one of ordinary skill in the art would find Yun's disclosure optimizing how memory is cleaned/optimized to be reasonably pertinent to the question of how to manage cache related thresholds for flushing, and Yun therefore comprises analogous art. As part of this disclosure, Yun classifies targets for optimization/cleaning, where cache files can be classified as targets if the file is greater than a threshold size, with Yun providing an example of 100 MB, see [0077]. An obvious modification can be identified: incorporating Yun's classification of individual files as targets for flushing, and selecting files greater than a set threshold for cleaning. Such a modification reads upon the limitation of the claim, as the 100 MB threshold Yun provides is within the claimed range, and Yun provides for specific cache files to clean when they reach thresholds, not just the overall cache. It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to incorporate Yun's individual classification of files for targeted cleaning and setting a threshold size for selection into Baergen's system, as this allows the system to optimize flushing by identifying files that would free up the most space upon flushing data.
The combination of Baergen, Aoyagi, Wang, McKenney, Shultz, and Yun fails to teach the cache threshold limitation. El Kaissi's disclosure relates to an application's integration with a software application. While El Kaissi's disclosure as a whole is not in the same field of endeavor, part of it does relate to cache management, and one of ordinary skill in the art would still find the discussion on how to manage cache occupancy reasonably pertinent to the question of how to set cache thresholds; therefore El Kaissi comprises analogous art. As part of this disclosure, El Kaissi discloses that a cache manager may periodically perform self-eviction on the cache memory layer, where “when a cache layer's occupancy reaches a certain threshold (e.g., 70% full, wherein the cache layer's size is 100 megabytes (MB) and only 30 MB are free), the cache manager may compare the age of each cached file in the cache layer to their respective TTLs and clean out any stale files to free up space,” Col. 12, Lines 35-40. An obvious combination can be identified: combining El Kaissi's numerical examples of cache sizes and occupancy with Wang's earlier disclosure of flushing based on cache occupancy, as disclosed in the claim 1 rationale. Such a combination reads upon the limitation of the claim, as El Kaissi's numerical example provides for a cache full threshold of 70 MB, falling within the claimed range. It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to combine El Kaissi's numerical examples of cache sizes/thresholds with Wang's disclosure of flushing based on cache occupancy as incorporated into Baergen in the claim 1 rationale.
Both elements are known in the art, and as both disclosures provide for evicting/flushing data from a cache based on occupancy, one of ordinary skill in the art would recognize that El Kaissi's more specific numbers provide for a predictable combination result, i.e., continued use of a cache flushing policy based on occupancy, just with some specific example sizes.

Claims 5 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Baergen in view of Aoyagi, Wang, McKenney, and Shultz, and further in view of Chen et al. (US 2018/0260340) and Desai et al. (US 2011/0119461).

Regarding claim 5, the combination of Baergen, Aoyagi, Wang, McKenney, and Shultz teaches the method of claim 1, but fails to teach the method further comprising: responsive to receiving the write request and determining that the cache currently has an active write delegation, requesting revocation of the active write delegation; flushing the cache after the active write delegation is revoked; and sending a response confirming revocation of the active write delegation. While Baergen's disclosure does provide for invalidating already owned copies of the data to be written, this is not the same as identifying already existing locks and requesting revocation of the locks. Chen's disclosure relates to providing multiple storage arrays with lock permissions, and as such comprises analogous art in the same field of endeavor. As part of this disclosure, Chen provides that when a first storage array applies for lock permission and a lock server determines that a second storage array already has the write lock permission, the lock server sends a lock revocation request to the second storage array, see [0037], where the second storage array notifies the lock server when the write lock permission is released/revocation succeeds, and the lock server sends an authorization message back to the original first storage array, see [0039].
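The revocation-then-flush sequence assembled from Chen (a lock server detects a conflicting holder and requests revocation) and Desai (the holder flushes before surrendering the lock) can be sketched as follows. All classes and method names here are hypothetical illustrations, not taken from either reference:

```python
class CacheNode:
    def __init__(self, name):
        self.name = name
        self.dirty = ["pending-block"]   # dirty cached data held under delegation

    def revoke_delegation(self):
        # Desai-style behavior: flush dirty data back to the origin volume
        # before confirming that the delegation has been surrendered.
        self.flush()
        return f"{self.name}: revocation confirmed"

    def flush(self):
        self.dirty.clear()

class LockServer:
    def __init__(self):
        self.holder = None

    def request_write_delegation(self, node):
        # Chen-style behavior: on a conflict, ask the current holder to
        # release, wait for confirmation, then grant to the requester.
        if self.holder is not None and self.holder is not node:
            reply = self.holder.revoke_delegation()
            assert "confirmed" in reply
        self.holder = node
        return f"{node.name}: delegation granted"

server = LockServer()
a, b = CacheNode("node-a"), CacheNode("node-b")
server.request_write_delegation(a)
msg = server.request_write_delegation(b)   # forces revocation from node-a
assert a.dirty == []                       # node-a flushed before releasing
assert msg == "node-b: delegation granted"
```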
An obvious modification can be identified: incorporating Chen’s lock server, with the ability to identify lock conflicts and request revocation. Such a modification reads upon the requesting revocation and sending a response limitations. It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to incorporate Chen’s lock server with the ability to request revocation into Baergen’s system, as a central lock server can ensure that no cluster/node unnecessarily holds the locks, and also provides for a method to ensure that lock conflicts can be resolved. The combination of Baergen, Aoyagi, Wang, McKenney, Shultz, and Chen still fails to teach flushing the cache after the active write delegation is revoked. While Baergen discloses invalidating already existing copies, this is not identical to flushing already existing copies of data in the caches. Desai’s disclosure relates to file systems and specifically managing locks across servers, and as such comprises analogous art as directed to the same area of lock management. As part of this disclosure, Desai provides that when a server revokes all write locks, then the client flushes all data to the different servers, see [0051]. An obvious modification can be identified: incorporating Desai’s process of flushing caches after revoking locks into Baergen’s system. Such a modification reads upon the outstanding limitation of the claim. It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to incorporate Desai’s process of flushing caches after revoking locks into Baergen’s system, as this ensures that after a different cluster releases a write lock, the modified data can be flushed back to the underlying storage to preserve the most up-to-date version. Claim 14 is rejected according to the same rationale of claim 5. Claim 6 is rejected under 35 U.S.C. 
103 as being unpatentable over Baergen in view of Aoyagi, Wang, McKenney, and Shultz, and further in view of Savir et al. (US 2021/0124492). The combination of Baergen, Aoyagi, Wang, McKenney, and Shultz teaches the method of claim 1, but fails to teach the method further comprising: setting a payload size for write-back messages sent from the cache to the volume to a value between 64 kilobytes and 100 megabytes. Savir’s disclosure relates to managing storage resources by monitoring utilization, and as such comprises analogous art. As part of this disclosure, Savir discloses that “The granularity (unit size) of the data flushes can also be defined, such as minimum data set size in MB (e.g., 10 MB) or block sizes (e.g., 50 blocks),” [0072]. An obvious combination can be identified: combining Savir’s disclosure regarding data flush size with Baergen’s system. Such a combination reads upon the limitation of the claim, as Savir’s example data flush of 10 MB falls within the claimed range. It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to combine Savir’s disclosure of a flush size with Baergen’s system. Both elements are known in the art, and as both disclosures contemplate how to handle data flushing, Savir’s specific number provides for an obvious predictable result, where Baergen’s system continues to flush data, with a specific size as provided by Savir. Claims 7 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Baergen in view of Aoyagi, Wang, McKenney, and Shultz and further in view of Jain et al. (US 2017/0004083). 
Regarding claim 7, the combination of Baergen, Aoyagi, Wang, McKenney, and Shultz teaches the method of claim 1, but fails to teach wherein the write delegation allows the data to be written to the cache file corresponding to the selected file and prevents all other processes from accessing the selected file on the volume until the write delegation is revoked. While Baergen discloses obtaining locks throughout the disclosure, Baergen never provides details on how exactly the locks function. Jain’s disclosure relates to managing locks and cache sharing between cluster nodes, and as such comprises analogous art. As part of this disclosure, Jain states that “An exclusive lock can also be referred to as a write lock. An exclusive (or write) lock can be associated with a cacheable data object, and can be granted before a node is allowed to write data to that cacheable data object. Because this type of lock is exclusive, no other nodes can read, write, or otherwise access the data subject to an exclusive write lock until the exclusive write lock has been relinquished or otherwise removed or revoked,” [0030]. An obvious combination can be identified: combining Jain’s definition of write locks with Baergen’s locks. Such a combination reads upon the limitation of the claim. It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to combine Jain’s definition of write locks with Baergen’s locks. Both elements are known in the art, and as Jain provides more details on how write locks function and what it means for other data accesses, one of ordinary skill in the art would recognize that Jain’s disclosure provides for the mechanism by which Baergen’s locks are able to achieve and maintain cache coherency. Consequently, the combination would be seen to be a predictable result – i.e. 
Baergen’s locks continue to function as disclosed, with Jain’s disclosure providing a greater understanding of how the lock mechanism functions. Claim 16 is rejected according to the same rationale of claim 7. Claims 8, 15, and 22 are rejected under 35 U.S.C. 103 as being unpatentable over Baergen in view of Aoyagi, Wang, McKenney, and Shultz and further in view of Foster and Jibbe et al. (US 2017/0315913). Regarding claim 8, the combination of Baergen, Aoyagi, Wang, McKenney, and Shultz teaches the method of claim 1, but fails to teach the method further comprising: tracking an amount of accumulated data in the cache file using a cache metafile maintained by the first node. Foster’s disclosure relates to managing cache data, and as such comprises analogous art. As part of this disclosure, Foster manages a cache file for accumulating results for circuit design evaluation results, see [0047]. Of particular note, the cache file accumulates multiple results over time, see [0011], where a flushing mechanism is also provided to flush the cache file if the size of the cache file reaches a file size threshold, see [0057]. An obvious modification can be identified: providing for a cache file capable of accumulating multiple write operations, with the ability to flush a particular cache file if the size threshold is reached. It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to incorporate Foster’s cache file and cache file size threshold into Baergen’s disclosure, as the cache file provides for a file system to access related data, and setting a separate cache file size threshold ensures that no single file can dominate the use of Baergen’s caching resources. The combination of Baergen, Aoyagi, Wang, McKenney, Shultz, and Foster still fails to teach the limitation of claim 8. Jibbe’s disclosure relates to managing cache utilization and as such comprises analogous art. 
As part of this disclosure, Jibbe provides for metadata tracking an index for mapping data blocks to a volume provided for caching, see [0015], where the metadata is “used by the storage controller for managing the contents of host data within the thinly provisioned volume and tracking the current utilization of the first data cache's data storage capacity,” [0015], where the metadata may be stored within another cache of the storage subsystem, see [0015]. An obvious modification can be identified: incorporating metadata tracking mapped indices to track the utilization of the data cache’s data storage capacity. Such a modification reads upon the limitation of the claim. It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to incorporate Jibbe’s metadata tracking cache utilization with Baergen’s system as modified by Foster, as this provides for a mechanism to actually implement monitoring the fullness of a cache file like Foster’s. Claim 15 is rejected according to the same rationale of both claims 8 and 9, see the claim 9 rejection below (claim 15 recites tracking both the cache file’s accumulated data and cache’s accumulated data, with claims 8 and 9 covering these two different scenarios). Claim 22 is rejected according to the same rationale of both claims 8 and 9, see the claim 9 rejection below (while claims 8 and 9 only recite tracking the amount of accumulated data in the cache file and cache, claim 22 recites tracking the accumulated data in the cache file and cache to determine whether the cache file threshold and the cache threshold will be met, respectively; while claims 8 and 9 do not explicitly address this intended use of tracking the accumulated data, this is covered by Wang’s disclosure in the claim 1 rationale to retain statistics on individual files, see [0023], and the overall cache, see [0038]). Claim 9 is rejected under 35 U.S.C. 
103 as being unpatentable over Baergen in view of Aoyagi, Wang, McKenney, and Shultz and further in view of Jibbe. The combination of Baergen, Aoyagi, Wang, McKenney, and Shultz teaches the method of claim 1, but fails to teach the method further comprising: tracking an amount of accumulated data in the cache using a cache metafile maintained by the first node. Jibbe’s disclosure relates to managing cache utilization and as such comprises analogous art. As part of this disclosure, Jibbe provides for metadata tracking an index for mapping data blocks to a volume provided for caching, see [0015], where the metadata is “used by the storage controller for managing the contents of host data within the thinly provisioned volume and tracking the current utilization of the first data cache's data storage capacity,” [0015], where the metadata may be stored within another cache of the storage subsystem, see [0015]. An obvious modification can be identified: incorporating metadata tracking mapped indices to track the utilization of the data cache’s data storage capacity. Such a modification reads upon the limitation of the claim. It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to incorporate Jibbe’s metadata tracking cache utilization with Baergen’s system as modified by Wang, as this provides for a mechanism to actually implement monitoring cache fullness. Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Baergen in view of Aoyagi, Wang, McKenney, and Shultz, and further in view of Fathalla et al. (US 2022/0114274). The combination of Baergen, Aoyagi, Wang, McKenney, and Shultz teaches the method of claim 1, but fails to teach the method further comprising: tracking a status of the write delegation for the selected file using a cache metafile maintained by the first node. Fathalla’s disclosure relates to providing locks for editing files and as such comprises analogous art. 
As part of this disclosure, Fathalla discloses the presence of lock status metadata, which is associated with files and indicates whether files are locked, see [0023]. An obvious modification can be identified: incorporating lock status metadata as disclosed by Fathalla. Such a modification reads upon the limitation of the claim. It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to incorporate lock status metadata into Baergen’s system, as this provides for an easy data structure to track what locks are present or not for a given file in the cache. Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Baergen in view of Aoyagi, Wang, McKenney, and Shultz and further in view of Yun. Claim 12 is rejected according to the same rationale of claim 3, as claim 3 contains claim 12’s limitation as part of the claim (examiner notes that claim 3 also relied upon El Kaissi to address subject matter not recited in claim 12, so rejecting claim 12 does not require dependence on El Kaissi). Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Baergen in view of Aoyagi, Wang, McKenney, and Shultz and further in view of El Kaissi. Claim 13 is rejected according to the same rationale of claim 3, as claim 3 contains claim 13’s limitation as part of the claim (examiner notes that claim 3 also relied upon Yun to address subject matter not recited in claim 13, so rejecting claim 13 does not require dependence on Yun). Claim 17 is rejected under 35 U.S.C. 103 as being unpatentable over Baergen in view of Aoyagi, Foster, McKenney, and Shultz. 
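The metafile-based tracking relied on in the claim 8-10 rationales above (Jibbe’s utilization metadata and Fathalla’s lock status metadata) can be illustrated with a toy sketch; all names are hypothetical and not drawn from the references:

```python
class CacheMetafile:
    """Toy metafile kept by the first node: tracks accumulated data per cache
    file, overall cache utilization, and write-delegation status per file."""

    def __init__(self):
        self.per_file_bytes = {}   # cache file -> accumulated bytes (claim 8)
        self.total_bytes = 0       # overall cache utilization (claim 9)
        self.delegations = {}      # file -> delegation status (claim 10)

    def record_write(self, cache_file, nbytes):
        # Update both the per-file counter and the cache-wide counter.
        self.per_file_bytes[cache_file] = self.per_file_bytes.get(cache_file, 0) + nbytes
        self.total_bytes += nbytes

    def set_delegation(self, cache_file, status):
        # Record whether a write delegation is held for this file.
        self.delegations[cache_file] = status

meta = CacheMetafile()
meta.set_delegation("fileA", "active-write")
meta.record_write("fileA", 4096)
meta.record_write("fileA", 4096)
meta.record_write("fileB", 1024)
```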
Baergen teaches a non-transitory machine-readable medium having stored thereon instructions for performing a method comprising machine-executable code which, when executed by at least one machine, causes the at least one machine to (“Some aspects, features and implementations may comprise computer components and computer-implemented steps or processes that will be apparent to those skilled in the art. For example, it should be understood by one of skill in the art that the computer-implemented steps or processes may be stored as computer-executable instructions on a non-transitory computer-readable medium. Furthermore, it should be understood by one of skill in the art that the computer-executable instructions may be executed on a variety of physical processor devices,” Col. 11, Lines 54-62): receive, at a first node in a first cluster, a write request to write data to a selected file on a volume that is hosted by a second node in a second cluster that is different from the first cluster, the write request originating from a client (Fig. 1 shows two clusters, each with computing nodes 104 and storage nodes 108, see Col. 3, Lines 61-64, where the storage nodes are presented as virtual volumes to hosts, see Col. 4, Lines 11-19, and each computing node is capable of hosting cache memory, see Fig. 2 cache memory 208, where host devices can initially write data to the cache and subsequently destage it to backend storage devices, see Col. 4, Lines 45-56; with this context, Fig. 6 shows a write I/O process, where host 112 sends a write IO request 600 to cluster 1001, which necessitates determining which remote owner in cluster 1002 owns a copy of the data, reading upon the selected file on a volume hosted by a second node in a second cluster, see also Col. 
8, Lines 38-62); obtain a write delegation for the selected file to allow a cache corresponding to the volume exclusive access to the selected file, wherein the cache is hosted by the first node (“The local procedures 610 include a speculative lock and invalidation message 630 which is sent from the local meta-directory owner 4031 to the local directory owner 4011. The local directory owner provides the locks and sends an invalidation message 632 to a previous local owner director 634. The invalidation message 632 prompts the previous local owner director to delete corresponding data from cache. The previous local owner 634 responds with an invalidation reply 636. The local directory owner 4011 then sends a speculative lock and invalidation ready message 638 to the local meta-directory owner 4031. The local meta-directory owner then sends an invalidation done message 640 to the remote meta-directory owner 4032 in cluster 1002,” Col. 8, Lines 61 – Col. 9, Line 8, where Col. 2, Lines 20-24 describes the acquisition of locks as helping maintain cache coherency); write the data to the cache file (“A SCSI write 664 to the local cluster DR1 is then executed,” Col. 9, Lines 22-23, combined with the earlier citation to Col. 4, Lines 45-56 that writes are initially directed to caches); and send a response to the client after the data is written to the cache file (“Upon completion of the parallel procedures a write done message 668 is sent from the local meta-directory owner to the IO receiving director. The IO receiving director then sends a write acknowledgement message 670 to the host,” Col. 9, Lines 26-29). Baergen fails to teach the method comprising: determine that a threshold for an amount of accumulated data in a cache file on the cache that corresponds to the selected file is not or will not be exceeded by adding the data to the cache file; 
determine that the threshold for the amount of accumulated data in the cache file is or will be reached by adding the data to the cache file; and initiate a write-back of accumulated data in the cache file to the selected file on the volume that is hosted by the second node in the second cluster based on the determining that the threshold is or will be reached. As a consequence of a failure to teach the first determining limitation, Baergen fails to teach where the writing of the data to the cache file is performed upon determining that the threshold will not be exceeded. As a consequence of the failure to teach the second determining limitation and initiation of the write-back, Baergen fails to teach where the data is written to the cache file after initiation of the write-back, wherein the write to the cache file occurs at least partially concurrently with the write-back sending the accumulated data to the volume hosted by the second node and wherein the write-back is initiated and performed while the cache retains exclusive write delegation for the selected file, such that concurrent writing is performed under exclusive write delegation. Regarding the new amended limitation, while Baergen does teach invalidating previous owners for cache coherency (as cited above in Col. 8, Lines 61 – Col. 9, Line 8, Col. 2, Lines 20-24), i.e., ensuring that a current owner has exclusive write access, this write access is not disclosed in relation to the cache write-backs. While Baergen does disclose caches flushing data back to underlying storage, as seen in the Col. 4, Lines 45-56 citation, Baergen’s triggers are not disclosed to be a cache file or cache threshold, see also Col. 5, Lines 16-19, and flushing data back to underlying storage is not understood to specifically be from the cache of the first node back to the second node in the second cluster. 
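The two determining limitations and the write-back trigger recited above can be illustrated with a toy decision routine (a sketch of the claim language only; the names and the 8 MB threshold are hypothetical):

```python
CACHE_FILE_THRESHOLD = 8 * 1024 * 1024   # hypothetical 8 MB per-file threshold

def handle_write(cache_file, data, threshold=CACHE_FILE_THRESHOLD):
    """Toy routine: write to the cache file when the threshold will not be
    exceeded; otherwise initiate a write-back of the accumulated data first."""
    events = []
    if cache_file["accumulated"] + len(data) <= threshold:
        # Determined that adding the data will not exceed the threshold.
        cache_file["accumulated"] += len(data)
        events.append("write")
    else:
        # Determined that the threshold is or will be reached: initiate
        # write-back of the accumulated data to the volume, then write.
        events.append("write-back-initiated")
        cache_file["accumulated"] = 0            # accumulated data sent to volume
        cache_file["accumulated"] += len(data)   # write proceeds after initiation
        events.append("write")
    return events

cf = {"accumulated": 7 * 1024 * 1024}
first = handle_write(cf, b"x" * (512 * 1024))    # fits: 7.5 MB <= 8 MB
second = handle_write(cf, b"x" * (1024 * 1024))  # would reach 8.5 MB -> write-back
```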
Aoyagi’s disclosure relates to managing data operations across multi-nodal systems, and as such comprises analogous art. As part of this disclosure, Aoyagi shows a similar process as Baergen’s in Fig. 5, where a local processor node may request data from a processor node that owns the data, loading it into its respective L2 cache. More specifically, Aoyagi Fig. 7 shows a process where when a local cluster flushes data, the data is also flushed back to the original home cluster, see also [0060]. An obvious modification can be identified: incorporating Aoyagi’s process of flushing data from a local cache back to a remote cluster. Such a modification reads upon where the write-back of accumulated data in the first cache is performed to the volume hosted by the second node in the second cluster. It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to incorporate Aoyagi’s remote cluster flushing process, as this can reduce Baergen’s complexity by ensuring that only the local cache is dirtied, and therefore written back to the remote cluster to synchronize the data across all clusters. The combination of Baergen and Aoyagi still fails to teach the determining limitations, as well as where writing the data to the cache file or initiating a write-back are performed in response to respective determinations and the relation between the write delegations and the write-back and writing, as well as the timing of the writing of the data to the cache file, as Aoyagi does not provide any disclosure to modify when a flush or write occurs. Foster’s disclosure relates to managing cache data, and as such comprises analogous art. As part of this disclosure, Foster manages a cache file for accumulating results for circuit design evaluation results, see [0047]. 
Of particular note, the cache file accumulates multiple results over time, see [0011], where a flushing mechanism is also provided to flush the cache file if the size of the cache file reaches a file size threshold, see [0057]. An obvious modification can be identified: providing for a cache file capable of accumulating multiple write operations, with the ability to flush a particular cache file if the size threshold is reached. Such a modification reads upon the determining that the threshold will not be reached, as well as where the write is performed based on determining that the threshold will not be exceeded, as Foster’s ability to accumulate data into a cache file before determining that a cache file threshold is met necessarily means that a determination is made that the cache file size threshold will not be exceeded by a given write. This modification also reads upon where a write-back is initiated based on the status of a threshold for a cache file’s accumulated data. It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to incorporate Foster’s cache file and cache file size threshold into Baergen’s disclosure, as the cache file provides for a file system to access related data, and setting a separate cache file size threshold ensures that no single file can dominate the use of Baergen’s caching resources and that the cache does not run out of space during operation. The combination of Baergen, Aoyagi, and Foster still fails to teach where the writing of the data occurs after initiating the write-back and at least partially concurrently with the write-back sending the accumulated data to the volume hosted by the second node and wherein the write-back is initiated and performed while the cache retains exclusive write delegation for the selected file, such that concurrent writing is performed under exclusive write delegation. 
McKenney’s disclosure relates to cache management and particularly transferring data out of/into caches, and as such comprises analogous art. As part of this disclosure, McKenney Fig. 3 depicts a series of operations, with reading into cache locations in step 202 and writing into cache locations in step 204, as well as flushes in steps 201 and 203. The overall cache operation is generally disclosed in Col. 6, Line 32-Col. 9, Line 50, but of particular note, McKenney provides that “While the processing of steps 201 through 205 has been described in a serialized manner, in a preferred embodiment certain steps may be performed concurrently with other steps to further speed up the entire data transfer process. More specifically, in a preferred embodiment, steps 201, 203 and 205 are performed while steps 202 and/or 204 are being executed. Thus, the steps of emptying the cache memory 104 locations at block 210 (Step 201) and block 211 (step 203) may be performed concurrently as data is loaded from I/O memory A 101 (Step 202) and as data is copied (Step 204) from block 210 to block 211, respectively. Thus, step 201 and 202 may be started together, and before the data from I/O memory 101 is returned from the read request in step 202, step 201 will have completed clearing and synchronizing the cache 201. Likewise, step 203 and 204 may be started concurrently, and step 203 completes is clearing of the destination locations in block 211 just before the data is transferred to those locations in step 204,” Col. 8, Lines 46-64. An obvious modification can be identified: incorporating McKenney’s disclosure of the ability to concurrently flush locations in cache memory while performing writes into the cache locations. 
Such a modification reads upon the amended limitation where writing of the data occurs concurrently with the write-back as well as where this occurs after the write-back is initiated, as the flushing operation is started prior to performing the writes, and the writes are now specifically being started concurrently with the emptying of the cache, i.e. emptying and flushing the data back to the underlying storage. It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to incorporate McKenney’s disclosure of concurrent writes and write-backs into Baergen’s system, as this provides for the ability to speed up data transfer processing, see Col. 8, Lines 47-50. The combination of Baergen, Aoyagi, Foster, and McKenney still fails to teach wherein the write-back is initiated and performed while the cache retains exclusive write delegation for the selected file, such that concurrent writing is performed under exclusive write delegation. Shultz’s disclosure relates to a file system cache and managing coherency, and as such comprises analogous art. As part of this disclosure, Shultz provides for a lock structure to record if there is an exclusive write lock, see [0016]. Shultz provides a process in Fig. 2 for where lock access functions (LAF), data access functions (DAF), and cache access functions (CAF) for a virtual machine can process write requests to a file system. In particular, after acquiring a lock for the cache at step 202, the virtual machine process writes to the cache in step 214 and possesses the ability to determine whether or not to flush the cache, and if necessary flushes the cache, see steps 216 and 220. After all this, then the write lock is given up in step 222. An obvious modification can be identified: incorporating Shultz’s disclosure of maintaining write locks until writes and flushes are finished into Baergen. 
The combination of Baergen, Aoyagi, Foster, and McKenney as disclosed so far provides for processing writes with acquisition of a lock and concurrent writebacks and writing of data. Shultz now provides a modification that explicitly provides a nexus between cache flushes, writes, and acquisition/surrender of a write lock, namely that writing and flushing caches occurs while the write lock is maintained. Such a modification reads upon the limitation of the claim, as the write lock providing exclusive write delegation is retained while the write back and writes are initiated/performed. It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to incorporate Shultz’s disclosure of maintaining locks during writes and flushes into Baergen’s system, as Shultz provides for an explicit lock structure to ensure cache coherency during operations that modify the cache, i.e. writing to the cache and flushing from the cache. Claim 19 is rejected under 35 U.S.C. 103 as being unpatentable over Baergen in view of Aoyagi, Foster, McKenney, and Shultz and further in view of Jibbe. Claim 19 is rejected according to the same rationale of claim 8. Claim 20 is rejected under 35 U.S.C. 103 as being unpatentable over Baergen in view of Aoyagi, Foster, McKenney and Shultz and further in view of Fathalla. Claim 20 is rejected according to the same rationale of claim 10. Response to Arguments Applicant’s arguments filed January 30, 2026 have been fully considered but are moot. Applicant’s amendments and new claims are sufficient to overcome the prior grounds of rejection. However, upon an updated search and consideration of the art, a new reference Shultz was found to provide disclosure sufficient for an obviousness rationale, as seen above. The arguments are therefore moot for lack of opportunity to address the new rationale incorporating Shultz. 
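The combined McKenney/Shultz behavior relied on above, in which write-back proceeds concurrently with a new write while the exclusive write delegation is retained throughout, can be sketched as a toy threading model (all names hypothetical; an illustrative sketch only):

```python
import threading

class CacheFile:
    """Toy cache file: write-back runs concurrently with new writes, and the
    exclusive write delegation is held for the whole sequence."""

    def __init__(self):
        self.accumulated = ["old-1", "old-2"]   # data past the threshold
        self.volume = []                        # backing volume on the remote node
        self.delegation_held = True             # retained throughout (Shultz-style)
        self._lock = threading.Lock()

    def write_back(self):
        # Swap out the accumulated batch under the lock, then flush it.
        with self._lock:
            batch, self.accumulated = self.accumulated, []
        self.volume.extend(batch)               # accumulated data sent to the volume

    def write(self, data):
        # A new write lands in the cache file while the flush may be in flight.
        with self._lock:
            self.accumulated.append(data)

cf = CacheFile()
flusher = threading.Thread(target=cf.write_back)   # write-back initiated first
flusher.start()
cf.write("new-1")                                  # write proceeds concurrently
flusher.join()
```

Whichever interleaving occurs, the previously accumulated data reaches the volume, no data is lost, and the delegation is never released mid-sequence.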
Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to AARON D HO whose telephone number is (469)295-9093. The examiner can normally be reached Mon-Fri 8:00-4:00 CT. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Reginald Bragdon can be reached at (571)272-4204. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /A.D.H./Examiner, Art Unit 2139 /REGINALD G BRAGDON/Supervisory Patent Examiner, Art Unit 2139

Prosecution Timeline

Mar 31, 2023
Application Filed
Jun 15, 2024
Non-Final Rejection — §103, §112
Nov 21, 2024
Examiner Interview Summary
Nov 21, 2024
Applicant Interview (Telephonic)
Dec 16, 2024
Response Filed
Dec 23, 2024
Final Rejection — §103, §112
Mar 24, 2025
Examiner Interview Summary
Mar 24, 2025
Applicant Interview (Telephonic)
Mar 31, 2025
Request for Continued Examination
Apr 02, 2025
Response after Non-Final Action
Apr 03, 2025
Non-Final Rejection — §103, §112
Aug 28, 2025
Applicant Interview (Telephonic)
Aug 28, 2025
Examiner Interview Summary
Sep 09, 2025
Response Filed
Sep 25, 2025
Final Rejection — §103, §112
Jan 30, 2026
Request for Continued Examination
Jan 31, 2026
Response after Non-Final Action
Feb 06, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12578886
METHOD AND APPARATUS FOR MEMORY MANAGEMENT IN MEMORY DISAGGREGATION ENVIRONMENT
2y 5m to grant Granted Mar 17, 2026
Patent 12572356
MEMORY DEVICE FOR PERFORMING IN-MEMORY PROCESSING
2y 5m to grant Granted Mar 10, 2026
Patent 12561252
DYNAMIC CACHE LOADING AND VERIFICATION
2y 5m to grant Granted Feb 24, 2026
Patent 12554418
MEMORY CHANNEL CONTROLLER OPERATION BASED ON DATA TYPES
2y 5m to grant Granted Feb 17, 2026
Patent 12524340
ARRAY ACCESS WITH RECEIVER MASKING
2y 5m to grant Granted Jan 13, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
74%
Grant Probability
90%
With Interview (+15.1%)
2y 5m
Median Time to Grant
High
PTA Risk
Based on 251 resolved cases by this examiner. Grant probability derived from career allow rate.
