Prosecution Insights
Last updated: April 19, 2026
Application No. 19/033,224

METHOD AND SYSTEM FOR STORAGE VIRTUALIZATION

Non-Final OA: §102, §103, §112, §DP
Filed
Jan 21, 2025
Examiner
FARROKH, HASHEM
Art Unit
2138
Tech Center
2100 — Computer Architecture & Software
Assignee
Dynavisor Inc.
OA Round
1 (Non-Final)
Grant Probability: 89% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 5m
With Interview: 91%

Examiner Intelligence

Grants 89% — above average

Career Allow Rate: 89% (813 granted / 912 resolved; +34.1% vs TC avg)
Interview Lift: +2.0% among resolved cases with interview (a minimal lift)
Avg Prosecution: 2y 5m typical timeline; 13 applications currently pending
Total Applications: 925 across all art units (career history)
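The headline figures above are simple ratios of the examiner's career counts. As a sanity check, the short Python sketch below reproduces them from the counts shown on this page; treating the interview lift as a simple additive bump is an assumption, not something the page states.

```python
# Reproduce the headline figures from the career counts shown above.
# Counts come from this page; the additive-lift convention is assumed.
granted, resolved = 813, 912

career_allow_rate = granted / resolved   # 813/912, about 0.8914
interview_lift = 0.020                   # the "+2.0% Interview Lift" above

print(f"Career allow rate: {career_allow_rate:.1%}")                   # about 89.1%, shown as 89%
print(f"With interview:    {career_allow_rate + interview_lift:.1%}")  # about 91.1%, shown as 91%
```

The displayed 89% and 91% are consistent with simple rounding of these ratios to whole percentages.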

Statute-Specific Performance

§101: 6.4% (-33.6% vs TC avg)
§102: 17.1% (-22.9% vs TC avg)
§103: 42.3% (+2.3% vs TC avg)
§112: 19.1% (-20.9% vs TC avg)

Tech Center averages are estimates. Based on career data from 912 resolved cases.

Office Action

Rejections: §102, §103, §112, §DP
DETAILED ACTION

Notice of Pre-AIA or AIA Status

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

2. The instant application, having application No. 19/033,224, has a total of 20 claims pending in the application; there are 2 independent claims and 18 dependent claims, all of which are ready for examination by the examiner.

INFORMATION CONCERNING DRAWINGS:

3. Applicant's drawings submitted on 01/21/2025 are acceptable for examination purposes.

INFORMATION CONCERNING CLAIMS:

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 21-40 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

4. Independent claims 21 and 31 recite, in part, the limitation: "wherein the first Tier 0 in-memory cache is further operable to synchronize the first application data with the first Tier 1 cache and the first Tier 2 storage" (emphasis added). The term "synchronize" is broad, without any antecedent basis or definition to establish its scope.
The specification does not provide any clear, limiting definition or illustrative example that would inform one of ordinary skill in the art which type of copying or writing of data from one cache/memory unit to other cache/memory unit(s) the applicant regards as "synchronize." See, e.g., Spec. ¶¶ [0031], [0040]-[0041] (discussing cache coherency and write-back policies broadly, but failing to tie those policies to a precise operational meaning of the term "synchronize" in the claims). Because the claim fails to disclose any objective criteria defining the boundaries of "synchronize," one of ordinary skill cannot determine the scope of the claim with reasonable certainty. Claims 21 and 31 are therefore indefinite under 35 U.S.C. § 112(b). Claims 22-30 and 32-40 are rejected at least by virtue of their dependency from their respective base claims.

5. Independent claim 31 recites, in part, the limitation: "wherein each of the first Tier 0 in-memory cache and the second Tier 0 in-memory cache are operable to synchronize respective first and second application data with the first Tier 1 cache and the first Tier 2 storage and/or the second Tier 1 cache and the second Tier 2 storage" (emphasis added). Use of the phrase "and/or" renders the claim indefinite: it is not clear whether the Tier 0 cache must synchronize with Tier 1 only, Tier 2 only, or both Tier 1 and Tier 2.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees.
A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting, provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c).
A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

6. Claims 21-23 and 31 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 3-4, 10-11, and 20 of U.S. Patent No. 12,204,451 B2 (hereinafter "parent patent"). Although the claims at issue are not identical, they are not patentably distinct from each other because differences in claim language or the order of claim limitations do not make the claims distinct from each other. For example, claim 31 recites that the first Tier 1 and first Tier 2 are communicatively coupled to the first Tier 0, while claim 1 of the parent patent recites that data stored in Tier 0 is transferred to the first Tier 1 and the first Tier 2. The transfer of data among the tiers implies that they are coupled.

7.
Claims 21-23 and 31 are compared with claims 1, 3-4, 10-11, and 20 of the parent patent in the following table:

US Patent 12,204,451 B2 | US Application 19/033,224

Parent patent claim 1: "A method, comprising: providing a first storage device having a first Tier 1 cache and a first Tier 2 storage, the first Tier 1 cache and the first Tier 2 storage shared on a first logical partition of the first storage device, wherein the first Tier 1 cache and the first Tier 2 storage are included within a common hardware platform; providing a first operating system; providing a first file system having a first virtual Tier 0 memory cache that stores first application data, the first file system managing data transfers between the first virtual Tier 0 memory cache, the first Tier 1 cache and the first Tier 2 storage for the first logical partition; and synchronizing, using the first virtual Tier 0 memory cache, the first application data with the first Tier 1 cache and the first Tier 2 storage via the first logical partition, wherein the synchronization is performed within the common hardware platform."

Application claim 21: "A storage visualization system, comprising: a first Tier 0 in-memory cache operable to receive and store first application data from a first application; a first Tier 1 cache communicatively coupled to the first Tier 0 in-memory cache; and a first Tier 2 storage communicatively coupled to the first Tier 0 in-memory cache, wherein the first Tier 0 in-memory cache is further operable to synchronize the first application data with the first Tier 1 cache and the first Tier 2 storage."

Parent patent claim 20: "A system, comprising: a first storage device having a first Tier 1 cache and a first Tier 2 storage, the first Tier 1 cache and the first Tier 2 storage shared on a first logical partition of the first storage device, wherein the first Tier 1 cache and the first Tier 2 storage are included within a common hardware platform; a first operating system; a first file system having a first virtual Tier 0 memory cache that stores first application data, the first file system managing data transfers between the first virtual Tier 0 memory cache, the first Tier 1 cache and the first Tier 2 storage for the first logical partition; and wherein the first virtual Tier 0 memory cache synchronizes the first application data with the first Tier 1 cache and the first Tier 2 storage via the first logical partition, and wherein the synchronization is performed within the common hardware platform."

Parent patent claim 3: "The method of claim 1, further comprising: providing a second storage device having a second Tier 1 cache and a second Tier 2 storage; providing a second operating system; providing a second file system having a second virtual Tier 0 memory cache that stores second application data; and synchronizing, using the second virtual Tier 0 memory cache, the second application data with the first Tier 1 cache, the first Tier 2 storage, the second Tier 1 cache, and the second Tier 2 storage."

Parent patent claim 4: "The method of claim 1, further comprising: providing a second storage device having a second Tier 1 cache and a second Tier 2 storage; providing a second operating system; storing a second application data using a second file system having the second virtual Tier 0 memory cache; and synchronizing, using the first virtual Tier 0 memory cache, the first application data with the first Tier 1 cache, the first Tier 2 storage, second Tier 1 cache, and the second Tier 2 storage."

Application claim 31: "A storage virtualization system, comprising: a first node comprising a first Tier 1 cache and a first Tier 2 storage; a first Tier 0 in-memory cache operable to receive and store first application data from a first application; and a second node comprising a second Tier 1 cache and a second Tier 2 storage, a second Tier 0 in-memory cache operable to receive and store second application data from a second application; wherein each of the first Tier 0 in-memory cache and the second Tier 0 in-memory cache are operable to synchronize respective first and second application data with the first Tier 1 cache and the first Tier 2 storage and/or the second Tier 1 cache and the second Tier 2 storage."

Parent patent claim 10: "The method of claim 1, wherein the first Tier 1 memory cache is a solid state drive."

Parent patent claim 11: "The method of claim 1, wherein the first Tier 2 memory cache is a hard disk drive."

Application claim 22: "The storage visualization system of claim 21, wherein the first Tier 1 cache comprises a flash cache and the first Tier 2 storage comprises a hard drive device."

Parent patent claim 1 (excerpt): "A method, comprising: ----- Tier 0 memory cache that stores first application data, the first file system managing data transfers between the first virtual Tier 0 memory cache, the first Tier 1 cache and the first Tier 2 storage ------"

Application claim 23: "(New) The storage visualization system of claim 21, wherein the first Tier 0 in-memory cache and the first Tier 1 cache are operable to hold a subset of the first application data stored in the first Tier 2 storage."

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

Claims 21-23 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Smaldone et al. ("Smaldone") (US 11,210,263 B1).

8. Regarding claim 21, Smaldone teaches or suggests:

"A storage visualization system, comprising: a first Tier 0 in-memory cache operable to receive and store first application data from a first application;" (Fig. 1A; col. 5, lines 12-13, file system buffer cache 125; col. 6, lines 18-19, "File system buffer cache 125 can be implemented as volatile random access memory (RAM)"; col. 6, lines 59-62, "Referring to FIG. 1A, the example includes writing (1) of a file system object (FSO) 130A by application 115A to a node file system implemented by Operating System (OS) 120A"). The volatile random access memory (VRAM) represents the first Tier 0 in-memory cache recited in the claim.

"a first Tier 1 cache communicatively coupled to the first Tier 0 in-memory cache;" (Fig. 1A; col. 5, lines 45-48, "file buffer cache 125 (e.g. 125A) can periodically flush (FIG. 1A, (3)) a most recent version of FSO 130 (e.g., FSO 130A) from buffer cache 125 (e.g., 125A)… HST Coherency Mgt. 140 can detect flush messages from buffer cache 125, intended to flush to storage array 250, intercept those messages, and store the flushed FSO 130 to host-side cache 150 (e.g. 150A) as FSO (e.g., FSO 155A)"). The host-side cache 150 represents the first Tier 1 cache recited in the claim (e.g., circle 3 in Fig. 1A).

"and a first Tier 2 storage communicatively coupled to the first Tier 0 in-memory cache," (Fig. 1A; col. 5, lines 53-56, "Host-side cache 150 can, in turn, periodically and independently from buffer cache 125, flush (FIG. 1A, (4)) a most recent version of FSO 155 (e.g. FSO 155A) to storage array 250"). The storage array 250 represents the first Tier 2 storage.
"wherein the first Tier 0 in-memory cache is further operable to synchronize the first application data with the first Tier 1 cache and the first Tier 2 storage." (Fig. 1A; col. 5, lines 20-23, "a host-side cache 150 and host-side cache coherency management layer is introduced that coordinates synchronization of writes and write invalidations within the host-side cache 150"; col. 7, lines 65-67, "Timing of the write-back of FSO 155A or FSO 155B to storage array 250 can be asynchronous with the above cache coherency logic").

Smaldone teaches a clustered/distributed file system (C/DFS). Each cluster node has a volatile random access memory (VRAM) buffer cache (for example, buffer cache 125A for cluster node 110A) and a host-side tier 150A. In response to a request from application 115A, application data is written to buffer cache 125A, stored as FSO 130A, and flushed to the host-side tier 150A, where it is stored as FSO 155A (e.g., see the paths shown by circles 1 and 3 in Fig. 1A). At the same time (e.g., synchronously) that FSO 130A is written, an invalidate message is sent to the other cluster nodes to indicate that the copy of the data stored in their buffer caches is no longer valid. The host-side tier then writes back FSO 155A to storage array(s) 250. In Smaldone, the C/DFS maintains cache/storage coherency to ensure that the most recent (e.g., valid) data is returned when it is requested.

Paragraph [0142] of the claimed specification recites that the flash devices (e.g., Tier 1 cache) write data to T1 storage in write-back mode. As stated in the rejection of independent claims 21 and 31 above, neither the claims nor the specification describes what is meant by "synchronization"; the examiner assumes synchronization means copying data to the Tier 1 cache and the Tier 2 storage.

9. Regarding claim 22, Smaldone further teaches: "wherein the first Tier 1 cache (e.g., host-side tier 150A) comprises a flash cache and the first Tier 2 storage comprises a hard drive device." (e.g., col. 3, lines 7-8, "host-side tier cache, which can be a non-volatile high-speed memory"; col. 6, lines 49-51, "A storage array can comprise a large plurality of storage units, such as disk drives").

10. Regarding claim 23, Smaldone further teaches: "wherein the first Tier 0 in-memory cache and the first Tier 1 cache are operable to hold a subset of the first application data stored in the first Tier 2 storage." (e.g., col. 3, lines 42-50). The most recent copies of the data are maintained in the buffer cache and the host-side tier or cache. When the storage capacity of the host-side cache is less than a threshold, or in response to a flush request, the least-recently used data (e.g., the FSO that has not been read for a predetermined period of time) is flushed to the storage system.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 24-25 are rejected under 35 U.S.C. 103 as being unpatentable over Smaldone in view of Xu et al. ("Xu") (US 10,067,877 B1).

11.
Regarding claim 24, Xu further teaches: "wherein the first Tier 1 cache comprises multiple flash storage devices (e.g., flash disk cache 312 in Fig. 3) and the first Tier 2 storage (e.g., PD layer 314 in Fig. 3) comprises multiple hard drive devices." (e.g., Fig. 3; col. 8, lines 33-34, "The flash disk cache 312 may include flash-based storage devices or other solid state storage"; col. 3, line 66 to col. 4, line 2, "data storage devices 16a-16n may include one or more types of data storage devices such as, for example, one or more rotating disk drives and/or one or more solid state drives (SSDs)").

The disclosures of Smaldone and Xu are analogous because they are in the same field of endeavor and/or solve a similar or common problem. It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to modify the clustered/distributed file system (C/DFS) taught by Smaldone to include the flash disk cache comprising a plurality of storage devices disclosed by Xu. The motivation for including the flash disk cache, as taught by col. 8, lines 64-67 of Xu, is to improve performance by transparently storing or promoting data from PDs 314 into the flash disk media of layer 312. Therefore, it would have been obvious to combine the teaching of Xu with Smaldone to obtain the invention as specified in the claim.

12. Regarding claim 25, Xu further teaches: "wherein the first Tier 0 in-memory cache is a volatile memory storing frequently accessed first application data." (e.g., col. 10, lines 14-15, "In at least one embodiment the DRAM Cache 310 may cache the hottest (e.g., most frequently accessed) data").

Conclusion

The prior art made of record and not relied upon is as follows:

1. Farey (US 20180165208 A1) teaches "…the memory including a plurality of distinct memory types each assigned to a respective memory tier of a plurality of memory tiers based on a read latency (also sometimes called read time or access time) of the memory type…" (par. 0019).

2. Hayashi et al. (US 20160132433 A1) teaches "…Therefore, if a policy to migrate the data having a high read rate to a lower-level tier is adopted, other data can be placed onto the upper-level tier, thereby the server cache and the SSD 267 can be effectively utilized…" (par. 0073).

3. Benhase et al. (US 20130205088 A1) teaches "…providing first, second, and third storage tiers, wherein the first storage tier acts as a cache for the second storage tier, and the second storage tier acts as a cache for the third storage tier…" (par. 0012).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to HASHEM FARROKH, whose telephone number is (571) 272-4193. The examiner can normally be reached Monday through Friday from 8:30 am to 5:00 pm.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Mr. Reginald Bragdon, can be reached on (571) 272-4204. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. For questions regarding access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/HASHEM FARROKH/
Primary Examiner, Art Unit 2138
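The §112(b) rejection turns on what "synchronize" means for a Tier 0 in-memory cache. One plausible reading, consistent with the write-back behavior the examiner points to in Smaldone and in ¶[0142] of the specification, is sketched below. This is an illustrative model only, not the application's actual design: the class, method names, and the dirty-set mechanism are all assumptions made for the sketch.

```python
class TieredStore:
    """Illustrative three-tier store: Tier 0 in-memory cache with a
    write-back 'synchronize' operation that propagates data to the
    Tier 1 cache and Tier 2 storage. A hypothetical sketch, not the
    claimed system."""

    def __init__(self):
        self.tier0 = {}      # in-memory cache (e.g., RAM)
        self.tier1 = {}      # flash cache
        self.tier2 = {}      # backing storage (e.g., hard drives)
        self._dirty = set()  # keys written to Tier 0 but not yet flushed

    def write(self, key, value):
        # Application data lands in Tier 0 first; lower tiers lag behind.
        self.tier0[key] = value
        self._dirty.add(key)

    def synchronize(self):
        # Write-back flush: copy every dirty Tier 0 entry down to
        # Tier 1 and Tier 2 so all tiers hold the most recent copy.
        for key in self._dirty:
            self.tier1[key] = self.tier0[key]
            self.tier2[key] = self.tier0[key]
        self._dirty.clear()

    def read(self, key):
        # Serve from the fastest tier that holds the key.
        for tier in (self.tier0, self.tier1, self.tier2):
            if key in tier:
                return tier[key]
        raise KeyError(key)


store = TieredStore()
store.write("fso_130a", b"application data")
store.synchronize()  # after this, all three tiers hold the same copy
```

Under this reading, "synchronize" is a write-back flush, and the lower tiers are stale until it runs, which matches the asynchronous write-back the examiner cites in Smaldone (col. 7, lines 65-67). Defining the claim term at this level of operational specificity is one way the ambiguity the examiner identifies could be narrowed.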

Prosecution Timeline

Jan 21, 2025: Application Filed
Mar 20, 2025: Response after Non-Final Action
Mar 07, 2026: Non-Final Rejection — §102, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596502: ELECTRONIC DEVICE AND ELECTRONIC DEVICE CONTROL METHOD (granted Apr 07, 2026; 2y 5m to grant)
Patent 12585587: DYNAMIC CACHING OF DATA ELEMENTS (granted Mar 24, 2026; 2y 5m to grant)
Patent 12579069: NEURAL PROCESSING DEVICE (granted Mar 17, 2026; 2y 5m to grant)
Patent 12561062: Write-Back Caching Across Clusters (granted Feb 24, 2026; 2y 5m to grant)
Patent 12561085: HARDWARE ACCELERATOR (granted Feb 24, 2026; 2y 5m to grant)

Study what changed to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 89%
With Interview: 91% (+2.0%)
Median Time to Grant: 2y 5m
PTA Risk: Low

Based on 912 resolved cases by this examiner. Grant probability is derived from the career allow rate.
