DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on March 11, 2026 has been entered.
Response to Amendment
The amendment filed March 11, 2026 has been entered. Claims 3, 9, 14, and 15 have been cancelled and claims 22-24 are newly filed, leaving claims 1, 2, 4-8, 10-13, and 16-24 pending in this application.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on March 11, 2026 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1, 2, 4-8, 10-13, and 16-24 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
Independent claims 1, 11, and 17 have been amended to recite, using claim 1 as example language, “the group of blocks having a corresponding sequence of file block numbers forming a sub-portion of the cache file”. The phrase “sub-portion” is never recited in the specification, so the claim term would normally be given its plain meaning under the broadest reasonable interpretation, which naturally suggests a subset smaller than a portion. However, this leads to an issue of relative degree. The term “portion” already provides for a subset of the larger unit, that is, the cache file, see for example [0049, 0115, 0150, 0165, 0200, 0203]. The specification does not provide any further clarification on how to distinguish whether a subset of the cache file is small enough to qualify as a “sub-portion” or remains merely a “portion”. Therefore, the term “sub-portion” is a relative term which renders the claim indefinite: the claim term by itself does not define the limits of what is or is not a sub-portion, and the specification does not provide a standard for discerning between a portion and a sub-portion, so one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. For the purpose of compact prosecution, no suggested claim language is provided here; instead, “sub-portion” and “portion” are treated as substantially similar (i.e., for this action, both “sub-portion” and “portion” are interpreted as referring to a subset of the cache file).
The dependent claims are rejected for dependence on the independent claims.
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 1, 2, 4-8, 10-13, and 16-24 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
As discussed in the rejection under 35 U.S.C. 112(b) above, the independent claims now recite, using claim 1 as example language, “the group of blocks having a corresponding sequence of file block numbers forming a sub-portion of the cache file”. The phrase “sub-portion” is never recited in the specification. While “portions” are described with respect to the cache file, see [0049, 0115, 0150, 0165, 0200, 0203], these are all explicitly described as portions, not sub-portions. The recitation of “sub-portion” therefore constitutes new matter.
The dependent claims are rejected for dependence on the independent claims.
Claims 23 and 24 recite, using claim 23 as example language, “wherein the sequence of file block numbers is a contiguous sequence of file block numbers”. Upon review of the specification, while the specification refers to a sequence of file block numbers, see [0035, 0143, 0201, 0202, 0207, 0227], the specification does not explicitly refer to this sequence as contiguous. This limitation therefore constitutes new matter.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 2, 5-8, 10, 17-20, 22, and 23 are rejected under 35 U.S.C. 103 as being unpatentable over Jibbe et al. (US 2017/0315913, as provided in applicant’s IDS) in view of Foster et al. (US 2011/0219349, as provided in applicant’s IDS), Kodavanji et al. (US 2024/0036883), and Sampathkumar (US 10,152,422).
Regarding claim 1, Jibbe teaches a method comprising:
writing data to a cache, the cache corresponding to a volume (“The storage controller uses a data cache, e.g., a dynamic random access memory (DRAM), as an indirection layer to convert non-sequential write requests received from the host(s) into sequential writes for a thinly provisioned volume (also referred to herein as a “thin volume”) that is stored in a data repository on the SMR device pool,” [0013], where Fig. 2 shows the cache 220 corresponding to volumes 222 in the SMR drives 206);
updating a tracking metafile based on the data written to the cache file (“In an embodiment, the storage controller maintains an index that maps the LBAs of the respective data blocks to their corresponding locations within the allocated portion of the thinly provisioned volume. The index may be maintained as part of metadata used by the storage controller for managing the contents of host data within the thinly provisioned volume and tracking the current utilization of the first data cache's data storage capacity,” [0015], see also “The metadata store 116 may house one or more types of metadata to facilitate translating the specified LBAs of the data in the write-back cache to block addresses used by the storage devices 106. In an embodiment, the metadata includes an index that maps the memory addresses of data blocks in the write-back cache to virtual LBAs of a thinly provisioned volume stored within a repository created on the SMR device pool. In a further embodiment, the metadata also includes an index that maps the virtual LBAs for different data blocks in the thinly provisioned volume to their corresponding physical locations within the repository on the SMR device pool. The mapping of virtual logical blocks in the thinly provisioned volume to logical blocks on the SMR drives is performed when the data is received from the host(s) 104, e.g., as part of a series of write requests directed to non-sequential addresses within the pool of SMR devices. In this manner, the data cache may be used as an indirection layer to write data from non-contiguous virtual logical blocks to sequential physical blocks in the SMR device pool,” [0024]; storing new data in the data cache necessarily requires updating the metadata maintaining the mapping to data in the data cache);
triggering a write-back of data stored in the group of blocks in the cache that corresponds to the record to the volume (“Upon determining that the current utilization of the data storage capacity of the data cache 220 exceeds a threshold, the storage controller 200 flushes the data cache 220 by transferring the sequence of data clusters including the data blocks from the data cache 220 to the pool of SMR devices 206,” [0042]).
Jibbe fails to teach where the data is written specifically to a cache file in the cache as well as the method comprising:
determining that a record in the tracking metafile corresponding to a group of blocks in the cache file is full when a bitmap of the record indicates that all corresponding group of blocks represented by the bitmap have been modified, the group of blocks having a corresponding sequence of file block numbers forming a sub-portion of the cache file, the bitmap including a plurality of bits each corresponding to a respective file block number in the sequence;
determining that the write-back has been completed; and
updating the tracking metafile to indicate that the write-back has been completed.
Jibbe further fails to teach that the write-back is specifically asynchronous, as well as where the write-back is triggered upon the determining that the record in the tracking metafile corresponding to the group of blocks in the cache file is full. While Jibbe does disclose a metadata store that relates to blocks in the cache, see [0024] as cited above discussing the mapping between cache data blocks and underlying storage, Jibbe does not specifically disclose the determination that the record is full in relation to a write-back. Instead, Jibbe utilizes an overall capacity of the data cache, see [0042] as cited above and [0014], or on-demand flushing, see [0045].
Jibbe is noted for showing how the group of blocks have a corresponding sequence of file block numbers, as Jibbe Fig. 3 shows how blocks may be written to a sequence of data clusters within the provisioned space of the thin volume, see also [0041]. This is further shown in that while Jibbe [0014] provides that the write requests may be non-sequential with regard to the thinly provisioned volume, the data cache accumulates the data sequentially, and therefore the group of blocks would be sequentially related within the indices of the data cache, see also [0015, 0024] teaching how the metadata maps from LBAs/indices within the data cache to the volume/SMR devices.
Foster’s disclosure relates to managing cache data, and as such comprises analogous art.
As part of this disclosure, Foster manages a cache file for accumulating results for circuit design evaluation results, see [0047]. Of particular note, the cache file accumulates multiple results over time, see [0011], where a flushing mechanism is also provided to flush the cache file if the size of the cache file reaches a file size threshold and in particular an example where the cache file is specifically full, see [0047,0057]. Further, this flushing in response to a size threshold being met is contrasted with periodically discarding of the cache file, see [0045, 0054] discussing periodic checks of the dependency files, and where the disclosure of [0057] specifically states that “Flushing mechanism 228 may also flush cache file 216 and/or index file 222 independently of dependencies 410. For example, flushing mechanism 228 may discard evaluation results in cache file 216 based at least on a file size threshold associated with cache file 216…”.
An obvious modification can be identified: incorporating Foster’s cache file for accumulating write data operations, with the ability to flush particular cache files if the size threshold is reached/the cache file is full. Such a modification reads upon where the data is written to a cache file within a cache, as well as where a write-back can be triggered asynchronously.
It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to incorporate Foster’s cache file and cache file size threshold into Jibbe’s disclosure, as the cache file provides for a file system to access related data, and providing separate cache file size thresholds ensures that no single file can dominate the use of Jibbe’s caching resources while still allowing for a degree of caching policies.
The combination of Jibbe and Foster still fails to teach the method comprising:
determining that a record in the tracking metafile corresponding to a group of blocks in the cache file is full when a bitmap of the record indicates that all corresponding group of blocks represented by the bitmap have been modified, the group of blocks having a corresponding sequence of file block numbers forming a sub-portion of the cache file, the bitmap including a plurality of bits each corresponding to a respective file block number in the sequence;
determining that the write-back has been completed; and
updating the tracking metafile to indicate that the write-back has been completed.
As a consequence, while Foster does modify Jibbe to teach triggering asynchronous write-backs, the combination also still fails to teach where the trigger occurs upon determining that the record in the tracking metafile is full.
While Foster does teach flushing when the cache file is full, a full cache file is not the same as a full record in a tracking metafile.
Kodavanji’s disclosure relates to tracking data for providing a backup in a storage system. As such, Kodavanji’s disclosure is analogous art for the same field of endeavor of storage management, and the discussion on how to manage tracking data for moving data in a backup context would be reasonably pertinent to the context of moving data in a caching/write-back context.
As part of this disclosure, Kodavanji provides for a modified pages tracking bitmap, wherein “each persistent memory 122 of the respective memory server 104 stores a modified pages tracking bitmap 132, which is an example of a modified pages tracking structure mentioned further above. The modified pages tracking bitmap 132 contains a collection of bits (e.g., an array of bits) that represent modification states of respective pages 134 stored in the persistent memory 122 of the respective memory server 104,” [0037] (see also [0038], further providing that the bit value represents whether the page is modified or not). In the backup process disclosed in Fig. 3, an incremental backup loop identifies modified pages in step 316, copies the pages to the backup storage system in step 318, and, after copying each page, resets the tracking bitmap to indicate that the page is no longer modified, see also [0065].
An obvious modification can be identified: incorporating a page tracking bitmap into Jibbe’s metadata store, including tracking when pages are modified, and then incorporating the iterative process of copying pages to the underlying storage system and resetting the bitmap for each copied page. Such a modification reads upon the missing limitations of the claim, as follows:
1. Incorporating a page tracking bitmap means tracking the pages stored into the write data cache, where Foster’s earlier disclosure of a full cache file necessarily means that the bitmap bits for the pages of the cache file are all set, because every page is modified (i.e., the write data temporarily stored into Jibbe’s write data cache modifies the pages), reading upon the determining that the record is full when all corresponding groups of blocks have been modified.
2. The iterative backup process incorporated into Jibbe’s flushing provides for an iterative flushing, where updating the tracking bitmap after the pages are flushed necessarily requires determining that the flushing of a page is done, reading upon the determining that the write-back has been completed and the updating of the tracking metafile.
3. In combination with Jibbe’s and Foster’s earlier disclosures of tracking the fullness of the data cache and cache files and utilizing that fullness for triggering flushing mechanisms, a fullness of the cache file storing write data is necessarily also reflected in a fullness of the tracking bitmap, reading upon the triggering of the write-back upon the determining that the record in the tracking metafile is full.
The combination further provides that Kodavanji’s bitmap relates to the pages in the persistent memory, where the combination provides this bitmap for the data cache; therefore, each bit corresponds to a respective block/page, that is, to a respective cluster in the thin volume of Jibbe, reading upon the bitmap including a plurality of bits each corresponding to a respective file block number in the sequence.
It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to incorporate Kodavanji’s tracking bitmap and iterative backup process into Jibbe’s metadata and data cache, as this provides a data structure that can easily track the progress of flushing and ensure that the correct number of pages/correct pages are flushed.
The combination of Jibbe, Foster, and Kodavanji still fails to teach where the sequence of file block numbers specifically forms a sub-portion of the cache file.
Sampathkumar’s disclosure relates to managing cache metadata, and as such comprises analogous art.
As part of this disclosure, Sampathkumar provides in Fig. 2 how a cache may be divided into a series of windows, labeled cache window 0-n, and a section for cache metadata, where each cache window has a corresponding metadata entry. As further shown in Fig. 3, the metadata store 318 includes a dirty cache bitmap 322 and other cache information metadata 324, with Fig. 3 showing that each cache window corresponds to a portion of the metadata bitmap, see also Col. 8, Line 48 - Col. 9, Line 10. Further, when considering the flush mechanism, Sampathkumar discloses that the cache blocks grouped for flushing can be within cache windows, see Col. 9, Lines 33-40.
An obvious modification can be identified: dividing caches and cache files into unit cache windows. Such a modification reads upon where the record corresponds to a sub-portion of the cache file, as the metadata can be organized according to cache windows instead of the larger cache files/caches.
It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to incorporate Sampathkumar’s disclosure of cache windows into Jibbe’s cache device, as cache windows provide basic/atomic units for cache allocation and management, see Col. 4, Lines 53-60, allowing for the ability to track and move data in the cache in same-sized unit increments.
Regarding claim 2, the combination of Jibbe, Foster, Kodavanji, and Sampathkumar teaches the method of claim 1 and further teaches wherein updating the tracking metafile based on the data written to the cache file comprises:
updating at least one bit in the bitmap of the record that corresponds to the group of blocks in the cache file (as discussed in the claim 1 rationale, Kodavanji Fig. 3 and [0065] disclose resetting a bit for a page that has been copied to the underlying storage).
Regarding claim 5, the combination of Jibbe, Foster, Kodavanji, and Sampathkumar teaches the method of claim 1, and Jibbe further teaches wherein updating the tracking metafile to indicate that the write-back has been completed comprises:
deleting the record from the tracking metafile to thereby make file block numbers of the group of blocks corresponding to the record available for future writes (Jibbe discloses that the metadata stored includes mappings and translations from data in the write-back cache to LBAs in the volume or physical addresses of the underlying SMR devices, see [0024]; necessarily, when data is flushed out of the write data cache, the data is no longer extant within the cache, so the mapping is invalid and the metadata store will purge it).
Regarding claim 6, the combination of Jibbe, Foster, Kodavanji, and Sampathkumar teaches the method of claim 1 and further teaches wherein updating the tracking metafile to indicate that the write-back has been completed comprises:
changing values for bits in the record to indicate that the corresponding group of blocks is available for future writes (as cited in the claim 1 rationale, Kodavanji resets the bits of modified pages after backing them up, see specifically “After the page is copied to the backup storage system 130, the memory server 104 resets (at 320) the corresponding bit in the modified pages tracking bitmap 132, which changes the corresponding bit from the first value to the second value to indicate that the page is unmodified,” [0065], teaching that the page is available for a write, as there is no modified data that needs to be evicted/flushed).
Regarding claim 7, the combination of Jibbe, Foster, Kodavanji, and Sampathkumar teaches the method of claim 1, and the combination further teaches wherein the record includes metadata (see Jibbe [0024] as cited in the claim 1 rationale) and the bitmap (see Kodavanji [0037,0065] as cited in the claim 1 rationale, see also Kodavanji [0038]).
Regarding claim 8, the combination of Jibbe, Foster, Kodavanji, and Sampathkumar teaches the method of claim 1, and the combination further teaches wherein the record includes a key that includes a cache file identifier for the cache file and an identifier for a first file block number in the sequence of file block numbers (Jibbe’s metadata store provides for indices of the blocks within the data cache, see [0024], with Foster’s citation in claim 1 providing for the cache file accumulating writes over time, see [0011]; necessarily, as a consequence of the incorporation of Foster into Jibbe, an identifier must be present to distinguish between the cache files stored in the data cache, reading upon the limitation of the claim).
Regarding claim 10, the combination of Jibbe, Foster, Kodavanji, and Sampathkumar teaches the method of claim 1, and further teaches wherein initiating the write-back includes tracking the write-back in a data structure that indicates which file block numbers of the cache file are associated with an in-progress write-back (as disclosed in the claim 1 rationale, Kodavanji provides for a modified page tracking bitmap which is updated with each iteration of the write-back, where each page’s bit is reset upon completion of that page’s copying, see [0065]; as such, Kodavanji’s bitmap provides for the data structure that tracks which blocks are associated with an in-progress write-back, as the bitmap shows the progress of which pages/blocks still need to be moved/have been moved).
Regarding claim 17, Jibbe teaches a non-transitory machine-readable medium having stored thereon instructions for performing a method comprising machine-executable code (see “Accordingly, it is understood that any operation of the computing system according to the aspects of the present disclosure may be implemented by the computing system using corresponding instructions stored on or in a non-transitory computer readable medium accessible by the processing system. For the purposes of this description, a tangible computer-usable or computer-readable medium can be any apparatus that can store the program for use by or in connection with the instruction execution system, apparatus, or device. The medium may include for example non-volatile memory including magnetic storage, solid-state storage, optical storage, cache memory, and Random Access Memory (RAM),” [0058]) which, when executed by at least one machine, causes the at least one machine to:
write data to a cache, the cache corresponding to a volume (“The storage controller uses a data cache, e.g., a dynamic random access memory (DRAM), as an indirection layer to convert non-sequential write requests received from the host(s) into sequential writes for a thinly provisioned volume (also referred to herein as a “thin volume”) that is stored in a data repository on the SMR device pool,” [0013], where Fig. 2 shows the cache 220 corresponding to volumes 222 in the SMR drives 206);
update a record in a tracking metafile based on the data written to the cache, the record including a key and metadata that represents a sequence of file block numbers in the cache file (“In an embodiment, the storage controller maintains an index that maps the LBAs of the respective data blocks to their corresponding locations within the allocated portion of the thinly provisioned volume. The index may be maintained as part of metadata used by the storage controller for managing the contents of host data within the thinly provisioned volume and tracking the current utilization of the first data cache's data storage capacity,” [0015], see also “The metadata store 116 may house one or more types of metadata to facilitate translating the specified LBAs of the data in the write-back cache to block addresses used by the storage devices 106. In an embodiment, the metadata includes an index that maps the memory addresses of data blocks in the write-back cache to virtual LBAs of a thinly provisioned volume stored within a repository created on the SMR device pool. In a further embodiment, the metadata also includes an index that maps the virtual LBAs for different data blocks in the thinly provisioned volume to their corresponding physical locations within the repository on the SMR device pool. The mapping of virtual logical blocks in the thinly provisioned volume to logical blocks on the SMR drives is performed when the data is received from the host(s) 104, e.g., as part of a series of write requests directed to non-sequential addresses within the pool of SMR devices. In this manner, the data cache may be used as an indirection layer to write data from non-contiguous virtual logical blocks to sequential physical blocks in the SMR device pool,” [0024]; storing new data in the data cache necessarily requires updating the metadata maintaining the mapping to data in the data cache; while Jibbe [0014] provides that the write requests may be non-sequential with regard to the thinly provisioned volume, the data cache accumulates the data sequentially, and therefore the group of blocks would be sequentially related within the indices of the data cache, see also [0015, 0024] teaching how the metadata maps from LBAs/indices within the data cache to the volume/SMR devices; Jibbe’s metadata store provides for indices, reading on the key, and the mapping reads upon the metadata);
trigger a write-back of data stored in the sequence of file block numbers (“Upon determining that the current utilization of the data storage capacity of the data cache 220 exceeds a threshold, the storage controller 200 flushes the data cache 220 by transferring the sequence of data clusters including the data blocks from the data cache 220 to the pool of SMR devices 206,” [0042]); and
delete the record from the tracking metafile (Jibbe discloses that the metadata stored includes mappings and translations from data in the write-back cache to LBAs in the volume or physical addresses of the underlying SMR devices, see [0024]; necessarily, when data is flushed out of the write data cache, the data is no longer extant within the cache, so the mapping is invalid and the metadata store will purge it).
Jibbe fails to teach where the data is specifically written to a cache file in the cache, where the record includes a bitmap, as well as where the method includes the steps to:
determine that the record in the tracking metafile is full when the bitmap indicates that all blocks in the cache file corresponding to the sequence of file block numbers has been modified, the group of blocks having a corresponding sequence of file block numbers forming a sub-portion of the cache file, the bitmap including a plurality of bits each corresponding to a respective file block number in the sequence;
determine that the write-back has been completed.
Jibbe further fails to teach that the write-back is specifically asynchronous, as well as where the write-back is triggered upon the determining that the record in the tracking metafile corresponding to the group of blocks in the cache file is full. While Jibbe does disclose a metadata store that relates to blocks in the cache, see [0024] as cited above discussing the mapping between cache data blocks and underlying storage, Jibbe does not specifically disclose the determination that the record is full in relation to a write-back. Instead, Jibbe utilizes an overall capacity of the data cache, see [0042] as cited above and [0014], or on-demand flushing, see [0045].
Jibbe is noted for showing how the group of blocks have a corresponding sequence of file block numbers, as Jibbe Fig. 3 shows how blocks may be written to a sequence of data clusters within the provisioned space of the thin volume, see also [0041]. This is further shown in that while Jibbe [0014] provides that the write requests may be non-sequential with regard to the thinly provisioned volume, the data cache accumulates the data sequentially, and therefore the group of blocks would be sequentially related within the indices of the data cache, see also [0015, 0024] teaching how the metadata maps from LBAs/indices within the data cache to the volume/SMR devices.
Foster’s disclosure relates to managing cache data, and as such comprises analogous art.
As part of this disclosure, Foster manages a cache file for accumulating results for circuit design evaluation results, see [0047]. Of particular note, the cache file accumulates multiple results over time, see [0011], where a flushing mechanism is also provided to flush the cache file if the size of the cache file reaches a file size threshold and in particular an example where the cache file is specifically full, see [0047,0057]. Further, this flushing in response to a size threshold being met is contrasted with periodically discarding of the cache file, see [0045, 0054] discussing periodic checks of the dependency files, and where the disclosure of [0057] specifically states that “Flushing mechanism 228 may also flush cache file 216 and/or index file 222 independently of dependencies 410. For example, flushing mechanism 228 may discard evaluation results in cache file 216 based at least on a file size threshold associated with cache file 216…”.
An obvious modification can be identified: incorporating Foster’s cache file for accumulating write data operations, with the ability to flush particular cache files if the size threshold is reached/the cache file is full. Such a modification reads upon where the data is written to a cache file within a cache, as well as where a write-back can be triggered asynchronously.
It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to incorporate Foster’s cache file and cache file size threshold into Jibbe’s disclosure, as the cache file provides for a file system to access related data, and providing separate cache file size thresholds ensures that no single file can dominate the use of Jibbe’s caching resources while still allowing for a degree of caching policies.
The combination of Jibbe and Foster still fails to teach the record including the bitmap and the method comprising:
determining that the record in the tracking metafile is full when the bitmap indicates that all blocks in the cache file corresponding to the sequence of file block numbers have been modified, the group of blocks having a corresponding sequence of file block numbers forming a sub-portion of the cache file, the bitmap including a plurality of bits each corresponding to a respective file block number in the sequence;
determining that the write-back has been completed.
As a consequence, while Foster does modify Jibbe to teach triggering asynchronous write-backs, the combination also still fails to teach where the trigger occurs upon determining that the record in the tracking metafile is full.
While Foster does teach flushing when the cache file is full, this is not the same as determining that a metadata record is full.
Kodavanji’s disclosure relates to tracking data for providing a backup in a storage system. As such, Kodavanji’s disclosure is analogous art for the same field of endeavor of storage management, and the discussion on how to manage tracking data for moving data in a backup context would be reasonably pertinent to the context of moving data in a caching/write-back context.
As part of this disclosure, Kodavanji provides for a modified pages tracking bitmap, where “each persistent memory 122 of the respective memory server 104 stores a modified pages tracking bitmap 132, which is an example of a modified pages tracking structure mentioned further above. The modified pages tracking bitmap 132 contains a collection of bits (e.g., an array of bits) that represent modification states of respective pages 134 stored in the persistent memory 122 of the respective memory server 104,” [0037] (see also [0038] providing further definition that the bit value represents whether the page is modified or not). In a backup process as disclosed in Fig. 3, an incremental backup loop identifies modified pages in step 316 and copies the pages to the backup storage system in step 318, and after copying a page, the tracking bitmap is reset to indicate the page is no longer modified, see also [0065].
An obvious modification can be identified: incorporating a page tracking bitmap into Jibbe’s metadata store, including tracking when pages are modified, and then incorporating the iterative process to copy pages to the underlying storage system and reset the bitmap for that page. Such a modification reads upon the missing limitations of the claim, as 1) Kodavanji provides for a bitmap in metadata tracking, 2) incorporating a page tracking bitmap means tracking the pages stored into the write data cache, where Foster’s earlier disclosure of a full cache file necessarily means that the bitmap for the pages of the cache file is likewise full, because every page is modified (i.e., the write data temporarily stored into Jibbe’s write data cache is modifying the pages), reading upon the determining that the record is full when all corresponding blocks have been modified, 3) the iterative backup process incorporated into Jibbe’s flushing provides for an iterative flushing, where the tracking bitmap is updated after the pages are flushed, necessarily requiring the determination that the flushing of a page is done, reading upon the determining that the write-back has been completed, and 4) in combination with Jibbe and Foster’s earlier disclosures of tracking the fullness of the data cache and cache files and utilizing them for triggering flushing mechanisms, a fullness of the cache file storing write data is necessarily also reflected in a fullness of the tracking bitmap, reading upon the triggering of the write-back upon the determining that the record in the tracking metafile is full.
The combination further provides that Kodavanji’s bitmap relates to the pages in the persistent memory, where the combination provides this bitmap for the data cache, and therefore each bit corresponds to a block/page, so each bit corresponds to a respective cluster in the thin volume of Jibbe, reading upon the bitmap including a plurality of bits each corresponding to a respective file block number in the sequence.
It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to incorporate Kodavanji’s tracking bitmap and iterative backup process into Jibbe’s data cache, as this provides a data structure that can easily track the progress of flushing and ensure that the correct number of pages/correct pages are flushed.
Sampathkumar’s disclosure relates to managing cache metadata, and as such comprises analogous art.
As part of this disclosure, Sampathkumar provides in Fig. 2 how a cache may be divided up into a series of windows, labeled cache window 0-n, and a section for cache metadata, where each cache window has a corresponding metadata entry. As further shown in Fig. 3, the metadata store 318 includes a dirty cache bitmap 322 and other cache information metadata 324, with Fig. 3 showing that each cache window corresponds to a portion of the metadata bitmap, see also Col. 8, Line 48 - Col. 9, Line 10. Further, when considering the flush mechanism, Sampathkumar discloses that the cache blocks grouped for flushing can be within cache windows, see Col. 9, Lines 33-40.
An obvious modification can be identified: dividing caches and cache files up into their unit cache windows. Such a modification reads upon where the record corresponds to a sub-portion of the cache file, as the metadata can be organized according to cache windows instead of the larger cache file/caches.
It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to incorporate Sampathkumar’s disclosure of cache windows into Jibbe’s cache device, as cache windows provide basic/atomic units for cache allocation and management, see Col. 4, Lines 53-60, allowing for the ability to track and move data in the cache in same-sized unit increments.
Regarding claim 18, the combination of Jibbe, Foster, Kodavanji, and Sampathkumar teaches the non-transitory machine-readable medium of claim 17, and further teaches wherein each bit of the bitmap represents a different file block number in the sequence of file block numbers (see Kodavanji [0037,0065] as cited in the claim 17 rationale showing that each bit represents modification states of respective pages, i.e., different pages, where the combination now brings this bitmap to the indices of the data cache, see also Kodavanji [0038]).
Claims 19 and 20 are rejected according to the rationale of claims 10 and 8 respectively.
Regarding claim 22, the combination of Jibbe, Foster, Kodavanji, and Sampathkumar teaches the method of claim 1, and further teaches wherein the group of data blocks are a sequence of data blocks (while Jibbe [0014] provides that the write requests may be non-sequential with regards to the thinly provisioned volume, the data cache accumulates the data sequentially and therefore the group of blocks would be sequentially related within the indices of the data cache, see also [0015, 0024]; see also Jibbe Fig. 3 showing that as writes are accumulated within the data cache, they are collected sequentially, see also [0041]).
Regarding claim 23, the combination of Jibbe, Foster, Kodavanji, and Sampathkumar teaches the method of claim 22, and further teaches wherein the sequence of file block numbers is a contiguous sequence of file block numbers (while Jibbe [0014] provides that the write requests may be non-sequential with regards to the thinly provisioned volume, the data cache accumulates the data sequentially and therefore the group of blocks would be sequentially related within the indices of the data cache, see also [0015, 0024]; see also Jibbe Fig. 3 showing that as writes are accumulated within the data cache, they are collected sequentially, see also [0041]).
Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Jibbe in view of Foster, Kodavanji, and Sampathkumar and further in view of Jarvis (US 2021/0303399).
The combination of Jibbe, Foster, Kodavanji, and Sampathkumar teaches the method of claim 1, but fails to teach wherein initiating the write-back comprises:
encoding data stored in the group of blocks represented by the record to be sent in one or more write-back messages to the volume.
Jarvis’ disclosure is related to providing a shared file system with caching and flushing, and as such comprises analogous art.
As part of this disclosure, Jarvis discloses providing the ability to flush data in an NVRAM cache through erasure encoding logic, which can include RMW steps, see [0016].
An obvious modification can be identified: incorporating Jarvis’ disclosure of providing RMW steps as a form of encoding into Jibbe’s disclosure. This reads upon the limitation of the claim, as Jibbe has earlier identified providing RMW updates to the underlying storage when flushing data, see [0045], and so the modification just incorporates the teaching that the RMW steps count as a form of encoding logic.
It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to incorporate Jarvis’ disclosure of RMW as part of an erasure encoding logic into Jibbe’s disclosure, as this just helps one of ordinary skill in the art understand a definition of Jibbe’s flushing to the SMR drives, and the RMW technique provides a way to access the SMR drives.
Claims 11-16, 21, and 24 are rejected under 35 U.S.C. 103 as being unpatentable over Jibbe in view of Foster, Kodavanji, and Sampathkumar and further in view of Kano (US 2006/0155944).
Regarding claim 11, Jibbe teaches a computing device (“Accordingly, each storage system 102 and host 104 includes at least one computing system,” [0018]) comprising:
a memory containing a machine-readable medium comprising machine executable code having instructions stored thereon (“Accordingly, it is understood that any operation of the computing system according to the aspects of the present disclosure may be implemented by the computing system using corresponding instructions stored on or in a non-transitory computer readable medium accessible by the processing system. For the purposes of this description, a tangible computer-usable or computer-readable medium can be any apparatus that can store the program for use by or in connection with the instruction execution system, apparatus, or device. The medium may include for example non-volatile memory including magnetic storage, solid-state storage, optical storage, cache memory, and Random Access Memory (RAM),” [0058]); and
a processor coupled to the memory, the processor configured to execute the machine executable code (“which in turn includes a processor such as a microcontroller or a central processing unit (CPU) operable to perform various computing instructions. The instructions may, when executed by the processor, cause the processor to perform various operations described herein with the storage controllers 108.a, 108.b in the storage system 102 in connection with embodiments of the present disclosure,” [0018]) to:
write data to a cache, the cache corresponding to a volume (“The storage controller uses a data cache, e.g., a dynamic random access memory (DRAM), as an indirection layer to convert non-sequential write requests received from the host(s) into sequential writes for a thinly provisioned volume (also referred to herein as a “thin volume”) that is stored in a data repository on the SMR device pool,” [0013], where Fig. 2 shows the cache 220 corresponding to volumes 222 in the SMR drives 206);
update a tracking metafile based on the data written to the cache file (“In an embodiment, the storage controller maintains an index that maps the LBAs of the respective data blocks to their corresponding locations within the allocated portion of the thinly provisioned volume. The index may be maintained as part of metadata used by the storage controller for managing the contents of host data within the thinly provisioned volume and tracking the current utilization of the first data cache's data storage capacity,” [0015], see also “The metadata store 116 may house one or more types of metadata to facilitate translating the specified LBAs of the data in the write-back cache to block addresses used by the storage devices 106. In an embodiment, the metadata includes an index that maps the memory addresses of data blocks in the write-back cache to virtual LBAs of a thinly provisioned volume stored within a repository created on the SMR device pool. In a further embodiment, the metadata also includes an index that maps the virtual LBAs for different data blocks in the thinly provisioned volume to their corresponding physical locations within the repository on the SMR device pool. The mapping of virtual logical blocks in the thinly provisioned volume to logical blocks on the SMR drives is performed when the data is received from the host(s) 104, e.g., as part of a series of write requests directed to non-sequential addresses within the pool of SMR devices. In this manner, the data cache may be used as an indirection layer to write data from non-contiguous virtual logical blocks to sequential physical blocks in the SMR device pool,” [0024]; storing new data in the data cache necessarily requires updating the metadata maintaining the mapping to data in the data cache);
trigger a write-back of data stored in the group of blocks in the cache file that corresponds to the record to the volume (“Upon determining that the current utilization of the data storage capacity of the data cache 220 exceeds a threshold, the storage controller 200 flushes the data cache 220 by transferring the sequence of data clusters including the data blocks from the data cache 220 to the pool of SMR devices 206,” [0042]).
Jibbe fails to teach where the data is written specifically to a cache file in the cache, as well as the processor to:
determine that a record in the tracking metafile corresponding to a group of blocks in the cache file is full when a bitmap of the record indicates that all corresponding group of blocks represented by the bitmap have been modified, the group of blocks having a corresponding sequence of file block numbers forming a sub-portion of the cache file, the bitmap including a plurality of bits each corresponding to a respective file block number in the sequence;
create an entry in a hash data structure of an asynchronous write-back tracker to track the write-back associated with the group of blocks;
determine that the write-back has been completed; and
update the tracking metafile and the hash data structure to indicate that the write-back has been completed.
Jibbe further fails to teach that the write-back is specifically asynchronous, as well as where the write-back is triggered upon the determining that the record in the tracking metafile corresponding to the group of blocks is full. While Jibbe does disclose a metadata store that relates to blocks in the cache, see [0024] as cited above discussing the mapping between cache data blocks and underlying storage, Jibbe does not specifically disclose the determination that the record is full in relation to a write-back. Instead, Jibbe utilizes an overall capacity of the data cache, see [0042] as cited above and [0014], or on-demand flushing, see [0045].
Jibbe is noted for showing how the group of blocks have a corresponding sequence of file block numbers, as Jibbe Fig. 3 shows how blocks may be written to a sequence of data clusters within the provisioned space of the thin volume, see also [0041]. This is further shown in that while Jibbe [0014] provides that the write requests may be non-sequential with regards to the thinly provisioned volume, the data cache accumulates the data sequentially and therefore the group of blocks would be sequentially related within the indices of the data cache, see also [0015, 0024] teaching how the metadata maps from LBAs/indices within the data cache to the volume/SMR devices.
Foster’s disclosure relates to managing cache data, and as such comprises analogous art.
As part of this disclosure, Foster manages a cache file for accumulating circuit design evaluation results, see [0047]. Of particular note, the cache file accumulates multiple results over time, see [0011], where a flushing mechanism is also provided to flush the cache file if the size of the cache file reaches a file size threshold and in particular an example where the cache file is specifically full, see [0047,0057]. Further, this flushing in response to a size threshold being met is contrasted with the periodic discarding of the cache file, see [0045, 0054] discussing periodic checks of the dependency files, and where the disclosure of [0057] specifically states that “Flushing mechanism 228 may also flush cache file 216 and/or index file 222 independently of dependencies 410. For example, flushing mechanism 228 may discard evaluation results in cache file 216 based at least on a file size threshold associated with cache file 216…”.
An obvious modification can be identified: incorporating Foster’s cache file for accumulating write data operations, with the ability to flush particular cache files if the size threshold is reached/the cache file is full. Such a modification reads upon where the data is written to a cache file within a cache, as well as where a write-back can be triggered asynchronously.
It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to incorporate Foster’s cache file and cache file size threshold into Jibbe’s disclosure, as the cache file provides for a file system to access related data, and providing separate cache file size thresholds ensures that no single file can dominate the use of Jibbe’s caching resources while still allowing for a degree of caching policies.
The combination of Jibbe and Foster still fails to teach the processor to:
determine that a record in the tracking metafile corresponding to a group of blocks in the cache file is full when a bitmap of the record indicates that all corresponding group of blocks represented by the bitmap have been modified, the group of blocks having a corresponding sequence of file block numbers forming a sub-portion of the cache file, the bitmap including a plurality of bits each corresponding to a respective file block number in the sequence;
create an entry in a hash data structure of an asynchronous write-back tracker to track the write-back associated with the group of blocks;
determine that the write-back has been completed; and
update the tracking metafile and the hash data structure to indicate that the write-back has been completed.
As a consequence, while Foster does modify Jibbe to teach triggering asynchronous write-backs, the combination also still fails to teach where the trigger occurs upon determining that the record in the tracking metafile is full.
While Foster does teach flushing when the cache file is full, this is not the same as determining that a metadata record is full.
Kodavanji’s disclosure relates to tracking data for providing a backup in a storage system. As such, Kodavanji’s disclosure is analogous art for the same field of endeavor of storage management, and the discussion on how to manage tracking data for moving data in a backup context would be reasonably pertinent to the context of moving data in a caching/write-back context.
As part of this disclosure, Kodavanji provides for a modified pages tracking bitmap, where “each persistent memory 122 of the respective memory server 104 stores a modified pages tracking bitmap 132, which is an example of a modified pages tracking structure mentioned further above. The modified pages tracking bitmap 132 contains a collection of bits (e.g., an array of bits) that represent modification states of respective pages 134 stored in the persistent memory 122 of the respective memory server 104,” [0037] (see also [0038] providing further definition that the bit value represents whether the page is modified or not). In a backup process as disclosed in Fig. 3, an incremental backup loop identifies modified pages in step 316 and copies the pages to the backup storage system in step 318, and after copying a page, the tracking bitmap is reset to indicate the page is no longer modified, see also [0065].
An obvious modification can be identified: incorporating a page tracking bitmap into Jibbe’s metadata store, including tracking when pages are modified, and then incorporating the iterative process to copy pages to the underlying storage system and reset the bitmap for that page. Such a modification reads upon the majority of the missing limitations of the claim, as 1) incorporating a page tracking bitmap means tracking the pages stored into the write data cache, where Foster’s earlier disclosure of a full cache file necessarily means that the bitmap for the pages of the cache file is likewise full, because every page is modified (i.e., the write data temporarily stored into Jibbe’s write data cache is modifying the pages), reading upon the determining that the record is full when all corresponding blocks have been modified, 2) the iterative backup process incorporated into Jibbe’s flushing provides for an iterative flushing, where the tracking bitmap is updated after the pages are flushed, necessarily requiring the determination that the flushing of a page is done, reading upon the determining that the write-back has been completed and updating of the tracking metafile, and 3) in combination with Jibbe and Foster’s earlier disclosures of tracking the fullness of the data cache and cache files and utilizing them for triggering flushing mechanisms, a fullness of the cache file storing write data is necessarily also reflected in a fullness of the tracking bitmap, reading upon the triggering of the write-back upon the determining that the record in the tracking metafile is full.
The combination further provides that Kodavanji’s bitmap relates to the pages in the persistent memory, where the combination provides this bitmap for the data cache, and therefore each bit corresponds to a block/page, so each bit corresponds to a respective cluster in the thin volume of Jibbe, reading upon the bitmap including a plurality of bits each corresponding to a respective file block number in the sequence.
It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to incorporate Kodavanji’s tracking bitmap and iterative backup process into Jibbe’s data cache, as this provides a data structure that can easily track the progress of flushing and ensure that the correct number of pages/correct pages are flushed.
The combination of Jibbe, Foster, and Kodavanji fails to teach the creation of the entry in a hash data structure of an asynchronous write-back tracker, and as a result fails to teach updating the hash data structure to indicate that the write-back has been completed. The combination also still fails to teach where the sequence of file block numbers form a sub-portion of the cache file specifically.
Sampathkumar’s disclosure relates to managing cache metadata, and as such comprises analogous art.
As part of this disclosure, Sampathkumar provides in Fig. 2 how a cache may be divided up into a series of windows, labeled cache window 0-n, and a section for cache metadata, where each cache window has a corresponding metadata entry. As further shown in Fig. 3, the metadata store 318 includes a dirty cache bitmap 322 and other cache information metadata 324, with Fig. 3 showing that each cache window corresponds to a portion of the metadata bitmap, see also Col. 8, Line 48 - Col. 9, Line 10. Further, when considering the flush mechanism, Sampathkumar discloses that the cache blocks grouped for flushing can be within cache windows, see Col. 9, Lines 33-40.
An obvious modification can be identified: dividing caches and cache files up into their unit cache windows. Such a modification reads upon where the record corresponds to a sub-portion of the cache file, as the metadata can be organized according to cache windows instead of the larger cache file/caches.
It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to incorporate Sampathkumar’s disclosure of cache windows into Jibbe’s cache device, as cache windows provide basic/atomic units for cache allocation and management, see Col. 4, Lines 53-60, allowing for the ability to track and move data in the cache in same-sized unit increments.
The combination of Jibbe, Foster, Kodavanji, and Sampathkumar still fails to teach the creation of the entry in a hash data structure of an asynchronous write-back tracker, and as a result fails to teach updating the hash data structure to indicate that the write-back has been completed.
Kano’s disclosure relates to migration of data between volumes, and as such comprises analogous art as in the same field of endeavor of data movement in a storage system, and the discussion on how to manage migrating data between storage devices would be reasonably pertinent to the question of tracking data in a write-back context.
As part of this disclosure, Kano discloses the use of a bitmap, where “The bits in the bitmap are set as the data is migrated from LDEV 1 to LDEV 10,” [0046], see Fig. 6b.
An obvious modification can be identified: incorporating a bitmap specifically for tracking a data migration operation into Jibbe’s disclosure as modified by Kodavanji. Such a modification reads upon the limitation of the claim, as the creation of the bitmap for tracking the movement of data reads on the creating of an entry limitation, and the updating of the bitmap as data is moved reads upon the updating of the tracker when a write-back is completed.
It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to incorporate Kano’s bitmap specifically to track movement of data into Jibbe’s disclosure as modified by Kodavanji, as the movement bitmap provides a specific data structure to track whether data has finished moving, not just the modified status of the data, as a movement-specific data structure provides clear information for the specific operation, whereas Kodavanji’s bitmap provides usage beyond the backup/movement of data context.
Regarding claim 12, the combination of Jibbe, Foster, Kodavanji, Sampathkumar, and Kano teaches the computing device of claim 11, and Jibbe further teaches wherein the record includes a key that uniquely identifies the record in the tracking metafile and wherein the record includes metadata that includes information about the record (as cited in the claim 11 rationale, Jibbe [0015, 0024] describe the metadata in the metadata store that include mapping information between the data cache and the volume/SMR devices, where the mapping information identifies the records, and the mapping information is the information about the record).
Claims 13 and 16 are rejected according to the same rationale of claims 8 and 5 respectively.
Regarding claim 21, the combination of Jibbe, Foster, Kodavanji, and Sampathkumar teaches the non-transitory machine-readable medium of claim 17, but fails to teach wherein the machine-executable code further causes the at least one machine to:
create an entry in a hash data structure for the sequence of file block numbers represented by the record in response to the write-back being initiated; and
delete the entry from the hash data structure in response to the write-back being completed.
Jibbe discloses that the stored metadata includes mappings and translations from data in the write-back cache to LBAs in the volume or physical addresses of the underlying SMR devices, see [0024]. Necessarily, when data is flushed out of the write data cache, the data is no longer extant within the cache, so the mapping is invalid and the metadata store will purge it. However, as the combination fails to teach the entry of the hash data structure, this disclosure from Jibbe fails to teach the full limitation.
Kano’s disclosure relates to migration of data between volumes, and as such comprises analogous art as in the same field of endeavor of data movement in a storage system, and the discussion on how to manage migrating data between storage devices would be reasonably pertinent to the question of tracking data in a write-back context.
As part of this disclosure, Kano discloses the use of a bitmap, where “The bits in the bitmap are set as the data is migrated from LDEV 1 to LDEV 10,” [0046], see Fig. 6b.
An obvious modification can be identified: incorporating a bitmap specifically for tracking a data migration operation into Jibbe’s disclosure as modified by Kodavanji. Such a modification reads upon the limitation of the claim, as the creation of the bitmap for tracking the movement of data reads on the creating of an entry limitation, and the earlier disclosure from Jibbe teaches the deletion of the entry from the hash data structure after completion.
It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to incorporate Kano’s bitmap specifically to track movement of data into Jibbe’s disclosure as modified by Kodavanji, as the movement bitmap provides a specific data structure to track whether data has finished moving, not just the modified status of the data, as a movement-specific data structure provides clear information for the specific operation, whereas Kodavanji’s bitmap provides usage beyond the backup/movement of data context.
Claim 24 is rejected according to the same rationale of claims 22 and 23.
Response to Arguments
Applicant's arguments filed October 8, 2025 have been fully considered but are moot in part and unpersuasive in part.
Claims 22-24 are newly filed and therefore have not previously had rejections issued. As such, the arguments are moot in part for lack of opportunity to address the new rejections to the newly filed claims.
The claims feature new rejections under 35 U.S.C. 112, and as applicant has not had an opportunity to address these, the arguments are moot in part.
The majority of the arguments focus on the art rejection and prior grounds of Jibbe, Foster, and Kodavanji (with Kano getting a brief mention with regards to claim 11). In particular, after discussing Jibbe and Foster’s failure to teach the full determination of a full record based on a bitmap and the triggering of the asynchronous write-back in response to this bitmap being full, the argument looks at Kodavanji in isolation, arguing that Kodavanji’s different context for bitmap tracking modified pages leads to a failure to teach the limitation. However, this focuses on Kodavanji in isolation instead of looking at the full combination of Jibbe, Foster, and Kodavanji, where the rejection provides that it is the combination that teaches this feature, not Kodavanji in isolation, and as such is unpersuasive. In response to applicant's arguments against the references individually, one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986).
The same reasoning applies to the argument against claim 7, in which applicant argues against Jibbe and Kodavanji separately with respect to the information in the record, when the rationale for the combination set forth for claim 1 articulates incorporating Kodavanji’s bitmap into Jibbe’s use of the data cache and its related tracking information.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Gunda et al. (US 2015/0312343) discloses a cached block bitmap showing portions of a file that are located in cache.
Noe (US 2017/0060433) discloses the use of region bitmaps to track dirty pages within a portion of storage on a per-region basis.
Yamamoto et al. (US 2018/0373429) discloses bitmaps to track portions of a region in a dirty state, as well as portions located in a cache.
Tas (US 2019/0068601) discloses using per-file bitmaps to track modified clusters.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to AARON D HO whose telephone number is (469)295-9093. The examiner can normally be reached Mon-Fri 8:00-4:00 CT.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Reginald Bragdon, can be reached at (571)272-4204. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/A.D.H./Examiner, Art Unit 2139
/REGINALD G BRAGDON/Supervisory Patent Examiner, Art Unit 2139