Prosecution Insights
Last updated: April 18, 2026
Application No. 19/197,227

OPTIMIZED RESTORATION OF DEDUPLICATED DATA

Non-Final OA — §103, §DP
Filed
May 02, 2025
Examiner
GEBRESENBET, DINKU W
Art Unit
2164
Tech Center
2100 — Computer Architecture & Software
Assignee
Commvault Systems Inc.
OA Round
1 (Non-Final)
Grant Probability: 71% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 7m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 71%, above average (428 granted / 604 resolved; +15.9% vs TC avg)
Interview Lift: +35.1%, strong (resolved cases with interview)
Typical Timeline: 3y 7m avg prosecution (13 currently pending)
Career History: 617 total applications across all art units

Statute-Specific Performance

§101: 15.5% (-24.5% vs TC avg)
§103: 51.9% (+11.9% vs TC avg)
§102: 15.6% (-24.4% vs TC avg)
§112: 4.5% (-35.5% vs TC avg)
Tech Center averages are estimates • Based on career data from 604 resolved cases

Office Action

§103 §DP
DETAILED ACTION

Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-20 are present; accordingly, claims 1-20 are pending.

Drawings
The drawings received on 02 May 2025 are accepted by the Examiner. This Office Action is Non-Final.

Abstract Objections
Applicant is reminded of the proper language and format for an abstract of the disclosure. The abstract should be in narrative form and generally limited to a single paragraph on a separate sheet within the range of 50 to 150 words in length. The abstract should describe the disclosure sufficiently to assist readers in deciding whether there is a need for consulting the full patent text for details. The language should be clear and concise and should not repeat information given in the title. It should avoid using phrases which can be implied, such as "The disclosure concerns," "The disclosure defined by this invention," "The disclosure describes," etc. In addition, the form and legal phraseology often used in patent claims, such as "means" and "said," should be avoided.

Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b). The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.

Claims 1-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-20 of U.S. Patent No. 12,321,313. Although the claims at issue are not identical, they are not patentably distinct from each other because claims 1 and 2 of the instant application are obvious variants of claims 1 and 11 of U.S. Patent No. 12,321,313.

Claims Comparison Table

U.S. Patent No. 12,321,313, claim 1:
A system comprising: a first computing device comprising one or more hardware processors, wherein the first computing device is communicatively coupled to one or more data storage resources, wherein a backup copy is stored at the one or more data storage resources, wherein the backup copy comprises a plurality of data segments including a first data segment, and wherein the plurality of data segments are stored with deduplication at the one or more data storage resources; wherein the first computing device is configured to: receive a first read request for the first data segment of the backup copy, parse one or more metadata indexes to identify a first data file that comprises the first data segment, wherein the first data file is stored at the one or more data storage resources, based on a first metadata index among the one or more metadata indexes, wherein the first metadata index corresponds to the first data file, generate a first list of data segments that are physically stored consecutively in the one or more data storage resources, wherein the first list includes the first data segment, issue a second read request to the one or more data storage resources for all the data segments of the first list, store all the data segments of the first list, including the first data segment, at a data storage area configured at the first computing device, serve the first data segment, in response to the first read request, from the data storage area configured at the first computing device, and in response to one or more third read requests for one or more second data segments among the plurality of data segments of the backup copy: determine that the one or more second data segments are among the data segments of the first list, and serve the one or more second data segments from the data storage area configured at the first computing device.

Application 19/197,227, claim 1:

A computer-implemented method comprising: at a first computing device, receiving a first read request for a first data segment, wherein the first data segment is part of a backup copy that was previously generated, wherein the backup copy comprises a plurality of data segments that were stored with deduplication in a data storage system; and by the first computing device: responsive to the first read request, parsing one or more indexes to identify, among the plurality of data segments, a subset of data segments that includes the first data segment, and wherein the subset of data segments are stored consecutively at the data storage system; generating an aggregated read request for the subset of data segments, wherein the aggregated read request is directed at the data storage system; receiving the subset of data segments from the data storage system; storing the subset of data segments at the first computing device; serving the first data segment from the first computing device responsive to the first read request; and responsive to subsequent read requests for data segments among the subset of data segments, serving the data segments from the first computing device, wherein the first computing device comprises one or more hardware processors and non-transitory computer memory.

Claim Rejections — 35 U.S.C. 103
The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: "A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made."
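For orientation, the restore flow recited in the application's claim 1 (parse an index to find the consecutively stored subset containing the requested segment, fetch that whole subset in one aggregated read, cache it locally, and serve later requests from the cache) can be sketched as follows. This is a minimal illustration, not the applicant's implementation; all names (`RestoreReader`, `DictStore`, the index shape) are hypothetical.

```python
# Hypothetical sketch of the claimed restore flow: on a read for one
# deduplicated segment, look up the run of consecutively stored segments
# containing it, fetch the whole run in one aggregated read, cache it
# locally, and serve subsequent requests for those segments from the cache.

class RestoreReader:
    def __init__(self, index, storage):
        # index: maps segment id -> list of segment ids stored
        #        consecutively with it (its "run")
        # storage: backend exposing a bulk read(segment_ids) -> dict
        self.index = index
        self.storage = storage
        self.cache = {}          # local data storage area at the device
        self.bulk_reads = 0      # aggregated reads actually issued

    def read(self, seg_id):
        if seg_id not in self.cache:                    # cache miss
            run = self.index[seg_id]                    # consecutive subset
            self.cache.update(self.storage.read(run))   # one aggregated read
            self.bulk_reads += 1
        return self.cache[seg_id]                       # serve locally


class DictStore:
    """Toy backend: segments kept in a dict, one bulk read per request."""
    def __init__(self, segments):
        self.segments = segments

    def read(self, seg_ids):
        return {s: self.segments[s] for s in seg_ids}
```

With an index mapping `"a"`, `"b"`, `"c"` to the same run, reading `"a"` triggers one aggregated read and a later read of `"b"` is served entirely from the local cache.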
Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Dornemann et al. (US 20170277686 A1) in view of Prahlad et al. (US 20100333116 A1), further in view of Damgar et al. (US 20200133569 A1).

Regarding claims 1 and 12, Dornemann discloses at a first computing device, receiving a first read request for a first data segment, wherein the first data segment is part of a backup copy that was previously generated (see Dornemann paragraph [0314]: "Read operations (e.g., read requests for one or more data blocks) initiated by VM 201 and/or application(s) 110 executing thereon may be directed by host computing device 202 to the shared file system (e.g., logical source 249) that is configured as the restore point for VM 201. Media agent 244 may serve these read requests based on the read cache 245, as described in further detail below"); wherein the backup copy comprises a plurality of data segments that were stored with deduplication in a data storage system (see Dornemann paragraph [0317]: "some read requests cannot be initially satisfied from read cache 245. In some embodiments, the requested data blocks may be copied from the backup copy 228 to the read cache 245 before serving the read request from read cache 245"); serving the first data segment from the first computing device responsive to the first read request (see Dornemann paragraph [0368]: "Serving the present read request preferably occurs at substantially the same priority as serving other read requests, e.g., read requests initiated by VM 201 and/or application(s) 110 executing thereon (see, e.g., block 509)"); responsive to subsequent read requests for data segments among the subset of data segments, serving the data segments from the first computing device (see Dornemann paragraph [0368], quoted above); and wherein the first computing device comprises one or more hardware processors and non-transitory computer memory (see Dornemann paragraph [0059]: "Storage devices can generally be of any suitable type including, without limitation, disk drives, hard-disk arrays, semiconductor memory (e.g., solid state storage devices), network attached storage (NAS) devices, tape libraries or other magnetic, non-tape storage devices, optical media storage devices, DNA/RNA-based memory technology, combinations of the same, and the like").

Prahlad expressly discloses responsive to the first read request, parsing one or more indexes to identify, among the plurality of data segments, a subset of data segments that includes the first data segment (see Prahlad paragraph [0258]: "At step 1225, the metadata file is parsed until the stream header corresponding to the data object or block to be restored is accessed. At step 1230, the cloud storage server determines the location of the file from the stream data. The stream data indicates the location of the data object to be restored, which is either in a container file in the chunk folder or within a container file in another chunk folder"), and wherein the subset of data segments are stored consecutively at the data storage system (see Fig. 18). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Prahlad into the method of Dornemann to parse one or more indexes to identify the requested data. Combining Prahlad with Dornemann, which are both related to data processing, improves Dornemann by providing systems and methods for identifying suitable storage locations, including suitable cloud storage sites, for data files subject to a storage policy, and systems and methods for providing a cloud gateway and a scalable data object store within a cloud environment (see Prahlad paragraph [0001]).

Damgar expressly discloses generating an aggregated read request for the subset of data segments, wherein the aggregated read request is directed at the data storage system (see Damgar paragraph [0003]: "aggregating read requests requesting common data objects into a common read operation, and dispatching the common read operation to a multi-threaded I/O layer of the data storage system for retrieving data associated with the read request"); receiving the subset of data segments from the data storage system (see Damgar paragraph [0039]: "aggregating read requests requesting common data objects into a common read operation in a data storage system for reducing latency that would otherwise result in the data storage system from using an extensive/unnecessary number of threads while retrieving data. Accordingly, as a result of aggregating the read requests, throughput in the data storage system is improved as compared to conventional data storage systems that use such an extensive/unnecessary number of threads while retrieving data"); and storing the subset of data segments at the first computing device (see Damgar paragraph [0042]: "method 400 is performed in a data storage system that is processing received read requests for data, e.g., where the data associated with the received read requests is stored on disc, magnetic recording tape, cloud based storage, etc."). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Damgar into the method of Dornemann to generate an aggregated read request. Combining Damgar with Dornemann, which are both related to data processing, improves Dornemann by providing systems and methods for improving throughput and thereby reducing overall latency in the data storage system (see Damgar paragraph [0001]).

Regarding claims 2 and 13, Dornemann discloses wherein the one or more indexes comprise links to data segments stored among multiple containers at the data storage system (see Dornemann paragraph [0275]: "The container index file 192/194 stores an index to the container files 190/191/193. Among other things, the container index file 192/194 stores an indication of whether a corresponding block in a container file 190/191/193 is referred to by a link in a metadata file 186/187. For example, data block B2 in the container file 190 is referred to by a link in the metadata file 187 in the chunk folder 185").

Regarding claims 3 and 14, Dornemann discloses wherein the data storage system comprises a cloud-based object storage platform (see Dornemann paragraph [0184]: "one or more of the storage devices of the target-side and/or source-side of an operation can be cloud-based storage devices. Thus, the target-side and/or source-side deduplication can be cloud-based deduplication. In particular, as discussed previously, ...").

Regarding claims 4 and 15, Dornemann discloses wherein the data storage system comprises a multi-node replicated file system (see Dornemann paragraph [0137]: "Media agents 144 can comprise separate nodes in the information management system 100 (e.g., nodes that are separate from the client computing devices 102, storage manager 140, and/or secondary storage devices 108)").

Regarding claims 5 and 16, Damgar expressly discloses by the first computing device, concurrently issuing multiple aggregated read requests to distinct storage files at the data storage system (see Damgar paragraph [0039], quoted above). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Damgar into the method of Dornemann to generate an aggregated read request. Combining Damgar with Dornemann, which are both related to data processing, improves Dornemann by providing systems and methods for improving throughput and thereby reducing overall latency in the data storage system (see Damgar paragraph [0001]).
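The pattern cited against claims 5 and 16 (and claim 11 below) is concurrent aggregated reads bounded by the storage system's capacity: one aggregated read per consecutive run, fanned out over a worker pool no larger than the backend can handle. A minimal sketch, assuming a hypothetical `bulk_read` callable and a pre-determined `capacity`; none of these names come from the cited references.

```python
# Hypothetical sketch: fan out one aggregated read per consecutive run of
# segments, with concurrency capped at the storage system's capacity.

from concurrent.futures import ThreadPoolExecutor

def restore_runs(runs, bulk_read, capacity):
    """runs: lists of segment ids, each run stored consecutively.
    bulk_read: callable(run) -> {seg_id: bytes} (one aggregated read).
    capacity: max concurrent reads the storage system can process."""
    restored = {}
    # The pool size bounds in-flight aggregated reads at `capacity`.
    with ThreadPoolExecutor(max_workers=capacity) as pool:
        for chunk in pool.map(bulk_read, runs):
            restored.update(chunk)
    return restored
```

Bounding the pool at the probed capacity is what keeps this from degenerating into the "extensive/unnecessary number of threads" that Damgar's paragraph [0039] describes as the conventional latency problem.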
Regarding claims 6 and 17, Dornemann discloses wherein the subset of data segments at the first computing device are stored in a data bucket configured in a data storage area at the first computing device, wherein the data bucket is managed by a look-ahead reader that executes at the first computing device (see Dornemann paragraph [0292]: "analyze said profile by performing a predictive analysis, and determine certain key blocks of data in a backup copy of VM 201; pre-stage said key data blocks to a read cache to speed up booting of VM 201; pre-stage certain sets of data blocks to the read cache to speed up the relocation operation; copy other data blocks from the backup copy of VM 201 to the read cache; manage the serving of read requests, based on the read cache, received from host computing device 202; track the data blocks requested in read requests and determine whether a series of data blocks consistent with the relocation sequence of the VMFR operation has been requested, and if so, delete said series of data blocks from the read cache after the data blocks have been served").

Regarding claim 7, Dornemann discloses wherein the look-ahead reader serves data segments from the data bucket responsive to read requests (see Dornemann paragraph [0292], quoted above).

Regarding claims 8 and 18, Dornemann discloses wherein among the one or more indexes, a first index comprises a respective link to each data segment among the subset of data segments, including a first link to the first data segment, and wherein a second index comprises a second link to the first data segment, wherein the second link represents the first data segment stored in deduplicated form (see Dornemann paragraph [0275], quoted above).

Regarding claims 9 and 19, Dornemann discloses wherein small data segments below a threshold size are retrieved directly from the one or more indexes without accessing the data storage system (see Dornemann paragraph [0169]: "data satisfying criteria for removal (e.g., data of a threshold age or size) may be removed from source storage").

Regarding claims 10 and 20, Dornemann discloses wherein a media agent that executes at the first computing device orchestrates restoring the backup copy from the data storage system, wherein restoring the backup copy comprises issuing the first read request (see Dornemann paragraph [0314], quoted above).

Regarding claim 11, Damgar expressly discloses by the first computing device, determining a capacity for processing concurrent reads at the data storage system, and, based on the capacity, generating concurrent aggregated read requests directed at the data storage system (see Damgar paragraph [0039], quoted above; see also Damgar paragraph [0076]: "recall that processes in which conventional data storage systems satisfy read requests typically include using an extensive number of concurrent threads, e.g., 100+. These large number of threads typically retrieve only sub-portions of requested data, and are extensive in that such threads are deployed without considering whether another thread is already retrieving the same sub-portion of data"). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Damgar into the method of Dornemann to generate an aggregated read request. Combining Damgar with Dornemann, which are both related to data processing, improves Dornemann by providing systems and methods for improving throughput and thereby reducing overall latency in the data storage system (see Damgar paragraph [0001]).

Remarks
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Dornemann (US 20180173454 A1) discloses servicing users based on pre-staged data blocks supplied from a backup copy in secondary storage. Substantially concurrently with the ongoing execution of the virtual machine, a virtual-machine-file-relocation operation moves data blocks from backup to a primary storage destination that becomes the virtual machine's primary data store after relocation completes.

Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DINKU W GEBRESENBET, whose telephone number is (571) 270-1636. The examiner can normally be reached between 8:00 AM and 5:00 PM.
Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Amy Ng, can be reached at 571-270-1698. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/DINKU W GEBRESENBET/
Primary Examiner, Art Unit 2164

Prosecution Timeline

May 02, 2025
Application Filed
Feb 21, 2026
Non-Final Rejection — §103, §DP
Mar 27, 2026
Response Filed

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596675
METHOD, ELECTRONIC DEVICE, AND COMPUTER PROGRAM PRODUCT FOR MIGRATING DATA
2y 5m to grant Granted Apr 07, 2026
Patent 12585621
Directory Level Storage Management of a File System
2y 5m to grant Granted Mar 24, 2026
Patent 12585628
GEOSPATIAL ANOMALY FILTERING OF GEOLOCATION DATA STREAMS
2y 5m to grant Granted Mar 24, 2026
Patent 12579172
GEOLOCATION NAME SYSTEM
2y 5m to grant Granted Mar 17, 2026
Patent 12561208
TECHNIQUE TO COMPUTE DELTAS BETWEEN ANY TWO ARBITRARY SNAPSHOTS IN A DEEP SNAPSHOT REPOSITORY
2y 5m to grant Granted Feb 24, 2026
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 71%
With Interview: 99% (+35.1%)
Median Time to Grant: 3y 7m
PTA Risk: Low
Based on 604 resolved cases by this examiner. Grant probability derived from career allow rate.
