Prosecution Insights
Last updated: April 19, 2026
Application No. 18/792,029

Method and Apparatus for Managing Data Integrity in a Distributed Storage Network

Non-Final OA (§102, §112, §DP)
Filed: Aug 01, 2024
Examiner: HERSHLEY, MARK E
Art Unit: 2164
Tech Center: 2100 — Computer Architecture & Software
Assignee: Pure Storage Inc.
OA Round: 3 (Non-Final)
Grant Probability: 78% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 5m
With Interview: 97%

Examiner Intelligence

Career Allow Rate: 78%, above average (432 granted / 552 resolved; +23.3% vs TC avg)
Interview Lift: +18.5% among resolved cases with interview (strong)
Typical Timeline: 3y 5m average prosecution; 18 applications currently pending
Career History: 570 total applications across all art units
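The headline figures above appear to follow simple arithmetic: the grant probability is the career allow rate, and the with-interview figure adds the interview lift. A minimal sketch of that assumed derivation (the function names are illustrative, not part of the dashboard):

```python
# Sketch of the dashboard's assumed headline arithmetic: a simple
# granted/resolved ratio plus an additive interview lift.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

def with_interview(base_rate: float, lift: float) -> float:
    """Interview-adjusted grant probability, capped at 100%."""
    return min(base_rate + lift, 100.0)

base = allow_rate(432, 552)             # ~78.3%, displayed as 78%
adjusted = with_interview(base, 18.5)   # ~96.8%, displayed as 97%
print(round(base), round(adjusted))
```

This reproduces the displayed 78% and 97% after rounding, consistent with the stated "Grant probability derived from career allow rate."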

Statute-Specific Performance

§101: 12.8% (-27.2% vs TC avg)
§103: 45.5% (+5.5% vs TC avg)
§102: 22.9% (-17.1% vs TC avg)
§112: 8.3% (-31.7% vs TC avg)
Tech Center averages are estimates • Based on career data from 552 resolved cases

Office Action

§102, §112, §DP
Notice of Pre-AIA or AIA Status

The present application is being examined under the pre-AIA first to invent provisions. Claims 1, 3, 5 – 14 and 15 – 23 are pending.

Response to Arguments

Applicant presents the following arguments in the 30 January 2026 response:

Applicant's arguments with respect to the non-statutory double patenting rejection have been fully considered and are persuasive in view of the amendments. The double patenting rejection of claims 1, 3, 5-13 and 15-23 has been withdrawn.

Applicant's arguments with respect to the rejection in view of Eidler have been fully considered but they are not persuasive. Applicant's specification discloses, in a non-limiting manner, that a DS unit failure is data loss or corruption ([0046]). Other non-limiting examples of DS storage unit failures and site failures are given at, e.g., [0084] and [0087]-[0089]. Therefore, a failure of a storage unit may merely be the loss or corruption of the data on the unit itself. This is disclosed by Eidler's handling of micro-failures and its determination of whether a disaster is temporary or non-temporary (such as deletion of the only copy), after which a recovery process is performed. Further, even under a more limiting interpretation of "failure," Eidler discloses the handling of a storage unit or CPU failure and the recovery process thereafter. Eidler therefore discloses the current claim language as supported by Applicant's specification itself.

Applicant further argues that the amended language is directed only to select storage units and that Eidler does not disclose such a limiting factor. However, the claim language does not specify what a select storage unit comprises or how any specific storage units are selected. Furthermore, neither the claim language nor the specification contains limiting language excluding all of the storage units from being select storage units.
Additionally, Eidler discloses the integrity check and restoration for the storage units holding data being transmitted, stored, archived, backed up and restored. Under the broadest reasonable interpretation, any storage unit containing, or being checked for, the data of interest comprises a select storage unit as currently claimed and as supported by Applicant's specification. Further, as cited, Eidler's transmitted data is data being stored at the active site, and is therefore relevant data stored in select storage units given the above broadest reasonable interpretation of select storage units.

Applicant also argues that the claimed rebuilding differs from Eidler's restoring process because "the Office' argument conflates 'rebuilding' as used in the Applicant's claim 1 with each of 1) a restore process (for a 'disaster'); and 2) a restore operation (for a micro-failure), of which it is neither. As detailed above, it is improper for the Office to characterize one irregular component with the regularly occurring events of Applicant's claimed subject matter limitations." However, neither the claim language nor the specification indicates that the failures being determined, and the rebuilding thereafter, are "regularly occurring events"; nor would such a designation be relevant without a very specific limitation of what "has failed" comprises, as to which the specification is silent other than the examples given above (data loss or corruption, storage unit offline, etc.). Additionally, nothing in Eidler indicates that its failures/errors are irregular events rather than regularly occurring ones. In response to Applicant's argument that the references fail to show certain features of the invention, it is noted that the features upon which Applicant relies (i.e., failed storage devices being regularly occurring events) are not recited in the rejected claims.
Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993).

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 22 and 23 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. The claims recite "wherein the data is associated with a common identifier and the common identifier". However, there is no prior recitation of "the common identifier," and the phrase appears to be redundant; there is insufficient antecedent basis for this limitation in the claim. The language will therefore be interpreted as "associated with a common identifier".

Claim Objections

Claims 22 and 23 are objected to because of the following informalities: claim 22 is directed to the "storage network of claim 23", and claim 23 is directed to the "storage network of claim 13". However, both claims recite the same limitation, which appears to be an error in both numbering and dependency. Appropriate correction is required.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of pre-AIA 35 U.S.C.
102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless – (b) the invention was patented or described in a printed publication in this or a foreign country or in public use or on sale in this country, more than one year prior to the date of application for patent in the United States.

Claims 1, 3, 5 – 14 and 15 – 23 are rejected under pre-AIA 35 U.S.C. 102(b) as being anticipated by U.S. Patent Application Publication No. 2009/0210427 to Eidler et al. (hereinafter Eidler).

As to claim 1, Eidler discloses a method for execution by a storage network, the method comprising:

determining data integrity information for data stored in select storage units of a plurality of storage units associated with the storage network (using checksum and logging routines to check data integrity and perform first, second and third level verifications of data stored in the DMZ, storage locations or archive storage, to determine failures or micro-failures, including storage and CPU failures at the active site, see Eidler: Para. 0090 – 0091, 0099 – 0103, 0105 – 0110 and 0148 – 0151; an individual storage/computer server is a data storage unit, and the storage servers and units at an active site are "select storage units");

based on the data integrity information, determining whether a storage device associated with the select storage units has failed (determining failures or micro-failures based on verifications/checks of integrity data and checksums; if the active site suffers a disaster that does not require abandonment of the active site, such as a security breach, loss of data, storage failure, CPU failure, etc., then in a recovery operation, operational copies of the backed up applications and data may be moved to computer servers and used at the active site, see Eidler: Para. 0090 – 0091, 0099 – 0100, 0106 – 0107 and 0148 – 0149);

in response to a determination that a storage device has failed, determining whether the storage device has failed due to a transitory condition (determination of failure due to a failed link, a time delay, server failure or a micro-failure thereof, see Eidler: Para. 0090 – 0091 and 0148 – 0149; verification of transmitted data is checked using values including checksum values, see Eidler: Para. 0098 – 0100, 0106 – 0107 and 0148 – 0149);

in response to a determination that the storage device has failed due to a transitory condition, waiting a predetermined amount of time before determining data integrity information for the select storage units again (for temporary disasters, inflating files, applications and data sets and routing operations to a remote/temporary location until the disaster is mitigated instead of performing local recovery, see Eidler: Para. 0032, 0052 and 0148 – 0150; routing and inflating files, applications and data sets until a disaster is mitigated is a predetermined amount of time); and

in response to a determination that the storage device has failed not due to a transitory condition, initiating rebuilding of data stored in the storage unit (disaster recovery utilizing a restore process, see Eidler: Para. 0033 – 0034 and 0047; if the active site suffers a disaster that does not require abandonment of the active site, such as a security breach, loss of data, storage failure, CPU failure, etc., then in a recovery operation, operational copies of the backed up applications and data may be moved to computer servers and used at the active site. A micro-failure may comprise, for example, accidental deletion of the only copy of an important customer file, for which the customer does not have a local backup, but which was previously collected, transmitted, verified, and stored in data centers 170. Recovering from a micro-failure may involve performing a simple restore operation involving selecting an image, copying the image to CPE Server 114, performing verification, inflating and starting the image on the CPE Server for test purposes, shutting down the image, and copying the inflated image to customer storage for active use, see Eidler: Para. 0148 – 0151).

As to claim 3, Eidler discloses the method of claim 1, wherein the select storage units are associated with a data storage site (storage failure, security breach, CPU failure, data loss, etc. at the active site, and determination of whether recovery is to be done at the active site, routed to the service provider, or operational copies moved to the recovery site temporarily until the disaster is mitigated, see Eidler: Para. 0148 – 0149, see also 0032 and 0052; the active site comprising the servers is the data storage site).

As to claim 5, Eidler discloses the method of claim 1, wherein the select storage units are associated with an address range associated with a storage site (addresses for the servers are collected and mapped to the SLA terms, see Eidler: Para. 0084 and 0148 – 0149; the addresses of the servers at an active site make up an address range for the active site).

As to claim 6, Eidler discloses the method of claim 1, wherein the storage device has failed is determined from a list consisting of: a minimum number of data slice errors has been exceeded; one or more storage devices is powered off; one or more network elements is not functioning; an equipment failure; a scheduled storage unit outage; and a threshold number of data slice errors has been exceeded (failures due to failed link, time delay, security breach, loss of data, storage failure, CPU failure, reasons other than a disaster, etc., see Eidler: Para. 0090, 0147 – 0152).
As to claim 7, Eidler discloses the method of claim 1, wherein the determining data integrity information comprises executing a hash function on the data (verification includes encoding the data (hash values) and identifying variances among encoded data segments, see Eidler: Para. 0068).

As to claim 8, Eidler discloses the method of claim 1, wherein the determining data integrity information comprises searching the selected storage units using a lookup list of unique identifiers associated with the data (verification code may be configured to individually identify files that are within received images of archives (e.g., ZIP files) so that the database 146 reflects names of individual files rather than opaque archives; verification code may also be configured to individually identify database table spaces of a customer, rows and columns of databases, messages within message archives, calendars or other sub-applications within a unified information application, or other units of information. Further, database 146 may serve as a meta-catalog that references other data catalogs represented in storage units of the system, such as a directory maintained in the CommVault system, see Eidler: Para. 0098 – 0099 and 0152).

As to claim 9, Eidler discloses the method of claim 1, wherein the determining data integrity information comprises calculating a checksum on the data (using checksum values for verification, see Eidler: Para. 0098 – 0104); and comparing the checksum to a previously stored checksum (comparing a checksum previously received from the CPE Server 114 before the transmission phase and associated with the same image, see Eidler: Para. 0098 – 0104).

As to claim 10, Eidler discloses the method of claim 1, wherein the determining data integrity information comprises calculating a checksum on the data (using checksum values for verification, see Eidler: Para. 0098 – 0104), and comparing the checksum to a checksum calculated from a copy of the data (comparing a checksum previously received from the CPE Server 114 before the transmission phase and associated with the same image, see Eidler: Para. 0098 – 0104) stored in one or more additional select storage units of the storage network (checksums may be stored in the stackware 148 and used for scheduling and automating the timing of storage, see Eidler: Para. 0117 – 0118; the stackware for the service provider stores and uses checksums of the data models for scheduling of storage operations, the checksums being from the CPE server of the active sites and utilized at the service provider for use with data centers, see 0117 – 0118 and Fig. 1).

As to claim 11, Eidler discloses the method of claim 1, further comprising: in response to a determination that a storage device associated with the select storage units has not failed, determining if a site failure has occurred (if active site 102 experiences a fire, flood, earthquake, or other natural disaster, it may be necessary to abandon the active site at least temporarily and establish business operations elsewhere. Recovery site 120 represents a temporary operational location and comprises user stations 104A, a local network 106A, network connectivity to public network 130 through router 109A, and computer servers 108A. In this arrangement, user stations 104A may access backed up applications 110 and data 112 on hardware 150 using processes that are described further herein, see Eidler: Para. 0052, 0147 – 0150); in response to a determination that a site failure has occurred, determining whether the failure is due to a transitory condition (temporarily abandoning the active site to use the recovery site until the disaster is mitigated, see Eidler: Para. 0052, 0147 – 0150); in response to a determination that the site failure is due to a transitory condition, waiting a predetermined amount of time before determining data integrity information for the select storage units of the storage network again (routing operations, such as virtually, to the service provider until the disaster is mitigated, to be accessed at the recovery site, see Eidler: Para. 0052, 0147 – 0150); and in response to a determination that the site failure is not due to a transitory condition, initiating rebuilding of the select storage units of the storage network (in a recovery operation, operational copies of the backed up applications 110 and data 112 may be moved to computer servers 108A and used locally, see Eidler: Para. 0052, 0147 – 0150; copies are rebuilt at computer servers local to the recovery site rather than accessed via copies at the service provider when the active site is experiencing a disaster).

As to claim 12, Eidler discloses the method of claim 11, wherein the rebuilding comprises: determining a plurality of unique identifiers associated with the data (a database storing names of individual files enables a subsequent restoration operation to target the individual file rather than the entire archive, see Eidler: Para. 0098 – 0104); rebuilding the data associated with each unique identifier of the plurality of unique identifiers to provide rebuilt data (a database storing names of individual files enables a subsequent restoration operation to target the individual file rather than the entire archive, see Eidler: Para. 0098 – 0104); and storing the rebuilt data at another storage network site (database 146 is located at the service provider, see Eidler: Fig. 1 and Para. 0043, 0045, 0083 – 0087, 0094, 0098, 0117 and 0130).

Claim 13 is rejected using similar rationale to the rejection of claim 1 above.

Claim 15 is rejected using similar rationale to the rejection of claim 3 above.
Claim 16 is rejected using similar rationale to the rejection of claim 4 above.

Claim 17 is rejected using similar rationale to the rejection of claim 5 above.

Claim 18 is rejected using similar rationale to the rejection of claim 7 (or 8 or 9 or 10) above.

Claim 19 is rejected using similar rationale to the rejection of claim 11 above.

Claim 20 is rejected using similar rationale to the rejection of claim 12 above.

As to claim 21, Eidler discloses the method of claim 1, wherein the select storage units are associated with an address range associated with a plurality of storage units (addresses for the servers are collected and mapped to the SLA terms, see Eidler: Para. 0084 and 0148 – 0149; each server is a storage unit, and its address is the address range associated with the storage unit).

As to claim 22, Eidler discloses the storage network of claim 23, wherein the data is associated with a common identifier and the common identifier (data associated with customer identifiers and VM identifiers, see Eidler: Para. 0134 – 0136).

As to claim 23, Eidler discloses the storage network of claim 13, wherein the data is associated with a common identifier and the common identifier (data associated with customer identifiers and VM identifiers, see Eidler: Para. 0134 – 0136).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MARK E HERSHLEY, whose telephone number is (571) 270-7774. The examiner can normally be reached M-F, 9am-6pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Amy Ng, can be reached at (571) 270-1698.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MARK E HERSHLEY/
Primary Examiner, Art Unit 2164
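The integrity checks at issue in the rejection of claims 7, 9 and 10 (executing a hash function on the data, calculating a checksum, and comparing it to a previously stored checksum) reduce to one simple pattern. A minimal sketch of that pattern, assuming SHA-256 as the checksum function; the names and sample data are illustrative and appear in neither Applicant's claims nor Eidler:

```python
# Sketch of checksum-based data integrity checking as recited in claims
# 9-10: compute a checksum over stored data and compare it to a value
# recorded earlier. SHA-256 is an assumed choice; the claim language
# covers any checksum or hash function.
import hashlib

def checksum(data: bytes) -> str:
    """Hex digest serving as the data's integrity checksum."""
    return hashlib.sha256(data).hexdigest()

def integrity_ok(data: bytes, stored_checksum: str) -> bool:
    """Recompute the checksum and compare against the stored value."""
    return checksum(data) == stored_checksum

original = b"slice-0042 payload"     # illustrative stored data
stored = checksum(original)          # recorded at write time
print(integrity_ok(original, stored))        # intact data passes
print(integrity_ok(b"corrupted", stored))    # corruption is detected
```

A mismatch between the recomputed and stored checksums is what signals the data loss or corruption that the claims treat as a storage device failure.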

Prosecution Timeline

Aug 01, 2024
Application Filed
May 02, 2025
Non-Final Rejection — §102, §112, §DP
Jul 31, 2025
Response Filed
Oct 31, 2025
Final Rejection — §102, §112, §DP
Jan 30, 2026
Request for Continued Examination
Feb 08, 2026
Response after Non-Final Action
Feb 21, 2026
Non-Final Rejection — §102, §112, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602402: SYNCHRONOUS PROCESSING SYSTEMS AND METHODS WITH IN-MEMORY DATABASE (2y 5m to grant • Granted Apr 14, 2026)
Patent 12596719: SEARCH REQUEST PROCESSING (2y 5m to grant • Granted Apr 07, 2026)
Patent 12591627: ENHANCED AUTO-SUGGESTION FUNCTIONALITY (2y 5m to grant • Granted Mar 31, 2026)
Patent 12579164: SYNCING OBJECTS FOR MULTIDEVICE SYNCHRONIZATION (2y 5m to grant • Granted Mar 17, 2026)
Patent 12579205: CONTENT RECOMMENDATION METHOD AND APPARATUS, DEVICE, MEDIUM, AND PROGRAM PRODUCT (2y 5m to grant • Granted Mar 17, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 78%
With Interview: 97% (+18.5%)
Median Time to Grant: 3y 5m
PTA Risk: High
Based on 552 resolved cases by this examiner. Grant probability derived from career allow rate.
