DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
The response filed 10/07/2025 has been entered. Applicant has amended claims 1, 3, 4, 10, and 16. No claims have been added or cancelled. Claims 1-20 are currently pending in the instant application.
Response to Arguments
Applicant’s arguments, see pages 11-13, filed 10/07/2025, with respect to the rejection(s) of claim(s) 1-20 under 35 U.S.C. 102 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in further view of Chen et al. (US 2013/0339297). Chen teaches the amended limitations as seen in the current rejection below.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1-20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Mehta et al. (US 2020/0195719) in view of Chen et al. (US 2013/0339297).
Regarding claim 1, Mehta teaches a computer-implemented method comprising: maintaining, at a centralized cache device in a distributed data system, first metadata indicating that a first backend storage partition device of the distributed data system manages, at a first time, a given data record stored on the first backend storage partition device, the first metadata being pushed to the centralized cache device by the first backend storage partition device ([0081] Depending on context, the term “information management system” can refer to generally all of the illustrated hardware and software components in FIG. 1C, or the term may refer to only a subset of the illustrated components. For instance, in some cases, system 100 generally refers to a combination of specialized components used to protect, move, manage, manipulate, analyze, and/or process data and metadata generated by client computing devices 102.); receiving, at the centralized cache device, a second metadata from a second backend storage partition device of the distributed data system based on the second backend storage partition device pushing the second metadata to the centralized cache device, the second metadata indicating that the second backend storage partition device has received the given data record and currently manages the given data record ([0093] Primary data 112 stored on primary storage devices 104 may be compromised in some cases, such as when an employee deliberately or accidentally deletes or overwrites primary data 112. Or primary storage devices 104 can be damaged, lost, or otherwise corrupted. For recovery and/or regulatory compliance purposes, it is therefore useful to generate and maintain copies of primary data 112. Accordingly, system 100 includes one or more secondary storage computing devices 106 and one or more secondary storage devices 108 configured to create and store one or more secondary copies 116 of primary data 112 including its associated metadata. 
The secondary storage computing devices 106 and the secondary storage devices 108 may be referred to as secondary storage subsystem 118.); replacing, at the centralized cache device, the first metadata with the second metadata indicating that the second backend storage partition device currently manages the given data record; receiving, at the centralized cache device, a consistency check from the first backend storage partition device regarding management status of the given data record to determine current management of the given data record ([0096] Secondary storage computing devices 106 may index secondary copies 116 (e.g., using a media agent 144), enabling users to browse and restore at a later time and further enabling the lifecycle management of the indexed data. After creation of a secondary copy 116 that represents certain primary data 112, a pointer or other location indicia (e.g., a stub) may be placed in primary data 112, or be otherwise associated with primary data 112, to indicate the current location of a particular secondary copy 116. Since an instance of a data object or metadata in primary data 112 may change over time as it is modified by application 110 (or hosted service or the operating system), system 100 may create and manage multiple secondary copies 116 of a particular data object or metadata, each copy representing the state of the data object in primary data 112 at a particular point in time. Moreover, since an instance of a data object in primary data 112 may eventually be deleted from primary storage device 104 and the file system, system 100 may continue to manage point-in-time representations of that data object, even though the instance in primary data 112 no longer exists. 
); and in response to receiving the consistency check, providing the second metadata from the centralized cache device to the first backend storage partition device, wherein providing the second metadata to the first backend storage partition device causes the first backend storage partition device to replace the first metadata with the second metadata indicating that the second backend storage partition device currently manages the given data record, and to mark the given data record stored on the first backend storage partition device as inactive ([0104] Secondary copy data objects 134A-C can individually represent more than one primary data object. For example, secondary copy data object 134A represents three separate primary data objects 133C, 122, and 129C (represented as 133C′, 122′, and 129C′, respectively, and accompanied by corresponding metadata Meta11, Meta3, and Meta8, respectively). Moreover, as indicated by the prime mark (′), secondary storage computing devices 106 or other components in secondary storage subsystem 118 may process the data received from primary storage subsystem 117 and store a secondary copy including a transformed and/or supplemented representation of a primary data object and/or metadata that is different from the original format, e.g., in a compressed, encrypted, deduplicated, or other modified format. For instance, secondary storage computing devices 106 can generate new metadata or other information based on said processing, and store the newly generated information along with the secondary copies. Secondary copy data object 134B represents primary data objects 120, 133B, and 119A as 120′, 133B′, and 119A′, respectively, accompanied by corresponding metadata Meta2, Meta10, and Meta1, respectively. Also, secondary copy data object 134C represents primary data objects 133A, 119B, and 129A as 133A′, 119B′, and 129A′, respectively, accompanied by corresponding metadata Meta9, Meta5, and Meta6, respectively.).
Mehta does not explicitly teach the second metadata indicating that the second backend storage partition device has received the given data record and currently manages, at a second time after the first time, the given data record.
Chen teaches the second metadata indicating that the second backend storage partition device has received the given data record and currently manages, at a second time after the first time, the given data record ([0020] The method includes identifying, by the central operational data store, a last backup time of the set of operational data records from a source operational data store. The method includes calculating, by the central operational data store, a tolerance number based on an elapsed time that is indicative of a range of timestamps that can be processed at a same time such that it cannot be guaranteed that that operational data records with timestamps separated by less than the tolerance number were assigned timestamps in the order that the operational data records were created, modified, or both. The method includes calculating, by the central operational data store, a synchronization timestamp including subtracting the tolerance number from the last backup time. The method includes transmitting, by the central operational data store, the synchronization timestamp to the source operational data store to instruct the source operational data store to transmit any operational data records stored at the source operational data store with timestamps greater than the synchronization timestamp.)
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Mehta to include the second metadata indicating that the second backend storage partition device has received the given data record and currently manages, at a second time after the first time, the given data record, as taught by Chen. It would have been advantageous since it improves storage capacity and network bandwidth. The Data Management Virtualization system achieves these improvements by leveraging extended capabilities of modern storage systems, by tracking the portions of the data that have changed over time, and by data deduplication and compression algorithms that reduce the amount of data that needs to be copied and moved, as taught by Chen [0078].
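For clarity of the mapped limitations, the cache-update and consistency-check flow recited in claim 1 can be illustrated with a minimal hypothetical sketch. All class names, identifiers, and values below are illustrative assumptions for exposition only; they are not drawn from Mehta or Chen and form no part of the record.

```python
import time
from dataclasses import dataclass

@dataclass
class RecordMetadata:
    """Illustrative metadata entry: which partition manages a record, and when."""
    record_id: str
    manager_id: str
    timestamp: float

class CentralizedCache:
    """Hypothetical centralized cache device that partitions push metadata to."""
    def __init__(self):
        self._entries = {}  # record_id -> RecordMetadata

    def push(self, meta: RecordMetadata):
        # A later push replaces earlier metadata for the same record
        # (first metadata replaced by second metadata).
        self._entries[meta.record_id] = meta

    def consistency_check(self, record_id: str) -> RecordMetadata:
        # A partition asks which device currently manages the record.
        return self._entries[record_id]

class Partition:
    """Hypothetical backend storage partition device."""
    def __init__(self, device_id: str):
        self.device_id = device_id
        self.local_meta = {}   # record_id -> RecordMetadata
        self.inactive = set()  # records marked inactive locally

    def reconcile(self, cache: CentralizedCache, record_id: str):
        current = cache.consistency_check(record_id)
        if current.manager_id != self.device_id:
            # Replace stale local metadata and mark the record inactive.
            self.local_meta[record_id] = current
            self.inactive.add(record_id)
        return current

# Illustrative sequence: p1 manages rec-42 at a first time, p2 at a second time.
cache = CentralizedCache()
p1, p2 = Partition("p1"), Partition("p2")
first = RecordMetadata("rec-42", "p1", time.time())
p1.local_meta["rec-42"] = first
cache.push(first)                                             # first metadata
cache.push(RecordMetadata("rec-42", "p2", time.time() + 1))   # second replaces first
p1.reconcile(cache, "rec-42")                                 # consistency check by p1
```

After the consistency check, the first partition holds the second metadata and has marked its local copy of the record inactive, mirroring the final limitation of claim 1.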
Regarding claim 2, Mehta in view of Chen teaches The computer-implemented method of claim 1, Mehta further teaches wherein providing the second metadata to the first backend storage partition device further causes the first backend storage partition device to compare the second metadata to the first metadata stored at the first backend storage partition device to determine that the second backend storage partition device manages the given data record ([0103] FIG. 1B is a detailed view of some specific examples of primary data stored on primary storage device(s) 104 and secondary copy data stored on secondary storage device(s) 108, with other components of the system removed for the purposes of illustration. Stored on primary storage device(s) 104 are primary data 112 objects including word processing documents 119A-B, spreadsheets 120, presentation documents 122, video files 124, image files 126, email mailboxes 128 (and corresponding email messages 129A-C), HTML/XML or other types of markup language files 130, databases 132 and corresponding tables or other data structures 133A-133C. Some or all primary data 112 objects are associated with corresponding metadata (e.g., “Meta1-11”), which may include file system metadata and/or application-specific metadata. Stored on the secondary storage device(s) 108 are secondary copy 116 data objects 134A-C which may include copies of or may otherwise represent corresponding primary data 112.).
Regarding claim 3, Mehta in view of Chen teaches The computer-implemented method of claim 2, Chen further teaches wherein comparing a second timestamp of the second metadata with a first timestamp of the first metadata, the first timestamp corresponding to the first time when the first backend storage device manages the given data record, the second timestamp corresponding to a second time when the second backend storage device manages the given data record. ([0020] The method includes identifying, by the central operational data store, a last backup time of the set of operational data records from a source operational data store. The method includes calculating, by the central operational data store, a tolerance number based on an elapsed time that is indicative of a range of timestamps that can be processed at a same time such that it cannot be guaranteed that that operational data records with timestamps separated by less than the tolerance number were assigned timestamps in the order that the operational data records were created, modified, or both. The method includes calculating, by the central operational data store, a synchronization timestamp including subtracting the tolerance number from the last backup time. The method includes transmitting, by the central operational data store, the synchronization timestamp to the source operational data store to instruct the source operational data store to transmit any operational data records stored at the source operational data store with timestamps greater than the synchronization timestamp.)
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Mehta to include wherein comparing a second timestamp of the second metadata with a first timestamp of the first metadata, the first timestamp corresponding to the first time when the first backend storage device manages the given data record, the second timestamp corresponding to a second time when the second backend storage device manages the given data record, as taught by Chen. It would have been advantageous since it improves storage capacity and network bandwidth. The Data Management Virtualization system achieves these improvements by leveraging extended capabilities of modern storage systems, by tracking the portions of the data that have changed over time, and by data deduplication and compression algorithms that reduce the amount of data that needs to be copied and moved, as taught by Chen [0078].
Regarding claim 4, Mehta in view of Chen teaches The computer-implemented method of claim 3, Chen further teaches wherein the first backend storage partition device determines that the second backend storage partition device manages the given data record based on the second timestamp of the second metadata being newer than a first timestamp of the first metadata by at least a threshold time ([0417] Traditionally, operational data is synchronized by comparing data from source and target, adding data to target that exists only in source, deleting data that only exists in target, and update target data with data from source if they are different. Techniques are disclosed herein to replicate operational data. Different techniques can be used based on the number of operational data records. For example, a small set can simply be replaced each time synchronization occurs; as the data often changes and can be done quickly. For a medium set, both timestamps and record IDs can be used to synchronize the data (e.g., since the number of IDs is manageable, and can be used to indicate deletion information. For a large set, record IDs alone can be used to synchronize the data in conjunction with a tolerance number to account for a simultaneous processing window (e.g., since some operations cannot be guaranteed to occur prior to other operations). This is possible because large sets of data typically do not change once they are created, and is typically deleted based on some retention policy.).
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Mehta to include wherein the first backend storage partition device determines that the second backend storage partition device manages the given data record based on the second timestamp of the second metadata being newer than a first timestamp of the first metadata by at least a threshold time, as taught by Chen. It would have been advantageous since it improves storage capacity and network bandwidth. The Data Management Virtualization system achieves these improvements by leveraging extended capabilities of modern storage systems, by tracking the portions of the data that have changed over time, and by data deduplication and compression algorithms that reduce the amount of data that needs to be copied and moved, as taught by Chen [0078].
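The timestamp-threshold comparison recited in claims 3-4 can be sketched as follows. The function name and numeric values are illustrative assumptions only; the threshold loosely echoes the tolerance-number concept quoted from Chen [0020] and is not a quotation of either reference.

```python
def newer_manager(first_ts: float, second_ts: float, threshold: float) -> bool:
    """Return True when the second metadata's timestamp is newer than the
    first metadata's timestamp by at least the threshold time, i.e. the
    second device is determined to manage the record. The threshold
    tolerates clock skew and simultaneous-processing windows."""
    return (second_ts - first_ts) >= threshold

# Second metadata 5 seconds newer than the first, with a 2-second threshold:
# the second device is determined to be the current manager.
assert newer_manager(100.0, 105.0, threshold=2.0)

# Timestamps separated by less than the threshold: within the tolerance
# window, so management cannot be attributed to the second device.
assert not newer_manager(100.0, 101.0, threshold=2.0)
```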
Regarding claim 5, Mehta in view of Chen teaches The computer-implemented method of claim 1, Mehta further teaches wherein the consistency check comprises the first backend storage partition device verifying the management status of the given data record by: sending a request to the centralized cache device for metadata of the given data record being currently stored at the centralized cache device; and verifying whether the first backend storage partition device currently manages the given data record based on comparing the second metadata received from the centralized cache device to the first metadata stored locally at the first backend storage partition device ([0146] Media agent 144 is a component of system 100 and is generally directed by storage manager 140 in creating and restoring secondary copies 116. Whereas storage manager 140 generally manages system 100 as a whole, media agent 144 provides a portal to certain secondary storage devices 108, such as by having specialized features for communicating with and accessing certain associated secondary storage device 108. Media agent 144 may be a software program (e.g., in the form of a set of executable binary files) that executes on a secondary storage computing device 106. Media agent 144 generally manages, coordinates, and facilitates the transmission of data between a data agent 142 (executing on client computing device 102) and secondary storage device(s) 108 associated with media agent 144. For instance, other components in the system may interact with media agent 144 to gain access to data stored on associated secondary storage device(s) 108, (e.g., to browse, read, write, modify, delete, or restore data). 
Moreover, media agents 144 can generate and store information relating to characteristics of the stored data and/or metadata, or can generate and store other types of information that generally provides insight into the contents of the secondary storage devices 108—generally referred to as indexing of the stored secondary copies 116. Each media agent 144 may operate on a dedicated secondary storage computing device 106, while in other embodiments a plurality of media agents 144 may operate on the same secondary storage computing device 106.).
Regarding claim 6, Mehta in view of Chen teaches The computer-implemented method of claim 1, Mehta further teaches further comprising: receiving, from the first backend storage partition device, third metadata of the given data record created by the first backend storage partition device; and in response to receiving the third metadata from the first backend storage partition device, writing, at the centralized cache device, the first backend storage partition device as manager of the given data record. ([0141] Data agent 142 is a component of information system 100 and is generally directed by storage manager 140 to participate in creating or restoring secondary copies 116. Data agent 142 may be a software program (e.g., in the form of a set of executable binary files) that executes on the same client computing device 102 as the associated application 110 that data agent 142 is configured to protect. Data agent 142 is generally responsible for managing, initiating, or otherwise assisting in the performance of information management operations in reference to its associated application(s) 110 and corresponding primary data 112 which is generated/accessed by the particular application(s) 110. For instance, data agent 142 may take part in copying, archiving, migrating, and/or replicating of certain primary data 112 stored in the primary storage device(s) 104. Data agent 142 may receive control information from storage manager 140, such as commands to transfer copies of data objects and/or metadata to one or more media agents 144. Data agent 142 also may compress, deduplicate, and encrypt certain primary data 112, as well as capture application-related metadata before transmitting the processed data to media agent 144. 
Data agent 142 also may receive instructions from storage manager 140 to restore (or assist in restoring) a secondary copy 116 from secondary storage device 108 to primary storage 104, such that the restored data may be properly accessed by application 110 in a suitable format as though it were primary data 112.)
Regarding claim 7, Mehta in view of Chen teaches The computer-implemented method of claim 1, Mehta further teaches wherein the given data record from the first backend storage partition device is moved to the second backend storage partition device, and wherein the computer-implemented method further comprises: receiving the second metadata of the given data record from the second backend storage partition device, the second metadata being based on the given data record moving from the first backend storage partition device to the second backend storage partition device; and in response to receiving the second metadata from the second backend storage partition device, updating the centralized cache device with the second metadata including writing the second backend storage partition device as manager of the given data record.
Regarding claim 8, Mehta in view of Chen teaches The computer-implemented method of claim 7, Mehta further teaches wherein the second metadata is received in connection with updated timestamps of the given data record.
Regarding claim 9, Mehta in view of Chen teaches The computer-implemented method of claim 1, Mehta further teaches further comprising: receiving, at the centralized cache device, a query from a computing device requesting information associated with the given data record; determining, by the centralized cache device, that the second backend storage partition device is manager of the given data record based on the given data record being associated with the second backend storage partition device in a data record mapping table; and in response to the query, directing the query to the second backend storage partition device for fulfillment of the query. ([0141] Data agent 142 is a component of information system 100 and is generally directed by storage manager 140 to participate in creating or restoring secondary copies 116. Data agent 142 may be a software program (e.g., in the form of a set of executable binary files) that executes on the same client computing device 102 as the associated application 110 that data agent 142 is configured to protect. Data agent 142 is generally responsible for managing, initiating, or otherwise assisting in the performance of information management operations in reference to its associated application(s) 110 and corresponding primary data 112 which is generated/accessed by the particular application(s) 110. For instance, data agent 142 may take part in copying, archiving, migrating, and/or replicating of certain primary data 112 stored in the primary storage device(s) 104. Data agent 142 may receive control information from storage manager 140, such as commands to transfer copies of data objects and/or metadata to one or more media agents 144. Data agent 142 also may compress, deduplicate, and encrypt certain primary data 112, as well as capture application-related metadata before transmitting the processed data to media agent 144. 
Data agent 142 also may receive instructions from storage manager 140 to restore (or assist in restoring) a secondary copy 116 from secondary storage device 108 to primary storage 104, such that the restored data may be properly accessed by application 110 in a suitable format as though it were primary data 112.)
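The query-routing limitation recited in claim 9 (determining the managing partition from a data record mapping table and directing the query there) can be illustrated with a brief hypothetical sketch. The mapping-table structure, names, and values are assumptions for exposition, not taken from Mehta.

```python
# Hypothetical data record mapping table maintained at the centralized
# cache device: record identifier -> managing backend partition device.
record_map = {"rec-42": "partition-2", "rec-7": "partition-1"}

def route_query(record_id: str) -> str:
    """Determine the current manager of the record from the mapping table
    and return the partition to which the query should be directed."""
    return record_map[record_id]

# A query for rec-42 is directed to its current manager for fulfillment.
manager = route_query("rec-42")
```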
Claims 10-20 are rejected using similar reasoning seen in the rejection of claims 1-9 due to reciting similar limitations but directed towards different statutory categories.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SAMUEL SHARPLESS whose telephone number is (571) 272-1521. The examiner can normally be reached M-F, 7:30 AM - 3:30 PM (ET).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, ALEKSANDR KERZHNER can be reached at 571-270-1760. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/S.C.S./Examiner, Art Unit 2165
/ALEKSANDR KERZHNER/Supervisory Patent Examiner, Art Unit 2165