Prosecution Insights
Last updated: April 19, 2026
Application No. 19/058,312

METHOD, DEVICE AND STORAGE MEDIUM FOR DEDUPLICATION OF OBJECT STORAGE SYSTEM

Status: Final Rejection (§103)
Filed: Feb 20, 2025
Examiner: MARI VALCARCEL, FERNANDO MARIANO
Art Unit: 2159
Tech Center: 2100 — Computer Architecture & Software
Assignee: BEIJING VOLCANO ENGINE TECHNOLOGY CO., LTD.
OA Round: 2 (Final)

Grant Probability: 49% (Moderate)
OA Rounds: 3-4
To Grant: 3y 10m
With Interview: 71%

Examiner Intelligence

Career Allow Rate: 49% (71 granted / 145 resolved; -6.0% vs TC avg)
Interview Lift: +22.0% (strong; resolved cases with interview vs. without)
Typical Timeline: 3y 10m avg prosecution; 40 currently pending
Career History: 185 total applications across all art units

Statute-Specific Performance

§101: 13.5% (-26.5% vs TC avg)
§103: 66.1% (+26.1% vs TC avg)
§102: 13.2% (-26.8% vs TC avg)
§112: 5.1% (-34.9% vs TC avg)

Compared against the Tech Center average estimate; based on career data from 145 resolved cases.
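The headline figures above are internally consistent. A quick check, assuming the career allow rate is simply granted divided by resolved and that "vs TC avg" is a plain percentage-point difference (both are assumptions about how the tool computes its numbers):

```python
# Sanity check of the dashboard arithmetic above (assumptions noted in the lead-in).
granted, resolved = 71, 145
allow_rate = granted / resolved * 100   # displayed as the 49% career allow rate
tc_avg = allow_rate + 6.0               # "-6.0% vs TC avg" implies a ~55% TC average

print(round(allow_rate, 1))  # 49.0
print(round(tc_avg, 1))      # 55.0
```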

Office Action (§103)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

This action is in response to applicant's arguments and amendments filed 2/09/2026, which are in response to the USPTO Office Action mailed 11/07/2025. Applicant's arguments have been considered with the results that follow: THIS ACTION IS MADE FINAL.

Priority

Acknowledgment is made of applicant's claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy has been filed in parent Application No. CN202410841039.X, filed on 6/26/2024.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 10/15/2025 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 4-6, 13-14 and 17-20 are rejected under 35 U.S.C. 103 as being unpatentable over Marelas (US PGPUB No. 2019/0332597; Pub. Date: Oct. 31, 2019) in view of Qiu et al. (US PGPUB No. 2023/0062644; Pub. Date: Mar. 2, 2023).
Regarding independent claim 1, Marelas discloses a method for deduplication of an object storage system, comprising:

determining target granularity for deduplication of the object storage system, See Paragraphs [0022] & [0023], (Disclosing a system for managing deduplication of user data. The system may perform user-specific deduplication via the creation of user-level deduplication domains shared by multiple users. Deduplication domains may be determined based on content salts associated with a user, i.e. determining target granularity for deduplication of the object storage system)

the target granularity being object granularity or slice granularity, See Paragraph [0023], (The user-level deduplication domains allow for deduplication to be performed at any desired granularity, which includes user object-level deduplication, i.e. the target granularity being object granularity (e.g. the system determines a user-level deduplication domain and carries out deduplication at any desired granularity, which includes user object-level).)

The examiner notes that the limitation includes selecting one of two mutually exclusive granularities. Marelas explicitly discloses the use of object granularity and therefore teaches the limitation above.

wherein the object granularity takes entire actual data of an object as a data processing unit, and the slice granularity takes slice data corresponding to the actual data of the object as a data processing unit; See Paragraph [0071], (Data may be maintained in a cloud system that supports tiering using content salted deduplication domains. This enables objects to live or die in cloud object stores as atomic units, i.e. wherein the object granularity takes entire actual data of an object as a data processing unit (e.g. object-level deduplication is applied to user objects of the user-specific domains).)
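As the examiner characterizes it, the claimed method amounts to hashing each unit at the chosen granularity, keeping the first occurrence as reference data, soft-linking later duplicates to it, and reclaiming their space. A minimal sketch at object granularity (all names are invented for illustration; this is not code from the application or the cited references):

```python
import hashlib

def deduplicate(objects):
    """Sketch of the claimed flow at object granularity: hash each object
    (the "first check value"), keep the first occurrence as reference
    data, soft-link later duplicates to it, and reclaim their space."""
    first_set = {}           # first check value -> ID of the reference data
    metadata = {}            # per-object metadata; a soft link names the reference
    storage = dict(objects)  # simulated storage space (object ID -> bytes)

    for obj_id, data in objects.items():
        check = hashlib.sha256(data).hexdigest()
        if check in first_set:
            # Duplicate: retain metadata with a soft link, delete actual data.
            metadata[obj_id] = {"soft_link": first_set[check]}
            del storage[obj_id]
        else:
            # First occurrence becomes the reference data.
            first_set[check] = obj_id
            metadata[obj_id] = {"soft_link": None}
    return storage, metadata

storage, meta = deduplicate({"a": b"hello", "b": b"hello", "c": b"world"})
# "b" is reclaimed; a read of "b" would follow its soft link to "a"
```

At slice granularity the same loop would run over fixed- or variable-length slices of each object rather than whole objects.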
The examiner notes that the limitation above referring to the "slice granularity" is optional because the use of a slice granularity itself is optional, as indicated in the previous limitation (i.e. object granularity or slice granularity). Marelas describes a deduplication process using object granularity, which therefore discloses the limitation above.

performing duplicate data screening on each data of target granularity in the object storage system; See Paragraph [0036], (Deduplication server 250 includes deduplication application 252 configured to perform deduplication services with respect to backed up client data. Note [0023] wherein deduplication may be performed over a desired granularity, which includes object-level granularity, i.e. performing duplicate data screening on each data of target granularity in the object storage system;)

and adding, by taking one data in any group of duplicate data as reference data, a soft link to metadata of other data in any group of the duplicate data except the reference data to point to the reference data, and recycling storage space of the other data. See Paragraph [0077], (The method comprises appending user salt to file objects. Note [0042] wherein salt 604 may be based on any piece of information or combination of pieces of information that uniquely identify a particular user, i.e. and adding, by taking one data in any group of duplicate data as reference data, a soft link to metadata of other data in any group of the duplicate data except the reference data to point to the reference data (e.g. the appending adds salt, i.e. metadata, to a data object, wherein the salt comprises a piece(s) of information that uniquely identify a user).) See Paragraph [0078], (New data chunks may be copied to a cloud store 778 and marked as dead so they may be removed from local storage.
The storage space may then be reclaimed by a garbage collection process performed by the deduplication system, such as by dumping copied chunks to a local container 780 as illustrated in FIG. 6, i.e. recycling storage space of the other data.)

Marelas does not disclose the step of recycling storage space of the other data by deleting actual data of the other data, with the metadata of the other data retained, wherein the reference data is found through the soft link when the other data is searched based on the metadata of the other data.

Qiu discloses the step of recycling storage space of the other data by deleting actual data of the other data, with the metadata of the other data retained, See Paragraph [0058], (Disclosing a storage system configured to ingest data from a source system. Storage system 112 may perform partial in-line deduplication wherein, if a corresponding chunk identifier is stored in a chunk metadata data structure, then storage system 112 stores a reference to a storage location of the already stored data chunk and deletes the data chunk from memory, i.e. recycling storage space of the other data by deleting actual data of the other data, with the metadata of the other data retained)

wherein the reference data is found through the soft link when the other data is searched based on the metadata of the other data. See Paragraph [0021], (The storage system may maintain a chunk metadata data structure in a metadata store that indicates data chunks that are already stored by the storage system. When a storage node receives a request for an object, the storage node utilizes the chunk metadata data structure to look up a storage location for data chunks associated with an object. Note [0058] wherein the system stores a reference to a storage location of the already stored data chunk and deletes the data chunk from memory, i.e. wherein the reference data is found through the soft link when the other data is searched based on the metadata of the other data (e.g. the chunk metadata data structure is used to retrieve objects; for a deduplicated object, the system may use a reference to a storage location to locate a previously stored data object as opposed to retrieving a duplicate object).)

Marelas and Qiu are analogous art because they are in the same field of endeavor, data deduplication. It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the system of Marelas to include the method of deduplicating data using a metadata data structure as disclosed by Qiu. Paragraph [0034] of Qiu discloses that the process of partial post-processing deduplication reduces the bottleneck associated with in-line deduplication and allows the system to perform partial post-processing deduplication at a time when the storage system has sufficient resources to do so without affecting the performance of one or more other processes.

Regarding dependent claim 4, as discussed above with claim 1, Marelas-Qiu discloses all of the limitations. Marelas further discloses the step wherein performing the duplicate data screening on each data of the target granularity in the object storage system comprises: obtaining a first check value of each data of the target granularity in the object storage system, and screening out data with the same first check value from the object storage system to determine as the duplicate data. See Paragraph [0022], (The deduplication process may identify duplicate data by comparing a new hash of a data chunk with existing hashes for the same user. If the new and existing hashes match, then the data is considered duplicate data and may be eliminated, i.e. obtaining a first check value (e.g. the hash value) of each data of the target granularity in the object storage system, and screening out data with the same first check value from the object storage system to determine as the duplicate data (e.g. the hash match indicates duplicate data to be eliminated).)

Regarding dependent claim 5, as discussed above with claim 4, Marelas-Qiu discloses all of the limitations. Marelas further discloses the step wherein obtaining the first check value of each data of the target granularity in the object storage system and screening out the data with the same first check value from the object storage system to determine as the duplicate data comprises:

traversing each data of the target granularity in the object storage system, See Paragraph [0022], (Deduplication is performed at a user-level in the deduplication environment or domain.) See Paragraph [0077], (The combination of salts and file objects may be hashed, and the new chunks from the hashing process may then be deposited in a container, which can then be copied by a deduplication server and/or application to cloud store 778, i.e. traversing each data of the target granularity in the object storage system (e.g. the plurality of chunks generated for a user are subjected to the deduplication process).)

obtaining the first check value of the currently traversed data in a traversal process, and determining that the first check value of the currently traversed data matches a historical first check value in a first set, See Paragraph [0022], (The deduplication process may identify duplicate data by comparing a new hash of a data chunk with existing hashes for the same user. If the new and existing hashes match, then the data is considered duplicate data and may be eliminated, i.e. obtaining the first check value of the currently traversed data in a traversal process (e.g. obtaining the new hash), and determining that the first check value of the currently traversed data matches a historical first check value in a first set (e.g. obtaining an existing hash for matching).)

wherein the first set stores each historical first check value obtained for the first time in the traversal process and an identification of corresponding data; See Paragraph [0022], (The deduplication process compares a new hash with existing hashes for a particular user. Note [0043] wherein a hash 606 may be generated once a user-specific salt is created, which is then associated to a data chunk 602.) See Paragraph [0046], (A hash 606 may be associated with a container 612 that includes chunks 602 from which the hashes 606 were derived, i.e. wherein the first set stores each historical first check value obtained for the first time in the traversal process (e.g. existing hashes are generated previously for chunks when the user-specific salt 604 is generated) and an identification of corresponding data (e.g. the user-specific salt + chunk combination identifies a unique chunk for a particular user).)

in response to determining that there is a historical first check value in the first set that is the same as the first check value of the currently traversed data, determining the currently traversed data to be the duplicate data; or in response to determining that there is no historical first check value in the first set that is the same as the first check value of the currently traversed data, determining the currently traversed data not to be the duplicate data, and taking the first check value of the currently traversed data as the historical first check value, and storing it in the first set in association with the identification of the currently traversed data. See Paragraph [0022], (If the new and existing hashes match, then the data is considered duplicate data and may be eliminated, i.e. in response to determining that there is a historical first check value in the first set that is the same as the first check value of the currently traversed data, determining the currently traversed data to be the duplicate data;)

Regarding dependent claim 6, as discussed above with claim 5, Marelas-Qiu discloses all of the limitations. Marelas further discloses the step wherein adding, by taking one data in any group of the duplicate data as the reference data, a soft link in the metadata of other data in any group of the duplicate data except the reference data to point to the reference data, and recycling the storage space of the other data comprises: adding, by taking the data corresponding to the first check value in the first set that is the same as the first check value of the currently traversed data as the reference data, the soft link to the metadata of the currently traversed data to point to the reference data, and recycling the storage space of the currently traversed data. See Paragraph [0077], (The method comprises appending user salt to file objects. Note [0042] wherein salt 604 may be based on any piece of information or combination of pieces of information that uniquely identify a particular user.) See Paragraph [0078], (New data chunks may be copied to a cloud store 778 and marked as dead so they may be removed from local storage. The storage space may then be reclaimed by a garbage collection process performed by the deduplication system, such as by dumping copied chunks to a local container 780 as illustrated in FIG. 6, i.e. recycling storage space of the other data, i.e. adding, by taking the data corresponding to the first check value in the first set that is the same as the first check value of the currently traversed data as the reference data (e.g. Note [0077] wherein additional salt may be appended before or after the user salt is associated with the file object.
The combination is then hashed and copied to a deduplication server where the deduplication process of matching new and existing hashes is performed as described in [0022]), the soft link to the metadata of the currently traversed data to point to the reference data (e.g. the appending adds salt, i.e. metadata, to a data object, wherein the salt comprises a piece(s) of information that uniquely identify a user), and recycling the storage space of the currently traversed data.

Regarding dependent claim 13, as discussed above with claim 1, Marelas-Qiu discloses all of the limitations. Marelas further discloses the step wherein, before determining the target granularity for the deduplication of the object storage system, the method further comprises: determining a target range for the deduplication in the object storage system to perform the deduplication within the target range; See Paragraph [0022], (Deduplication is performed at a user level by matching a new hash for a chunk generated by a user with existing hashes for the same user, i.e. determining a target range (e.g. the objects subject to the deduplication process are those that are associated with a particular user) for the deduplication in the object storage system to perform the deduplication within the target range;)

and accordingly, determining the target granularity for the deduplication of the object storage system comprises: determining the target granularity for the deduplication within the target range. See Paragraph [0023], (The process enables creation of user-level deduplication domains within a deduplication environment shared by multiple users. The system may then perform the deduplication at any desired granularity, which includes user object level, i.e. accordingly, determining the target granularity for the deduplication of the object storage system comprises: determining the target granularity for the deduplication within the target range (e.g. the deduplication process is performed at object level over objects of the user-level deduplication domains).)

Regarding independent claim 14, the claim is analogous to the subject matter of independent claim 1 directed to a device or apparatus and is rejected under similar rationale.

Regarding dependent claim 17, the claim is analogous to the subject matter of dependent claim 4 directed to a device or apparatus and is rejected under similar rationale.

Regarding dependent claim 18, the claim is analogous to the subject matter of dependent claim 5 directed to a device or apparatus and is rejected under similar rationale.

Regarding dependent claim 19, the claim is analogous to the subject matter of dependent claim 6 directed to a device or apparatus and is rejected under similar rationale.

Regarding independent claim 20, the claim is analogous to the subject matter of independent claim 1 directed to a non-transitory computer readable medium and is rejected under similar rationale.

Claims 2-3 and 15-16 are rejected under 35 U.S.C. 103 as being unpatentable over Marelas in view of Qiu as applied to claim 1 above, and further in view of Malladi et al. (US PGPUB No. 2018/0210659; Pub. Date: Jul. 26, 2018).

Regarding dependent claim 2, as discussed above with claim 1, Marelas-Qiu discloses all of the limitations. Marelas-Qiu does not disclose the step wherein determining the target granularity for deduplication of the object storage system comprises: determining a first benefit for deduplication of the object storage system at the object granularity and a second benefit for deduplication of the object storage system at the slice granularity; and determining the target granularity for the deduplication of the object storage system according to the first benefit and the second benefit.
Malladi further discloses the step wherein determining the target granularity for deduplication of the object storage system comprises: determining a first benefit for deduplication of the object storage system at the object granularity and a second benefit for deduplication of the object storage system at the slice granularity; See Paragraphs [0059]-[0060], (Disclosing a system for dynamically selecting deduplication granularity in a memory system. The system may compare deduplication granularities in terms of increases/decreases in overhead. For example, a 32-byte integer deduplication granularity results in 20% additional overhead when compared to a 64-byte deduplication granularity, i.e. wherein determining the target granularity for deduplication of the object storage system comprises: determining a first benefit for deduplication of the object storage system at the object granularity and a second benefit for deduplication of the object storage system at the slice granularity (e.g. the granularities of Malladi may be compared based on overhead metrics. In [0060], a comparison is made based on the additional overhead incurred by a 32-byte deduplication granularity vs. a 64-byte deduplication granularity. A lower overhead would be considered a benefit of a particular granularity selection).)

and determining the target granularity for the deduplication of the object storage system according to the first benefit and the second benefit. See FIG. 2 & Paragraph [0062], (FIG. 2 illustrates four methods for selecting a deduplication granularity at the application for a memory system such as memory system 100 of FIG. 1. Note [0060] wherein the system provides dynamic, selectable granularity with adaptive behavior, i.e. determining the target granularity for the deduplication of the object storage system according to the first benefit and the second benefit (e.g. the deduplication granularity is selected based on a comparison of additional overhead as described in [0060]).)
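The benefit-driven selection mapped above reduces to a simple comparison. A sketch in the shape recited by claims 2-3 (the function name, the numeric benefit values, and the threshold are all invented for illustration; neither the application nor Malladi is quoted here):

```python
def choose_granularity(object_benefit, slice_benefit, threshold):
    """Benefit comparison in the shape recited by claims 2-3: prefer
    object granularity when the two benefits are close (difference below
    the preset threshold), otherwise take the granularity with the
    larger benefit."""
    if abs(object_benefit - slice_benefit) < threshold:
        return "object"
    return "object" if object_benefit >= slice_benefit else "slice"

# e.g. slice-level dedup reclaims far more space than object-level:
choose_granularity(object_benefit=0.10, slice_benefit=0.35, threshold=0.05)  # "slice"
```

Defaulting to object granularity when the benefits are close mirrors the claim's preference for the cheaper processing unit when finer slicing buys little.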
Marelas, Qiu and Malladi are analogous art because they are in the same field of endeavor, deduplication. It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the system of Marelas-Qiu to include the method of selecting a more efficient deduplication granularity according to system resource metrics as disclosed by Malladi. Paragraph [0009] of Malladi discloses that the plurality of algorithms described allow systems to reduce the overhead of auxiliary data structures in deduplication-based memory systems while also providing effective deduplication ratios for different applications.

Regarding dependent claim 3, as discussed above with claim 2, Marelas-Qiu-Malladi discloses all of the limitations. Malladi further discloses the step wherein determining the target granularity for the deduplication of the object storage system according to the first benefit and the second benefit comprises: in response to a difference between the first benefit and the second benefit being less than a preset threshold, determining the object granularity as the target granularity; or in response to the difference between the first benefit and the second benefit being not less than the preset threshold, determining a granularity corresponding to a maximum benefit between the first benefit and the second benefit as the target granularity. See Paragraphs [0059]-[0060], (An example is provided wherein the system may determine that a 32-byte integer deduplication granularity results in 20% additional overhead when compared to a 64-byte deduplication granularity. The objective of the system is to reduce the overhead for deduplication-based memory systems; therefore, the method would select a deduplication granularity having the lowest overhead, i.e. in response to the difference between the first benefit and the second benefit being not less than the preset threshold (e.g. the difference between 32-byte deduplication and 64-byte deduplication is that the 32-byte deduplication adds a 20% additional overhead; the overhead represents a benefit threshold), determining a granularity corresponding to a maximum benefit between the first benefit and the second benefit as the target granularity (e.g. the system selects the granularity with the most efficient characteristics, in this case minimizing overhead, which would be a maximized benefit of selecting the 64-byte deduplication granularity).)

Marelas, Qiu and Malladi are analogous art because they are in the same field of endeavor, deduplication. It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the system of Marelas-Qiu to include the method of selecting a more efficient deduplication granularity according to system resource metrics as disclosed by Malladi. Paragraph [0009] of Malladi discloses that the plurality of algorithms described allow systems to reduce the overhead of auxiliary data structures in deduplication-based memory systems while also providing effective deduplication ratios for different applications.

Regarding dependent claim 15, the claim is analogous to the subject matter of dependent claim 2 directed to a device or apparatus and is rejected under similar rationale.

Regarding dependent claim 16, the claim is analogous to the subject matter of dependent claim 3 directed to a device or apparatus and is rejected under similar rationale.

Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Marelas in view of Qiu as applied to claim 6 above, and further in view of DE SMET (International Publication No. WO 2020139731 A1; International Pub. Date: July 2, 2020).

Regarding dependent claim 7, as discussed above with claim 6, Marelas-Qiu discloses all of the limitations.
Marelas further discloses the step wherein, in response to the target granularity being the object granularity, traversing each data of the target granularity in the object storage system and obtaining the first check value of the currently traversed data in the traversal process comprises: traversing each object in the object storage system, See Paragraph [0022], (Deduplication is performed at a user-level in the deduplication environment or domain.) See Paragraph [0077], (The combination of salts and file objects may be hashed, and the new chunks from the hashing process may then be deposited in a container, which can then be copied by a deduplication server and/or application to cloud store 778, i.e. traversing each object in the object storage system (e.g. the plurality of chunks generated for a user are subjected to the deduplication process).)

Marelas-Qiu does not disclose the step of obtaining a second check value of the currently traversed object from metadata of the currently traversed object in the traversal process, and querying a number of occurrence of the second check value of the currently traversed object from a second set, wherein the second set includes each of second check values of all objects in the object storage system and a corresponding number of occurrence; and in response to the number of occurrence of the second check value of the currently traversed object being greater than one, obtaining the actual data of the currently traversed object, and obtaining the first check value of the actual data of the currently traversed object.
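The two-stage screening just recited (a cheap second check value from metadata deciding which objects are worth fully hashing) can be sketched as follows. All names are invented, and using the object size as the second check value is an assumption made for illustration, not something the claim or the references specify:

```python
from collections import Counter
import hashlib

def screen_with_prefilter(objects):
    """Two-stage screening: count a cheap second check value first, then
    compute the expensive first check value (a content hash) only for
    objects whose second check value occurs more than once."""
    # Second set: second check value (here, object size) -> number of occurrences.
    second_set = Counter(len(data) for data in objects.values())

    duplicates, first_set = [], {}
    for obj_id, data in objects.items():
        if second_set[len(data)] <= 1:
            continue  # unique second check value: cannot be a duplicate
        check = hashlib.sha256(data).hexdigest()
        if check in first_set:
            duplicates.append((obj_id, first_set[check]))  # (duplicate, reference)
        else:
            first_set[check] = obj_id
    return duplicates

screen_with_prefilter({"a": b"xyz", "b": b"xyz", "c": b"longer data"})
# -> [('b', 'a')]; "c" has a unique size and is never hashed
```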
DE SMET discloses the step of obtaining a second check value of the currently traversed object from metadata of the currently traversed object in the traversal process, and querying a number of occurrence of the second check value of the currently traversed object from a second set, wherein the second set includes each of second check values of all objects in the object storage system and a corresponding number of occurrence; See Paragraph [0028], (Disclosing a system for deduplicating data objects by performing a bottom-up deduplication such that objects at hierarchically lower levels of a data structure are deduplicated first. The system performs value-based deduplication for data objects. The system may perform passes through a collection of objects in a global manner based on a frequency-of-occurrence-of-objects analysis.) See Paragraph [0084], (Frequency analysis may be employed to improve the deduplication process by determining a cardinality of different types of objects, i.e. obtaining a second check value of the currently traversed object from metadata of the currently traversed object in the traversal process, and querying a number of occurrence of the second check value of the currently traversed object from a second set, wherein the second set includes each of second check values of all objects in the object storage system and a corresponding number of occurrence (e.g. via the frequency analysis that is performed for every object of every type).)

and in response to the number of occurrence of the second check value of the currently traversed object being greater than one, obtaining the actual data of the currently traversed object, and obtaining the first check value of the actual data of the currently traversed object.
See Paragraph [0028], (Value-based deduplication is executed upon analyzing a candidate set of objects in a global manner based on a frequency-of-occurrence-of-objects analysis as in [0084]; the information retrieved by the analysis is associated with the objects to which the deduplication is directed, i.e. and in response to the number of occurrence of the second check value of the currently traversed object being greater than one, obtaining the actual data of the currently traversed object, and obtaining the first check value of the actual data of the currently traversed object (e.g. Note [0072]-[0073] wherein a dictionary may be used to determine equality between objects being subjected to a deduplication process. The dictionary may rely on a hash code to look up duplicates, i.e. the first check value).)

Marelas, Qiu and DE SMET are analogous art because they are in the same field of endeavor, deduplication processes. It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the system of Marelas-Qiu to include the method of performing a frequency analysis as part of a deduplication process as disclosed by DE SMET. Paragraph [0018] of DE SMET discloses that the use of a dictionary allows for quick and efficient discovery of duplicate data objects by identifying other data objects at the same level of a hierarchical data structure as determined by the dictionary, of the same type as determined by the dictionary, and having the same values as those present in a dictionary entry.

Claims 8 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Marelas in view of Qiu as applied to claim 6 above, and further in view of Wei et al. (US Patent No. 10,359,939; Date of Patent: Jul. 23, 2019).

Regarding dependent claim 8, as discussed above with claim 6, Marelas-Qiu discloses all of the limitations.
Marelas-Qiu does not disclose the step wherein, in response to the target granularity being the slice granularity, traversing each data of the target granularity in the object storage system and obtaining the first check value of the currently traversed data in the traversal process comprises: traversing each object in the object storage system, obtaining the actual data of the currently traversed object in the traversal process, and segmenting the actual data of the currently traversed object to obtain slice data; and obtaining the first check value of the slice data of the currently traversed object.

Wei discloses the step wherein, in response to the target granularity being the slice granularity, traversing each data of the target granularity in the object storage system and obtaining the first check value of the currently traversed data in the traversal process comprises: traversing each object in the object storage system, See Col. 4, lines 23-27, (Disclosing a system for data object processing including dividing a data object into one or more blocks as part of a deduplication process such that the data object may be stored or transmitted on the premise that storage space utilization is reduced without incurring data loss, i.e. a slice granularity for a deduplication process.)

obtaining the actual data of the currently traversed object in the traversal process, and segmenting the actual data of the currently traversed object to obtain slice data; See Col. 4, lines 23-27, (A data object may be divided into data chunks which may be used as a unit of data deduplication such that the data object may be stored or transmitted on the premise that storage space utilization is reduced without incurring data loss.) See FIG. 2, (FIG. 2 illustrates a method for processing data objects comprising step 26 wherein a data object is split into chunking subsequences having an expected length according to a chunking policy mapping table.
The method repeats steps 22-29 until there are no more data objects to process, i.e. obtaining the actual data of the currently traversed object in the traversal process (e.g. step 22 wherein a data object is input), and segmenting the actual data of the currently traversed object to obtain slice data; and obtaining the first check value of the slice data of the currently traversed object. See FIG. 2, (FIG. 2 illustrates the method comprising step 28 of calculating a hash fingerprint for each chunk, i.e. obtaining the first check value of the slice data of the currently traversed object.)

Marelas, Qiu and Wei are analogous art because they are in the same field of endeavor, deduplication systems. It would have been obvious to one having ordinary skill in the art before the effective filing date to modify the system of Marelas-Qiu to include the method of processing data objects as chunks as disclosed by Wei. Col. 11, lines 28-34 of Wei disclose that a data object only needs to be scanned once in order to divide said object into data chunks. This represents an optimization by saving system resources and improving the processing efficiency of a data object. The examiner notes that while Marelas discloses object-level granularity, Marelas also states that any desired granularity may be used to perform the method, which may therefore include the data chunking method of Wei.

Regarding dependent claim 11: As discussed above with respect to claim 8, Marelas-Qiu-Wei discloses all of the limitations.

Wei further discloses the step wherein segmenting the actual data of the currently traversed object to obtain the slice data comprises: segmenting the actual data of each object based on an un-fixed length segmentation method of content; or segmenting the actual data of each object using a fixed-length segmentation method. See Col. 6, lines 44-48, (The method of generating data chunks from a data object comprises calculating multiple groups of candidate chunk boundaries with different expected lengths and dividing the data object into one or more variable-length blocks using one of the multiple groups of candidate chunk boundaries, i.e. segmenting the actual data of each object based on an un-fixed length segmentation method of content.)

Marelas, Qiu and Wei are analogous art because they are in the same field of endeavor, deduplication systems. It would have been obvious to one having ordinary skill in the art before the effective filing date to modify the system of Marelas-Qiu to include the method of processing data objects as chunks as disclosed by Wei. Col. 11, lines 28-34 of Wei disclose that a data object only needs to be scanned once in order to divide said object into data chunks. This represents an optimization by saving system resources and improving the processing efficiency of a data object.

Claim(s) 9-10 is/are rejected under 35 U.S.C. 103 as being unpatentable over Marelas in view of Qiu and Wei as applied to claim 8 above, and further in view of Shilane et al. (US PGPUB No. 2023/0376461; Pub. Date: Nov. 23, 2023).

Regarding dependent claim 9: As discussed above with respect to claim 8, Marelas-Qiu-Wei discloses all of the limitations.

Marelas further discloses the step wherein in response to determining that there is no historical first check value in the first set that is the same as the first check value of the currently traversed data, determining the currently traversed data not to be the duplicate data, See Paragraph [0022], (The deduplication process may identify duplicate data by comparing a new hash of a data chunk with existing hashes for the same user. If the new and existing hashes match then the data is considered duplicate data and may be eliminated, i.e. determining the currently traversed data not to be the duplicate data (e.g. the system may determine whether or not a match exists between the new hash and existing hashes).)

Marelas-Qiu-Wei does not disclose the step of storing, by taking the first check value of the currently traversed data as the historical first check value, the first check value in the first set in association with the identification of the currently traversed data, comprises: in response to determining that there is no historical first check value in the first set that is the same as the first check value of any slice data of the currently traversed object, determining any of the slice data not to be duplicate data, and rewriting any of the slice data into the object storage system as new slice data, and storing, by taking the first check value of the new slice data as the historical first check value, the first check value in the first set in association with an identification of the new slice data.

Shilane discloses the step of storing, by taking the first check value of the currently traversed data as the historical first check value, the first check value in the first set in association with the identification of the currently traversed data, comprises: in response to determining that there is no historical first check value in the first set that is the same as the first check value of any slice data of the currently traversed object, determining any of the slice data not to be duplicate data, and rewriting any of the slice data into the object storage system as new slice data, and storing, by taking the first check value of the new slice data as the historical first check value, the first check value in the first set in association with an identification of the new slice data. See Paragraph [0032], (Disclosing a system for processing a stream of fingerprints corresponding to data segments of a data file for deduplication.)
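Marelas's hash comparison ([0022]), mapped above to the "not duplicate data" determination, reduces to a membership test against the hashes already recorded for the same user. A minimal sketch (the per-user index structure and the SHA-256 choice are illustrative assumptions, not taken from Marelas):

```python
import hashlib

class PerUserDedupIndex:
    """Compare a new chunk's hash against the existing hashes recorded
    for the same user (cf. Marelas [0022]): a match means duplicate data
    that may be eliminated; no match means the chunk is new."""
    def __init__(self):
        self._hashes_by_user = {}  # user id -> set of chunk hashes

    def is_duplicate(self, user: str, chunk: bytes) -> bool:
        new_hash = hashlib.sha256(chunk).hexdigest()
        existing = self._hashes_by_user.setdefault(user, set())
        if new_hash in existing:
            return True          # duplicate: the chunk may be eliminated
        existing.add(new_hash)   # not a duplicate: remember its hash
        return False

index = PerUserDedupIndex()
print(index.is_duplicate("alice", b"chunk-1"))  # False (first occurrence)
print(index.is_duplicate("alice", b"chunk-1"))  # True (duplicate for alice)
print(index.is_duplicate("bob", b"chunk-1"))    # False (hashes are per user)
```

Scoping the hash sets per user, as the cited paragraph describes, means identical chunks owned by different users are not deduplicated against each other.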
Deduplication service 316 may receive fingerprints for new data file segments which are compared against previously generated fingerprints for previously stored data file segments that were previously identified as unique. The comparison may determine which of the data file segments are unique and which are duplicates, i.e. in response to determining that there is no historical first check value in the first set that is the same as the first check value of any slice data of the currently traversed object, determining any of the slice data not to be duplicate data. Unique data file segments may be stored in a compressed format in a compressed region by client 304 or access object service 308, i.e. rewriting any of the slice data into the object storage system as new slice data, and storing, by taking the first check value of the new slice data as the historical first check value, the first check value in the first set in association with an identification of the new slice data (e.g. the unique fingerprint is determined by the comparison. If a segment is determined to be unique, it is then stored in a compressed region.)

Marelas, Qiu, Wei and Shilane are analogous art because they are in the same field of endeavor, deduplication systems. It would have been obvious to one having ordinary skill in the art before the effective filing date to modify the system of Marelas-Qiu-Wei to include the method of deduplicating data chunks as disclosed by Shilane. Paragraph [0031] of Shilane discloses that an advantage of generating fingerprints using either client 304 or access object service 308 for data file segments includes the fact that the deduplication service 316 itself does not have to generate said fingerprints, which results in a significant reduction in communication volume and time.

Regarding dependent claim 10: As discussed above with respect to claim 9, Marelas-Qiu-Wei-Shilane discloses all of the limitations.
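The Shilane mapping above (unique segments stored, their fingerprints recorded for later comparisons) can be sketched as an ingest loop over slice data. Representing the "first set" as a dict from check value to slice identification, and the id scheme itself, are illustrative assumptions:

```python
import hashlib

def ingest_slices(slices, first_set, store):
    """For each slice, look up its first check value in the first set of
    historical check values. A slice with no historical match is not
    duplicate data: it is written to the store as new slice data and its
    check value is recorded in association with the new slice's
    identification (cf. Shilane [0032])."""
    for data in slices:
        first_check = hashlib.sha256(data).hexdigest()
        if first_check not in first_set:
            slice_id = f"slice-{len(store)}"   # hypothetical id scheme
            store[slice_id] = data             # rewrite as new slice data
            first_set[first_check] = slice_id  # check value -> identification

first_set, store = {}, {}
ingest_slices([b"aa", b"bb", b"aa"], first_set, store)
print(sorted(store))  # ['slice-0', 'slice-1'] (the duplicate b"aa" was skipped)
```

After ingest, `first_set` doubles as the lookup table for later traversals: any future slice whose check value is already present is duplicate data and need not be stored again.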
Shilane further discloses the step wherein adding, by taking the historical first check value corresponding data in the first set that is the same as the first check value of the currently traversed data as the reference data, the soft link in the metadata of the currently traversed data to point to the reference data, and recycling the storage space of the currently traversed data comprises: adding, by taking the slice data corresponding to all the slice data included in the currently traversed object in the first set as the reference data, a soft link pointing to the respective reference data to the metadata of the currently traversed object, See Paragraph [0003], (The data deduplication system may receive data file segments and compare said segments against previously stored data file segments. A comparison may identify a duplicate of a data file segment and replace the duplicate with a small reference that points to the previously stored data file segment. Note [0032] wherein Deduplication service 316 may receive fingerprints for new data file segments which are compared against previously generated fingerprints for previously stored data file segments that were previously identified as unique. The comparison may determine which of the data file segments are unique and which are duplicates, i.e. adding, by taking the slice data corresponding to all the slice data included in the currently traversed object in the first set as the reference data, a soft link pointing to the respective reference data to the metadata of the currently traversed object (e.g. by maintaining a reference to the previously stored data).)

Additionally, Marelas discloses the step of recycling the storage space occupied by the actual data of the currently traversed object. See Paragraph [0078], (New data chunks may be copied to a cloud store 778 and marked as dead so they may be removed from local storage.
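The soft-link-and-reclaim step mapped above (Shilane [0003]; Marelas [0078]) can be sketched as replacing a duplicate object's actual data with references in its metadata. The class and field names below are illustrative assumptions, not elements of either reference:

```python
class DedupObjectStore:
    """Toy store: when all of an object's slices match historical
    reference data, add soft links to the object's metadata pointing at
    the reference slices and drop the actual data, so a later
    garbage-collection pass can reclaim the storage it occupied."""
    def __init__(self):
        self.data = {}      # object id -> actual bytes, or None once linked
        self.metadata = {}  # object id -> {"links": [reference slice ids]}

    def link_to_references(self, obj_id, reference_ids):
        self.metadata[obj_id] = {"links": list(reference_ids)}
        self.data[obj_id] = None  # space is now reclaimable by GC

store = DedupObjectStore()
store.data["dup-obj"] = b"duplicate payload"
store.link_to_references("dup-obj", ["slice-0", "slice-1"])
print(store.data["dup-obj"], store.metadata["dup-obj"]["links"])
# None ['slice-0', 'slice-1']
```

Reads of "dup-obj" would then be served by following the links to the reference slices, matching the cited "small reference that points to the previously stored data file segment."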
The storage space may then be reclaimed by a garbage collection process performed by the deduplication system such as by dumping copied chunks to a local container 780 as illustrated in FIG. 6, i.e. recycling storage space of the other data.)

Claim(s) 12 is/are rejected under 35 U.S.C. 103 as being unpatentable over Marelas in view of Qiu and Milladi as applied to claim 2 above, and further in view of Wei et al. (US Patent No. 10,359,939; Date of Patent: Jul. 23, 2019).

Regarding dependent claim 12: As discussed above with respect to claim 2, Marelas-Qiu-Milladi discloses all of the limitations.

Marelas-Qiu-Milladi does not disclose the step wherein determining the second benefit for the deduplication of the object storage system at the slice granularity comprises: respectively determining the second benefit for the deduplication of the object storage system at the slice granularity under a condition of adopting different segmentation modes and/or different segmentation lengths; and correspondingly, after determining the target granularity for the deduplication of the object storage system, the method further comprises: in response to determining the target granularity to be the slice granularity, determining a segmentation mode and/or a segmentation length corresponding to the maximum second benefit as a target segmentation mode and/or a target segmentation length.

Wei discloses the step wherein determining the second benefit for the deduplication of the object storage system at the slice granularity comprises: respectively determining the second benefit for the deduplication of the object storage system at the slice granularity under a condition of adopting different segmentation modes and/or different segmentation lengths; See Col. 5, lines 1-13, (A data object may be scanned to output candidate chunk boundaries with different expected lengths.) See Col. 11, lines 44-48, (A smaller expected length indicates a finer average chunking granularity and is more conducive to perceiving a change of partial compression ratio of data content while a larger expected length indicates a coarser average chunking granularity, i.e. respectively determining the second benefit for the deduplication of the object storage system at the slice granularity under a condition of adopting different segmentation modes and/or different segmentation lengths (e.g. the varying expected lengths represent benefits for the chunking process applied to data objects).) and correspondingly, after determining the target granularity for the deduplication of the object storage system, the method further comprises: in response to determining the target granularity to be the slice granularity, determining a segmentation mode and/or a segmentation length corresponding to the maximum second benefit as a target segmentation mode and/or a target segmentation length. See FIG. 2 & Col. 9, lines 29-32 & 57-62, (FIG. 2 illustrates the method comprising step 25 of splitting data segments into chunking subsequences of a specific expected length according to chunking policies of a chunking policy mapping table. The chunking policy mapping table records a length range to which a length of a data segment belongs and an expected length determined by the length range and the compression ratio range, i.e. after determining the target granularity for the deduplication of the object storage system (e.g. the segmenting of the data object indicates that the deduplication will proceed at slice-level granularity), the method further comprises: in response to determining the target granularity to be the slice granularity, determining a segmentation mode and/or a segmentation length corresponding to the maximum second benefit as a target segmentation mode and/or a target segmentation length (e.g. the length of segments is determined based on candidate lengths and the chunking policy mapping table records).)

Marelas, Qiu, Milladi and Wei are analogous art because they are in the same field of endeavor, deduplication systems. It would have been obvious to one having ordinary skill in the art before the effective filing date to modify the system of Marelas-Qiu-Milladi to include the method of processing data objects as chunks as disclosed by Wei. Col. 11, lines 28-34 of Wei disclose that a data object only needs to be scanned once in order to divide said object into data chunks. This represents an optimization by saving system resources and improving the processing efficiency of a data object.

Response to Arguments

Applicant's arguments with respect to claim(s) 1, 14 and 20 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. Applicant's amendments modify the scope of the claimed invention and therefore necessitated the new grounds of rejection presented in this Office Action.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action.
In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Fernando M. Mari, whose telephone number is (571) 272-2498. The examiner can normally be reached Monday-Friday, 7am-4pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Ann J. Lo, can be reached at (571) 272-9767. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/FMMV/
Examiner, Art Unit 2159

/ANN J LO/
Supervisory Patent Examiner, Art Unit 2159

Prosecution Timeline

Feb 20, 2025
Application Filed
Oct 28, 2025
Non-Final Rejection — §103
Feb 09, 2026
Response Filed
Mar 25, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591588
CATEGORICAL SEARCH USING VISUAL CUES AND HEURISTICS
2y 5m to grant Granted Mar 31, 2026
Patent 12547593
METHOD AND APPARATUS FOR SHARING FAVORITE
2y 5m to grant Granted Feb 10, 2026
Patent 12505129
Distributed Database System
2y 5m to grant Granted Dec 23, 2025
Patent 12499123
ACTOR-BASED INFORMATION SYSTEM
2y 5m to grant Granted Dec 16, 2025
Patent 12499121
REAL-TIME MONITORING AND REPORTING SYSTEMS AND METHODS FOR INFORMATION ACCESS PLATFORM
2y 5m to grant Granted Dec 16, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
49%
Grant Probability
71%
With Interview (+22.0%)
3y 10m
Median Time to Grant
Moderate
PTA Risk
Based on 145 resolved cases by this examiner. Grant probability derived from career allow rate.
