Prosecution Insights
Last updated: April 19, 2026
Application No. 17/452,177

BACKUP AND RESTORE OF ARBITRARY DATA

Final Rejection §103
Filed: Oct 25, 2021
Examiner: FERRER, JEDIDIAH P
Art Unit: 2153
Tech Center: 2100 (Computer Architecture & Software)
Assignee: SAP SE
OA Round: 6 (Final)
Grant Probability: 52% (Moderate)
OA Rounds: 7-8
To Grant: 4y 1m
With Interview: 91%

Examiner Intelligence

Career Allow Rate: 52% (114 granted / 220 resolved; -3.2% vs TC avg)
Interview Lift: +39.6% for resolved cases with interview (a strong, roughly +40% lift)
Avg Prosecution: 4y 1m typical timeline; 26 applications currently pending
Total Applications: 246 career history, across all art units

Statute-Specific Performance

§101: 19.2% (-20.8% vs TC avg)
§102: 5.8% (-34.2% vs TC avg)
§103: 63.7% (+23.7% vs TC avg)
§112: 8.1% (-31.9% vs TC avg)

Tech Center averages are estimates. Based on career data from 220 resolved cases.
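For readers checking the arithmetic behind these figures: the headline rates are simple ratios over the examiner's resolved cases. A minimal sketch, using only the counts the report itself gives (the implied Tech Center average is my own reading of the percentage-point delta, not stated in the report):

```python
# Counts taken directly from the report ("114 granted / 220 resolved").
granted = 114
resolved = 220

career_allow_rate = granted / resolved
print(f"Career allow rate: {career_allow_rate:.1%}")  # ~51.8%, shown as 52%

# "-3.2% vs TC avg" reads as a percentage-point gap, which would imply a
# Tech Center average of roughly:
tc_avg = career_allow_rate + 0.032
print(f"Implied TC average: {tc_avg:.1%}")  # ~55.0%
```

The same ratio logic applies to the statute-specific rates above; each is a share of the 220 resolved cases.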

Office Action

§103
DETAILED ACTION

Claims 1-18 are pending. Claims 1, 5, 7, 11, 13, and 17 are amended. Claims 1-18 are rejected.

Notice of AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Examiner Notes

The independent claims recite “a parameter store storing runtime information comprising private user information.” The support for this in the specification is interpreted as ¶ 0016, reciting, “so-called secret or parameter stores, which may be intended to hold crucial runtime information (e.g., passwords, private keys, etc.)”

Response to Arguments

35 U.S.C. 103: Applicant’s arguments, see Remarks, pp. 9-11, with respect to the rejection(s) of claims 1, 7, and 13 under 35 U.S.C. 103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made incorporating new reference Branton. Dependent claims 2-6, 8-12, and 14-18 remain rejected at least by virtue of their dependence on rejected base claims.

Statutory Review under 35 USC § 101

Claims 1-6 are directed toward a method and have been reviewed. Claims 1-6 are directed to patent-eligible subject matter, as the method is directed to significantly more than an abstract idea in Step 2B of the subject matter eligibility analysis; see MPEP 2106.05, Section I.A. The claims add a specific limitation other than what is well-understood, routine, conventional activity in the field, or add unconventional steps that confine the claim to a particular useful application, e.g., a non-conventional and non-generic arrangement of various computer components for filtering Internet content, as discussed in BASCOM Global Internet v. AT&T Mobility LLC, 827 F.3d 1341, 1350-51, 119 USPQ2d 1236, 1243 (Fed. Cir. 2016) (see MPEP § 2106.05(d)).
In the BASCOM decision, it is stated that the claims do not merely recite the abstract idea of filtering content along with the requirement to perform it on the Internet, or to perform it on a set of generic computer components, and that such claims would not contain an inventive concept. It is further stated that the claims do not preempt all ways of filtering content on the Internet; rather, they recite a specific, discrete implementation of the abstract idea of filtering content. It is mentioned that prior art filters were either susceptible to hacking and dependent on local hardware and software, or confined to an inflexible one-size-fits-all scheme, BASCOM asserting that the inventors recognized there could be a filter implementation versatile enough that it could be adapted to many different users' preferences while also installed remotely in a single location. Thus, the claims were considered to be "more than a drafting effort designed to monopolize the [abstract idea]," and it was stated that the claims may be read to "improve[ ] an existing technological process." Similarly, the instant claims do not preempt all ways of splitting data in a networked environment, allowing an implementation versatile enough to be adapted to many types of encoding schemes, each of which has various requirements, including maximum data portion sizes. As a result, the claims can be considered to improve the functioning of a computer or any other technology or technical field, showing that the claims are eligible under Step 2B (see MPEP 2106.05(a)).

Claims 7-12 are directed toward a system and have been reviewed. Claims 7-12 initially appear to be statutory, as the system includes hardware (a non-transitory machine-readable medium). The system also includes hardware (at least one programmable processor) as disclosed in ¶ 0049 of the applicant’s specification (¶ 0053 of the applicant’s specification is also relevant).
Claims 7-12 are directed to patent-eligible subject matter, as the system is directed to significantly more than an abstract idea in Step 2B of the subject matter eligibility analysis; see MPEP 2106.05, Section I.A. The claims add a specific limitation other than what is well-understood, routine, conventional activity in the field, or add unconventional steps that confine the claim to a particular useful application, e.g., a non-conventional and non-generic arrangement of various computer components for filtering Internet content, as discussed in BASCOM Global Internet v. AT&T Mobility LLC, 827 F.3d 1341, 1350-51, 119 USPQ2d 1236, 1243 (Fed. Cir. 2016) (see MPEP § 2106.05(d)). In the BASCOM decision, it is stated that the claims do not merely recite the abstract idea of filtering content along with the requirement to perform it on the Internet, or to perform it on a set of generic computer components, and that such claims would not contain an inventive concept. It is further stated that the claims do not preempt all ways of filtering content on the Internet; rather, they recite a specific, discrete implementation of the abstract idea of filtering content. It is mentioned that prior art filters were either susceptible to hacking and dependent on local hardware and software, or confined to an inflexible one-size-fits-all scheme, BASCOM asserting that the inventors recognized there could be a filter implementation versatile enough that it could be adapted to many different users' preferences while also installed remotely in a single location. Thus, the claims were considered to be "more than a drafting effort designed to monopolize the [abstract idea]," and it was stated that the claims may be read to "improve[ ] an existing technological process."
Similarly, the instant claims do not preempt all ways of splitting data in a networked environment, allowing an implementation versatile enough to be adapted to many types of encoding schemes, each of which has various requirements, including maximum data portion sizes. As a result, the claims can be considered to improve the functioning of a computer or any other technology or technical field, showing that the claims are eligible under Step 2B (see MPEP 2106.05(a)).

Claims 13-18 are directed toward an article of manufacture and have been reviewed. Claims 13-18 initially appear to be statutory, as the article of manufacture excludes transitory signals (the claim recites that the computer program product comprises a non-transitory machine-readable medium). Claims 13-18 are directed to patent-eligible subject matter, as the article of manufacture is directed to significantly more than an abstract idea in Step 2B of the subject matter eligibility analysis; see MPEP 2106.05, Section I.A. The claims add a specific limitation other than what is well-understood, routine, conventional activity in the field, or add unconventional steps that confine the claim to a particular useful application, e.g., a non-conventional and non-generic arrangement of various computer components for filtering Internet content, as discussed in BASCOM Global Internet v. AT&T Mobility LLC, 827 F.3d 1341, 1350-51, 119 USPQ2d 1236, 1243 (Fed. Cir. 2016) (see MPEP § 2106.05(d)). In the BASCOM decision, it is stated that the claims do not merely recite the abstract idea of filtering content along with the requirement to perform it on the Internet, or to perform it on a set of generic computer components, and that such claims would not contain an inventive concept. It is further stated that the claims do not preempt all ways of filtering content on the Internet; rather, they recite a specific, discrete implementation of the abstract idea of filtering content.
It is mentioned that prior art filters were either susceptible to hacking and dependent on local hardware and software, or confined to an inflexible one-size-fits-all scheme, BASCOM asserting that the inventors recognized there could be a filter implementation versatile enough that it could be adapted to many different users' preferences while also installed remotely in a single location. Thus, the claims were considered to be "more than a drafting effort designed to monopolize the [abstract idea]," and it was stated that the claims may be read to "improve[ ] an existing technological process." Similarly, the instant claims do not preempt all ways of splitting data in a networked environment, allowing an implementation versatile enough to be adapted to many types of encoding schemes, each of which has various requirements, including maximum data portion sizes. As a result, the claims can be considered to improve the functioning of a computer or any other technology or technical field, showing that the claims are eligible under Step 2B (see MPEP 2106.05(a)).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-4, 6; 7-10, 12; 13-16, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Dennis et al., U.S. Patent Application Publication No. 2016/0292043 (hereinafter Dennis) in view of Lin et al., U.S. Patent Application Publication No.
2020/0183839 (shares assignee with instant application; published June 11, 2020, at least one year prior to the filing date of the instant application: October 25, 2021; hereinafter Lin) in further view of Yanovsky et al., U.S. Patent Application Publication No. 2020/0241960 (published July 30, 2020, at least one year prior to the instant application date of October 25, 2021, hereinafter "Yanovsky") in further view of Branton, U.S. Patent Application Publication No. 2014/0279893 (hereinafter Branton).

Regarding claim 1, Dennis teaches:

A computer-implemented method, comprising: receiving, by at least one processor, one or more data files in a plurality of data files for backup and storage in a parameter store … the plurality of data files being received from one or more file systems; (Dennis FIG. 6, ¶ 0088: At 602, the backup system receives a first data stream from the server as part of a backup of a set of files stored on the server. The system may acquire a network stream that represents a file to be stored from the set of files; ¶ 0022 shows receipt from 'one or more file systems': Examples of a data system may include a file system; the data system can be configured to store the data items on one or more remote storage locations operatively connected to the data system, for example a remote data center, a remote cloud server or any other types of remote data storage)

generating, by at least one processor, one or more compressed data files corresponding to the received one or more data files; (Dennis ¶ 0086: a data item may be encrypted and compressed by the backup system 102 before it is stored in one or more of file storage 108a-n; see also relevant ¶ 0097 showing data items corresponding to files: nth data item type can be files)

selecting, by at least one processor, one or more portions of … plurality of portions of the compressed data files; (Dennis ¶ 0086: a data item may be encrypted and compressed by the backup system 102 before it is stored in one or
more of file storage 108a-n; see also relevant ¶ 0097 showing data items corresponding to files: nth data item type can be files; FIG. 6, ¶ 0092: At 608, a first set of chunks is identified that correspond to the file to be stored. When all the blocks are finished being copied, a request can be made to the storage system of the staging area, where the request indicates the set of blocks that make up the single file) ... ...assigning a predetermined file name to each portion in the one or more portions of the compressed data files, (Dennis ¶ 0086 describes being 'compressed': a data item may be encrypted and compressed by the backup system 102 before it is stored in one or more of file storage 108a-n; Dennis ¶ 0097 shows these are 'data files': nth data item type can be files; see Dennis ¶ 0047-0049: the header information for a given data item on the data system 106 may indicate ... any other identifying information that are dynamically updated when the data item is modified; the backup component 210 can be configured to divide a given data item that is to be backed up into multiple parts; An identification of each of the multiple part can be given by the backup component 210 to indicate a position of the part with respect to the entire data item [shows a name being assigned to each portion]; see relevant Dennis ¶ 0040 referring to file names: the data system 106 is a file system, the data system 106 may store files as data items. 
In that example, header information for a particular file may indicate a size of the file, a filename of the file) each assigned predetermined file name having at least one common sequence of characters identifying the received one or more data files, (Dennis ¶ 0049: a first part of the data item can be given an identification “part 1: filename of the data item”, a second part of the data item can be given an identification “part 2: filename of the data item”, a third part of the data item can be given an identification “part 3: filename of the data item” [the claimed 'common sequence of characters' is shown through Dennis describing 'filename of the data item' in its identifications]; see relevant Dennis ¶ 0040 referring to file names: the data system 106 is a file system, the data system 106 may store files as data items. In that example, header information for a particular file may indicate a size of the file, a filename of the file) storing, by at least one processor, the ... one or more portions of the compressed data files in the parameter store. (Dennis ¶ 0086 describes being 'compressed': a data item may be encrypted and compressed by the backup system 102 before it is stored in one or more of file storage 108a-n; FIG. 6, ¶ 0094 describes the claimed storing of portions: At 612, the first set of chunks and the corresponding hashes are stored in a persistent backup storage with the labels identifying the first set of chunks as corresponding to the first file; see this in light of ¶ 0021: A file item may be referred to a file resource on a file system for persistently storing information) Dennis teaches compressed data files to which the predetermined file name is assigned. 
(Dennis ¶ 0049: a first part of the data item can be given an identification “part 1: filename of the data item”, a second part of the data item can be given an identification “part 2: filename of the data item”, a third part of the data item can be given an identification “part 3: filename of the data item”; Dennis ¶ 0086: a data item may be encrypted and compressed by the backup system 102 before it is stored in one or more of file storage 108a-n) Dennis does not expressly disclose a parameter store storing runtime information comprising private user information. Dennis further does not expressly disclose: generating, by at least one processor, a plurality of portions of compressed data files by splitting the one or more compressed data files into different sizes, wherein a split size decision is based on a maximum data portion size altered by subsequent encoding; wherein each assigned predetermined file name comprises a base name determined from a hash operation of the one or more files, a prefix indicating a purpose of a portion of the … data files to which the predetermined file name is assigned, and a suffix; Dennis further does not expressly disclose selecting, by at least one processor, one or more portions of the plurality of portions. Dennis further does not expressly disclose encoding, by at least one processor, the one or more portions of the compressed data files, the encoding comprising assigning. Dennis further does not expressly disclose storing the encoded one or more portions of the compressed data files. However, Lin addresses some of these limitations by teaching the following: Lin teaches private user information. 
(Lin ¶ 0192: cloud services may be provided under a private cloud model in which the cloud computing system 900 is operated solely for a single organization and may provide cloud services for one or more entities within the organization) Lin teaches: generating, by at least one processor, a plurality of portions of compressed data files by splitting the one or more … data files into different sizes, wherein a split size decision is based on a maximum data portion size altered by subsequent encoding; (Lin FIG. 5, ¶ 0106: This section provides further details on determining the chunk size (see 502 in FIG. 5); Lin ¶ 0091-0095, see first ¶ 0092: At 502, a chunk size is calculated for a data vector … Calculating the chunk size includes the sub-steps 502a-502e. At 502a, an initial chunk size is selected ... The initial chunk size may be adjusted as desired according to the characteristics and the performance of the components of the IMDBS 100 [shows different sizes]; Lin also shows different sizes in ¶ 0094: At 502b, the data vector is partitioned into chunks (according to the initial chunk size) to form a data structure referred to as a node [also shows splitting]. The last chunk may be smaller than the chunk size if the data vector does not divide evenly into the chunks. The encoder component 134 may partition the data vector into the chunks; see importantly Lin ¶ 0130: To select a chunk size, the IMDBS 100 first determines some measurements of the average, minimum, and maximum compression ratio of different chunks within the data vector, R.sub.avg, R.sub.min and R.sub.max respectively. To do this, the IMDBS 100 selects some initial chunk size, and simulate the encoding scheme to compute the space required to encode each chunk using the best compression method [shows the claimed decision being based on a maximum data portion and also on subsequent encoding]) encoding, by at least one processor, the one or more portions of the compressed data files, (Lin FIG. 
5, ¶ 0095: At 502c, a suitable compression type is selected for each chunk, each chunk is compressed using the selected compression type; Lin ¶ 0098-0103: At 504, the data vector is encoded according to the chunk size (calculated at 502) … The encoding component 134 (see FIG. 1) may encode the data vector. Encoding the data vector includes sub-steps 504a-504d ... each chunk is encoded into a transient data structure using a selected compression type) the encoding comprising assigning… (Lin FIG. 3, ¶ 0076: The tree has a root node 302, and may have a number of child nodes (also referred to as sub-nodes); shown here are child nodes 304, 306, 308, 310, 312 and 314. Each node of the UPT corresponds to segments of a data vector 320, and further uniformly partitions the data the data vector 320 refers to into fixed-size chunks; Lin FIG. 5, ¶ 0098-0103: At 504c, if a particular chunk is oversized (as further described below), an empty page is appended to the page chain, with a reference to a child node ... 504d, each oversized chunk is recursively stored by moving it from the transient data structure into a child node) Lin further teaches storing the encoded one or more portions of the compressed data files. (Lin FIG. 5, ¶ 0098-0103: At 504d, each oversized chunk is recursively stored by moving it from the transient data structure into a child node) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the functioning of the data chunk formation and compressed data storage of Dennis with the data vector partitioning and compressed data storage of Lin. In addition, both of the references (Dennis and Lin) disclose features that are directed to analogous art, and they are directed to the same field of endeavor, such as division of data and compression of data. 
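The Lin passages mapped above (¶¶ 0092-0094, 0130) describe choosing a chunk size by partitioning a data vector into fixed-size chunks (the last chunk may be smaller), compressing each chunk, and measuring the spread of compression ratios. A rough sketch of that procedure; function and variable names are my own, and zlib merely stands in for whichever compression methods Lin actually evaluates:

```python
import zlib

def partition(data: bytes, chunk_size: int) -> list[bytes]:
    """Split the data vector into fixed-size chunks; the last chunk may be
    smaller if the vector does not divide evenly (cf. Lin ¶ 0094)."""
    return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

def simulate_encoding(data: bytes, chunk_size: int) -> tuple[float, float, float]:
    """Compress each chunk and report (avg, min, max) compression ratios,
    analogous to Lin's R_avg, R_min, R_max (cf. Lin ¶ 0130)."""
    ratios = [len(c) / len(zlib.compress(c)) for c in partition(data, chunk_size)]
    return sum(ratios) / len(ratios), min(ratios), max(ratios)

# Simulate the encoding for an initial chunk size; the size could then be
# adjusted based on the observed spread of ratios.
data = b"abcabcabc" * 1000
r_avg, r_min, r_max = simulate_encoding(data, 1024)
```

This is only meant to make the "split size decision is based on a maximum data portion size altered by subsequent encoding" limitation concrete: the space a chunk actually occupies is known only after the encoding step is simulated.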
Motivation to do so would be to improve the functioning of Dennis performing operations over chunked data and compressed data with the ability in similar reference Lin also performing operations over chunked data and compressed data but with the improvement of adjusting data splitting based on characteristics and performance of networked components as in Lin ¶ 0093. Motivation to do so would also be the teaching, suggestion, or motivation for a person of ordinary skill in the art to lower memory consumption as seen in Lin ¶ 0048. Dennis in view of Lin does not expressly disclose a parameter store storing runtime information. Dennis in view of Lin also does not expressly disclose splitting the one or more compressed data files. Dennis in view of Lin further does not expressly disclose selecting, by at least one processor, one or more portions of the plurality of portions. Dennis in view of Lin further does not expressly disclose: wherein each assigned predetermined file name comprises a base name determined from a hash operation of the one or more files, a prefix indicating a purpose of a portion of the … data files to which the predetermined file name is assigned, and a suffix; However, Yanovsky addresses this by teaching the following: Yanovsky teaches a parameter store storing runtime information comprising private user information. (Yanovsky ¶ 0098-0103 show the claimed 'private user information,' see ¶ 0098: A secret sharing scheme transforms sensitive data, called secret, into individually meaningless data pieces, called shares, and a dealer distributes shares to parties such that only authorized subsets of parties can reconstruct the secret; see ¶ 0100: data (secret) may be represented by original client's data or generated metadata, e.g. 
encryption keys; see ¶ 0102 in particular showing this information being utilized at runtime: Original data 103, e.g., files, produced by client applications 102, are distributed over a set of storage nodes 106, and original data 103 is available to client applications 102 upon request. Any system producing and receiving data on the client side can be considered as an instance of a client application 102) generating, by at least one processor, a plurality of portions of compressed data files by splitting the one or more compressed data files, based on specifics of the one or more compressed data files; (Yanovsky ¶ 0040-0043 shows splitting after compression as required by the claims: splitting data into segments ... the data of the plurality of files; optionally applying deduplication, compression and/or encryption to each segment; c. splitting each segment into k information multi-chunks and optionally applying data mixing to these information chunks to produce k systematic multi-chunks; FIG. 2, step 205, step 209 both occur after compression 204, 207, ¶ 0104-0106: Fragmentation may include data partitioning and encoding, wherein fragmentation encoding is a function of one or several of the following: random (pseudo-random) values, values derived from original data [shows being based on specifics of the data files] (e.g. derived using deterministic cryptographic hash) and predetermined values; see also relevant FIG. 3, ¶ 0108: a data segment 301 produced from one or several files is divided into v chunks and accompanied by k−v chunks containing supplementary inputs 305, where k≥v, supplementary inputs may be random, values derived from the data segment (e.g. derived using deterministic cryptographic hash) or have predetermined values [also shows being based on specifics of the data files]) Yanovsky teaches selecting, by at least one processor, one or more portions of the plurality of portions. (Yanovsky FIG. 
3, ¶ 0113: Preprocessed data segment 301 is divided into v≤k input chunks 302, comprising t highly sensitive chunks 303 and v−t frequently demanded chunks 304, 0≤t≤v. Value of t is selected [shows selection resulting in portions of highly sensitive chunks] depending on the segment structure and the number of untrusted storage nodes; ¶ 0116-0117 also show selections: A client is allowed to request a file, a data segment or its part; Output chunks stored at untrusted storage nodes are of the same significance for data reconstruction. Chunks to download are selected depending on available network bandwidth) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the functioning of the data chunk formation and compressed data storage of Dennis as modified with the data segment division and compressed data storage of Yanovsky. In addition, both of the references (Dennis as modified and Yanovsky) disclose features that are directed to analogous art, and they are directed to the same field of endeavor, such as division of data and compression of data. Motivation to do so would be to improve the functioning of Dennis as modified performing operations over chunked data and compressed data with the ability in similar reference Yanovsky to implement space reduction and file pre-processing to optimize output through various techniques such as deduplication, compression, and fragmentation (Yanovsky ¶ 0104-0107). Motivation to do so would also be the teaching, suggestion, or motivation for a person of ordinary skill in the art to ensure data integrity, security, protection against data loss, compression and deduplication especially in a distributed storage system handling secret data (Yanovsky ¶ 0097-0101). 
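Yanovsky's fragmentation step, as quoted above (¶ 0108), divides a data segment into v input chunks accompanied by k − v supplementary chunks whose contents may be random, derived from the segment via a deterministic cryptographic hash, or predetermined. A hedged sketch of the hash-derived variant; the function name, chunking arithmetic, and the SHA-256 choice are mine (Yanovsky specifies only "deterministic cryptographic hash"):

```python
import hashlib

def split_segment(segment: bytes, v: int, k: int) -> list[bytes]:
    """Divide a segment into v input chunks plus k - v supplementary
    chunks derived from a hash of the segment (one option in ¶ 0108)."""
    assert k >= v > 0
    size = -(-len(segment) // v)            # ceiling division
    chunks = [segment[i:i + size] for i in range(0, len(segment), size)]
    chunks += [b""] * (v - len(chunks))     # pad if the segment is tiny
    digest = hashlib.sha256(segment).digest()
    return chunks + [digest] * (k - v)      # hash-derived supplementary inputs

parts = split_segment(b"example segment data", v=4, k=6)
# 4 data chunks followed by 2 supplementary chunks
```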
Dennis in view of Lin and Yanovsky further does not expressly disclose: wherein each assigned predetermined file name comprises a base name determined from a hash operation of the one or more files, a prefix indicating a purpose of a portion of the … data files to which the predetermined file name is assigned, and a suffix; However, Branton addresses this by teaching: wherein each assigned predetermined file name comprises a base name determined from a hash operation of the one or more files, (Branton FIG. 11; Branton ¶ 0087: each filename can also contain a GUID of the user in question, indicating which user's actions are stored in that file. For example, user metadata file 1172 is given the filename "User1 Guid.user." The string "User1 Guid" can be replaced by a GUID, where the GUID is a unique data string generated by performing an MD5 hash on the username; Branton ¶ 0050-0051: GUIDs can be created by performing a mathematical operation such as an MD5 hash operation on an input string, such as a username, content of the file [relevant to the operation being of the one or more files], a filename, or a pathname including a filename, among other potential input strings. The resulting output can be directly used as a GUID ... GUIDs can be used as part of filenames) a prefix indicating a purpose of a portion of the … data files to which the predetermined file name is assigned, (Branton ¶ 0087-0088: each filename can also contain both a GUID of the file being described and a GUID of the user that created the file. The use of both GUIDs allows for users to be able to each provide their own metadata about a file, and for user accesses to a file to be stored in a file that is unique to a single user ... configuration of client software can be used to permit a user to access only files that contain the user's own GUID in the filename; see Branton FIG. 
11 and at least file statistics file 1182, "File1Guid_User1Guid.stat") and a suffix; (Branton ¶ 0087: User metadata files 1172, 1174 can be stored within user directory 1170, and pertain to information about users. Each user metadata file 1172, 1174 can be named according to a particular convention, such that the suffix ".user" appears at the end of each filename) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the functioning of the filename management of Dennis as modified with the GUIDs and filename management of Branton. In addition, both of the references (Dennis as modified and Branton) disclose features that are directed to analogous art, and they are directed to the same field of endeavor, such as filename management. Motivation to do so would be to improve the functioning of Dennis as modified performing data identification with names with the ability in similar reference Branton also performing data identification with names but with the improvement of uniquely identifying users and files. Motivation to do so would also be the teaching, suggestion, or motivation for a person of ordinary skill in the art to enable synchronization, authorization, and storage functionality to be separated from the storage and retrieval functionality, allowing flexible and rapid deployment using existing cloud infrastructure as seen in Branton ¶ 0002-0008 and ¶ 0025-0026. 
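The claimed naming convention, as mapped onto Branton, combines a hash-derived base name (Branton's MD5-based GUIDs, ¶¶ 0050-0051), a prefix indicating the portion's purpose, and a suffix such as ".user" or ".stat". A small illustrative sketch; the particular prefix and suffix strings here are hypothetical, not from the record:

```python
import hashlib

def guid(text: str) -> str:
    """GUID generated by an MD5 hash of an input string such as a
    username or filename (cf. Branton ¶¶ 0050-0051)."""
    return hashlib.md5(text.encode()).hexdigest()

def portion_name(prefix: str, base_input: str, suffix: str) -> str:
    """Assemble a predetermined file name from a purpose-indicating
    prefix, a hash-derived base name, and a suffix."""
    return f"{prefix}_{guid(base_input)}.{suffix}"

# Hypothetical usage: name the first portion of a backed-up file.
name = portion_name("part1", "backup.tar.gz", "chunk")
```

Because the base name is derived deterministically from the source file, every portion of the same file shares a common sequence of characters, which is the role the "common sequence" limitation assigns to Dennis's "filename of the data item" identifications.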
Regarding claim 7, Dennis teaches:

A system comprising: at least one programmable processor; and (Dennis ¶ 0061: method 300 may be implemented in one or more processing devices (e.g., a digital processor, an analog processor, a digital circuit designed to process information, an analog circuit designed to process information, a state machine, and/or other mechanisms for electronically processing information))

a non-transitory machine-readable medium storing instructions that, when executed by the at least one programmable processor, cause the at least one programmable processor to perform operations comprising: (Dennis FIG. 8, ¶ 0100: The interconnection via system bus 75 allows the central processor 73 to communicate with each subsystem and to control the execution of instructions from system memory 72 or the storage device(s) 79 (e.g., a fixed disk, such as a hard drive or optical disk); The system memory 72 and/or the storage device(s) 79 may embody a computer readable medium)

receiving one or more data files in a plurality of data files for backup and storage in a parameter store … the plurality of data files being received from one or more file systems; (Dennis FIG. 6, ¶ 0088: At 602, the backup system receives a first data stream from the server as part of a backup of a set of files stored on the server. The system may acquire a network stream that represents a file to be stored from the set of files; ¶ 0022 shows receipt from 'one or more file systems': Examples of a data system may include a file system; the data system can be configured to store the data items on one or more remote storage locations operatively connected to the data system, for example a remote data center, a remote cloud server or any other types of remote data storage)

generating one or more compressed data files corresponding to the received one or more data files; (Dennis ¶ 0086: a data item may be encrypted and compressed by the backup system 102 before it is stored in one or more of file storage 108a-n; see also relevant ¶ 0097 showing data items corresponding to files: nth data item type can be files)

selecting one or more portions of … plurality of portions of the compressed data files; (Dennis ¶ 0086: a data item may be encrypted and compressed by the backup system 102 before it is stored in one or more of file storage 108a-n; see also relevant ¶ 0097 showing data items corresponding to files: nth data item type can be files; FIG. 6, ¶ 0092: At 608, a first set of chunks is identified that correspond to the file to be stored. When all the blocks are finished being copied, a request can be made to the storage system of the staging area, where the request indicates the set of blocks that make up the single file)

… assigning a predetermined file name to each portion in the one or more portions of the compressed data files, (Dennis ¶ 0086 describes being 'compressed': a data item may be encrypted and compressed by the backup system 102 before it is stored in one or more of file storage 108a-n; Dennis ¶ 0097 shows these are 'data files': nth data item type can be files; see Dennis ¶ 0047-0049: the header information for a given data item on the data system 106 may indicate ... any other identifying information that are dynamically updated when the data item is modified; the backup component 210 can be configured to divide a given data item that is to be backed up into multiple parts; An identification of each of the multiple parts can be given by the backup component 210 to indicate a position of the part with respect to the entire data item [shows a name being assigned to each portion]; see relevant Dennis ¶ 0040 referring to file names: the data system 106 is a file system, the data system 106 may store files as data items. In that example, header information for a particular file may indicate a size of the file, a filename of the file)

each assigned predetermined file name having at least one common sequence of characters identifying the received one or more data files, (Dennis ¶ 0049: a first part of the data item can be given an identification "part 1: filename of the data item", a second part of the data item can be given an identification "part 2: filename of the data item", a third part of the data item can be given an identification "part 3: filename of the data item" [the claimed 'common sequence of characters' is shown through Dennis describing 'filename of the data item' in its identifications]; see relevant Dennis ¶ 0040 referring to file names: the data system 106 is a file system, the data system 106 may store files as data items. In that example, header information for a particular file may indicate a size of the file, a filename of the file)

storing the ... one or more portions of the compressed data files in the parameter store. (Dennis ¶ 0086 describes being 'compressed': a data item may be encrypted and compressed by the backup system 102 before it is stored in one or more of file storage 108a-n; FIG. 6, ¶ 0094 describes the claimed storing of portions: At 612, the first set of chunks and the corresponding hashes are stored in a persistent backup storage with the labels identifying the first set of chunks as corresponding to the first file; see this in light of ¶ 0021: A file item may be referred to a file resource on a file system for persistently storing information)

Dennis teaches compressed data files to which the predetermined file name is assigned. (Dennis ¶ 0049: a first part of the data item can be given an identification "part 1: filename of the data item", a second part of the data item can be given an identification "part 2: filename of the data item", a third part of the data item can be given an identification "part 3: filename of the data item"; Dennis ¶ 0086: a data item may be encrypted and compressed by the backup system 102 before it is stored in one or more of file storage 108a-n)

Dennis does not expressly disclose a parameter store storing runtime information comprising private user information. Dennis further does not expressly disclose: generating a plurality of portions of compressed data files by splitting the one or more compressed data files into different sizes, wherein a split size decision is based on a maximum data portion size altered by subsequent encoding; wherein each assigned predetermined file name comprises a base name determined from a hash operation of the one or more files, a prefix indicating a purpose of a portion of the … data files to which the predetermined file name is assigned, and a suffix. Dennis further does not expressly disclose selecting one or more portions of the plurality of portions; encoding the one or more portions of the compressed data files, the encoding comprising assigning; or storing the encoded one or more portions of the compressed data files.
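The split-size limitation at issue (splitting compressed data so that each portion, after subsequent encoding, still fits under a maximum portion size) can be sketched in a few lines. This is an illustration only, not taken from the application or any cited reference: the 4 KB cap and the choice of base64 as the subsequent encoding are assumptions for the example.

```python
import base64

# Assumed limit: many parameter/secret stores cap each stored value at a
# fixed size; 4 KB is used here purely for illustration.
MAX_PORTION_SIZE = 4096

# Base64 expands 3 raw bytes into 4 encoded characters, so the split size
# is the maximum portion size altered by the subsequent encoding.
SPLIT_SIZE = MAX_PORTION_SIZE * 3 // 4


def split_for_encoding(compressed: bytes) -> list[str]:
    """Split compressed data so each base64-encoded portion fits the cap."""
    portions = []
    for offset in range(0, len(compressed), SPLIT_SIZE):
        chunk = compressed[offset:offset + SPLIT_SIZE]
        portions.append(base64.b64encode(chunk).decode("ascii"))
    return portions


encoded = split_for_encoding(b"\x00" * 10000)
assert all(len(p) <= MAX_PORTION_SIZE for p in encoded)
```

The last portion may be smaller than the others, which is why the claim speaks of splitting "into different sizes."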
However, Lin addresses some of these limitations by teaching the following:

Lin teaches private user information. (Lin ¶ 0192: cloud services may be provided under a private cloud model in which the cloud computing system 900 is operated solely for a single organization and may provide cloud services for one or more entities within the organization)

Lin teaches: generating a plurality of portions of compressed data files by splitting the one or more … data files into different sizes, wherein a split size decision is based on a maximum data portion size altered by subsequent encoding; (Lin FIG. 5, ¶ 0106: This section provides further details on determining the chunk size (see 502 in FIG. 5); Lin ¶ 0091-0095, see first ¶ 0092: At 502, a chunk size is calculated for a data vector … Calculating the chunk size includes the sub-steps 502a-502e. At 502a, an initial chunk size is selected ... The initial chunk size may be adjusted as desired according to the characteristics and the performance of the components of the IMDBS 100 [shows different sizes]; Lin also shows different sizes in ¶ 0094: At 502b, the data vector is partitioned into chunks (according to the initial chunk size) to form a data structure referred to as a node [also shows splitting]. The last chunk may be smaller than the chunk size if the data vector does not divide evenly into the chunks. The encoder component 134 may partition the data vector into the chunks; see importantly Lin ¶ 0130: To select a chunk size, the IMDBS 100 first determines some measurements of the average, minimum, and maximum compression ratio of different chunks within the data vector, R_avg, R_min and R_max respectively. To do this, the IMDBS 100 selects some initial chunk size, and simulates the encoding scheme to compute the space required to encode each chunk using the best compression method [shows the claimed decision being based on a maximum data portion and also on subsequent encoding])

encoding the one or more portions of the compressed data files, (Lin FIG. 5, ¶ 0095: At 502c, a suitable compression type is selected for each chunk, each chunk is compressed using the selected compression type; Lin ¶ 0098-0103: At 504, the data vector is encoded according to the chunk size (calculated at 502) … The encoding component 134 (see FIG. 1) may encode the data vector. Encoding the data vector includes sub-steps 504a-504d ... each chunk is encoded into a transient data structure using a selected compression type)

the encoding comprising assigning… (Lin FIG. 3, ¶ 0076: The tree has a root node 302, and may have a number of child nodes (also referred to as sub-nodes); shown here are child nodes 304, 306, 308, 310, 312 and 314. Each node of the UPT corresponds to segments of a data vector 320, and further uniformly partitions the data that the data vector 320 refers to into fixed-size chunks; Lin FIG. 5, ¶ 0098-0103: At 504c, if a particular chunk is oversized (as further described below), an empty page is appended to the page chain, with a reference to a child node ... 504d, each oversized chunk is recursively stored by moving it from the transient data structure into a child node)

Lin further teaches storing the encoded one or more portions of the compressed data files. (Lin FIG. 5, ¶ 0098-0103: At 504d, each oversized chunk is recursively stored by moving it from the transient data structure into a child node)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the data chunk formation and compressed data storage of Dennis with the data vector partitioning and compressed data storage of Lin. Both references are analogous art directed to the same field of endeavor, namely the division and compression of data. The motivation would be to improve Dennis, which performs operations over chunked and compressed data, with Lin's ability to adjust data splitting based on the characteristics and performance of networked components (Lin ¶ 0093). A further motivation is the teaching, suggestion, or motivation for a person of ordinary skill in the art to lower memory consumption (Lin ¶ 0048).

Dennis in view of Lin does not expressly disclose a parameter store storing runtime information; splitting the one or more compressed data files; or selecting one or more portions of the plurality of portions. Dennis in view of Lin further does not expressly disclose: wherein each assigned predetermined file name comprises a base name determined from a hash operation of the one or more files, a prefix indicating a purpose of a portion of the … data files to which the predetermined file name is assigned, and a suffix.

However, Yanovsky addresses this by teaching the following:

Yanovsky teaches a parameter store storing runtime information comprising private user information. (Yanovsky ¶ 0098-0103 show the claimed 'private user information,' see ¶ 0098: A secret sharing scheme transforms sensitive data, called secret, into individually meaningless data pieces, called shares, and a dealer distributes shares to parties such that only authorized subsets of parties can reconstruct the secret; see ¶ 0100: data (secret) may be represented by original client's data or generated metadata, e.g.
encryption keys; see ¶ 0102 in particular showing this information being utilized at runtime: Original data 103, e.g., files, produced by client applications 102, are distributed over a set of storage nodes 106, and original data 103 is available to client applications 102 upon request. Any system producing and receiving data on the client side can be considered as an instance of a client application 102)

generating a plurality of portions of compressed data files by splitting the one or more compressed data files, based on specifics of the one or more compressed data files; (Yanovsky ¶ 0040-0043 shows splitting after compression as required by the claims: splitting data into segments ... the data of the plurality of files; optionally applying deduplication, compression and/or encryption to each segment; c. splitting each segment into k information multi-chunks and optionally applying data mixing to these information chunks to produce k systematic multi-chunks; FIG. 2, step 205, step 209 both occur after compression 204, 207, ¶ 0104-0106: Fragmentation may include data partitioning and encoding, wherein fragmentation encoding is a function of one or several of the following: random (pseudo-random) values, values derived from original data [shows being based on specifics of the data files] (e.g. derived using deterministic cryptographic hash) and predetermined values; see also relevant FIG. 3, ¶ 0108: a data segment 301 produced from one or several files is divided into v chunks and accompanied by k−v chunks containing supplementary inputs 305, where k≥v, supplementary inputs may be random, values derived from the data segment (e.g. derived using deterministic cryptographic hash) or have predetermined values [also shows being based on specifics of the data files])

Yanovsky teaches selecting one or more portions of the plurality of portions. (Yanovsky FIG. 3, ¶ 0113: Preprocessed data segment 301 is divided into v≤k input chunks 302, comprising t highly sensitive chunks 303 and v−t frequently demanded chunks 304, 0≤t≤v. Value of t is selected [shows selection resulting in portions of highly sensitive chunks] depending on the segment structure and the number of untrusted storage nodes; ¶ 0116-0117 also show selections: A client is allowed to request a file, a data segment or its part; Output chunks stored at untrusted storage nodes are of the same significance for data reconstruction. Chunks to download are selected depending on available network bandwidth)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the data chunk formation and compressed data storage of Dennis as modified with the data segment division and compressed data storage of Yanovsky. Both references are analogous art directed to the same field of endeavor, namely the division and compression of data. The motivation would be to improve Dennis as modified with Yanovsky's space reduction and file pre-processing, which optimize output through techniques such as deduplication, compression, and fragmentation (Yanovsky ¶ 0104-0107). A further motivation is the teaching, suggestion, or motivation for a person of ordinary skill in the art to ensure data integrity, security, protection against data loss, compression, and deduplication, especially in a distributed storage system handling secret data (Yanovsky ¶ 0097-0101).
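Yanovsky's premise in ¶ 0098, that a secret is transformed into individually meaningless shares which only an authorized set can reconstruct, can be illustrated with a minimal XOR-based n-of-n scheme. This is a generic stand-in for the example's sake, not Yanovsky's actual construction (which involves systematic chunks and supplementary inputs).

```python
import secrets
from functools import reduce


def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))


def make_shares(secret: bytes, n: int) -> list[bytes]:
    """Split a secret into n shares; all n are needed to reconstruct it."""
    # n-1 shares are uniformly random; the last is the XOR of the secret
    # with all of them, so any n-1 shares alone reveal nothing.
    shares = [secrets.token_bytes(len(secret)) for _ in range(n - 1)]
    shares.append(reduce(xor_bytes, shares, secret))
    return shares


def reconstruct(shares: list[bytes]) -> bytes:
    """XOR all shares together to recover the secret."""
    return reduce(xor_bytes, shares)


shares = make_shares(b"db-password", 3)
assert reconstruct(shares) == b"db-password"
```

Threshold (t-of-n) variants of the kind Yanovsky describes require polynomial-based schemes such as Shamir's; the XOR form above only handles the all-shares case.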
Dennis in view of Lin and Yanovsky further does not expressly disclose: wherein each assigned predetermined file name comprises a base name determined from a hash operation of the one or more files, a prefix indicating a purpose of a portion of the … data files to which the predetermined file name is assigned, and a suffix.

However, Branton addresses this by teaching:

wherein each assigned predetermined file name comprises a base name determined from a hash operation of the one or more files, (Branton FIG. 11; Branton ¶ 0087: each filename can also contain a GUID of the user in question, indicating which user's actions are stored in that file. For example, user metadata file 1172 is given the filename "User1 Guid.user." The string "User1 Guid" can be replaced by a GUID, where the GUID is a unique data string generated by performing an MD5 hash on the username; Branton ¶ 0050-0051: GUIDs can be created by performing a mathematical operation such as an MD5 hash operation on an input string, such as a username, content of the file [relevant to the operation being of the one or more files], a filename, or a pathname including a filename, among other potential input strings. The resulting output can be directly used as a GUID ... GUIDs can be used as part of filenames)

a prefix indicating a purpose of a portion of the … data files to which the predetermined file name is assigned, (Branton ¶ 0087-0088: each filename can also contain both a GUID of the file being described and a GUID of the user that created the file. The use of both GUIDs allows for users to be able to each provide their own metadata about a file, and for user accesses to a file to be stored in a file that is unique to a single user ... configuration of client software can be used to permit a user to access only files that contain the user's own GUID in the filename; see Branton FIG. 11 and at least file statistics file 1182, "File1Guid_User1Guid.stat")

and a suffix; (Branton ¶ 0087: User metadata files 1172, 1174 can be stored within user directory 1170, and pertain to information about users. Each user metadata file 1172, 1174 can be named according to a particular convention, such that the suffix ".user" appears at the end of each filename)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the filename management of Dennis as modified with the GUIDs and filename management of Branton. Both references are analogous art directed to the same field of endeavor, namely filename management. The motivation would be to improve Dennis as modified, which performs data identification with names, with Branton's improvement of uniquely identifying users and files. A further motivation is the teaching, suggestion, or motivation for a person of ordinary skill in the art to enable synchronization, authorization, and storage functionality to be separated from the storage and retrieval functionality, allowing flexible and rapid deployment using existing cloud infrastructure (Branton ¶ 0002-0008 and ¶ 0025-0026).

Regarding claim 13, Dennis teaches: A computer program product comprising a non-transitory machine-readable medium storing instructions that, when executed by at least one programmable processor, cause the at least one programmable processor to perform operations comprising: (Dennis FIG. 8, ¶ 0100: The interconnection via system bus 75 allows the central processor 73 to communicate with each subsystem and to control the execution of instructions from system memory 72 or the storage device(s) 79 (e.g., a fixed disk, such as a hard drive or optical disk); The system memory 72 and/or the storage device(s) 79 may embody a computer readable medium)

receiving one or more data files in a plurality of data files for backup and storage in a parameter store … the plurality of data files being received from one or more file systems; (Dennis FIG. 6, ¶ 0088: At 602, the backup system receives a first data stream from the server as part of a backup of a set of files stored on the server. The system may acquire a network stream that represents a file to be stored from the set of files; ¶ 0022 shows receipt from 'one or more file systems': Examples of a data system may include a file system; the data system can be configured to store the data items on one or more remote storage locations operatively connected to the data system, for example a remote data center, a remote cloud server …
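The claimed naming convention (a purpose prefix, a base name derived from a hash of the file, and a suffix) can be sketched briefly. This illustrates the claim language using Branton-style MD5 hashing; the "data" purpose string and ".part<N>" suffix are invented for the example and appear in neither the application nor the cited references.

```python
import hashlib


def portion_name(file_bytes: bytes, purpose: str, index: int) -> str:
    """Compose a portion name: purpose prefix + hash-derived base + suffix.

    The 'purpose' values and the '.part<N>' suffix are illustrative
    assumptions, not taken from the application or cited references.
    """
    # Base name from a hash operation on the file content
    # (cf. Branton's MD5-derived GUIDs used in filenames).
    base = hashlib.md5(file_bytes).hexdigest()
    return f"{purpose}_{base}.part{index}"


name = portion_name(b"example backup payload", "data", 1)
```

Every portion of the same file shares the hash-derived base, which also satisfies the "common sequence of characters" limitation mapped to Dennis ¶ 0049 above.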

Prosecution Timeline

Oct 25, 2021: Application Filed
Mar 15, 2023: Non-Final Rejection — §103
Apr 25, 2023: Response Filed
May 02, 2023: Final Rejection — §103
Jun 09, 2023: Response after Non-Final Action
Jul 06, 2023: Response after Non-Final Action
Jul 06, 2023: Examiner Interview (Telephonic)
Aug 08, 2023: Request for Continued Examination
Aug 10, 2023: Response after Non-Final Action
Sep 07, 2023: Non-Final Rejection — §103
Dec 15, 2023: Response Filed
Apr 12, 2024: Final Rejection — §103
Jul 18, 2024: Request for Continued Examination
Jul 24, 2024: Response after Non-Final Action
Apr 02, 2025: Non-Final Rejection — §103
Jun 30, 2025: Response Filed
Sep 30, 2025: Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12585617
DYNAMIC SCRIPT GENERATION FOR AUTOMATED FILING SERVICES
2y 5m to grant · Granted Mar 24, 2026
Patent 12572502
LOAD-AWARE DIRECTORY MIGRATION METHOD AND SYSTEM IN DISTRIBUTED FILE SYSTEM
2y 5m to grant · Granted Mar 10, 2026
Patent 12566672
LEVERAGING BACKUP PROCESS METADATA FOR CLOUD OBJECT STORAGE SELECTIVE DELETIONS
2y 5m to grant · Granted Mar 03, 2026
Patent 12517698
MAINTAINING STREAMING PARITY IN LARGE-SCALE PIPELINES
2y 5m to grant · Granted Jan 06, 2026
Patent 12499120
Methods and Systems for Tracking Data Lineage from Source to Target
2y 5m to grant · Granted Dec 16, 2025
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

7-8
Expected OA Rounds
52%
Grant Probability
91%
With Interview (+39.6%)
4y 1m
Median Time to Grant
High
PTA Risk
Based on 220 resolved cases by this examiner. Grant probability derived from career allow rate.
