Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory obviousness-type double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); and In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on a nonstatutory double patenting ground provided the conflicting application or patent either is shown to be commonly owned with this application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. Effective January 1, 1994, a registered attorney or agent of record may sign a terminal disclaimer. A terminal disclaimer signed by the assignee must fully comply with 37 CFR 3.73(b).
Claims 2-21 are rejected on the ground of nonstatutory obviousness-type double patenting as being unpatentable over claims 1-20 of Patent No. 12/242356. Although the conflicting claims are not identical, they are not patentably distinct from each other because claims 1-20 of Patent No. 12/242356 contain every element of the claims of the instant application and, as such, anticipate the claims of the instant application (see table below).
Instant Application claim 1:
A system comprising one or more computers and one or more storage devices on which are stored instructions that are operable, when executed by the one or more computers, to cause the one or more computers to perform operations comprising:
for a backup process of a data object that has a size or an estimated backup time that satisfies a separate fetch data record criterion, accessing one or more markers for the data object and that each indicate an endpoint of a corresponding block from two or more blocks in the data object; and
providing, to each of two or more backup workers at least partially concurrently and using the one or more markers, instructions that cause the respective backup worker to fetch a respective block from the two or more blocks in the data object.
Patent No. 12/242356 claim 1:
A computer-implemented method comprising:
determining, for a data object of a backup process for a source system, whether a size of the data object or an estimated backup time of the data object satisfies a criterion that, when satisfied, indicates that at least two blocks of the data object should be separately fetched from the source system by different workers;
determining one or more markers for end points of the at least two blocks using data from a prior backup of the data object; and
causing, at least partially concurrently for two or more blocks from the at least two blocks, a respective backup worker to fetch the respective block from the source system using at least one marker from the one or more markers that defines an end of the respective block.
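For illustration only, the flow recited in the compared claims (checking whether the size or estimated backup time satisfies the separate-fetch criterion, computing end-point markers from prior-backup data, and having workers fetch their blocks at least partially concurrently) can be sketched as follows. This is not the patented implementation; the threshold values, the record model, and all function names are assumptions made for the sketch.

```python
# Illustrative sketch only, NOT the claimed implementation.
from concurrent.futures import ThreadPoolExecutor

SIZE_THRESHOLD = 1000          # hypothetical criterion values
TIME_THRESHOLD = 60.0

def satisfies_criterion(size, est_backup_time):
    # Criterion: either the size or the estimated backup time is large
    # enough that blocks should be fetched separately by different workers.
    return size >= SIZE_THRESHOLD or est_backup_time >= TIME_THRESHOLD

def markers_from_prior_backup(prior_record_counts, num_blocks):
    # Use data from a prior backup (here, per-run record counts) to place
    # end-point markers that split the object into roughly equal blocks.
    total = sum(prior_record_counts)
    step = total // num_blocks
    return [step * (i + 1) for i in range(num_blocks - 1)] + [total]

def fetch_block(records, start, end):
    # One backup worker fetches the records in [start, end).
    return records[start:end]

def backup(records, prior_record_counts, num_blocks=2):
    if not satisfies_criterion(len(records), est_backup_time=0.0):
        # Criterion not satisfied: fetch the object as a single block.
        return [fetch_block(records, 0, len(records))]
    markers = markers_from_prior_backup(prior_record_counts, num_blocks)
    starts = [0] + markers[:-1]
    # At least partially concurrently, each worker fetches its block,
    # bounded by the marker that defines the end of that block.
    with ThreadPoolExecutor(max_workers=num_blocks) as pool:
        futures = [pool.submit(fetch_block, records, s, e)
                   for s, e in zip(starts, markers)]
        return [f.result() for f in futures]
```

The sketch deliberately reduces "markers" to integer record offsets; the claims leave the marker representation open.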
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 2-6, 11-13, 15-19, and 21 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Brenner et al. (U.S. Pub. 2022/0121525 A1).
1. (Canceled)
With respect to claims 2, 15, and 21, Brenner et al. discloses a system comprising one or more computers and one or more storage devices on which are stored instructions that are operable, when executed by the one or more computers, to cause the one or more computers to perform operations comprising:
for a backup process of a data object that has a size or an estimated backup time that satisfies a separate fetch data record criterion (i.e., “In order to overcome the issue of traversing millions and billions of files over a network mount, embodiments include two processes, a slicer and backup agent. The slicer breaks up the file system into slices (units of work) and the backup agent performs the backup work, in which a unit of work is backup data.”(0024), where the file system corresponds to the claimed data object, and “the different slicing techniques 302-306 of slicer 202 use certain threshold values for the number of files and/or the size of each file. These values define a minimum or maximum value that triggers the grouping of directories into a single slice or the slicing of a directory into sub-directories for forming smaller slices. Any appropriate size or number value may be used as the threshold value for the size, file count, and depth-based slicing methods, depending on system constraints and method”(0040)), accessing one or more markers for the data object and that each indicate an endpoint of a corresponding block from two or more blocks in the data object (i.e., “In order to overcome the issue of traversing millions and billions of files over a network mount, embodiments include two processes, a slicer and backup agent. The slicer breaks up the file system into slices (units of work) and the backup agent performs the backup work, in which a unit of work is backup data.”(0024), where the slices correspond to the claimed markers, and “Any number of techniques can be used to slice the file system. FIG. 3 illustrates three example slice techniques that can be used for the slicer 202, and each slice technique solves a particular problem that is faced in present file system crawlers. As shown in FIG. 3, the file system techniques include depth-based slicing 302, size-based slicing 304, and file count-based slicing 306.”(0032) and “For file systems that are large in size, size-based slicing 304 is used. In this method, the slicer 202 slices the file system by the size of files.”(0034) and “the slicer looks at previous backup runs and stores this knowledge of past runs so that it can switch slicing algorithms from one to another or use a combination of different slicing algorithms. The slicer makes the switch by looking at previous runs and following the set of rules associated with each algorithm”(0048)); and
providing, to each of two or more backup workers at least partially concurrently and using the one or more markers, instructions that cause the respective backup worker to fetch a respective block from the two or more blocks in the data object (i.e., “Once the slicing is completed, the backup agent initiates a backup of all slices in parallel on single or multiple proxy hosts”(0053) and “dynamically computing a number of parallel backup streams depending on a layout of the file system layout; and performing the slicing methods to provide for balanced file system recovery of the backed up data.”(claim 16), where multiple backup streams are used to fetch the sliced data from the source system based on the end-point markers).
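The size-based slicing that Brenner describes in paragraphs 0034 and 0040 (grouping files into slices, with a size threshold triggering the start of a new slice) can be sketched as follows; the file names, sizes, and threshold value here are illustrative assumptions, not data from the reference.

```python
# Rough sketch of Brenner-style size-based slicing (paras 0034, 0040).
# All inputs are hypothetical; this is not code from the reference.
def size_based_slices(files, max_slice_bytes):
    """files: list of (name, size) tuples; returns a list of slices,
    each slice being a list of file names."""
    slices, current, current_size = [], [], 0
    for name, size in files:
        current.append(name)
        current_size += size
        if current_size >= max_slice_bytes:
            # Threshold reached: close this slice and start a new one.
            slices.append(current)
            current, current_size = [], 0
    if current:
        slices.append(current)
    return slices
```

A backup agent could then hand each returned slice to a separate worker, which is the parallelism described in paragraph 0053.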
With respect to claims 3, 16, Brenner et al. discloses wherein: each block from the two or more blocks in the data object comprises one or more records from the data object; and providing the instructions causes the respective backup worker to fetch at least some of the one or more records in the respective block that is at least partially defined by a corresponding marker from the one or more markers (i.e., “For incremental backups the slicing data and backup agents are combined. For each incremental backup, the slicer can look at the previous backup and slice using one or more of the slicing techniques as described above”(0045) and “In an example where re-slicing is based on the size of files in a directory, if a directory and its sub-directories contain files with more than a few GBs (e.g., 100 GB), then the backup time of each directory will be very large. To reduce this backup time, on each directory, re-slice the directory based on size greater than average size of other directories”(0047)).
With respect to claims 4, 17, Brenner et al. discloses the system of claim 3, wherein providing the instructions causes the respective backup worker to fetch a subset of the one or more records that were included in a previously backed-up instance of the respective block (i.e., “the slicer looks at previous backup runs and stores this knowledge of past runs so that it can switch slicing algorithms from one to another or use a combination of different slicing algorithms. The slicer makes the switch by looking at previous runs and following the set of rules associated with each algorithm”(0048)).
With respect to claims 5 and 18, Brenner et al. discloses the system of claim 3, wherein providing the instructions causes the respective backup worker to fetch one or more new records that were not included in a previously backed-up instance of the respective block (i.e., “Any number of techniques can be used to slice the file system. FIG. 3 illustrates three example slice techniques that can be used for the slicer 202, and each slice technique solves a particular problem that is faced in present file system crawlers. As shown in FIG. 3, the file system techniques include depth-based slicing 302, size-based slicing 304, and file count-based slicing 306.”(0032); because each slice technique addresses a particular problem, new records are re-sliced until the slices reach an optimal size (step 604, Fig. 6)).
With respect to claims 6 and 19, Brenner et al. discloses the operations comprising: before providing the instructions, computing the one or more markers for the data object so that a first block from the two or more blocks has a first size that satisfies a size similarity criterion for a second size of a second block in the two or more blocks (i.e., “defining a first threshold value for the defined total size of files in each slice with a first margin of deviation; and defining a second threshold value for the defined number of files in each slice with a second margin of deviation.”(claim 2), where the threshold values with margins of deviation for each slice correspond to the claimed size similarity criterion).
With respect to claim 11, Brenner et al. discloses the system of claim 6, wherein computing the one or more markers uses data for a prior backup of the data object (i.e., “For incremental backups the slicing data and backup agents are combined. For each incremental backup, the slicer can look at the previous backup and slice using one or more of the slicing techniques as described above”(0045)).
With respect to claim 12, Brenner et al. discloses the system of claim 6, wherein computing the one or more markers comprises computing the one or more markers that indicate record ranges for the blocks in the two or more blocks (i.e., “defining a first threshold value for the defined total size of files in each slice with a first margin of deviation;”(claim 2)).
With respect to claim 13, Brenner et al. discloses the operations comprising:
initiating a sequential backup of the two or more blocks in the data object (i.e., “Once the slicing is completed, the backup agent initiates a backup of all slices in parallel on single or multiple proxy hosts.”(0053)); and
after initiating the sequential backup, determining whether the data object has the size or the estimated backup time that satisfies the separate fetch data record criterion (i.e., “In order to overcome the issue of traversing millions and billions of files over a network mount, embodiments include two processes, a slicer and backup agent. The slicer breaks up the file system into slices (units of work) and the backup agent performs the backup work, in which a unit of work is backup data.”(0024), where the file system corresponds to the claimed data object, and “the different slicing techniques 302-306 of slicer 202 use certain threshold values for the number of files and/or the size of each file. These values define a minimum or maximum value that triggers the grouping of directories into a single slice or the slicing of a directory into sub-directories for forming smaller slices. Any appropriate size or number value may be used as the threshold value for the size, file count, and depth-based slicing methods, depending on system constraints and method”(0040)),
wherein providing, to each of two or more backup workers at least partially concurrently and using the one or more markers (i.e., “the slicer could perform slicing not by depth or size, but rather by the file count 306. This addresses the challenge where the file system is very dense and may have millions or even billions of small-sized files. Directories with large number of files can be broken into multiple small slices and allow backup agents to run more threads in parallel during backup.”(0035)), the instructions that cause the respective backup worker to fetch the respective block from the two or more blocks in the data object is responsive to determining that the data object has the size or the estimated backup time that satisfies the separate fetch data record criterion (i.e., “Once the slicing is completed, the backup agent initiates a backup of all slices in parallel on single or multiple proxy hosts”(0053) and “dynamically computing a number of parallel backup streams depending on a layout of the file system layout; and performing the slicing methods to provide for balanced file system recovery of the backed up data.”(claim 16), where multiple backup streams are used to fetch the sliced data from the source system based on the end-point markers).
Allowable Subject Matter
Claims 7-10 and 20 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims, since the prior art of record and considered pertinent to the applicant’s disclosure does not teach or suggest the claimed limitations wherein: the two or more blocks comprise three or more blocks including a first block subset and a second, non-overlapping block subset that includes at most a single block; and computing the one or more markers comprises computing two or more markers so that all blocks in the first block subset have corresponding sizes that satisfy the size similarity criterion for the sizes of the other blocks in the first block subset while any blocks in the second, non-overlapping block subset has a size that does not satisfy the size similarity criterion for the sizes of the blocks in the first block subset, wherein computing the one or more markers comprises: sorting a plurality of records in the data object according to a primary identifier; computing a predicted block size; selecting, for at least some blocks from the two or more blocks and using the predicted block size, a subset of consecutive sorted records from the plurality of records using the predicted block size and the size similarity criterion; and selecting, as a marker for a block in the two or more blocks, a primary identifier using a record at an end of the subset of consecutive sorted records for the respective block, wherein the primary identifier is an identifier for the record at the end of the subset of consecutive sorted records, wherein the primary identifier is an identifier for an adjacent record from another block, wherein the adjacent record is in the data object and next to the record at the end of the subset of consecutive sorted records.
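The marker-computation steps recited above (sort records by a primary identifier, compute a predicted block size, gather consecutive sorted records up to that size, then take the identifier of the adjacent record from the next block as the marker) can be sketched as follows. This is only one possible reading of the claim language; the record representation and the size model are assumptions.

```python
# Sketch of the recited marker-selection steps; illustrative only,
# not the applicant's implementation.
def compute_markers(records, num_blocks):
    """records: list of (primary_id, size) tuples.
    Returns (markers, blocks), where each marker is the primary
    identifier of the adjacent record that starts the next block."""
    # Sort the plurality of records according to a primary identifier.
    ordered = sorted(records, key=lambda r: r[0])
    total_size = sum(size for _, size in ordered)
    predicted = total_size / num_blocks      # predicted block size
    markers, blocks, current, acc = [], [], [], 0
    for i, (pid, size) in enumerate(ordered):
        current.append(pid)
        acc += size
        # Close a block once the subset of consecutive sorted records
        # reaches the predicted block size.
        if (acc >= predicted and len(blocks) < num_blocks - 1
                and i + 1 < len(ordered)):
            # Marker: identifier of the adjacent record from the
            # next block (the record after the end of this subset).
            markers.append(ordered[i + 1][0])
            blocks.append(current)
            current, acc = [], 0
    if current:
        blocks.append(current)
    return markers, blocks
```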
Claim 14 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims, since the prior art of record and considered pertinent to the applicant’s disclosure does not teach or suggest the claimed operations comprising: after providing the instructions that cause the respective backup worker to fetch the respective block from the two or more blocks in the data object, determining whether an updated estimated backup time for the data object satisfies the separate fetch data record criterion; and in response to determining that the updated estimated backup time for the data object does not satisfy the separate fetch data record criterion: determining to switch to sequential backup of remaining blocks for the data object; and sending, to a single backup worker, instructions to cause the single backup worker to fetch at least some of the remaining blocks for the data object.
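The fallback recited in claim 14 (re-estimate the backup time after parallel fetching starts and, if the criterion is no longer met, hand the remaining blocks to a single worker) can be sketched as follows. The rate model, threshold value, and worker labels are illustrative assumptions only.

```python
# Sketch of the claim 14 fallback; illustrative only, not the
# applicant's implementation.
TIME_THRESHOLD = 60.0   # hypothetical separate-fetch criterion (seconds)

def updated_estimate(remaining_bytes, observed_rate):
    # Updated estimated backup time from observed transfer rate.
    return remaining_bytes / observed_rate

def schedule_remaining(remaining_blocks, remaining_bytes, observed_rate):
    if updated_estimate(remaining_bytes, observed_rate) >= TIME_THRESHOLD:
        # Criterion still satisfied: keep fetching blocks in parallel.
        return [("parallel-worker", b) for b in remaining_blocks]
    # Criterion no longer satisfied: switch to sequential backup and
    # send all remaining blocks to a single backup worker.
    return [("single-worker", remaining_blocks)]
```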
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to HUNG T VY whose telephone number is (571)272-1954. The examiner can normally be reached M-F 8-5.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Tony Mahmoudi can be reached at (571)272-4078. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/HUNG T VY/Primary Examiner, Art Unit 2163 January 9, 2026