Prosecution Insights
Last updated: April 19, 2026
Application No. 17/936,950

DYNAMIC SLICING USING A BALANCED APPROACH BASED ON SYSTEM RESOURCES FOR MAXIMUM OUTPUT

Status: Final Rejection (§103)
Filed: Sep 30, 2022
Examiner: HARMON, COURTNEY N
Art Unit: 2159
Tech Center: 2100 — Computer Architecture & Software
Assignee: DELL PRODUCTS, L.P.
OA Round: 2 (Final)
Grant Probability: 62% (Moderate)
Expected OA Rounds: 3-4
Time to Grant: 3y 6m
Grant Probability with Interview: 72%

Examiner Intelligence

Career Allow Rate: 62% of resolved cases granted (262 granted / 425 resolved; +6.6% vs TC avg)
Interview Lift: +10.4% across resolved cases with an interview (moderate lift)
Typical Timeline: 3y 6m average prosecution; 22 applications currently pending
Career History: 447 total applications across all art units

Statute-Specific Performance

§101: 17.2% (-22.8% vs TC avg)
§103: 65.1% (+25.1% vs TC avg)
§102: 8.0% (-32.0% vs TC avg)
§112: 6.1% (-33.9% vs TC avg)

Tech Center averages are estimates. Based on career data from 425 resolved cases.

Office Action

§103
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

This action is responsive to the Applicant's reply filed on February 9, 2026. Claims 1 and 11 have been amended. Claims 2 and 12 have been cancelled. Claims 1 and 11 are independent. As a result, claims 1, 3-11, and 13-20 are pending in this Office action.

Response to Arguments

Applicant's argument filed February 9, 2026 regarding the rejection of claims 1-20 under 35 U.S.C. § 101 has been fully considered and is persuasive. Applicant argues in substance that claims 1-20 are directed to statutory subject matter because the claims as a whole integrate the judicial exception into a practical application and provide a technical improvement. The argument is persuasive; therefore, the rejection of claims 1-20 under 35 U.S.C. § 101 has been withdrawn.

Applicant's arguments filed February 9, 2026 regarding the rejection of claims 1 and 11 under 35 U.S.C. § 103 have been fully considered, but they are not persuasive.

a) Applicant argues that, regarding claims 1 and 11, Rakesh does not teach or suggest the limitation "detecting a directory" as disclosed in Applicant's invention. Examiner respectfully disagrees, while appreciating Applicant's interpretation of the description in the response. In Fig. 4, para [0039], Rakesh teaches: "The slicer 202 can slice the file system by the number of directories and files and based on depth of directories and files of the share. Using the file system directory example of FIG. 4, with a proposed slice of one level depth, the slicer would return five slices: Slices 1-4 would be the directories dir-1, dir-2, dir-3". In para [0040], Rakesh teaches: "Using the FIG. 4 example file system layout, the slicer could propose seven slices. Assuming that the slicer is configured to slice at 100 GB boundary, the slices can be as follows: Slice 1 is directory 1; Slice 2 is directories dir-2 and dir-3; Slice 3 is directory-4-2; Slice 4 is directory-4-1-2; Slice 5 is directory-4-1-1-1; Slice 6 is directory-4-1-1-2 and directory-4-1-1-3". Therefore, Rakesh teaches detecting a directory as a specific slice.

b) Applicant argues that, regarding claims 1 and 11, Rakesh does not teach or suggest the limitation "generating a crawl job specific to the detected directory." Examiner respectfully disagrees. In Fig. 1, Fig. 5, para [0037], Rakesh teaches: "NAS Backup agents 119 crawl the NAS share and create multiple slices of the entire share to backup these slices in parallel." In para [0038], Rakesh teaches: "The slicer 117 breaks up the file system into slices (units of work or sub-assets), and the backup agent 119 performs the backup tasks." In para [0039], Rakesh teaches: "The slicer 202 can slice the file system by the number of directories and files and based on depth of directories and files of the share. Using the file system directory example of FIG. 4, with a proposed slice of one level depth, the slicer would return five slices: Slices 1-4 would be the directories dir-1, dir-2, dir-3". Therefore, Rakesh teaches generating a backup task (crawl job) for the corresponding slice with the identified directory.

c) Applicant argues that, regarding claims 1 and 11, Rakesh does not teach or suggest the limitation "adding the generated crawl job to the thread pool for execution." Examiner respectfully disagrees. In Fig. 5 and Figs. 7-8, para [0040], Rakesh teaches: "FIG. 5, Backupset-1 corresponds to Slice 1 (dir-1), Backupset-2 corresponds to Slice 2 (dir-1, dir-3), Backupset-3 corresponds to Slice 3 (dir-4-2), Backupset-4 corresponds to Slice 4 (dir-4-1-2), Backupset-5 corresponds to Slice 5 (dir-4-1-1-1), Backupset-6 corresponds to Slice 6 (dir-4-1-1-2, dir-4-1-1-3), and Backupset-7 corresponds to Slice 7 (file-1, file-2, file-3)." In para [0048], Rakesh teaches: "System 700 includes a file system slicer agent 706 that partitions or reorganizes (slices) the directories and files of file system 702 into appropriately sized slices, using one or more of the slicing techniques described above. A backup agent 708 then sends a list of the directories and files assigned to each slice to a crawl process 710". In para [0049], Rakesh teaches: "The crawl process crawls the slices in parallel, so that, for example, if there are 16 slices, crawl process will run 16 threads for each slice. During an incremental backup, the crawl process detects whether a file has changed since a last backup, and if not, the file will be skipped". In para [0050], Rakesh teaches: "backup agents 708 use backup processes provided by a backup management process (e.g., 112 of FIG. 1). As such, they perform full, incremental, or differential backups with or without deduplication. A typical backup process comprises a full backup followed by a number of incremental backups, where the periodicity of full backups and intervening incremental backups is defined by a set backup schedule. A full backup is referred to as a 'level 0' backup. The backup agents perform certain tasks when working with level 0 backups". Therefore, Rakesh teaches adding backup jobs to threads.

d) Applicant argues that, regarding claims 1 and 11, Rakesh does not teach or suggest the limitation "performing each crawl job in the thread pool to crawl a respective directory." Examiner respectfully disagrees. In Figs. 7-8, para [0049], Rakesh teaches: "The crawl process crawls the slices in parallel, so that, for example, if there are 16 slices, crawl process will run 16 threads for each slice. During an incremental backup, the crawl process detects whether a file has changed since a last backup, and if not, the file will be skipped". In para [0074], Rakesh teaches: "The NAS agent passes the consolidated backup file of the previous backup to the NAS agent. It also passes the bucket of sub-assets to be backed up, which is the same as that for full backup. Here, the lookup for elements in each separate sub-asset (running in a separate thread) would be performed on the consolidated backup file only". In para [0075], Rakesh teaches: "Table 902 lists six example individual metadata files for sub-assets 1-6. These sub-assets are organized and named within an /ifs directory as dir-1, dir-2, dir-3, dir-4, dir-5, and dir-6. After backup operation, each sub-asset is referenced with a backup ID number, such as '12345' for the backup operation, thus generating metadata files dir-x-12345 for each of the six sub-assets, as shown in table 902." Therefore, Rakesh teaches performing backups in threads, crawling respective directories.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains.
Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 3-11, and 13-20 are rejected under 35 U.S.C. 103 as being unpatentable over Patle et al. (US 2021/0149703) (hereinafter Patle) in view of Rakesh et al. (US 2022/0334925) (hereinafter Rakesh).

Regarding claim 1, Patle teaches a method, comprising: gathering system resource information regarding availability of hardware resources (see Fig. 3, para [0034-0035], which discloses gathering CPU and memory resource information for allocations to a database system); based on the system resource information, identifying a thread pool size (see Fig. 3, para [0035], which discloses dynamically controlling the size of a thread pool based on CPU and memory resource limits); and starting a thread pool having the thread pool size, wherein the thread pool size is dynamically adjustable based on changes in the system resources (see para [0035-0036], which discloses a thread manager dynamically controlling the size of the thread pool to reflect changes in a CPU limitation).

Patle does not explicitly teach crawling a filesystem to traverse the filesystem, comprising: detecting a directory; generating a crawl job specific to the detected directory; adding the generated crawl job to the thread pool for execution; and performing each crawl job in the thread pool to crawl a respective directory.

Rakesh teaches crawling a filesystem to traverse the filesystem, comprising: detecting a directory (see Fig. 4, para [0039-0040], which discloses detecting a directory as a specific slice); generating a crawl job specific to the detected directory (see Fig. 1, Fig. 5, para [0037-0039], which discloses generating a backup task (crawl job) for the corresponding slice with the identified directory); adding the generated crawl job to the thread pool for execution (see Fig. 5, Figs. 7-8, para [0040], para [0048-0050], which discloses adding backup jobs to threads); and performing each crawl job in the thread pool to crawl a respective directory (see Figs. 7-8, para [0049], para [0074-0075], which discloses performing backups in threads, crawling respective directories).

Patle and Rakesh are analogous art, as each is from the same field of endeavor of database systems. Before the effective filing date of the invention, it would have been obvious to a person of ordinary skill in the art to modify the system of Patle to identify crawl jobs to perform, per the disclosure of Rakesh. The motivation to combine is disclosed by Rakesh, which "enhances filesystem crawlers to make storing and searching for metadata scalable" (para [0005]); identifying crawl jobs to perform is well known to persons of ordinary skill in the art, and one of ordinary skill would therefore have had good reason to pursue the known options within his or her technical grasp, with anticipated success.

Regarding claim 11, Patle teaches a non-transitory storage medium having stored therein instructions that are executable by one or more hardware processors to perform operations (see para [0006], which discloses a medium and processor) comprising: gathering system resource information regarding availability of hardware resources (see Fig. 3, para [0034-0035], which discloses gathering CPU and memory resource information for allocations to a database system); based on the system resource information, identifying a thread pool size (see Fig. 3, para [0035], which discloses dynamically controlling the size of a thread pool based on CPU and memory resource limits); and starting a thread pool having the thread pool size, wherein the thread pool size is dynamically adjustable based on changes in the system resources (see para [0035-0036], which discloses a thread manager dynamically controlling the size of the thread pool to reflect changes in a CPU limitation).

Patle does not explicitly teach crawling a filesystem by: detecting a directory; generating a crawl job specific to the detected directory; adding the generated crawl job to the thread pool for execution; and performing each crawl job in the thread pool to crawl a respective directory.

Rakesh teaches crawling a filesystem by: detecting a directory (see Fig. 4, para [0039-0040], which discloses detecting a directory as a specific slice); generating a crawl job specific to the detected directory (see Fig. 1, Fig. 5, para [0037-0039], which discloses generating a backup task (crawl job) for the corresponding slice with the identified directory); adding the generated crawl job to the thread pool for execution (see Fig. 5, Figs. 7-8, para [0040], para [0048-0050], which discloses adding backup jobs to threads); and performing each crawl job in the thread pool to crawl a respective directory (see Figs. 7-8, para [0049], para [0074-0075], which discloses performing backups in threads, crawling respective directories).

Patle and Rakesh are analogous art, as each is from the same field of endeavor of database systems. Before the effective filing date of the invention, it would have been obvious to a person of ordinary skill in the art to modify the system of Patle to identify crawl jobs to perform, per the disclosure of Rakesh.
The motivation to combine is disclosed by Rakesh, which "enhances filesystem crawlers to make storing and searching for metadata scalable" (para [0005]); identifying crawl jobs to perform is well known to persons of ordinary skill in the art, and one of ordinary skill would therefore have had good reason to pursue the known options within his or her technical grasp, with anticipated success.

Regarding claims 3 and 13, Patle/Rakesh teach the method of claim 1 and the medium of claim 11. Patle further teaches wherein the thread pool size is dynamically adjusted based on a detected change in the system resources (see para [0035-0036], which discloses detecting dynamic changes in resource limits and automatically changing the thread pool size to reflect changes in CPU limitations).

Regarding claims 4 and 14, Patle/Rakesh teach the method of claim 1 and the medium of claim 11. Patle does not explicitly teach wherein performing one of the crawl jobs identifies a size of a directory to which the crawl job is directed. Rakesh teaches this limitation (see Fig. 5, para [0042-0043], which discloses identifying the size of a directory to crawl in order to combine slices into a directory of optimal size).

Regarding claims 5 and 15, Patle/Rakesh teach the method of claim 1 and the medium of claim 11. Patle does not explicitly teach wherein the filesystem resources comprise hardware and/or software resources available to perform crawl jobs. Rakesh teaches this limitation (see Fig. 2, Fig. 7, para [0048], which discloses a backup agent sending a list of directories and files to crawl).

Regarding claims 6 and 16, Patle/Rakesh teach the method of claim 1 and the medium of claim 11. Patle does not explicitly teach wherein, when a filesystem node other than a directory is encountered by the crawling, a file count and a folder size are incremented for that filesystem node. Rakesh teaches this limitation (see Fig. 3, Fig. 5, para [0043], para [0050, 0054], which discloses file counts and subdirectory sizes for incremental backups).

Regarding claims 7 and 17, Patle/Rakesh teach the method of claim 1 and the medium of claim 11. Patle does not explicitly teach wherein the thread pool comprises a wait queue. Rakesh teaches this limitation (see para [0049], which discloses a queue for slices).

Regarding claims 8 and 18, Patle/Rakesh teach the method of claim 1 and the medium of claim 11. Patle does not explicitly teach wherein one of the crawl jobs is performed while the crawling is ongoing. Rakesh teaches this limitation (see Fig. 7, para [0049], which discloses the crawl process crawling slices in parallel).

Regarding claims 9 and 19, Patle/Rakesh teach the method of claim 1 and the medium of claim 11. Patle does not explicitly teach wherein one of the crawl jobs comprises slicing data in a directory according to one or more criteria. Rakesh teaches this limitation (see Fig. 6, para [0043], which discloses slicing a directory to achieve an optimal size).

Regarding claims 10 and 20, Patle/Rakesh teach the method of claim 1 and the medium of claim 11. Patle does not explicitly teach wherein a data slice, of a directory, created by one of the crawl jobs is re-sliced in response to a change detected in a size of the directory in the filesystem. Rakesh teaches this limitation (see Fig. 6, para [0042-0043], which discloses re-slicing until a directory of optimal size is reached).

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to COURTNEY HARMON, whose telephone number is (571) 270-5861. The examiner can normally be reached M-F, 9am-5pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Ann Lo, can be reached at 571-272-9767. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Courtney Harmon/ Primary Examiner, Art Unit 2159

Prosecution Timeline

Sep 30, 2022
Application Filed
Nov 06, 2025
Non-Final Rejection — §103
Jan 29, 2026
Interview Requested
Feb 09, 2026
Applicant Interview (Telephonic)
Feb 09, 2026
Examiner Interview Summary
Feb 09, 2026
Response Filed
Mar 18, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602439
SEARCH EXPERIENCE MANAGEMENT SYSTEM
2y 5m to grant; granted Apr 14, 2026
Patent 12566772
SYSTEMS AND METHODS FOR DATA INGESTION FOR SUPPLY CHAIN OPTIMIZATION
2y 5m to grant; granted Mar 03, 2026
Patent 12561310
METADATA REFRESHMENT FOR A WEB SERVICE
2y 5m to grant; granted Feb 24, 2026
Patent 12547612
ATOMIC AND INCREMENTAL TARGET STATE DEFINITIONS FOR DATABASE ENTITIES
2y 5m to grant; granted Feb 10, 2026
Patent 12536157
REPORT MANAGEMENT SYSTEM
2y 5m to grant; granted Jan 27, 2026
These five most recent grants show what changed to get claims past this examiner.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 62%
With Interview: 72% (+10.4%)
Median Time to Grant: 3y 6m
PTA Risk: Moderate
Based on 425 resolved cases by this examiner. Grant probability derived from career allow rate.
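The headline projections follow directly from the career data. The short check below reproduces them; the additive interview-lift model is an assumption about how the figures combine, not a documented formula.

```python
# Career allow rate from the examiner's resolved cases.
granted, resolved = 262, 425
allow_rate = granted / resolved          # ~0.616

# Assumed model: the interview lift adds to the base rate.
interview_lift = 0.104
with_interview = allow_rate + interview_lift

print(round(allow_rate * 100))           # 62
print(round(with_interview * 100))       # 72
```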
