Prosecution Insights
Last updated: April 19, 2026
Application No. 18/155,786

ADAPTIVE PARAMETERIZATION OF PARALLELIZED FILE SYSTEM OPERATIONS

Status: Non-Final OA (§102, §103)
Filed: Jan 18, 2023
Examiner: AGHARAHIMI, FARHAD
Art Unit: 2161
Tech Center: 2100 — Computer Architecture & Software
Assignee: International Business Machines Corporation
OA Round: 1 (Non-Final)

Grant Probability: 70% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 5m
With Interview: 85%

Examiner Intelligence

Career Allow Rate: 70% (194 granted / 275 resolved) — above average, +15.5% vs Tech Center avg
Interview Lift: +14.5% — moderate; allow rate for resolved cases with an interview vs. without
Avg Prosecution: 3y 5m (typical timeline)
Total Applications: 301 across all art units, 26 currently pending

Statute-Specific Performance

§101: 13.6% (-26.4% vs TC avg)
§102: 9.8% (-30.2% vs TC avg)
§103: 63.5% (+23.5% vs TC avg)
§112: 8.4% (-31.6% vs TC avg)

Tech Center averages are estimates. Based on career data from 275 resolved cases.

Office Action

§102, §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on January 18, 2023 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

The information disclosure statement (IDS) submitted on March 6, 2026 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Interpretation

It is the position of the Examiner that the computer readable storage medium of Claims 17-20 is limited to non-transitory computer readable storage media in view of paragraph [0023] of the Specification.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1, 3-6, 10, 12, 13, 17, 19, and 20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Li (PG Pub. No. 2018/0351816 A1).

Regarding Claim 1, Li discloses a computer-implemented method of facilitating processing within a computing environment, the computer-implemented method comprising:

obtaining a parameterization for a parallelized file system operation of a file system of the computing environment (see Li, paragraph [0022], where the method comprises: identifying status values of the target system to collect, parameters of the subject system to tune, a tuning goal, and a parameter tuning logic; and deploying the parameter tuning logic in one or more clouds; see also paragraph [0079], where cloud tuning service 106 can provide software client agent for popular subject systems, such as the Windows Operating System, the IBM GPFS Cluster File System);

executing, and determining performance of, the parallelized file system operation with the parameterization (see Li, paragraph [0022], where the method comprises … reading the state values and parameter values (values of said parameters) of the subject system at intervals);

using machine learning to adjust one or more parameters of the parameterization based on performance of the parallelized file system operation to obtain a tuned parameterization (see Li, paragraph [0027], where parameter tuning logic has three sets of inputs: the tuning goal, the state values, and the parameter values … there are many ways to implement the parameter tuning logic … neural networks and reinforcement learning, and other similar machine learning and artificial intelligence methods can also be used); and

executing the parallelized file system operation with the tuned parameterization, wherein the adjusting one or more parameters of the parameterization enhances performance of the parallelized file system operation within the computing environment (see Li, paragraph [0029], where parameter tuning instructions are transmitted back to the subject system periodically; the parameter tuning instructions are then executed to change the according parameters periodically).
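To make the mapping concrete, the Claim 1 steps describe a measure-adjust-rerun loop. The sketch below is purely illustrative (Python, with hypothetical names and a stubbed operation; it is not taken from the application or from Li):

```python
import time

def run_operation(params: dict) -> float:
    """Execute the parallelized file system operation with the given
    parameterization and return a performance measurement (here, the
    elapsed wall-clock time in seconds)."""
    start = time.monotonic()
    # ... perform the parallelized scan/copy/migrate using `params` ...
    return time.monotonic() - start

def tune(params: dict, adjust, rounds: int = 10) -> dict:
    """Obtain a parameterization, execute and measure the operation,
    let a learned policy propose adjustments, then execute again with
    the tuned parameterization."""
    for _ in range(rounds):
        elapsed = run_operation(params)   # execute + determine performance
        params = adjust(params, elapsed)  # machine-learned adjustment step
    run_operation(params)                 # execute with tuned parameterization
    return params
```

Here `adjust` stands in for whatever learned policy (e.g., a reinforcement-learning agent, as Li's paragraph [0027] contemplates) maps observed performance to new parameter values.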
Regarding Claim 3, Li discloses the computer-implemented method of Claim 1, wherein the parameterization comprises an initial parameterization based on one or more expected average parameter values (see Li, paragraph [0006], where a parameter can be set to a certain value, and the value of these parameters change how the systems behave and offer a means to customize the system to meet different user requirements).

Regarding Claim 4, Li discloses the computer-implemented method of Claim 1, wherein determining performance of the parallelized file system operation with the parameterization comprises collecting data associated with the executing of the parallelized file system operation with the parameterization (see Li, paragraph [0022], where the method comprises: identifying status values of the target system to collect, parameters of the subject system to tune, a tuning goal, and a parameter tuning logic; and deploying the parameter tuning logic in one or more clouds), and storing the collected data (see Li, paragraph [0036], where system can further comprise storing the state values, the parameter values, and parameter tuning instructions in a data store).

Regarding Claim 5, Li discloses the computer-implemented method of Claim 4, wherein using machine learning to adjust one or more parameters of the parameterization comprises determining a parameter delta for a parameter of the one or more parameters of the parameterization, and based on the parameter delta exceeding a parameter delta threshold, adjusting the parameter of the one or more parameters of the parameterization to, at least in part, obtain the tuned parameterization (see Li, paragraph [0073], where configuration options for each parameter depend on the subject system and can include, but are not limited to, name of the parameter, how it should be set, valid range or set of values, collection interval, time limit of changing the parameter, conditions in which the parameter needs being tuned, and preprocessing instructions … the conditions in which the parameter needs being tuned is a collection of conditions, and when they are met, the parameter needs being tuned; samples for such conditions include, but are not limited to, when a certain state value is lower or higher than a threshold, the value of a certain parameter is lower or higher than a threshold, a job of a certain name has started, etc., or any combination of them).

Regarding Claim 6, Li discloses the computer-implemented method of Claim 5, wherein using machine learning to adjust one or more parameters of the parameterization comprises using a machine learning model for mapping the collected data to the one or more parameters of the parameterization, the one or more parameters effecting a desired operational characteristic for executing the parallelized file system operation (see Li, paragraph [0037], where computing of parameter tuning instructions can further comprise: using one or more machine learning or artificial intelligence methods to analyze the state values and the parameter values at intervals; and training one or more models using the state values, the parameter values, and the parameter tuning instructions from one or more subject systems at intervals; and generating parameter tuning instructions at intervals).
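The Claim 5 delta-threshold language can be pictured as gating each model-proposed change on the size of the change. A minimal sketch, assuming a relative delta and a hypothetical 10% threshold (neither the formula nor the value appears in the record):

```python
DELTA_THRESHOLD = 0.10  # assumed 10% relative threshold; not from the application

def apply_deltas(params: dict, proposed: dict) -> dict:
    """Adopt a model-proposed value only when the relative change
    (the "parameter delta") exceeds the threshold."""
    tuned = dict(params)
    for name, new_value in proposed.items():
        old_value = params[name]
        delta = abs(new_value - old_value) / max(abs(old_value), 1e-9)
        if delta > DELTA_THRESHOLD:   # ignore changes too small to matter
            tuned[name] = new_value
    return tuned
```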
Regarding Claim 10, Li discloses a computer system for facilitating processing within a computing environment, the computer system comprising: a memory (see Li, Claim 16, wherein the parameter tuning logic comprises … a memory coupled with and readable by the processor); and at least one processor in communication with the memory (see Li, wherein the parameter tuning logic comprises a processor), wherein the computer system is configured to perform a method, the method comprising:

obtaining a parameterization for a parallelized file system operation of a file system of the computing environment (see Li, paragraph [0022], where the method comprises: identifying status values of the target system to collect, parameters of the subject system to tune, a tuning goal, and a parameter tuning logic; and deploying the parameter tuning logic in one or more clouds; see also paragraph [0079], where cloud tuning service 106 can provide software client agent for popular subject systems, such as the Windows Operating System, the IBM GPFS Cluster File System);

executing, and determining performance of, the parallelized file system operation with the parameterization (see Li, paragraph [0022], where the method comprises … reading the state values and parameter values (values of said parameters) of the subject system at intervals);

using machine learning to adjust one or more parameters of the parameterization based on performance of the parallelized file system operation to obtain a tuned parameterization (see Li, paragraph [0027], where parameter tuning logic has three sets of inputs: the tuning goal, the state values, and the parameter values … there are many ways to implement the parameter tuning logic … neural networks and reinforcement learning, and other similar machine learning and artificial intelligence methods can also be used); and

executing the parallelized file system operation with the tuned parameterization, wherein the adjusting one or more parameters of the parameterization enhances performance of the parallelized file system operation within the computing environment (see Li, paragraph [0029], where parameter tuning instructions are transmitted back to the subject system periodically; the parameter tuning instructions are then executed to change the according parameters periodically).
Regarding Claim 12, Li discloses the computer system of Claim 10, wherein determining performance of the parallelized file system operation with the parameterization comprises collecting data associated with the executing of the parallelized file system operation with the parameterization (see Li, paragraph [0022], where the method comprises: identifying status values of the target system to collect, parameters of the subject system to tune, a tuning goal, and a parameter tuning logic; and deploying the parameter tuning logic in one or more clouds), and storing the collected data (see Li, paragraph [0036], where system can further comprise storing the state values, the parameter values, and parameter tuning instructions in a data store), and wherein using machine learning to adjust one or more parameters of the parameterization comprises determining a parameter delta for a parameter of the one or more parameters of the parameterization, and based on the parameter delta exceeding a parameter delta threshold, adjusting the parameter of the one or more parameters of the parameterization to, at least in part, obtain the tuned parameterization (see Li, paragraph [0073], where configuration options for each parameter depend on the subject system and can include, but are not limited to, name of the parameter, how it should be set, valid range or set of values, collection interval, time limit of changing the parameter, conditions in which the parameter needs being tuned, and preprocessing instructions … the conditions in which the parameter needs being tuned is a collection of conditions, and when they are met, the parameter needs being tuned; samples for such conditions include, but are not limited to, when a certain state value is lower or higher than a threshold, the value of a certain parameter is lower or higher than a threshold, a job of a certain name has started, etc., or any combination of them).

Regarding Claim 13, Li discloses the computer system of Claim 12, wherein using machine learning to adjust one or more parameters of the parameterization comprises using a machine learning model for mapping the collected data to the one or more parameters of the parameterization, the one or more parameters effecting a desired operational characteristic for executing the parallelized file system operation (see Li, paragraph [0037], where computing of parameter tuning instructions can further comprise: using one or more machine learning or artificial intelligence methods to analyze the state values and the parameter values at intervals; and training one or more models using the state values, the parameter values, and the parameter tuning instructions from one or more subject systems at intervals; and generating parameter tuning instructions at intervals).
Regarding Claim 17, Li discloses a computer program product for facilitating processing with a computing environment, the computer program product comprising: one or more computer readable storage media and program instructions collectively stored on one or more computer readable storage media (see Li, paragraph [0123], where machine-executable instructions may be stored on one or more machine readable mediums) to perform a method comprising:

obtaining a parameterization for a parallelized file system operation of a file system of the computing environment (see Li, paragraph [0022], where the method comprises: identifying status values of the target system to collect, parameters of the subject system to tune, a tuning goal, and a parameter tuning logic; and deploying the parameter tuning logic in one or more clouds; see also paragraph [0079], where cloud tuning service 106 can provide software client agent for popular subject systems, such as the Windows Operating System, the IBM GPFS Cluster File System);

executing, and determining performance of, the parallelized file system operation with the parameterization (see Li, paragraph [0022], where the method comprises … reading the state values and parameter values (values of said parameters) of the subject system at intervals);

using machine learning to adjust one or more parameters of the parameterization based on performance of the parallelized file system operation to obtain a tuned parameterization (see Li, paragraph [0027], where parameter tuning logic has three sets of inputs: the tuning goal, the state values, and the parameter values … there are many ways to implement the parameter tuning logic … neural networks and reinforcement learning, and other similar machine learning and artificial intelligence methods can also be used); and

executing the parallelized file system operation with the tuned parameterization, wherein the adjusting one or more parameters of the parameterization enhances performance of the parallelized file system operation within the computing environment (see Li, paragraph [0029], where parameter tuning instructions are transmitted back to the subject system periodically; the parameter tuning instructions are then executed to change the according parameters periodically).
Regarding Claim 19, Li discloses the computer program product of Claim 17, wherein determining performance of the parallelized file system operation with the parameterization comprises collecting data associated with the executing of the parallelized file system operation with the parameterization (see Li, paragraph [0022], where the method comprises: identifying status values of the target system to collect, parameters of the subject system to tune, a tuning goal, and a parameter tuning logic; and deploying the parameter tuning logic in one or more clouds), and storing the collected data (see Li, paragraph [0036], where system can further comprise storing the state values, the parameter values, and parameter tuning instructions in a data store), and wherein using machine learning to adjust one or more parameters of the parameterization comprises determining a parameter delta for a parameter of the one or more parameters of the parameterization, and based on the parameter delta exceeding a parameter delta threshold, adjusting the parameter of the one or more parameters of the parameterization to, at least in part, obtain the tuned parameterization (see Li, paragraph [0073], where configuration options for each parameter depend on the subject system and can include, but are not limited to, name of the parameter, how it should be set, valid range or set of values, collection interval, time limit of changing the parameter, conditions in which the parameter needs being tuned, and preprocessing instructions … the conditions in which the parameter needs being tuned is a collection of conditions, and when they are met, the parameter needs being tuned; samples for such conditions include, but are not limited to, when a certain state value is lower or higher than a threshold, the value of a certain parameter is lower or higher than a threshold, a job of a certain name has started, etc., or any combination of them).

Regarding Claim 20, Li discloses the computer program product of Claim 19, wherein using machine learning to adjust one or more parameters of the parameterization comprises using a machine learning model for mapping the collected data to the one or more parameters of the parameterization, the one or more parameters effecting a desired operational characteristic for executing the parallelized file system operation (see Li, paragraph [0037], where computing of parameter tuning instructions can further comprise: using one or more machine learning or artificial intelligence methods to analyze the state values and the parameter values at intervals; and training one or more models using the state values, the parameter values, and the parameter tuning instructions from one or more subject systems at intervals; and generating parameter tuning instructions at intervals).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 2, 11, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Li as applied to Claims 1, 3-6, 10, 12, 13, 17, 19, and 20 above, and further in view of HoreKa ("NHR@KIT User Documentation", https://www.nhr.kit.edu/userdocs/horeka/filesystems_performance, January 22, 2022).

Regarding Claim 2, Li discloses the computer-implemented method of Claim 1, wherein: Li does not disclose using machine learning to adjust one or more parameters of the parameterization comprises changing the one or more parameters heuristically based on a structure of the file system. HoreKa discloses using machine learning to adjust one or more parameters of the parameterization comprises changing the one or more parameters heuristically based on a structure of the file system (see HoreKa, Improving Performance on parallel file systems … when you are designing your application you should consider that performance of parallel file systems is generally better if data is transferred in large blocks and stored in few files). Li and HoreKa are directed toward performance tuning. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine Li with HoreKa as it amounts to combining prior art methods according to known methods to yield predictable results (see MPEP 2143(I)(A)).

Regarding Claim 11, Li discloses the computer system of Claim 10, wherein: Li does not disclose using machine learning to adjust one or more parameters of the parameterization comprises changing the one or more parameters heuristically based on a structure of the file system. HoreKa discloses using machine learning to adjust one or more parameters of the parameterization comprises changing the one or more parameters heuristically based on a structure of the file system (see HoreKa, Improving Performance on parallel file systems … when you are designing your application you should consider that performance of parallel file systems is generally better if data is transferred in large blocks and stored in few files). Li and HoreKa are directed toward performance tuning. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine Li with HoreKa as it amounts to combining prior art methods according to known methods to yield predictable results (see MPEP 2143(I)(A)).
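The HoreKa passage relied on is a rule of thumb (large transfers, fewer files) rather than machine learning. For orientation only, a hedged sketch of what a structure-based heuristic adjustment might look like, with hypothetical parameter names not drawn from the application or the cited references:

```python
def tune_for_structure(file_count: int, total_bytes: int, params: dict) -> dict:
    """Adjust parameters heuristically from the tree's shape: prefer
    large transfers and fewer, larger units of work (per the HoreKa
    guidance quoted in the rejection)."""
    tuned = dict(params)
    avg_size = total_bytes / max(file_count, 1)
    if avg_size < (1 << 20):                # many small files: < 1 MiB average
        tuned["coalesce_small_files"] = True  # batch small files per worker
        tuned["threads"] = min(params["threads"] * 2, params["max_threads"])
    else:                                   # few large files: move big blocks
        tuned["block_size"] = 8 << 20       # 8 MiB transfer size
    return tuned
```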
Regarding Claim 18, Li discloses the computer program product of Claim 17, wherein: Li does not disclose using machine learning to adjust one or more parameters of the parameterization comprises changing the one or more parameters heuristically based on a structure of the file system. HoreKa discloses using machine learning to adjust one or more parameters of the parameterization comprises changing the one or more parameters heuristically based on a structure of the file system (see HoreKa, Improving Performance on parallel file systems … when you are designing your application you should consider that performance of parallel file systems is generally better if data is transferred in large blocks and stored in few files). Li and HoreKa are directed toward performance tuning. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine Li with HoreKa as it amounts to combining prior art methods according to known methods to yield predictable results (see MPEP 2143(I)(A)).

Claims 7 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Li as applied to Claims 1, 3-6, 10, 12, 13, 17, 19, and 20 above, and further in view of Helmich (PG Pub. No. 2019/0312926 A1) and Landis (PG Pub. No. 2007/0061441 A1).

Regarding Claim 7, Li discloses the computer-implemented method of Claim 1, wherein the file system is a clustered file system and the executing the parallelized file system operation with the parameterization comprises: Li does not disclose: obtaining multiple partitions of directories of the file system; distributing partitions of the multiple partitions of directories of the file system among available processing threads of the computing environment; and storing undistributed partitions in a common storage. Helmich discloses: obtaining multiple partitions of directories of the file system (see Helmich, paragraph [0019], where in one embodiment, a partition balancing tool measures database usage on the organization and the partition level to provide partition to node mapping with improved partition-level load balancing; see also paragraph [0024], the driver operates to query the database system for orgID to partition mapping information and stores that information in the framework (e.g., in a Hadoop embodiment, within the Hadoop Distributed File System, or HDFS)); and distributing partitions of the multiple partitions of directories of the file system among available processing threads of the computing environment (see Helmich, paragraph [0020], where partition balancing can be as simple as allocating based on processor workload caused by the database system). Li and Helmich are both directed toward load balancing and performance tuning. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine Li with Helmich as it amounts to combining prior art methods according to known methods to yield predictable results (see MPEP 2143(I)(A)). Li in view of Helmich does not disclose storing undistributed partitions in a common storage. Landis discloses storing undistributed partitions in a common storage (see Landis, paragraph [0452], where partition states are in three basic categories: uninstalled, inactive, and active … the inactive {stopped, saved (hibernate)} and active {starting, running, paused (standby)} categories correspond to the provisioning and operating stages). Li, Helmich, and Landis are directed toward load balancing and performance tuning. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine Li and Helmich with Landis as it amounts to combining prior art methods according to known methods to yield predictable results (see MPEP 2143(I)(A)).

Regarding Claim 14, Li discloses the computer system of Claim 10, wherein the file system is a clustered file system and the executing the parallelized file system operation with the parameterization comprises: Li does not disclose: obtaining multiple partitions of directories of the file system; distributing partitions of the multiple partitions of directories of the file system among available processing threads of the computing environment; and storing undistributed partitions in a common storage. Helmich discloses: obtaining multiple partitions of directories of the file system (see Helmich, paragraph [0019], where in one embodiment, a partition balancing tool measures database usage on the organization and the partition level to provide partition to node mapping with improved partition-level load balancing; see also paragraph [0024], the driver operates to query the database system for orgID to partition mapping information and stores that information in the framework (e.g., in a Hadoop embodiment, within the Hadoop Distributed File System, or HDFS)); and distributing partitions of the multiple partitions of directories of the file system among available processing threads of the computing environment (see Helmich, paragraph [0020], where partition balancing can be as simple as allocating based on processor workload caused by the database system). Li and Helmich are both directed toward load balancing and performance tuning. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine Li with Helmich as it amounts to combining prior art methods according to known methods to yield predictable results (see MPEP 2143(I)(A)). Li in view of Helmich does not disclose storing undistributed partitions in a common storage. Landis discloses storing undistributed partitions in a common storage (see Landis, paragraph [0452], where partition states are in three basic categories: uninstalled, inactive, and active … the inactive {stopped, saved (hibernate)} and active {starting, running, paused (standby)} categories correspond to the provisioning and operating stages). Li, Helmich, and Landis are directed toward load balancing and performance tuning. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine Li and Helmich with Landis as it amounts to combining prior art methods according to known methods to yield predictable results (see MPEP 2143(I)(A)).
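The Claim 7 limitation at issue (distributing directory partitions among available threads while undistributed partitions wait in a common storage) can be pictured as a shared work queue. The sketch below is illustrative only, with hypothetical names; the application may implement this differently:

```python
import queue
import threading

def process_partition(partition: list[str]) -> None:
    ...  # run the file system operation over this partition's directories

def execute(partitions: list[list[str]], thread_limit: int) -> None:
    """Hand partitions to available worker threads; partitions not yet
    distributed wait in a common store (here, a shared queue)."""
    common_storage: queue.Queue = queue.Queue()
    for p in partitions:
        common_storage.put(p)       # undistributed partitions wait here

    def worker() -> None:
        while True:
            try:
                p = common_storage.get_nowait()  # claim the next partition
            except queue.Empty:
                return                           # nothing left to distribute
            process_partition(p)

    threads = [threading.Thread(target=worker) for _ in range(thread_limit)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
```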
Claims 8 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Li, Helmich, and Landis as applied to Claims 7 and 14 above, and further in view of Banyai (PG Pub. No. 2020/0136943 A1).

Regarding Claim 8, Li in view of Helmich and Landis discloses the computer-implemented method of Claim 7, wherein the executing further comprises: Li does not disclose: determining whether an average execution time of the parallelized file system operation exceeds an average execution time threshold; and based on the average execution time of the parallelized file system operation exceeding the average execution time threshold, spawning one or more new processing threads to facilitate execution of the parallelized file system operations with the parameterization on any remaining undistributed partitions in the common storage. Banyai discloses: determining whether an average execution time of the parallelized file system operation exceeds an average execution time threshold (see Banyai, paragraph [0124], where dynamic and transparent scaling in response to pressure conditions and performance thresholds that provide an indication of performance degradation is on a per-Kubernetes service level based on defined performance thresholds; see also paragraph [0160 - 0161], where a compute node is under pressure if the compute node 1904 is experiencing a high resource utilization that is impacting the performance of a workload 1708 running on the compute node 1904 … orchestrator/scheduler 102 also monitors and stores workload metrics 1912; workload metrics 1912 include: number of clients, average response latency; see also paragraph [0195], Timed Workload Timer) and based on the average execution time of the parallelized file system operation exceeding the average execution time threshold, spawning one or more new processing threads to facilitate execution of the parallelized file system operations with the parameterization on any remaining undistributed partitions in the common storage (see Banyai, paragraph [0182], where another name for shared-nothing architecture is sharding … each shard is stored in a separate database server instance, to spread load). Li and Banyai are both directed to performance optimization and tuning. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine Li with Banyai as it amounts to combining prior art methods according to known methods to yield predictable results (see MPEP 2143(I)(A)).

Regarding Claim 15, Li in view of Helmich and Landis discloses the computer system of Claim 14, wherein the executing further comprises: Li does not disclose: determining whether an average execution time of the parallelized file system operation exceeds an average execution time threshold; and based on the average execution time of the parallelized file system operation exceeding the average execution time threshold, spawning one or more new processing threads to facilitate execution of the parallelized file system operations with the parameterization on any remaining undistributed partitions in the common storage. Banyai discloses: determining whether an average execution time of the parallelized file system operation exceeds an average execution time threshold (see Banyai, paragraph [0124], where dynamic and transparent scaling in response to pressure conditions and performance thresholds that provide an indication of performance degradation is on a per-Kubernetes service level based on defined performance thresholds; see also paragraph [0160 - 0161], where a compute node is under pressure if the compute node 1904 is experiencing a high resource utilization that is impacting the performance of a workload 1708 running on the compute node 1904 … orchestrator/scheduler 102 also monitors and stores workload metrics 1912; workload metrics 1912 include: number of clients, average response latency; see also paragraph [0195], Timed Workload Timer) and based on the average execution time of the parallelized file system operation exceeding the average execution time threshold, spawning one or more new processing threads to facilitate execution of the parallelized file system operations with the parameterization on any remaining undistributed partitions in the common storage (see Banyai, paragraph [0182], where another name for shared-nothing architecture is sharding … each shard is stored in a separate database server instance, to spread load). Li and Banyai are both directed to performance optimization and tuning. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine Li with Banyai as it amounts to combining prior art methods according to known methods to yield predictable results (see MPEP 2143(I)(A)).
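The Claim 8 limitation reads as a simple feedback rule: when the average per-partition execution time crosses a threshold, add workers for whatever remains in common storage. A hypothetical sketch (the threshold and spawn count are invented for illustration and do not appear in the record):

```python
import statistics
import threading

def maybe_spawn(durations: list[float], threshold_s: float,
                worker, max_new: int = 2) -> int:
    """Spawn additional worker threads when the average per-partition
    execution time exceeds the threshold; returns the number spawned.
    `worker` is the same queue-draining function used for the initial
    threads, so new threads pick up undistributed partitions."""
    if durations and statistics.mean(durations) > threshold_s:
        for _ in range(max_new):
            threading.Thread(target=worker, daemon=True).start()
        return max_new
    return 0
```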
Claims 9 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Li, Helmich, and Landis as applied to Claims 7 and 14 above, and further in view of Abe (PG Pub. No. 2017/0169051 A1).

Regarding Claim 9, Li in view of Helmich and Landis discloses the computer-implemented method of Claim 7, wherein: the parameterization comprises multiple parameters, the multiple parameters comprising a maximum number of processing threads for use in executing the parallelized file system operation (see Li, paragraph [0006], where a computer system can have hundreds of tunable parameters; see also paragraph [0049], where example of parameters are … number of worker threads). Li does not disclose a horizontal limit of directories for the multiple partitions of directories of the file system, and a vertical limit of sub-directories for the multiple partitions of directories of the file system.

Li in view of Helmich and Abe discloses a horizontal limit of directories (see Abe, paragraph [0051], where in some embodiments of the present invention, configuration and procedures described above effectively (more fully) use the capacity of a tape because a new virtual tape is created and used when: (i) a single tape cartridge is used; (ii) the maximum number of directories and files (W) has been reached; and (iii) unused partitions are available) for the multiple partitions of directories of the file system (see Helmich, paragraph [0019], where in one embodiment, a partition balancing tool measures database usage on the organization and the partition level to provide partition to node mapping with improved partition-level load balancing; see also paragraph [0024], the driver operates to query the database system for orgID to partition mapping information and stores that information in the framework (e.g., in a Hadoop embodiment, within the Hadoop Distributed File System, or HDFS)), and a vertical limit of sub-directories (see Abe, paragraph [0051], where in some embodiments of the present invention, configuration and procedures described above effectively (more fully) use the capacity of a tape because a new virtual tape is created and used when: (i) a single tape cartridge is used; (ii) the maximum number of directories and files (W) has been reached; and (iii) unused partitions are available) for the multiple partitions of directories of the file system (see Helmich, paragraph [0019], where in one embodiment, a partition balancing tool measures database usage on the organization and the partition level to provide partition to node mapping with improved partition-level load balancing; see also paragraph [0024], the driver operates to query the database system for orgID to partition mapping information and stores that information in the framework (e.g., in a Hadoop embodiment, within the Hadoop Distributed File System, or HDFS)). Li and Helmich are both directed toward load balancing and performance tuning. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine Li with Helmich as it amounts to combining prior art methods according to known methods to yield predictable results (see MPEP 2143(I)(A)). It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine Li and Helmich with Abe for the benefit of making additional partitions available when directory limits are reached (see Abe, paragraph [0025]).
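The Claim 9 parameterization names three concrete knobs: a thread cap plus horizontal and vertical limits on directory partitions. One plausible shape for such a record, with invented field names and example values (nothing here is taken from the application):

```python
from dataclasses import dataclass

@dataclass
class Parameterization:
    max_threads: int        # cap on worker threads for the operation
    horizontal_limit: int   # max directories per partition (breadth)
    vertical_limit: int     # max sub-directory depth per partition

# Example instance with arbitrary illustrative values.
params = Parameterization(max_threads=16, horizontal_limit=64, vertical_limit=8)
```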
Regarding Claim 16, Li in view of Helmich and Landis discloses the computer system of Claim 14, wherein: the parameterization comprises multiple parameters, the multiple parameters comprising a maximum number of processing threads for use in executing the parallelized file system operation (see Li, paragraph [0006], where a computer system can have hundreds of tunable parameters; see also paragraph [0049], where example of parameters are … number of worker threads). Li does not disclose a horizontal limit of directories for the multiple partitions of directories of the file system, and a vertical limit of sub-directories for the multiple partitions of directories of the file system.

Li in view of Helmich and Abe discloses a horizontal limit of directories (see Abe, paragraph [0051], where in some embodiments of the present invention, configuration and procedures described above effectively (more fully) use the capacity of a tape because a new virtual tape is created and used when: (i) a single tape cartridge is used; (ii) the maximum number of directories and files (W) has been reached; and (iii) unused partitions are available) for the multiple partitions of directories of the file system (see Helmich, paragraph [0019], where in one embodiment, a partition balancing tool measures database usage on the organization and the partition level to provide partition to node mapping with improved partition-level load balancing; see also paragraph [0024], the driver operates to query the database system for orgID to partition mapping information and stores that information in the framework (e.g., in a Hadoop embodiment, within the Hadoop Distributed File System, or HDFS)), and a vertical limit of sub-directories (see Abe, paragraph [0051], where in some embodiments of the present invention, configuration and procedures described above effectively (more fully) use the capacity of a tape because a new virtual tape is created and used when: (i) a single tape cartridge is used; (ii) the maximum number of directories and files (W) has been reached; and (iii) unused partitions are available) for the multiple partitions of directories of the file system (see Helmich, paragraph [0019], where in one embodiment, a partition balancing tool measures database usage on the organization and the partition level to provide partition to node mapping with improved partition-level load balancing; see also paragraph [0024], the driver operates to query the database system for orgID to partition mapping information and stores that information in the framework (e.g., in a Hadoop embodiment, within the Hadoop Distributed File System, or HDFS)). Li and Helmich are both directed toward load balancing and performance tuning. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine Li with Helmich as it amounts to combining prior art methods according to known methods to yield predictable results (see MPEP 2143(I)(A)). It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine Li and Helmich with Abe for the benefit of making additional partitions available when directory limits are reached (see Abe, paragraph [0025]).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to FARHAD AGHARAHIMI whose telephone number is (571)272-9864. The examiner can normally be reached M-F 9am - 5pm ET.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Apu Mofiz, can be reached at 571-272-4080. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users.
To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/FARHAD AGHARAHIMI/
Examiner, Art Unit 2161

/APU M MOFIZ/
Supervisory Patent Examiner, Art Unit 2161

Prosecution Timeline

Jan 18, 2023
Application Filed
Nov 08, 2023
Response after Non-Final Action
Mar 07, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602424
PROACTIVE PERSONALIZATION OF MULTIMEDIA CONTENT AND DIALOG CONTENT THROUGH UTILIZATION OF LARGE LANGUAGE MODEL(S)
2y 5m to grant Granted Apr 14, 2026
Patent 12586347
SCALABLE PIPELINE FOR MACHINE LEARNING-BASED BASE-VARIANT GROUPING
2y 5m to grant Granted Mar 24, 2026
Patent 12541556
DISTRIBUTED GRAPH EMBEDDING-BASED FEDERATED GRAPH CLUSTERING METHOD, APPARATUS, AND READABLE STORAGE MEDIUM
2y 5m to grant Granted Feb 03, 2026
Patent 12530410
Systems and Methods for Clustering with List-Decodable Covers
2y 5m to grant Granted Jan 20, 2026
Patent 12511307
Systems and Methods for Exploring Quantifiable Trends in Line Charts
2y 5m to grant Granted Dec 30, 2025
Study what changed in these applications to get past this examiner. Based on the 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 70%
With Interview: 85% (+14.5%)
Median Time to Grant: 3y 5m
PTA Risk: Low

Based on 275 resolved cases by this examiner. Grant probability is derived from the career allow rate.
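For reference, the with-interview figure is consistent with adding the 14.5-point interview lift to the 70% base rate: 70% + 14.5% = 84.5%, which rounds to the 85% shown.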
