Prosecution Insights
Last updated: April 19, 2026
Application No. 18/368,854

Fractionalized Task Distribution and Throttling Framework for High-Volume Transactions

Non-Final OA: §101, §103, §112
Filed
Sep 15, 2023
Examiner
HUARACHA, WILLY W
Art Unit
2197
Tech Center
2100 — Computer Architecture & Software
Assignee
ServiceNow Inc.
OA Round
1 (Non-Final)
Grant Probability: 73% (Favorable)
OA Rounds: 1-2
To Grant: 4y 5m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 73% (above average) — 300 granted / 410 resolved, +18.2% vs TC avg
Interview Lift: +53.4% (strong), comparing resolved cases with vs. without an interview
Avg Prosecution: 4y 5m typical; 28 applications currently pending
Career History: 438 total applications across all art units

Statute-Specific Performance

§101: 12.5% (-27.5% vs TC avg)
§103: 45.6% (+5.6% vs TC avg)
§102: 9.5% (-30.5% vs TC avg)
§112: 26.3% (-13.7% vs TC avg)
Deltas are vs. the Tech Center average estimate • Based on career data from 410 resolved cases

Office Action

Rejections: §101, §103, §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

Claims 1-20 are currently pending and have been examined.

Claim Objections

Claims 2 and 16 are objected to because of the following informalities: Regarding claim 2, it references the terms "second request", "second plurality of parallelizable jobs", and "second fractionalized task distributor" in ordinal form. However, there is no reference to "a first request", "a first plurality of parallelizable jobs", or "a first fractionalized task distributor" in claim 1 (e.g., an ordinal reference "first" must be used before the reference "second" to avoid confusion). Claim 16 has similar issues as claim 2. Appropriate correction is necessary.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 09/15/2023 and 11/22/2024 have been considered. The submissions are in compliance with the provisions of 37 CFR 1.97. Form PTO-1449 is signed and attached hereto.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention recites a judicial exception (an abstract idea) and is directed to that judicial exception, as the exception has not been integrated into a practical application and the claims do not recite significantly more than the judicial exception. The Examiner has evaluated the claims under the framework provided in the 2019 Patent Eligibility Guidance published in the Federal Register on 01/07/2019 and has provided such analysis below.
Step 1: Claims 1-14 are directed to methods and fall within the statutory category of processes; claims 15-19 are directed to a non-transitory computer-readable medium and fall within the statutory category of articles of manufacture; and claim 20 is directed to a system and falls within the statutory category of machines. Therefore, "Are the claims to a process, machine, manufacture or composition of matter?" Yes.

To evaluate the Step 2A inquiry "Is the claim directed to a law of nature, a natural phenomenon or an abstract idea?", we must determine, at Step 2A Prong 1, whether the claim recites a law of nature, a natural phenomenon, or an abstract idea, and further whether the claim recites additional elements that integrate the judicial exception into a practical application.

Step 2A Prong 1: Claims 1, 15 and 20: The limitation "assigning, to the fractionalized task distributor, a plurality of worker threads for execution of the plurality of parallelizable jobs, wherein the plurality of worker threads is based on the predefined number of worker threads, and wherein assigning the plurality of worker threads is according to the schedule and one or more tasks not included in the plurality of parallelizable jobs", as drafted, is a process that, but for the recitation of generic computing components, under its broadest reasonable interpretation, covers performance of the limitation in the mind. For example, a person can observe, judge, and evaluate a schedule of worker thread availability and tasks not included in a plurality of parallelizable jobs and mentally assign threads to a fractionalized task distributor. Therefore, yes, claims 1, 15 and 20 recite judicial exceptions.

Because the claims have been identified as reciting judicial exceptions, Step 2A Prong 2 will evaluate whether the claims are directed to the judicial exception. Step 2A Prong 2: Claims 1, 15 and 20: The judicial exception is not integrated into a practical application.
In particular, the claims recite the following additional elements: "A non-transitory computer readable medium having stored thereon program instructions that, upon execution by a computing system, cause the computing system to perform operations"; "a processing circuitry; and a memory, the memory containing instructions that, when executed by the processing circuitry"; and "one or more processors; and memory, containing program instructions that, upon execution by the one or more processors, cause the system to perform operations". These are merely recitations of generic computing components and functions being used as a tool to apply the abstract idea (see MPEP § 2106.05(f)), which does not integrate a judicial exception into a practical application.

Further, claims 1, 15 and 20 recite the additional element "directing the fractionalized task distributor to execute the plurality of parallelizable jobs via the plurality of worker threads", which amounts to mere instructions to apply the abstract idea (see MPEP § 2106.05(f)) and does not integrate a judicial exception into a practical application.

Further, claims 1, 15 and 20 recite the additional elements "receiving a request relating to a plurality of parallelizable jobs; obtaining a schedule of worker thread availability with respect to a fractionalized task distributor, wherein the fractionalized task distributor is operable according to a predefined number of worker threads", which is merely a recitation of insignificant data gathering activity (see MPEP § 2106.05(g)) and does not integrate the judicial exception into a practical application.

Therefore, "Do the claims recite additional elements that integrate the judicial exception into a practical application?" No; these additional elements do not integrate the abstract idea into a practical application, and they do not impose any meaningful limits on practicing the abstract idea. The claims are directed to an abstract idea.
After evaluating the inquiries set forth in Step 2A Prongs 1 and 2, it has been concluded that claims 1, 15 and 20 not only recite a judicial exception but are directed to the judicial exception, as the judicial exception has not been integrated into a practical application.

Step 2B: Claims 1, 15 and 20: The claims do not include additional elements, alone or in combination, that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements amount to no more than generic computing components, mere instructions to apply an exception and/or a field of use/technological environment, and insignificant extra-solution data gathering activity, which do not amount to significantly more than the abstract idea.

Further, the insignificant extra-solution data gathering activity is well-understood, routine, and conventional (WURC); see MPEP § 2106.05(d)(II): "The courts have recognized the following computer functions as well understood, routine, and conventional functions when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity. i. Receiving or transmitting data over a network, e.g., using the Internet to gather data."

Therefore, "Do the claims recite additional elements that amount to significantly more than the judicial exception?" No; these additional elements, alone or in combination, do not amount to significantly more than the judicial exception. Having concluded the analysis within the provided framework, claims 1, 15 and 20 do not recite patent eligible subject matter under 35 U.S.C. § 101.
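For readers less familiar with the claimed subject matter, the limitations at issue in claims 1, 15 and 20 roughly describe the flow below. This is a minimal illustrative sketch only, not the applicant's actual implementation; every name, and the simple reserve-capacity heuristic for "tasks not included in the plurality of parallelizable jobs", is an assumption:

```python
# Illustrative sketch of the claimed steps (hypothetical names throughout).
from dataclasses import dataclass, field

@dataclass
class FractionalizedTaskDistributor:
    max_workers: int                      # "predefined number of worker threads"
    assigned: list = field(default_factory=list)

def assign_workers(distributor, jobs, schedule, other_tasks):
    """Assign worker threads per the availability schedule, capped by the
    distributor's predefined limit, while holding back capacity for tasks
    that are not part of the parallelizable jobs."""
    available = [t for t in schedule if t["free"]]
    reserve = len(other_tasks)            # capacity reserved for other tasks
    usable = max(0, len(available) - reserve)
    n = min(distributor.max_workers, usable, len(jobs))
    distributor.assigned = [available[i]["id"] for i in range(n)]
    return distributor.assigned

def execute(distributor, jobs):
    # "directing the fractionalized task distributor to execute the jobs"
    return {thread: job for thread, job in zip(distributor.assigned, jobs)}

schedule = [{"id": f"t{i}", "free": True} for i in range(8)]
d = FractionalizedTaskDistributor(max_workers=4)
threads = assign_workers(d, jobs=["j1", "j2", "j3"], schedule=schedule,
                         other_tasks=["housekeeping"])
plan = execute(d, ["j1", "j2", "j3"])
```

With 8 free threads, one reserved task, and a cap of 4, three jobs get three threads; the sketch is only meant to make the interplay of schedule, cap, and outside tasks concrete.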
With regard to claims 2 and 16, they recite additional mental processes and the additional elements "assigning, to the second fractionalized task distributor, a second plurality of worker threads for execution of the second plurality of parallelizable jobs, wherein the second plurality of worker threads is based on the second predefined number of worker threads, and wherein assigning the second plurality of worker threads is according to the schedule, the plurality of parallelizable jobs, and the one or more tasks not included in the plurality of parallelizable jobs or the second plurality of parallelizable jobs", which, as drafted, is a process that, but for the recitation of generic computing components, under its broadest reasonable interpretation, covers performance of the limitation in the mind. For example, a person can observe, judge, and evaluate a schedule of worker thread availability and tasks not included in a plurality of parallelizable jobs and mentally assign threads to a fractionalized task distributor.

Claims 2 and 16 recite the additional element "directing the second fractionalized task distributor to execute the second plurality of parallelizable jobs via the second plurality of worker threads at least partially concurrently with the fractionalized task distributor executing the plurality of parallelizable jobs via the plurality of worker threads", which amounts to mere instructions to apply the abstract idea (see MPEP § 2106.05(f)) and does not integrate the judicial exception into a practical application.
Claims 2 and 16 recite the additional elements "receiving a second request relating to a second plurality of parallelizable jobs, wherein the schedule of worker thread availability is also with respect to a second fractionalized task distributor, and wherein the second fractionalized task distributor is operable according to a second predefined number of worker threads", which is merely a recitation of insignificant data gathering activity (see MPEP § 2106.05(g)) and does not integrate the judicial exception into a practical application. Further, the insignificant extra-solution data gathering activity is well-understood, routine, and conventional (WURC); see MPEP § 2106.05(d)(II): "The courts have recognized the following computer functions as well understood, routine, and conventional functions when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity. i. Receiving or transmitting data over a network, e.g., using the Internet to gather data."

With regard to claim 3, it recites the additional abstract idea "wherein a sum of the predefined number of worker threads and the second predefined number of worker threads is greater than a count of worker threads from the schedule of worker thread availability, and wherein a sum of the plurality of worker threads assigned to the fractionalized task distributor and the second plurality of worker threads assigned to the second fractionalized task distributor is less than or equal to the count of worker threads", which, as drafted, is merely a basic mathematical comparison: it compares the sum of the predefined number of worker threads and the second predefined number of worker threads as being greater than a count of worker threads from a schedule, and further compares the sum of the plurality of worker threads assigned to the fractionalized task distributor and the second plurality of worker threads assigned to the second fractionalized task distributor as being less than or equal to the count of worker threads. The claim does not recite additional elements that integrate a judicial exception into a practical application or amount to significantly more.

With regard to claim 4, it recites the additional abstract idea "wherein the plurality of worker threads assigned to the fractionalized task distributor and the second plurality of worker threads assigned to the second fractionalized task distributor are both at least 1", which, as drafted, is a process that, but for the recitation of generic computing components, under its broadest reasonable interpretation, covers performance of the limitation in the mind. For example, a person can observe, judge, and evaluate the quantity of threads assigned to the first and second task distributors and mentally determine that each is at least one. The claim does not recite additional elements that integrate a judicial exception into a practical application or amount to significantly more.

With regard to claims 5 and 17, they recite the additional abstract idea "wherein the plurality of worker threads assigned to the fractionalized task distributor and the second plurality of worker threads assigned to the second fractionalized task distributor are based on respective priorities of the fractionalized task distributor and the second fractionalized task distributor", which, as drafted, is a process that, but for the recitation of generic computing components, under its broadest reasonable interpretation, covers performance of the limitation in the mind. For example, a person can observe, judge, and evaluate the priorities of the fractionalized task distributor and the second fractionalized task distributor and mentally assign a plurality of tasks based at least on those priorities. The claims do not recite additional elements that integrate a judicial exception into a practical application or amount to significantly more.
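The claim 3 comparison characterized above reduces to two inequality checks: the distributors may be nominally oversubscribed (their predefined caps together exceed the available thread count) while the threads actually assigned still fit within what is available. A minimal sketch, with all names assumed for illustration:

```python
def claim3_condition_holds(cap1, cap2, assigned1, assigned2, available):
    """Check the two comparisons recited in claim 3:
    - the predefined caps together exceed the available thread count
      (i.e., the distributors are nominally oversubscribed), and
    - the threads actually assigned fit within the available count."""
    oversubscribed = (cap1 + cap2) > available
    fits = (assigned1 + assigned2) <= available
    return oversubscribed and fits

# Example: caps of 6 and 5 against 8 available threads (oversubscribed),
# with only 4 + 3 threads actually assigned (fits).
ok = claim3_condition_holds(cap1=6, cap2=5, assigned1=4, assigned2=3, available=8)
```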
With regard to claim 6, it recites the additional element "wherein the predefined number of worker threads corresponds to a maximum number of worker threads that can be assigned to the fractionalized task distributor", which is merely a recitation of insignificant data gathering activity (see MPEP § 2106.05(g)) and does not integrate the judicial exception into a practical application. Further, the insignificant extra-solution data gathering activity is well-understood, routine, and conventional (WURC); see MPEP § 2106.05(d)(II): "The courts have recognized the following computer functions as well understood, routine, and conventional functions when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity. i. Receiving or transmitting data over a network, e.g., using the Internet to gather data."

With regard to claim 7, it recites the additional abstract idea "wherein the plurality of worker threads is less than or equal to the predefined number of worker threads", which, as drafted, is a process that, but for the recitation of generic computing components, under its broadest reasonable interpretation, covers performance of the limitation in the mind. For example, a person can observe, judge, evaluate, and compare a quantity of worker threads and mentally determine whether it is less than or equal to a predefined number of worker threads. The claim does not recite additional elements that integrate a judicial exception into a practical application or amount to significantly more.
With regard to claim 8, it recites the additional element "wherein directing the fractionalized task distributor to execute the plurality of parallelizable jobs comprises directing the fractionalized task distributor to execute the plurality of parallelizable jobs at least partially in parallel with one another", which amounts to mere instructions to apply the abstract idea (see MPEP § 2106.05(f)) and does not integrate a judicial exception into a practical application nor amount to significantly more.

With regard to claims 9 and 18, they recite the additional elements "wherein the plurality of parallelizable jobs relate to reception of a data object into a computing platform that executes the fractionalized task distributor, and wherein the parallelizable jobs respectively relate to reception of non-overlapping portions of the data object", which is merely a recitation of insignificant data gathering activity (see MPEP § 2106.05(g)) and does not integrate the judicial exception into a practical application. Further, the insignificant extra-solution data gathering activity is well-understood, routine, and conventional (WURC); see MPEP § 2106.05(d)(II): "The courts have recognized the following computer functions as well understood, routine, and conventional functions when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity. i. Receiving or transmitting data over a network, e.g., using the Internet to gather data."

With regard to claim 10, it recites the additional element "wherein reception of the data object into the computing platform comprises writing representations of the non-overlapping portions of the data object into entries of one or more database tables of the computing platform", which is merely a recitation of insignificant data storage activity (see MPEP § 2106.05(g)) and does not integrate the judicial exception into a practical application.
Further, the insignificant extra-solution data storage activity is well-understood, routine, and conventional (WURC); see MPEP § 2106.05(d)(II): "The courts have recognized the following computer functions as well understood, routine, and conventional functions when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity … iv. Storing and retrieving information in memory."

With regard to claims 12 and 19, they recite the additional elements "wherein the plurality of parallelizable jobs relate to responding, by a computing platform that executes the fractionalized task distributor, to a query for a data object, and wherein the parallelizable jobs respectively relate to non-overlapping portions of the data object", which is merely a recitation of insignificant data gathering activity (see MPEP § 2106.05(g)) and does not integrate the judicial exception into a practical application. Further, the insignificant extra-solution data gathering activity is well-understood, routine, and conventional (WURC); see MPEP § 2106.05(d)(II): "The courts have recognized the following computer functions as well understood, routine, and conventional functions when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity. i. Receiving or transmitting data over a network, e.g., using the Internet to gather data."

With regard to claim 13, it recites the additional element "wherein responding to the query for the data object comprises reading representations of the non-overlapping portions of the data object from entries of one or more database tables of the computing platform", which is merely a recitation of insignificant data gathering activity (see MPEP § 2106.05(g)) and does not integrate the judicial exception into a practical application.
Further, the insignificant extra-solution data gathering activity is well-understood, routine, and conventional (WURC); see MPEP § 2106.05(d)(II): "The courts have recognized the following computer functions as well understood, routine, and conventional functions when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity. i. Receiving or transmitting data over a network, e.g., using the Internet to gather data."

Therefore, claims 1-20 do not recite patent eligible subject matter under 35 U.S.C. § 101.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b): (b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

Claims 1-20 are rejected under 35 U.S.C. 112(b) as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for pre-AIA, the applicant) regards as the invention. The following claim language is not clearly understood and is indefinite:

As per claims 1, 15 and 20, line 3 recites "obtaining a schedule of worker thread availability". However, it is not clearly defined what constitutes "a schedule of worker thread availability". For purposes of examination, it is interpreted as determining and obtaining a list of available threads that are not being used. Lines 7-9 further recite "wherein assigning the plurality of worker threads is according to … one or more tasks not included in the plurality of parallelizable jobs". However, it is uncertain and not clearly understood what constitutes said "one or more tasks not included in the plurality of parallelizable jobs", nor is it clear how the one or more tasks contribute to assigning the plurality of worker threads.
As per claims 2-14 and 16-19, they are rejected as being dependent on rejected claims 1, 15 and 20.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3, 5-8, 15-17 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over George et al. (U.S. Pub. No. 20170031723 A1) in view of Chandrasekhar et al. (U.S. Patent No. 12367074 B1), and further in view of Goldman et al. (U.S. Patent No. 9170848 B1).

As per claim 1, George teaches the invention as claimed, including a method comprising: a schedule of worker thread availability [thread pool] … (par. 0015: Each of the schedulers [e.g. OnDemand scheduler, Fig. 1, 204] is implemented as an agent having a respectively configured thread pool. It is noted, for example, the OnDemand scheduler agent has its own thread pool of available threads); receiving a request relating to a plurality of parallelizable jobs (par. 0034: the IR application [parallelizable job] may have many types of concurrently executing jobs at any particular time; par. 0100: The on-demand agent specifically is designed to cater to dynamic job requests from an online user; par. 0046: Database 210 may also include job entries 218. The job entries 218 represent jobs to be executed. An entry for a particular job may be in database 210 as database entries and/or in one or more job queues; par.
0060: Each job manager [scheduler] may have a number of job runners concurrently processing jobs [run jobs in parallel]); assigning, to the fractionalized task distributor [a high priority scheduler], a plurality of worker threads for execution of the plurality of parallelizable jobs, wherein the plurality of worker threads is based on the predefined number of worker threads … (par. 0008: The on-demand job scheduler can include at least a high priority scheduler [fractionalized task distributor] and a low priority scheduler, where the high priority scheduler is configured with job processing capabilities greater than the low priority scheduler. The on-demand job scheduler can be configured to allocate [assign] a first number of threads to the high priority job scheduler and a second number of threads to the low priority scheduler; par. 0058: A maximum concurrency level 516 is specified for each type of manager. It is noted, a maximum concurrency level corresponds to a maximum predefined number of threads, e.g. for the high priority scheduler); and directing the fractionalized task distributor [high priority job scheduler/manager] to execute the plurality of parallelizable jobs via the plurality of worker threads (par. 0009: The high priority job scheduler [fractionalized task distributor] and the low priority job scheduler can each be respectively configured to access a job entry in the job database, add a job corresponding to the retrieved job entry to an in-memory queue, and cause servicing of jobs from the in-memory queue; par. 0058: represents whether a manager is a high priority job manager [scheduler] or a low priority job manager. Each manager [scheduler] may also be configured with a job queue 514 to which it has access, and from which it retrieves jobs for execution. A maximum concurrency level 516 is specified for each type of manager. The concurrency level represents the number of concurrent threads the manager is authorized (or is configured) to start.
It is noted that the high priority job manager/scheduler can run the jobs via the first number of threads allocated to the high priority job scheduler [as described in par. 0008]).

George does not expressly disclose: obtaining a schedule of worker thread availability with respect to a fractionalized task distributor, wherein the fractionalized task distributor is operable according to a predefined number of worker threads. However, analogous prior art, Chandrasekhar, teaches: obtaining a schedule of worker thread availability with respect to a fractionalized task distributor, wherein the fractionalized task distributor is operable according to a predefined number of worker threads (col. 3, lines 31-35: The resource controller module determines a number of available threads associated with the job category of the NMS (e.g., a number of threads associated with the job category not allocated for other jobs associated with the job category) at a second time).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the technique of determining a number of available threads associated with a job category of Chandrasekhar with the method of scheduling tasks using a plurality of schedulers of George, resulting in a system and method of assigning the available threads to a first scheduler [fractionalized task distributor] and executing jobs at least based on the determined number of threads available. One of ordinary skill in the art would have been motivated to make this combination in order to enable resources (e.g., threads) to be shared by multiple tenants and allow the resources to be fairly allocated based on real-time needs of individual tenants. This promotes efficient use of the resources of the NMS (col. 3, lines 62-67).
George and Chandrasekhar do not expressly describe: wherein assigning the plurality of worker threads is according to the schedule and one or more tasks not included in the plurality of parallelizable jobs. However, Goldman teaches: wherein assigning the plurality of worker threads is according to the schedule and one or more tasks not included in the plurality of parallelizable jobs (col. 7, lines 37-41: processing system 300 may be used to process input data 302 [parallelizable job] which may be divided into input data blocks … multiple workers 304 may apply mapping operations to the data blocks; col. 7, lines 51-60: The workers 304, 308 include one or more process threads which can be invoked based on the particular task assigned to it by the master process 320. For example, each worker process 304 invokes a map thread to handle an assigned map task. In some implementations, the workers 304, 308 include one or more additional threads. For example, a worker may be assigned multiple map tasks in parallel and a distinct thread may be assigned to process each map task. In another example, a distinct thread may be used to receive remote procedure calls [other task]. It is noted that workers may be assigned a parallel job comprising a plurality of map tasks, wherein each map task of the plurality of map tasks is assigned a respective thread; additionally, another task [for receiving remote procedure calls] may be assigned a respective distinct thread).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the technique of assigning respective distinct threads to each map task and other tasks of Goldman with the methods/system of George and Chandrasekhar, resulting in a system and method of assigning a plurality of available threads in accordance with a parallel job comprising a plurality of tasks, other tasks, and available threads.
One of ordinary skill in the art would have been motivated to make this combination for the purpose of reducing execution time for task operations in which the number of workers is sufficient to handle all the tasks at once (col. 8, lines 52-54). Further, it would provide for reducing the amount of [job] data to be shuffled, thereby increasing throughput and saving resources (col. 4, lines 41-42).

As per claim 2, George further teaches: receiving a second request relating to a second plurality of parallelizable jobs, wherein the schedule of worker thread availability is also with respect to a second fractionalized task distributor, and wherein the second fractionalized task distributor is operable according to a second predefined number of worker threads (par. 0008: The on-demand job scheduler can include at least a high priority scheduler [first] and a low priority scheduler [second fractionalized task distributor]; par. 0100: The on-demand agent specifically is designed to cater to dynamic job requests from an online user; par. 0046: Database 210 may also include job entries 218. The job entries 218 represent jobs to be executed. An entry for a particular job may be in database 210 as database entries and/or in one or more job queues; par. 0060: Each job manager may have a number of job runners concurrently processing jobs [run jobs in parallel]. It is noted that the job runners can run jobs in parallel, thus jobs are parallelizable); assigning, to the second fractionalized task distributor, a second plurality of worker threads for execution of the second plurality of parallelizable jobs, wherein the second plurality of worker threads is based on the second predefined number of worker threads … (par. 0008: The on-demand job scheduler can include at least a high priority scheduler and a low priority scheduler [second fractionalized task distributor], where the high priority scheduler is configured with job processing capabilities greater than the low priority scheduler.
The on-demand job scheduler can be configured to allocate a first number of threads to the high priority job scheduler and a second number of threads; par. 0058: A maximum concurrency level 516 is specified for each type of manager); and directing the second fractionalized task distributor to execute the second plurality of parallelizable jobs via the second plurality of worker threads at least partially concurrently with the fractionalized task distributor executing the plurality of parallelizable jobs via the plurality of worker threads (par. 0009: The high priority job scheduler [fractionalized task distributor] and the low priority job scheduler can each be respectively configured to access a job entry in the job database, add a job corresponding to the retrieved job entry to an in-memory queue, and cause servicing of jobs from the in-memory queue; par. 0058: represents whether a manager is a high priority job manager [or job scheduler] or a low priority job manager. Each manager [scheduler] may also be configured with a job queue 514 to which it has access, and from which it retrieves jobs for execution. A maximum concurrency level 516 is specified for each type of manager. The concurrency level represents the number of concurrent threads the manager is authorized (or is configured) to start.). Goldman further teaches: wherein assigning the second plurality of worker threads is according to the schedule, the plurality of parallelizable jobs, and the one or more tasks not included in the plurality of parallelizable jobs or the second plurality of parallelizable jobs (par. 0008: The on-demand job scheduler can include at least a high priority scheduler [first] and a low priority scheduler [second fractionalized task distributor]; par. 0100: The on-demand agent specifically is designed to cater to dynamic job requests from an online user; par. 0046: Database 210 may also include job entries 218. The job entries 218 represent jobs to be executed.
An entry for a particular job may be in database 210 as database entries and/or in one or more job queues; par. 0060 Each job manager may have a number of job runners concurrently processing jobs [run jobs in parallel]. It is noted that the job runners can run jobs in parallel, thus the jobs are parallelizable). As per claim 3, George further teaches: the predefined number of worker threads and the second predefined number of worker threads (par. 0058 A maximum concurrency level 516 is specified for each type of manager. It is noted, a maximum concurrency level may correspond to a maximum predefined number of threads for each scheduler) … count of worker threads from the schedule of worker thread availability (par. 0015 Each of the schedulers [e.g. OnDemand scheduler, Fig. 1, 204] is implemented as an agent having a respectively configured thread pool), and plurality of worker threads assigned to the fractionalized task distributor and the second plurality of worker threads assigned to the second fractionalized task distributor (par. 0008 The on-demand job scheduler can include at least a high priority scheduler [fractionalized task distributor] and a low priority scheduler … The on-demand job scheduler can be configured to allocate [assign] a first number of threads to the high priority job scheduler and a second number of threads). George and Goldman do not expressly teach: wherein a sum of the predefined number of worker threads and the second predefined number of worker threads is greater than a count of worker threads from the schedule of worker thread availability, and wherein a sum of the plurality of worker threads assigned to the fractionalized task distributor and the second plurality of worker threads assigned to the second fractionalized task distributor is less than or equal to the count of worker threads.
However, Chandrasekhar further teaches: wherein a sum of … worker threads is greater than a count of worker threads [a minimum number of threads] from the schedule of worker thread availability, and wherein a sum of the plurality of worker threads … [maximum number of threads] is less than or equal to the count of worker threads (col. 6, lines 39-50 The particular group of threads associated with the job category may include a total number of threads associated with the job category that is greater than or equal to the minimum number of threads associated with the job category to perform the job and less than or equal to the maximum number of threads associated with the job category to perform the job). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the technique of allocating threads to perform jobs of Chandrasekhar with the method of scheduling tasks using a plurality of schedulers of George, resulting in a system and method of assigning the available threads to a first scheduler [fractionalized task distributor] and a second scheduler [second fractionalized task distributor] and executing jobs at least based on the determined number of threads available, a predefined maximum number of threads, and a count of the number of threads. One of ordinary skill in the art would have been motivated to make this combination in order to enable resources (e.g., threads) to be shared by multiple tenants and allow the resources to be fairly allocated based on real-time needs of individual tenants. This promotes efficient use of the resources of the NMS (col. 3, lines 62-67).
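For illustration only, the allocation constraint mapped above — the distributors' predefined maxima may together exceed the available worker threads, while the threads actually assigned never do — can be sketched in Python. The `Distributor` class and `allocate` function are hypothetical stand-ins invented here, not code from any cited reference:

```python
from dataclasses import dataclass

@dataclass
class Distributor:
    """Hypothetical stand-in for a 'fractionalized task distributor'."""
    name: str
    max_threads: int  # predefined number of worker threads (maximum concurrency level)

def allocate(distributors, available):
    """Assign worker threads so each distributor receives at most its
    predefined maximum and the total never exceeds the available count,
    even when the sum of the maxima is greater than availability."""
    assigned = {}
    remaining = available
    for d in distributors:
        n = min(d.max_threads, remaining)
        assigned[d.name] = n
        remaining -= n
    return assigned

# Sum of predefined maxima (8 + 6 = 14) exceeds the 10 available threads,
# yet the total assigned (8 + 2 = 10) stays within availability.
pools = allocate([Distributor("high", 8), Distributor("low", 6)], available=10)
```

The greedy order here favors the first (high-priority) distributor, which loosely tracks George's unequal first/second thread allocation; any policy satisfying the two inequalities would fit the claim language equally well.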
As per claim 5, George further teaches: wherein the plurality of worker threads assigned to the fractionalized task distributor and the second plurality of worker threads assigned to the second fractionalized task distributor are based on respective priorities of the fractionalized task distributor and the second fractionalized task distributor (par. 0008 The on-demand job scheduler can be configured to allocate a first number of threads to the high priority job scheduler and a second number of threads, less than the first number of threads, to the low priority job scheduler.). As per claim 6, George further teaches: wherein the predefined number of worker threads corresponds to a maximum number of worker threads that can be assigned to the fractionalized task distributor (par. 0058 A maximum concurrency level 516 is specified for each type of manager). As per claim 7, George further teaches: wherein the plurality of worker threads is less than or equal to the predefined number of worker threads (par. 0060 The number of job runners started by a job manager may be limited to the maximum concurrency level configured for that job manager). As per claim 8, George further teaches: wherein directing the fractionalized task distributor to execute the plurality of parallelizable jobs comprises directing the fractionalized task distributor to execute the plurality of parallelizable jobs at least partially in parallel with one another (par. 0009 The high priority job scheduler and the low priority job scheduler can be each respectively configured to access a job entry in the job database, add a job corresponding to the retrieved job entry to an in-memory queue, and cause servicing of jobs from the in-memory queue; par.0060 Each job manager may have a number of job runners concurrently processing jobs). As per claim 15, it is a non-transitory computer-readable medium having similar limitations as claim 1. 
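The "at least partially in parallel" and "at least partially concurrently" execution recited above resembles two thread pools of different sizes draining separate job streams at the same time. A minimal sketch using only the Python standard library — the pool sizes, job bodies, and labels are invented for illustration, not taken from George or the claims:

```python
import concurrent.futures
import threading

# Two pools model a high- and a low-priority distributor; the high-priority
# pool is configured with more worker threads, loosely mirroring the unequal
# allocation described in George par. 0008 (sketch only).
HIGH_THREADS, LOW_THREADS = 4, 2

def job(tag, i):
    # Each job records which pool's worker thread ran it.
    return f"{tag}-{i} ran on {threading.current_thread().name}"

high = concurrent.futures.ThreadPoolExecutor(HIGH_THREADS, thread_name_prefix="high")
low = concurrent.futures.ThreadPoolExecutor(LOW_THREADS, thread_name_prefix="low")

# Submitting to both pools lets the two pluralities of jobs execute at
# least partially concurrently with one another.
futures = [high.submit(job, "hp", i) for i in range(8)]
futures += [low.submit(job, "lp", i) for i in range(4)]
results = [f.result() for f in futures]
high.shutdown()
low.shutdown()
```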
Thus, claim 15 is rejected for the same rationale as applied to claim 1. George further teaches: a non-transitory computer-readable medium (par. 0076 computer-readable medium). As per claim 16, it is a non-transitory computer-readable medium having similar limitations as claim 2. Thus, claim 16 is rejected for the same rationale as applied to claim 2. As per claim 17, it is a non-transitory computer-readable medium having similar limitations as claim 5. Thus, claim 17 is rejected for the same rationale as applied to claim 5. As per claim 20, it is a system having similar limitations as claim 1. Thus, claim 20 is rejected for the same rationale as applied to claim 1. George further teaches: one or more processors; and memory, containing program instructions (par. 0106 and Fig. 14 describe CPU 1421 and Memory 1422; par. 0107, The software program instructions and data may be stored on computer-readable storage medium). Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over George in view of Chandrasekhar and Goldman, and further in view of Kumar et al. (U.S. Pub. No. 20070061808 A1). As per claim 4, George further teaches: wherein the plurality of worker threads assigned to the fractionalized task distributor and the second plurality of worker threads assigned to the second fractionalized task distributor … (par. 0008 The on-demand job scheduler can be configured to allocate a first number of threads to the high priority job scheduler and a second number of threads, less than the first number of threads, to the low priority job scheduler). George, Chandrasekhar and Goldman do not expressly teach: wherein the plurality of worker threads assigned to the fractionalized task distributor and the second plurality of worker threads assigned to the second fractionalized task distributor are both at least 1.
However, Kumar teaches: wherein the plurality of worker threads assigned to the fractionalized task distributor and the second plurality of worker threads assigned to the second fractionalized task distributor are both at least 1 (par. 0022 the microengines 310-1 through 310-N may comprise one or more threads and each thread may perform a sub-task. One or more threads of a microengine may execute a micro-block; claim 12, on page 5, allocating at least one thread of the plurality of threads to each microblock of the plurality of microblocks). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the technique of allocating threads to each micro-block of Kumar with the methods/systems of George, Chandrasekhar and Goldman, resulting in a system and method of assigning at least 1 available thread to each of a first scheduler [fractionalized task distributor] and a second scheduler [second fractionalized task distributor] and executing jobs at least based on the allocated number of available threads, a predefined maximum number of threads, and a count of the number of threads. One of ordinary skill in the art would have been motivated to make this combination because it would provide efficient utilization of the processor resources by saving the processor cycles and bandwidth spent on reading invalid data (par. 0037). Claims 9-14 and 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over George in view of Chandrasekhar and Goldman, and further in view of Little et al. (U.S. Pub. No. 20120011347 A1). As per claim 9, George further teaches: wherein the plurality of parallelizable jobs relate to reception of a data object into a computing platform that executes the fractionalized task distributor (par. 0034 jobs may include obtaining, processing and delivering financial information).
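Kumar's at-least-one-thread-per-micro-block teaching, as read onto claim 4, amounts to a floor of one worker thread per distributor on top of the earlier ceiling constraints. A hypothetical sketch — the function name, allocation policy, and values are invented here, not drawn from Kumar:

```python
def allocate_min_one(max_threads, available):
    """Guarantee every distributor at least one worker thread (the claim-4
    floor), then hand out the remaining threads greedily, never exceeding
    each distributor's predefined maximum."""
    assert available >= len(max_threads), "need at least one thread per distributor"
    assigned = [1] * len(max_threads)          # the floor: one thread each
    remaining = available - len(max_threads)
    for i, cap in enumerate(max_threads):
        extra = min(cap - 1, remaining)        # top up, respecting the cap
        assigned[i] += extra
        remaining -= extra
    return assigned

# Predefined maxima of 8 and 6 threads but only 3 available: the second
# (low-priority) distributor still receives its guaranteed single thread.
shares = allocate_min_one([8, 6], available=3)
```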
George, Chandrasekhar and Goldman do not expressly describe: wherein the parallelizable jobs respectively relate to reception of nonoverlapping portions of the data object. However, Little teaches: wherein the parallelizable jobs respectively relate to reception of nonoverlapping portions of the data object (par. 0004 Parallel processors may receive instructions and/or data from the controller and may return a result to the controller; par. 0114 SPMD command 900 may increase processing performance by dividing large data sets into pieces, and by providing each piece to different resources; par. 0025 In another implementation, parallel programming may refer to data parallel programming, where data (e.g., a data set) is parsed into a number of portions that are executed in parallel using two or more software units of execution). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the technique of dividing large data sets into pieces of Little with the systems and methods of George, Chandrasekhar and Goldman, resulting in a system and method for receiving jobs as large data sets, dividing them into a plurality of pieces, and assigning a plurality of available threads to schedulers for executing the plurality of pieces as in Little. A person of ordinary skill would have been motivated to make this combination because dividing jobs of large data sets into pieces or portions would increase processing performance (par. 0114). As per claim 10, George further teaches: wherein reception of the data object into the computing platform comprises writing representations of the … [jobs] of the data object into entries of one or more database tables of the computing platform (par. 0046 Database 210 may also include job entries 218. The job entries 218 represent jobs to be executed. An entry for a particular job may be in database 210 as database entries and/or in one or more job queues).
Little further teaches: non-overlapping portions of the data (par. 0025 data (e.g., a data set) is parsed into a number of portions). As per claim 11, Little further teaches: wherein reception of the data object into the computing platform comprises breaking the data object into the non-overlapping portions of the data object, wherein the plurality of parallelizable jobs are respectively associated with processing of the non-overlapping portions of the data object, and wherein executing the plurality of parallelizable jobs via the plurality of worker threads comprises transforming the non-overlapping portions of the data object into a storage format supported by the computing platform (par. 0114 SPMD command 900 may increase processing performance by dividing large data sets into pieces, and by providing each piece to different resources. Each resource may execute the same program on its piece of data, and the results may be collected). As per claim 12, it is a method having similar limitations as claim 9. Thus, claim 12 is rejected for the same rationale as applied to claim 9. Little further teaches: responding … to a query for the data object (page 15, claim 39, where the query is: related to the parallel processing, sent after the unit of execution has commenced parallel processing, and sent from the unit of execution; receiving an answer to the query). As per claim 13, it is a method having similar limitations as claim 10. Thus, claim 13 is rejected for the same rationale as applied to claim 10. As per claim 14, it is a method having similar limitations as claim 11. Thus, claim 14 is rejected for the same rationale as applied to claim 11. As per claim 18, it is a non-transitory computer-readable medium having similar limitations as claim 9. Thus, claim 18 is rejected for the same rationale as applied to claim 9. As per claim 19, it is a non-transitory computer-readable medium having similar limitations as claim 12.
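Little's division of a data object into non-overlapping portions that are transformed in parallel (as mapped to claims 9-11 above) can be illustrated roughly as follows. The `split` and `transform` functions are invented stand-ins — `transform` merely upper-cases text as a placeholder for conversion into a platform's storage format, and none of this is code from Little:

```python
from concurrent.futures import ThreadPoolExecutor

def split(data, n):
    """Break a data object into up to n non-overlapping, contiguous portions."""
    step = -(-len(data) // n)  # ceiling division so nothing is dropped
    return [data[i:i + step] for i in range(0, len(data), step)]

def transform(portion):
    # Stand-in for transforming a portion into the platform's storage format.
    return portion.upper()

data = "abcdefghij"
portions = split(data, 4)  # contiguous, non-overlapping pieces

# Each portion is handed to a different worker thread, mirroring Little's
# "each piece to different resources", and the results are collected.
with ThreadPoolExecutor(max_workers=4) as pool:
    stored = list(pool.map(transform, portions))

# Reassembling the results recovers the whole object: the portions neither
# overlap nor omit any part of the data.
assert "".join(stored) == data.upper()
```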
Thus, claim 19 is rejected for the same rationale as applied to claim 12. Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to Willy W. Huaracha whose telephone number is (571)270-5510. The examiner can normally be reached on M-F 8:30-5:00pm. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Bradley Teets can be reached on (571) 272-3338. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /WH/ Examiner, Art Unit 2195 /BRADLEY A TEETS/ Supervisory Patent Examiner, Art Unit 2197

Prosecution Timeline

Sep 15, 2023
Application Filed
Jan 21, 2026
Non-Final Rejection — §101, §103, §112
Apr 06, 2026
Interview Requested
Apr 14, 2026
Applicant Interview (Telephonic)
Apr 14, 2026
Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12547427
DESERIALIZATION METHOD AND APPARATUS, AND COMPUTING DEVICE
2y 5m to grant Granted Feb 10, 2026
Patent 12541390
SYSTEM SUPPORT REPLICATOR
2y 5m to grant Granted Feb 03, 2026
Patent 12504993
HIGH-THROUGHPUT CONFIDENTIAL COMPUTING METHOD AND SYSTEM BASED ON RISC-V ARCHITECTURE
2y 5m to grant Granted Dec 23, 2025
Patent 12455753
CLOUD BASED AUDIO / VIDEO OPERATING SYSTEMS
2y 5m to grant Granted Oct 28, 2025
Patent 12443440
METHOD FOR EXECUTING DATA PROCESSING TASK IN CLUSTER MIXED DEPLOYMENT SCENARIO, ELECTRONIC DEVICE AND STORAGE MEDIUM
2y 5m to grant Granted Oct 14, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

1-2
Expected OA Rounds
73%
Grant Probability
99%
With Interview (+53.4%)
4y 5m
Median Time to Grant
Low
PTA Risk
Based on 410 resolved cases by this examiner. Grant probability derived from career allow rate.
